903 results for Bayesian inference on precipitation


Relevance: 30.00%

Publisher:

Abstract:

We consider the problem of how to efficiently and safely design dose finding studies. Both current and novel utility functions are explored using Bayesian adaptive design methodology for the estimation of a maximum tolerated dose (MTD). In particular, we explore widely adopted approaches such as the continual reassessment method and minimizing the variance of the estimate of an MTD. New utility functions are constructed in the Bayesian framework and are evaluated against current approaches. To reduce computing time, importance sampling is implemented to re-weight posterior samples, thus avoiding the need to draw samples using Markov chain Monte Carlo techniques. Further, as such studies are generally first-in-man, the safety of patients is paramount. We therefore explore methods for the incorporation of safety considerations into utility functions to ensure that only safe and well-predicted doses are administered. The amalgamation of Bayesian methodology, adaptive design and compound utility functions is termed adaptive Bayesian compound design (ABCD). The performance of this amalgamation of methodology is investigated via the simulation of dose finding studies. The paper concludes with a discussion of results and extensions that could be incorporated into our approach.
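The re-weighting step can be illustrated with a minimal importance-sampling sketch (a generic toy, not the paper's actual dose-toxicity model): posterior quantities for a Bernoulli toxicity probability are estimated by weighting prior draws by the likelihood of hypothetical trial data, with no MCMC required.

```python
import numpy as np

# Minimal importance-sampling sketch (generic toy, not the paper's
# dose-toxicity model): estimate a posterior mean by re-weighting
# prior draws by the likelihood, avoiding MCMC entirely.
rng = np.random.default_rng(0)

tox, n = 3, 10                       # hypothetical: 3 toxicities in 10 patients
theta = rng.uniform(0, 1, 200_000)   # draws from a flat Beta(1, 1) prior
w = theta**tox * (1 - theta)**(n - tox)   # likelihood weights
w /= w.sum()

post_mean = np.sum(w * theta)
# Analytic check: Beta(1+3, 1+7) posterior has mean 4/12 = 1/3
print(post_mean)
```

With a flat prior the weighted draws reproduce the conjugate Beta posterior, which gives an easy correctness check for the re-weighting machinery.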

Relevance: 30.00%

Publisher:

Abstract:

We review the literature on the combined effect of asbestos exposure and smoking on lung cancer, and explore a Bayesian approach to assess evidence of interaction. Previous approaches have focussed on separate tests for an additive or multiplicative relation. We extend these approaches by exploring the strength of evidence for either relation using approaches which allow the data to choose between both models. We then compare the different approaches.
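The distinction between the two relations is easy to state numerically; the relative risks below are purely hypothetical illustrations, not estimates from the paper.

```python
# Hypothetical relative risks, purely illustrative (not data from the paper):
rr_asbestos, rr_smoking = 5.0, 10.0

# Additive relation: the excess risks add
rr_joint_additive = rr_asbestos + rr_smoking - 1

# Multiplicative relation: the relative risks multiply
rr_joint_multiplicative = rr_asbestos * rr_smoking

print(rr_joint_additive, rr_joint_multiplicative)
```

The gap between the two predicted joint risks (14 versus 50 here) is what gives the data leverage to choose between the models.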

Relevance: 30.00%

Publisher:

Abstract:

Genetic research of complex diseases is a challenging, but exciting, area of research. The early development of the research was limited, however, until the completion of the Human Genome and HapMap projects, along with the reduction in the cost of genotyping, which paved the way for understanding the genetic composition of complex diseases. In this thesis, we focus on the statistical methods for two aspects of genetic research: phenotype definition for diseases with complex etiology and methods for identifying potentially associated Single Nucleotide Polymorphisms (SNPs) and SNP-SNP interactions. With regard to phenotype definition for diseases with complex etiology, we first investigated the effects of different statistical phenotyping approaches on the subsequent analysis. In light of the findings, and the difficulties in validating the estimated phenotype, we proposed two different methods for reconciling phenotypes of different models using Bayesian model averaging as a coherent mechanism for accounting for model uncertainty. In the second part of the thesis, the focus is turned to the methods for identifying associated SNPs and SNP interactions. We review the use of Bayesian logistic regression with variable selection for SNP identification and extend the model for detecting interaction effects in population based case-control studies. In this part of the study, we also develop a machine learning algorithm to cope with large scale data analysis, namely modified Logic Regression with Genetic Program (MLR-GEP), which is then compared with the Bayesian model, Random Forests and other variants of logic regression.
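Bayesian model averaging of the kind described can be sketched as weighting each model's phenotype probabilities by an approximate posterior model probability (here a BIC-based weight; the BIC values and per-subject probabilities are hypothetical, not the thesis's models).

```python
import numpy as np

# Sketch of Bayesian model averaging over two phenotyping models.
# BIC values and per-subject phenotype probabilities are hypothetical.
bic = np.array([1000.2, 1003.8])
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()                               # approximate posterior model probabilities

p_model1 = np.array([0.90, 0.20, 0.60])    # P(case) per subject under model 1
p_model2 = np.array([0.70, 0.40, 0.50])    # P(case) per subject under model 2
p_bma = w[0] * p_model1 + w[1] * p_model2  # model-averaged phenotype probability
print(p_bma)
```

The averaged probabilities always lie between the two models' estimates, which is what makes BMA a coherent way to reconcile conflicting phenotype assignments while carrying model uncertainty forward.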

Relevance: 30.00%

Publisher:

Abstract:

The Upper Roper River is one of Australia’s unique tropical rivers, which have been largely untouched by development. The Upper Roper River catchment comprises the sub-catchments of the Waterhouse River and Roper Creek, the two tributaries of the Roper River. There is a complex geological setting with different aquifer types. In this seasonal system, close interaction between surface water and groundwater contributes to both streamflow and sustaining ecosystems. The interaction is highly variable between seasons. A conceptual hydrogeological model was developed to investigate the different hydrological processes and geochemical parameters, and determine the baseline characteristics of water resources of this pristine catchment. In the catchment, long term average rainfall is around 850 mm and is summer dominant, which significantly influences the total hydrological system. The difference between seasons is pronounced, with high rainfall up to 600 mm/month in the wet season, and negligible rainfall in the dry season. Canopy interception significantly reduces the amount of effective rainfall because of the native vegetation cover in the pristine catchment. Evaporation exceeds rainfall for the majority of the year. Due to elevated evaporation and high temperature in the tropics, at least 600 mm of annual rainfall is required to generate potential recharge. Analysis of the trend in 120 years of rainfall data helped define “wet” and “dry” periods: a decreasing trend corresponds to dry periods, and an increasing trend to wet periods. The period from 1900 to 1970 was considered as Dry period 1, when there were years with no effective rainfall, and when effective rainfall did occur, its intensity was around 300 mm. The period 1970–1985 was identified as Wet period 1, when positive effective rainfall occurred in almost every year, and the intensity reached up to 700 mm. The period 1985–1995 was Dry period 2, with characteristics similar to Dry period 1.
Finally, the last decade was Wet period 2, with effective rainfall intensity up to 800 mm. This variability in rainfall over decades increased or decreased recharge and discharge, improving or reducing surface water and groundwater quantity and quality in the different wet and dry periods. The stream discharge follows the rainfall pattern. In the wet season, the aquifer is replenished, groundwater levels and groundwater discharge are high, and surface runoff is the dominant component of streamflow. The Waterhouse River contributes two thirds and Roper Creek one third of the Roper River flow. As the dry season progresses, surface runoff depletes, and groundwater becomes the main component of streamflow. Flow in the Waterhouse River is negligible, and Roper Creek dries up, but the Roper River maintains its flow throughout the year. This is due to the groundwater and spring discharge from the highly permeable Tindall Limestone and tufa aquifers. Rainfall seasonality and the lithology of both the catchment and aquifers are shown to influence water chemistry. In the wet season, dilution of water bodies by rainwater is the main process. In the dry season, when groundwater provides baseflow to the streams, their chemical composition reflects the lithology of the aquifers, in particular the karstic areas. Water chemistry distinguishes four types of aquifer materials, described as alluvium, sandstone, limestone and tufa. Surface water in the headwaters of the Waterhouse River, Roper Creek and their tributaries is freshwater, and reflects the alluvium and sandstone aquifers. At and downstream of the confluence of the Roper River, river water chemistry indicates the influence of rainfall dilution in the wet season, and the signature of the Tindall Limestone and tufa aquifers in the dry season. Rainbow Spring on the Waterhouse River and Bitter Spring on the Little Roper River (known as Roper Creek at the headwaters) discharge from the Tindall Limestone.
Botanic Walk Spring and Fig Tree Spring discharge into the Roper River from tufa. The source of water was defined based on the chemical composition of the springs, surface water and groundwater. The mechanisms controlling surface water chemistry were examined to define the dominance of precipitation, evaporation or rock weathering on the water chemical composition. Simple water balance models for the catchment have been developed. The important aspects to be considered in water resource planning of this system are the naturally high salinity in the region, especially in the downstream sections, and how unpredictable climate variation may impact on the natural seasonal variability of water volumes and surface-subsurface interaction.
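The kind of simple water balance described can be sketched as a monthly bucket model. All figures below are hypothetical, loosely shaped by the numbers quoted above (roughly 850 mm average annual rainfall, a strongly summer-dominant wet season, evaporation exceeding rainfall for most of the year, and significant canopy interception).

```python
# Monthly bucket-style water balance sketch; all figures hypothetical,
# loosely shaped by the catchment numbers quoted in the text.
rain = [250, 230, 150, 30, 5, 0, 0, 0, 5, 25, 70, 120]   # mm/month, ~885 mm/yr
pet  = [170, 160, 160, 150, 130, 110, 120, 140, 165, 185, 190, 180]  # mm/month
interception = 0.15   # fraction of rain intercepted by the native canopy

recharge = 0.0
for p, e in zip(rain, pet):
    effective = p * (1 - interception)       # rainfall reaching the ground
    recharge += max(0.0, effective - e)      # only wet-season surplus recharges

print(round(recharge, 1))   # annual potential recharge, mm
```

Even with nearly 900 mm of annual rain, only the two wettest months generate any surplus, consistent with the text's point that high evaporation leaves a small fraction of rainfall available as potential recharge.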

Relevance: 30.00%

Publisher:

Abstract:

In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty and interpret ‘desirable’ as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds — for hypotheses and for data — is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less the amount of data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.

Relevance: 30.00%

Publisher:

Abstract:

Axon guidance by molecular gradients plays a crucial role in wiring up the nervous system. However, the mechanisms axons use to detect gradients are largely unknown. We first develop a Bayesian “ideal observer” analysis of gradient detection by axons, based on the hypothesis that a principal constraint on gradient detection is intrinsic receptor binding noise. Second, from this model, we derive an equation predicting how the degree of response of an axon to a gradient should vary with gradient steepness and absolute concentration. Third, we confirm this prediction quantitatively by performing the first systematic experimental analysis of how axonal response varies with both these quantities. These experiments demonstrate a degree of sensitivity much higher than previously reported for any chemotacting system. Together, these results reveal both the quantitative constraints that must be satisfied for effective axonal guidance and the computational principles that may be used by the underlying signal transduction pathways, and allow predictions for the degree of response of axons to gradients in a wide variety of in vivo and in vitro settings.

Relevance: 30.00%

Publisher:

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals’ seldom studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it has become habitual and involves minimal intention or decision making. Key variables investigated are the activity initialisation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., voice call with duration in seconds); the research focuses on customers’ spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted with the use of the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, such that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals’ highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits.
Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high dimensional data based on the use of VB-GMM.
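The clustering machinery rests on the Gaussian mixture model. The sketch below fits a two-component 1-D mixture with plain EM (not the thesis's variational-Bayes split algorithm, which additionally prunes and splits components) just to show the responsibility/update cycle on synthetic data.

```python
import numpy as np

# EM for a two-component 1-D Gaussian mixture: a plain-EM stand-in for
# the VB-GMM machinery described in the text (no component splitting here).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300), rng.normal(4, 1, 700)])

pi = np.array([0.5, 0.5])          # mixing weights
mu = np.array([-1.0, 1.0])         # component means (deliberately poor start)
var = np.array([1.0, 1.0])         # component variances
for _ in range(100):
    # E-step: responsibility of each component for each point
    dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu)**2 / (2 * var))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances from responsibilities
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk

print(sorted(mu))
```

In a VB treatment the point estimates above become posterior distributions over the mixture parameters, and redundant components shrink away, which is what allows the number of components to be determined automatically.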

Relevance: 30.00%

Publisher:

Abstract:

A time series method for the determination of combustion chamber resonant frequencies is outlined. This technique uses Markov-chain Monte Carlo (MCMC) to infer parameters in a chosen model of the data. The development of the model is included and the resonant frequency is characterised as a function of time. Potential applications for cycle-by-cycle analysis are discussed, and the bulk temperature of the gas and the trapped mass in the combustion chamber are evaluated as functions of time from the resonant frequency information.
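A stripped-down version of the inference step can be sketched as a Metropolis sampler for the frequency of a noisy sinusoid. The signal here is synthetic and all numbers (sampling rate, frequency, noise level) are hypothetical; the real method models in-cylinder pressure data and tracks the frequency over time.

```python
import numpy as np

# Metropolis sampling of a resonant frequency from a noisy sinusoid.
# Synthetic toy data; a stand-in for the in-cylinder pressure signal.
rng = np.random.default_rng(2)
fs, f_true, sigma = 50_000.0, 6500.0, 0.3       # Hz, Hz, noise sd (hypothetical)
t = np.arange(0, 0.005, 1 / fs)                 # 5 ms analysis window
y = np.sin(2 * np.pi * f_true * t) + rng.normal(0, sigma, t.size)

def loglik(f):
    resid = y - np.sin(2 * np.pi * f * t)
    return -0.5 * np.sum(resid**2) / sigma**2

f = 6400.0                                      # deliberately offset start
ll = loglik(f)
chain = []
for _ in range(5000):
    f_prop = f + rng.normal(0, 5.0)             # random-walk proposal
    ll_prop = loglik(f_prop)
    if np.log(rng.uniform()) < ll_prop - ll:    # Metropolis accept/reject
        f, ll = f_prop, ll_prop
    chain.append(f)

f_hat = float(np.mean(chain[2000:]))            # posterior mean after burn-in
print(f_hat)
```

Repeating this over successive short windows is one way to characterise the resonant frequency as a function of time, with the chain spread giving the uncertainty per window.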

Relevance: 30.00%

Publisher:

Abstract:

Soluble organic matter derived from exotic Pinus vegetation forms stronger complexes with iron (Fe) than the soluble organic matter derived from most native Australian species. This has led to concern about the environmental impacts related to the establishment of extensive exotic Pinus plantations in coastal southeast Queensland, Australia. It has been suggested that the Pinus plantations may enhance the solubility of Fe in soils by increasing the amount of organically complexed Fe. While this remains inconclusive, the environmental impacts of an increased flux of dissolved, organically complexed Fe from soils to the fluvial system and then to sensitive coastal ecosystems are potentially damaging. Previous work investigated a small number of samples, was largely laboratory based and had limited application to field conditions. These assessments lacked field-based studies, including the comparison of the soil water chemistry of sites associated with Pinus vegetation and undisturbed native vegetation. In addition, the main controls on the distribution and mobilisation of Fe in soils of this subtropical coastal region have not been determined. This information is required in order to better understand the relative significance of any Pinus enhanced solubility of Fe. The main aim of this thesis is to determine the controls on Fe distribution and mobilisation in soils and soil waters of a representative coastal catchment in southeast Queensland (Poona Creek catchment, Fraser Coast) and to test the effect of Pinus vegetation on the solubility and speciation of Fe. The thesis is structured around three individual papers. The first paper identifies the main processes responsible for the distribution and mobilisation of labile Fe in the study area and takes a catchment scale approach.
Physicochemical attributes of 120 soil samples distributed throughout the catchment are analysed, and a new multivariate data analysis approach (Kohonen’s self organising maps) is used to identify the conditions associated with high labile Fe. The second paper establishes whether Fe nodules play a major role as an iron source in the catchment, by determining the genetic mechanism responsible for their formation. The nodules are a major pool of Fe in much of the region and previous studies have implied that they may be involved in redox-controlled mobilisation and redistribution of Fe. This is achieved by combining a detailed study of a ferric soil profile (morphology, mineralogy and micromorphology) with the distribution of Fe nodules on a catchment scale. The third component of the thesis tests whether the concentration and speciation of Fe in soil solutions from Pinus plantations differs significantly from native vegetation soil solutions. Microlysimeters are employed to collect unaltered, in situ soil water samples. The redox speciation of Fe is determined spectrophotometrically and the interaction between Fe and dissolved organic matter (DOM) is modelled with the Stockholm Humic Model. The thesis provides a better understanding of the controls on the distribution, concentration and speciation of Fe in the soils and soil waters of southeast Queensland. Reductive dissolution is the main mechanism by which mobilisation of Fe occurs in the study area. Labile Fe concentrations are low overall, particularly in the sandy soils of the coastal plain. However, high labile Fe is common in seasonally waterlogged and clay-rich soils which are exposed to fluctuating redox conditions and in organic-rich soils adjacent to streams. Clay-rich soils are most common in the upper parts of the catchment. Fe nodules were shown to have a negligible role in the redistribution of dissolved iron in the catchment. 
They are formed by the erosion, colluvial transport and chemical weathering of iron-rich sandstones. The ferric horizons, in which nodules are commonly concentrated, subsequently form through differential biological mixing of the soil. Whereas dissolution/reprecipitation of the Fe cements is an important component of nodule formation, mobilised Fe reprecipitates locally. Dissolved Fe in the soil waters is almost entirely in the ferrous form. Vegetation type does not affect the concentration and speciation of Fe in soil waters, although Pinus DOM has greater acidic functional group site densities than DOM from native vegetation. Iron concentrations are highest in the high DOM soil waters collected from sandy podosols, where they are controlled by redox potential. Iron concentrations are low in soil solutions from clay and iron oxide rich soils, in spite of similar redox potentials. This is related to stronger sorption to the reactive clay and iron oxide mineral surfaces in these soils, which reduces the amount of DOM available for microbial metabolisation and reductive dissolution of Fe. Modelling suggests that Pinus DOM can significantly increase the amount of truly dissolved ferric iron remaining in solution in oxidising conditions. Thus, inputs of ferrous iron together with Pinus DOM to surface waters may reduce precipitation of hydrous ferric oxides and increase the flux of dissolved iron out of the catchment. Such inputs are most likely from the lower catchment, where podosols planted with Pinus are most widely distributed. Significant outcomes other than the main aims were also achieved. It is shown that mobilisation of Fe in podosols can occur as dissolved Fe(II) rather than as Fe(III)-organic complexes. This has implications for the large body of work which assumes that Fe(II) plays a minor role.
Also, the first paper demonstrates that a data analysis approach based on Kohonen’s self organising maps can facilitate the interpretation of complex datasets and can help identify geochemical processes operating on a catchment scale.
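A minimal version of Kohonen's self-organising map (illustrative only; the paper's SOM analysis of soil geochemistry used its own grid size and variables) can be written in a few lines: prototype vectors on a 1-D grid are pulled toward each sample, with a learning rate and neighbourhood that shrink over time.

```python
import numpy as np

# Minimal 1-D Kohonen self-organising map (illustrative only; the paper's
# SOM analysis of soil geochemistry used its own grid and variables).
rng = np.random.default_rng(3)
data = rng.uniform(0, 1, size=(500, 2))       # hypothetical 2-D geochemical data
n_units = 10
w = rng.uniform(0, 1, size=(n_units, 2))      # prototype vectors on a 1-D grid

n_steps = 3000
for step in range(n_steps):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((w - x)**2).sum(axis=1))        # best-matching unit
    frac = step / n_steps
    lr = 0.5 * (1 - frac)                            # decaying learning rate
    radius = max(1e-3, 3.0 * (1 - frac))             # shrinking neighbourhood
    dist = np.abs(np.arange(n_units) - bmu)
    h = np.exp(-dist**2 / (2 * radius**2))           # neighbourhood kernel
    w += lr * h[:, None] * (x - w)                   # pull units toward sample

print(w.min(), w.max())   # prototypes end up inside the data range
```

After training, samples mapping to nearby units are similar, which is what makes the fitted map useful for spotting conditions (such as high labile Fe) that co-occur in the data.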

Relevance: 30.00%

Publisher:

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed, acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals, for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model, the applied contribution the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
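The first objective (credible intervals for Bayesian-network point estimates) can be illustrated with a generic Monte Carlo sketch: place Beta posteriors on the network's conditional probabilities, sample them, propagate through the net, and read off percentile intervals. The two-node structure and counts below are hypothetical, not the wastewater networks in the papers.

```python
import numpy as np

# Monte Carlo credible interval for a Bayesian-network point estimate.
# A two-node toy (contamination -> infection); all counts hypothetical.
rng = np.random.default_rng(4)
n = 100_000

# Beta posteriors for each conditional probability (Beta(1, 1) prior + counts)
p_contam = rng.beta(1 + 12, 1 + 88, n)     # 12/100 samples contaminated
p_infect = rng.beta(1 + 5, 1 + 45, n)      # 5/50 infections given contamination

risk = p_contam * p_infect                 # propagate through the network
lo, hi = np.percentile(risk, [2.5, 97.5])  # 95% credible interval
print(round(lo, 4), round(hi, 4))
```

The point estimate (the product of the posterior means) comes with an honest interval, which is the practical payoff of treating the network's conditional probabilities as uncertain rather than fixed.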

Relevance: 30.00%

Publisher:

Abstract:

Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants on the mixture model, namely, finite mixtures, Dirichlet Process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient’s true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson’s disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered.
The first source of data is on symptoms associated with PD, recorded using the Unified Parkinson’s Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centres on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real time neural activity in the brain.

Relevance: 30.00%

Publisher:

Abstract:

For more than a decade, research in the field of context-aware computing has aimed to find ways to exploit situational information that can be detected by mobile computing and sensor technologies. The goal is to provide people with new and improved applications, enhanced functionality and a better user experience (Dey, 2001). Early applications focused on representing or computing on physical parameters, such as showing your location and the location of people or things around you. Such applications might show where the next bus is, which of your friends is in the vicinity, and so on. With the advent of social networking software, microblogging sites such as Facebook and Twitter, recommender systems and so on, context-aware computing is moving towards mining the social web in order to provide better representations and understanding of context, including social context. In this paper we begin by recapping different theoretical framings of context. We then discuss the problem of context-aware computing from a design perspective.

Relevance: 30.00%

Publisher:

Abstract:

We describe the population pharmacokinetics of an acepromazine (ACP) metabolite (2-(1-hydroxyethyl)promazine) (HEPS) in horses for the estimation of likely detection times in plasma and urine. Acepromazine (30 mg) was administered to 12 horses, and blood and urine samples were taken at frequent intervals for chemical analysis. A Bayesian hierarchical model was fitted to describe concentration-time data and cumulative urine amounts for HEPS. The metabolite HEPS was modelled separately from the parent ACP as the half-life of the parent was considerably less than that of the metabolite. The clearance ($Cl/F_{PM}$) and volume of distribution ($V/F_{PM}$), scaled by the fraction of parent converted to metabolite, were estimated as 769 L/h and 6874 L, respectively. For a typical horse in the study, after receiving 30 mg of ACP, the upper limit of the detection time was 35 hours in plasma and 100 hours in urine, assuming an arbitrary limit of detection of 1 $\mu$g/L, and a small ($\approx 0.01$) probability of detection. The model derived allowed the probability of detection to be estimated at the population level. This analysis was conducted on data collected from only 12 horses, but we assume that this is representative of the wider population.
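As a rough back-of-the-envelope check on the plasma figure, one can plug the reported typical values into a mono-exponential decay. This ignores the formation (parent-to-metabolite) phase and between-horse variability that the hierarchical model captures, so it gives a shorter time than the model's 35-hour upper limit.

```python
import math

# Back-of-the-envelope plasma detection time from the reported typical values.
# Mono-exponential only: ignores the formation phase and population
# variability, so it underestimates the paper's 35 h upper limit.
dose_ug = 30 * 1000.0        # 30 mg ACP, in micrograms
cl = 769.0                   # Cl/F_PM, L/h (reported typical value)
v = 6874.0                   # V/F_PM, L (reported typical value)
lod = 1.0                    # assumed limit of detection, ug/L

c0 = dose_ug / v             # initial concentration, ~4.36 ug/L
k = cl / v                   # elimination rate constant, ~0.112 /h
t_detect = math.log(c0 / lod) / k
print(round(t_detect, 1))    # hours until concentration falls below the LOD
```

The ~13 h figure is the right order of magnitude, and the gap up to 35 h reflects exactly the formation kinetics and population-level uncertainty that motivate the full hierarchical model.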

Relevance: 30.00%

Publisher:

Abstract:

Discrete Markov random field models provide a natural framework for representing images or spatial datasets. They model the spatial association present while providing a convenient Markovian dependency structure and strong edge-preservation properties. However, parameter estimation for discrete Markov random field models is difficult due to the complex form of the associated normalizing constant for the likelihood function. For large lattices, the reduced dependence approximation to the normalizing constant is based on the concept of performing computationally efficient and feasible forward recursions on smaller sublattices which are then suitably combined to estimate the constant for the whole lattice. We present an efficient computational extension of the forward recursion approach for the autologistic model to lattices that have an irregularly shaped boundary and which may contain regions with no data; these lattices are typical in applications. Consequently, we also extend the reduced dependence approximation to these scenarios enabling us to implement a practical and efficient non-simulation based approach for spatial data analysis within the variational Bayesian framework. The methodology is illustrated through application to simulated data and example images. The supplemental materials include our C++ source code for computing the approximate normalizing constant and simulation studies.
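The forward-recursion idea can be demonstrated exactly on a lattice small enough to enumerate: treat each row configuration as a state, accumulate row-to-row transfer weights, and check the result against brute-force summation. This is a toy sketch of the recursion on a tiny regular lattice with hypothetical parameters, not the paper's reduced dependence approximation or its C++ implementation.

```python
import itertools
import numpy as np

# Row-wise forward recursion for the normalizing constant of a small
# autologistic (Ising-like) model, checked against brute force.
alpha, beta = 0.1, 0.4    # hypothetical field and association parameters
nrow, ncol = 3, 3

rows = list(itertools.product([0, 1], repeat=ncol))   # all 2^ncol row states

def within(r):            # field + horizontal interaction energy of one row
    return alpha * sum(r) + beta * sum(r[j] * r[j + 1] for j in range(ncol - 1))

def between(r, s):        # vertical interaction energy between adjacent rows
    return beta * sum(a * b for a, b in zip(r, s))

# Forward recursion: f[s] accumulates the weight of all lattices ending in row s
f = np.array([np.exp(within(r)) for r in rows])
for _ in range(nrow - 1):
    f = np.array([np.exp(within(s)) *
                  sum(f[i] * np.exp(between(rows[i], s)) for i in range(len(rows)))
                  for s in rows])
Z_recursion = f.sum()

# Brute force over all 2^(nrow*ncol) configurations
Z_brute = 0.0
for cfg in itertools.product([0, 1], repeat=nrow * ncol):
    g = [cfg[i * ncol:(i + 1) * ncol] for i in range(nrow)]
    e = (sum(within(r) for r in g) +
         sum(between(g[i], g[i + 1]) for i in range(nrow - 1)))
    Z_brute += np.exp(e)

print(np.isclose(Z_recursion, Z_brute))
```

The recursion costs O(nrow · 4^ncol) rather than O(2^(nrow·ncol)); the reduced dependence approximation described in the abstract goes further, running such recursions on small sublattices and combining them so that irregular boundaries and missing-data regions can be handled.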