46 results for Tempered MCMC
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation—or dispersion—is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
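The over-dispersed count structure this abstract discusses can be sketched in a few lines; this is a minimal illustration with hypothetical mean and dispersion values (not the Georgia models), showing how a fixed dispersion parameter alpha makes the variance Var(y) = mu + alpha * mu**2 rather than the Poisson variance mu:

```python
import numpy as np

def nb_counts(mu, alpha, n, rng):
    """Negative binomial crash counts with mean mu and dispersion alpha,
    so that Var(y) = mu + alpha * mu**2 (Poisson variance plus extra-variation)."""
    shape = 1.0 / alpha            # NB shape parameter
    p = shape / (shape + mu)       # NB success probability
    return rng.negative_binomial(shape, p, size=n)

rng = np.random.default_rng(42)
draws = nb_counts(mu=5.0, alpha=0.5, n=20000, rng=rng)
# Empirical variance is near mu + alpha*mu**2 = 17.5, well above the
# Poisson variance of 5 -- the "extra-variation" in the abstract.
```

Letting alpha itself depend on site covariates, rather than stay fixed, is the dispersion-function idea the study explores.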
Abstract:
Modern statistical models and computational methods can now incorporate uncertainty of the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods, but work from fixed estimates for means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov Chain Monte Carlo Gibbs sampling (MCMC) via the freely available software, WinBUGS. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply ‘plugged’ in as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
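The contrast drawn here, plug-in constants versus propagated parameter uncertainty, can be sketched with a simple exponential dose-response curve P = 1 - exp(-r * dose); the value of r and its posterior spread below are hypothetical, not from the case study:

```python
import numpy as np

rng = np.random.default_rng(0)
dose = 100.0

# Plug-in approach: a single literature value for r in P = 1 - exp(-r * dose)
r_hat = 0.005
p_plugin = 1.0 - np.exp(-r_hat * dose)

# Propagating parameter uncertainty instead: draw r from a (hypothetical)
# posterior and push each draw through the dose-response curve.
r_draws = rng.lognormal(mean=np.log(r_hat), sigma=0.5, size=10000)
p_draws = 1.0 - np.exp(-r_draws * dose)
# p_draws now has substantial spread around the single plug-in value,
# which is the extra variability the abstract warns is usually neglected.
```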
Abstract:
The aim of this thesis has been to map the ethical journey of experienced nurses now practising in rural and remote hospitals in central and south-west Queensland and in domiciliary services in Brisbane. One group of the experienced nurses in the study comprised Directors of Nursing in rural and remote hospitals. These nurses were “hands-on”, “multi-skilled” nurses who also had the task of managing the hospital. There were also two Directors of Nursing from domiciliary services in Brisbane. A grounded theory method was used. The nurses were interviewed and the data retrieved from the interviews were coded and categorised, and from these categories a conceptual framework was generated. The literature which dealt with the subject of ethical decision making and nurses also became part of the data. The study revealed that all these nurses experienced moral distress as they made ethical decisions. The decision-making categories revealed in the data were: the area of financial management; issues as the end of life approaches; allowing patients to die with dignity; emergency decisions; the experience of unexpected death; and the dilemma of providing care in very difficult circumstances. These categories were divided into two chapters: the category related to administrative and financial constraints, and the categories dealing with ethical issues in clinical settings. A further chapter discussed the overarching category of coping with moral distress. These experienced nurses suffered moral distress as they made ethical decisions, confirming many instances of moral distress in ethical decision making documented in the literature to date. Significantly, the nurses in their interviews never mentioned the ethical principles used in bioethics as an influence in their decision making. Only one referred to lectures on ethics as being an influence in her thinking.
As they described their ethical problems and how they worked through them, they drew on their own previous experience rather than any knowledge of ethics gained from nursing education. They were concerned for their patients, they spoke from a caring responsibility towards their patients, but they were also concerned for justice for their patients. This study demonstrates that these nurses operated from the ethic of care, tempered with the ethic of responsibility as well as a concern for justice for their patients. Reflection on professional experience, rather than formal ethics education and training, was the primary influence on their ethical decision making.
Abstract:
This article explores the use of probabilistic classification, namely finite mixture modelling, for identification of complex disease phenotypes, given cross-sectional data. In particular, it focuses on posterior probabilities of subgroup membership, a standard output of finite mixture modelling, and how the quantification of uncertainty in these probabilities can lead to more detailed analyses. Using a Bayesian approach, we describe two practical uses of this uncertainty: (i) as a means of describing a person’s membership of a single or multiple latent subgroups and (ii) as a means of describing identified subgroups by patient-centred covariates not included in model estimation. These proposed uses are demonstrated on a case study in Parkinson’s disease (PD), where latent subgroups are identified using multiple symptoms from the Unified Parkinson’s Disease Rating Scale (UPDRS).
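The posterior probabilities of subgroup membership that the article works with follow from Bayes' rule applied to a fitted mixture; a minimal two-component Gaussian sketch with made-up parameter values (not the UPDRS model):

```python
import numpy as np

def membership_probs(x, weights, means, sds):
    """Posterior probability of each mixture component given observation x:
    tau_k = w_k * N(x; mu_k, sd_k) / sum_j w_j * N(x; mu_j, sd_j)."""
    dens = weights * np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return dens / dens.sum()

# Two hypothetical latent subgroups (e.g. mild vs severe symptom profiles)
w = np.array([0.6, 0.4])
mu = np.array([10.0, 25.0])
sd = np.array([4.0, 5.0])

tau = membership_probs(12.0, w, mu, sd)
# tau sums to 1; a value near (1, 0) indicates confident single-subgroup
# membership, while values near (0.5, 0.5) flag the uncertainty the
# article proposes to analyse directly.
```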
Abstract:
Genetic research of complex diseases is a challenging, but exciting, area of research. The early development of the research was limited, however, until the completion of the Human Genome and HapMap projects, along with the reduction in the cost of genotyping, which paved the way for understanding the genetic composition of complex diseases. In this thesis, we focus on the statistical methods for two aspects of genetic research: phenotype definition for diseases with complex etiology and methods for identifying potentially associated Single Nucleotide Polymorphisms (SNPs) and SNP-SNP interactions. With regard to phenotype definition for diseases with complex etiology, we first investigated the effects of different statistical phenotyping approaches on the subsequent analysis. In light of the findings, and the difficulties in validating the estimated phenotype, we proposed two different methods for reconciling phenotypes of different models using Bayesian model averaging as a coherent mechanism for accounting for model uncertainty. In the second part of the thesis, the focus turns to the methods for identifying associated SNPs and SNP interactions. We reviewed the use of Bayesian logistic regression with variable selection for SNP identification and extended the model for detecting interaction effects in population-based case-control studies. In this part of the study, we also developed a machine learning algorithm to cope with large-scale data analysis, namely modified Logic Regression with Genetic Program (MLR-GEP), which was then compared with the Bayesian model, Random Forests and other variants of logic regression.
Abstract:
Markov chain Monte Carlo (MCMC) estimation provides a solution to the complex integration problems that are faced in the Bayesian analysis of statistical problems. The implementation of MCMC algorithms is, however, code intensive and time consuming. We have developed a Python package, which is called PyMCMC, that aids in the construction of MCMC samplers and helps to substantially reduce the likelihood of coding error, as well as aid in the minimisation of repetitive code. PyMCMC contains classes for Gibbs, Metropolis Hastings, independent Metropolis Hastings, random walk Metropolis Hastings, orientational bias Monte Carlo and slice samplers as well as specific modules for common models such as a module for Bayesian regression analysis. PyMCMC is straightforward to optimise, taking advantage of the Python libraries Numpy and Scipy, as well as being readily extensible with C or Fortran.
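As an illustration of the kind of sampler such a package automates, here is a generic random walk Metropolis-Hastings routine (this is a hedged sketch of the technique, not PyMCMC's actual API):

```python
import numpy as np

def rw_metropolis(logpost, x0, n_iter, step, rng):
    """Random walk Metropolis-Hastings: propose x' = x + step*eps and
    accept with probability min(1, exp(logpost(x') - logpost(x)))."""
    chain = np.empty(n_iter)
    x, lp = x0, logpost(x0)
    for i in range(n_iter):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Smoke test against a standard normal log-posterior
rng = np.random.default_rng(1)
chain = rw_metropolis(lambda x: -0.5 * x * x, 0.0, 20000, 2.4, rng)
```

Hand-coding this loop, its tuning, and its bookkeeping for every model is exactly the repetitive, error-prone work the abstract says the package is designed to remove.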
Abstract:
The measurement error model is a well established statistical method for regression problems in medical sciences, although rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which there may be benefits in its use for prediction and estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model in a Bayesian framework using a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
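The motivation for a measurement error model can be seen in a small simulation: regressing on an error-prone covariate attenuates the slope toward zero, which is the bias a conditional independence model is built to correct. The parameter values below are illustrative, not from the paper's examples:

```python
import numpy as np

rng = np.random.default_rng(7)
n, beta, tau = 5000, 2.0, 1.0

x_true = rng.normal(0.0, 1.0, n)           # latent covariate (never observed)
x_obs = x_true + rng.normal(0.0, tau, n)   # covariate measured with error
y = beta * x_true + rng.normal(0.0, 0.5, n)

# Naive least-squares slope on the error-prone covariate is attenuated:
# E[slope] = beta * var(x_true) / (var(x_true) + tau**2) = 2 * 1/2 = 1
naive = np.cov(x_obs, y)[0, 1] / x_obs.var()
```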
Abstract:
This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals’ seldom studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it has become habitual and involves minimal intention or decision making. Key variables investigated are the activity initialisation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., voice call with duration in seconds); the research focuses on customers’ spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted with the use of the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting such that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals’ highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this is essentially their likely lifestyle and occupational traits.
Other significant research contributions include fitting GMMs using VB to circular data i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high dimensional data based on the use of VB-GMM.
Abstract:
A time series method for the determination of combustion chamber resonant frequencies is outlined. This technique employs Markov chain Monte Carlo (MCMC) to infer parameters in a chosen model of the data. The development of the model is included and the resonant frequency is characterised as a function of time. Potential applications for cycle-by-cycle analysis are discussed, and the bulk temperature of the gas and the trapped mass in the combustion chamber are evaluated as a function of time from resonant frequency information.
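The inference described amounts to fitting a damped oscillation model to the in-cylinder pressure signal and recovering its frequency. A minimal sketch with synthetic data, using a grid search over the likelihood as a stand-in for the MCMC step; the sample rate, decay constant and resonant frequency are hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f_true = 50_000.0, 6_000.0          # sample rate and resonant frequency (Hz)
t = np.arange(1024) / fs

# Synthetic resonance: an exponentially decaying sinusoid plus sensor noise
y = np.exp(-800 * t) * np.sin(2 * np.pi * f_true * t) + 0.05 * rng.standard_normal(t.size)

def loglik(f):
    """Gaussian log-likelihood (up to a constant) of frequency f."""
    model = np.exp(-800 * t) * np.sin(2 * np.pi * f * t)
    return -np.sum((y - model) ** 2)

# Maximum-likelihood frequency by grid search; the paper samples the same
# kind of posterior with MCMC, which also yields credible intervals.
freqs = np.linspace(5_000, 7_000, 2001)
f_hat = freqs[np.argmax([loglik(f) for f in freqs])]
```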
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed, acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals, for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model, the applied contribution the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using PyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
Abstract:
The 'dick tog', as the briefs-style male swimsuit is colloquially referred to, is linked to Australia's national identity, with overtly masculine bronzed 'Aussie' bodies clothed in this iconic apparel. However, the reality is that our hunger for worshipping the sun and our addiction to a beach culture are tempered by the pragmatic need to cover up and wear neck-to-knee, or, more aptly, head-to-toe sun protective clothing. Australia, in particular the state of Queensland, has one of the highest rates of skin cancer in the world; nevertheless, even after wide-ranging public programs for sun safety awareness, many people still continue to wear designs that provide minimal sun protection. This paper will examine issues surrounding fashion and sun safe clothing. It will be proposed that in order to have effective community adoption of sun safe practices it is critical to understand the important role that fashion plays in determining sun protective behaviour.
Abstract:
The great male Aussie cossie is growing spots. The ‘dick tog’, as it is colloquially referred to, is linked to Australia’s national identity, with overtly masculine bronzed Aussie bodies clothed in this iconic apparel. Yet the reality is that our hunger for worshipping the sun and our addiction to a beach lifestyle are tempered by the pragmatic need for neck-to-knee, or, more aptly, head-to-toe swimwear. Spotty Dick is an irreverent play on male swimwear: it experiments with alternate modes of sheathing the body with Lycra in order to protect it from searing UV rays, and at the same time light-heartedly fools around with texture and pattern; to be specific, black Swarovski crystals, jewelled in spot patterns. Jewelled clothing is not characteristically aligned with menswear, and even less so with the great Aussie cossie. The crystals form a matrix of spots that attempts to provoke a sense of mischievousness aligned to the Aussie beach larrikin. Ironically, spot patterns are themselves a form of parody, as prolonged sun exposure ages the skin and sun spots can occur if appropriate sun protection is not used. ‘Spotty Dick’ is a research experiment testing the suitability of jewelled spot matrix patterns for UV-aware men’s swimwear. The creative work was paraded at 56 shows over a two-week period, and an estimated 50,000 people viewed the work.
Abstract:
A century ago, as the Western world embarked on a period of traumatic change, the visual realism of photography and documentary film brought print and radio news to life. The vision that these new mediums threw into stark relief was one of intense social and political upheaval: the birth of modernity fired and tempered in the crucible of the Great War. As millions died in this fiery chamber and the influenza pandemic that followed, lines of empires staggered to their fall, and new geo-political boundaries were scored in the raw, red flesh of Europe. The decade of 1910 to 1919 also heralded a prolific period of artistic experimentation. It marked the beginning of the social and artistic age of modernity and, with it, the nascent beginnings of a new art form: film. We still live in the shadow of this violent, traumatic and fertile age; haunted by the ghosts of Flanders and Gallipoli and its ripples of innovation and creativity. Something happened here, but to understand how and why is not easy; for the documentary images we carry with us in our collective cultural memory have become what Baudrillard refers to as simulacra. Detached from their referents, they have become referents themselves, to underscore other, grand narratives in television and Hollywood films. The personal histories of the individuals they represent so graphically–and their hope, love and loss–are folded into a national story that serves, like war memorials and national holidays, to buttress social myths and values. And, as filmic images cross-pollinate, with each iteration offering a new catharsis, events that must have been terrifying or wondrous are abstracted. In this paper we first discuss this transformation through reference to theories of documentary and memory–this will form a conceptual framework for a subsequent discussion of the short film Anmer. Produced by the first author in 2010, Anmer is a visual essay on documentary, simulacra and the symbolic narratives of history. 
Its form, structure and aesthetic speak of the confluence of documentary, history, memory and dream. Located in the first decade of the twentieth century, its non-linear narratives of personal tragedy and poetic dreamscapes are an evocative reminder of the distance between intimate experience, grand narratives, and the mythologies of popular films. This transformation of documentary sources not only played out in the processes of the film’s production, but also came to form its theme.
Abstract:
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.
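The idea of a sampling window can be sketched as the interval around an optimal time over which design efficiency stays within a tolerance of its maximum; the efficiency curve below is hypothetical, and the simple thresholding is an illustrative stand-in for the paper's MCMC-based procedure:

```python
import numpy as np

def sampling_window(times, efficiency, threshold=0.95):
    """Return the time interval over which design efficiency stays above
    `threshold` of its maximum -- the window within which a blood sample
    can be drawn without much loss of estimation efficiency."""
    eff = efficiency / efficiency.max()
    ok = times[eff >= threshold]
    return ok.min(), ok.max()

# Hypothetical efficiency curve peaked at the optimal sampling time t = 2 h
t = np.linspace(0.0, 6.0, 601)
eff = np.exp(-0.5 * ((t - 2.0) / 0.8) ** 2)
lo, hi = sampling_window(t, eff)
# (lo, hi) brackets t = 2 h, giving clinicians flexibility around the
# single optimal design point.
```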
Abstract:
This paper describes a generalised linear mixed model (GLMM) approach for understanding spatial patterns of participation in population health screening, in the presence of multiple screening facilities. The models presented have a dual focus, namely the prediction of expected patient flows from regions to services and relative rates of participation by region–service combination, with both outputs having meaningful implications for the monitoring of current service uptake and provision. The novelty of this paper lies with the former focus, and an approach for distributing expected participation by region based on proximity to services is proposed. The modelling of relative rates of participation is achieved through the combination of different random effects, as a means of assigning excess participation to different sources. The methodology is applied to participation data collected from a government-funded mammography program in Brisbane, Australia.