844 results for Failure time data analysis
Abstract:
The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One feature of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of the hierarchy, a multinomial model for the response; subsequent terms are then introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
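The imputation-within-Gibbs idea can be illustrated with a minimal, single-level sketch: missing categorical responses are drawn from the current multinomial probabilities, which are in turn resampled from their conjugate Dirichlet posterior. This is a simplification for illustration only, not the paper's hierarchical model with wave, subject, and covariate effects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a categorical response with K = 3 levels; -1 marks a missing wave.
K = 3
y = np.array([0, 2, 1, -1, 0, 0, 2, -1, 1, 0])
missing = y < 0
alpha = np.ones(K)                                       # symmetric Dirichlet prior

y_imp = y.copy()
y_imp[missing] = rng.integers(0, K, size=missing.sum())  # initialise imputations

draws = []
for it in range(2000):
    # 1. Sample the multinomial probabilities from their Dirichlet posterior.
    theta = rng.dirichlet(alpha + np.bincount(y_imp, minlength=K))
    # 2. Impute the missing responses from the current probabilities.
    y_imp[missing] = rng.choice(K, size=missing.sum(), p=theta)
    if it >= 500:                                        # discard burn-in
        draws.append(theta)

print("posterior mean of theta:", np.mean(draws, axis=0))
```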
Abstract:
One approach to microbial genotyping is to make use of sets of single-nucleotide polymorphisms (SNPs) in combination with binary markers. Here we report the modification and automation of a SNP-plus-binary-marker-based approach to the genotyping of Staphylococcus aureus and its application to 391 S. aureus isolates from southeast Queensland, Australia. The SNPs used were arcC210, tpi243, arcC162, gmk318, pta294, tpi36, tpi241, and pta383. These provide a Simpson's index of diversity (D) of 0.95 with respect to the S. aureus multilocus sequence typing database and define 61 genotypes and the major clonal complexes. The binary markers used were pvl, cna, sdrE, pT181, and pUB110. Two novel real-time PCR formats for interrogating these markers were compared. One of these makes use of light upon extension (LUX) primers and biplexed reactions, while the other is a streamlined modification of kinetic PCR using SYBR green. The latter format proved to be more robust. In addition, automated methods for DNA template preparation, reaction setup, and data analysis were developed. A single SNP-based method for ST-93 (Queensland clone) identification was also devised. The genotyping revealed the numerical importance of the South West Pacific and Queensland community-acquired methicillin-resistant S. aureus (MRSA) clones and the clonal complex 239 Aus-1/Aus-2 hospital-associated MRSA. There was a strong association between the community-acquired clones and pvl.
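The reported D = 0.95 is an instance of Simpson's index of diversity as commonly applied to typing schemes (the Hunter-Gaston form): the probability that two isolates sampled without replacement belong to different genotypes. A small sketch with hypothetical genotype labels:

```python
from collections import Counter

def simpsons_diversity(genotypes):
    """Hunter-Gaston form of Simpson's index of diversity: the probability
    that two isolates drawn without replacement have different genotypes."""
    counts = Counter(genotypes).values()
    n = sum(counts)
    return 1 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Toy example: three genotypes among ten isolates.
print(simpsons_diversity(["ST93"] * 5 + ["ST30"] * 3 + ["ST239"] * 2))
```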
Abstract:
BACKGROUND: Intervention time series analysis (ITSA) is an important method for analysing the effect of sudden events on time series data. ITSA methods are quasi-experimental in nature and the validity of modelling with these methods depends upon assumptions about the timing of the intervention and the response of the process to it. METHOD: This paper describes how to apply ITSA to analyse the impact of unplanned events on time series when the timing of the event is not accurately known, and so the problems of ITSA methods are magnified by uncertainty in the point of onset of the unplanned intervention. RESULTS: The methods are illustrated using the example of the Australian Heroin Shortage of 2001, which provided an opportunity to study the health and social consequences of an abrupt change in heroin availability in an environment of widespread harm reduction measures. CONCLUSION: Application of these methods enables valuable insights about the consequences of unplanned and poorly identified interventions while minimising the risk of spurious results.
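When the onset time is assumed known, the core of ITSA is a segmented regression with level-change and slope-change terms; a minimal sketch on simulated data follows. The paper's contribution concerns the harder case where the onset point t0 is itself uncertain, which this sketch does not address.

```python
import numpy as np
import statsmodels.api as sm

# Simulated monthly series with a level drop at an assumed onset point t0.
rng = np.random.default_rng(1)
n, t0 = 120, 60
t = np.arange(n)
step = (t >= t0).astype(float)             # 1 after the event, 0 before
y = 50 + 0.1 * t - 8 * step + rng.normal(0, 2, n)

# Segmented regression: pre-existing trend, level change, post-event slope change.
X = sm.add_constant(np.column_stack([t, step, step * (t - t0)]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # [intercept, trend, level shift, slope change]
```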
Abstract:
Objective: Data collected from routine clinical care are usually sparse and unable to support the more complex pharmacokinetic (PK) models that may have been reported in previous rich-data studies. Informative priors may be a prerequisite for model development. The aim of this study was to estimate the population PK parameters of sirolimus using a fully Bayesian approach with informative priors. Methods: Informative priors, including the prior mean and the precision of the prior mean, were elicited from previously published studies using a meta-analytic technique. The precision of between-subject variability was determined by simulation from a Wishart distribution using MATLAB (version 6.5). Concentration-time data for sirolimus, collected retrospectively from kidney transplant patients, were analysed using WinBUGS (version 1.3). The candidate models were either one- or two-compartment with first-order absorption and first-order elimination. Model discrimination was based on computation of the posterior odds supporting each model. Results: A total of 315 concentration-time points were obtained from 25 patients. Most data were clustered at trough concentrations, with sampling times ranging from 1.6 to 77 hours post-dose. Using informative priors, either a one- or a two-compartment model could be used to describe the data. When a one-compartment model was applied, information was gained from the data about apparent clearance (CL/F = 18.5 L/h) and apparent volume of distribution (V/F = 1406 L), but no information was gained about the absorption rate constant (ka). When a two-compartment model was fitted, the data were informative about CL/F, apparent inter-compartmental clearance, and apparent volume of distribution of the peripheral compartment (13.2 L/h, 20.8 L/h, and 579 L, respectively). The posterior distributions of the central compartment volume of distribution and ka were unchanged from their priors. The posterior odds for the two-compartment model were 8.1, indicating that the data supported the two-compartment model. Conclusion: The use of informative priors supported the choice of a more complex and informative model that would not otherwise have been supported by the sparse data.
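For reference, the one-compartment candidate model with first-order absorption and elimination corresponds to a standard closed-form concentration curve. The sketch below plugs in the posterior means reported in the abstract; the dose and the ka value are illustrative only, since the sparse data carried no information about absorption.

```python
import numpy as np

def conc_1cpt_oral(t, dose, cl_f, v_f, ka):
    """One-compartment model, first-order absorption and elimination:
    C(t) = dose*ka / (V/F * (ka - ke)) * (exp(-ke*t) - exp(-ka*t)),
    with ke = (CL/F) / (V/F). Units: dose in mg, volumes in L -> C in mg/L."""
    ke = cl_f / v_f
    return dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# CL/F and V/F are the one-compartment posterior means from the abstract;
# dose and ka are hypothetical values chosen for illustration.
t = np.linspace(0.5, 72, 6)
print(conc_1cpt_oral(t, dose=2.0, cl_f=18.5, v_f=1406.0, ka=1.0))
```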
Abstract:
This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.
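As a minimal illustration of the workflow the article describes, a one-way ANOVA followed by a Tukey HSD post hoc test can be run in a few lines (hypothetical data; SciPy >= 1.8 for tukey_hsd):

```python
from scipy import stats

# Three illustrative treatment groups (e.g., visual acuity scores).
g1 = [1.2, 1.4, 1.1, 1.3, 1.5]
g2 = [1.6, 1.8, 1.7, 1.9, 1.6]
g3 = [1.3, 1.2, 1.4, 1.3, 1.1]

# One-way ANOVA: omnibus test of equality of the three group means.
f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post hoc pairwise comparisons if the omnibus test is significant.
print(stats.tukey_hsd(g1, g2, g3))
```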
Abstract:
This multi-modal investigation aimed to refine analytic tools including proton magnetic resonance spectroscopy (1H-MRS) and fatty acid gas chromatography-mass spectrometry (GC-MS) analysis, for use with adult and paediatric populations, to investigate potential biochemical underpinnings of cognition (Chapter 1). Essential fatty acids (EFAs) are vital for the normal development and function of neural cells. There is increasing evidence of behavioural impairments arising from dietary deprivation of EFAs and their long-chain fatty acid metabolites (Chapter 2). Paediatric liver disease was used as a deficiency model to examine the relationships between EFA status and cognitive outcomes. Age-appropriate Wechsler assessments measured Full-scale IQ (FSIQ) and Information Processing Speed (IPS) in clinical and healthy cohorts; GC-MS quantified surrogate markers of EFA status in erythrocyte membranes; and 1H-MRS quantified neurometabolite markers of neuronal viability and function in cortical tissue (Chapter 3). Post-transplant children with early-onset liver disease demonstrated specific deficits in IPS compared to age-matched acute liver failure transplant patients and sibling controls, suggesting that the time-course of the illness is a key factor (Chapter 4). No signs of EFA deficiency were observed in the clinical cohort, suggesting that EFA metabolism was not significantly impacted by liver disease. A strong, negative correlation was observed between omega-6 fatty acids and FSIQ, independent of disease diagnosis (Chapter 5). In a study of healthy adults, effect sizes for the relationship between 1H-MRS-detectable neurometabolites and cognition fell within the range of previous work, but were not statistically significant. Based on these findings, recommendations are made emphasising the need for hypothesis-driven enquiry and greater subtlety of data analysis (Chapter 6). Consistency of metabolite values between paediatric clinical cohorts and controls indicates normal neurodevelopment, but the lack of normative, age-matched data makes it difficult to assess the true strength of liver disease-associated metabolite changes (Chapter 7). Converging methods offer a challenging but promising and novel approach to exploring brain-behaviour relationships from micro- to macroscopic levels of analysis (Chapter 8).
Abstract:
Recent advances in technology have produced a significant increase in the availability of free sensor data over the Internet. With affordable weather monitoring stations now available to individual meteorology enthusiasts, a reservoir of real-time data such as temperature, rainfall and wind speed can now be obtained for most of the United States and Europe. Despite the abundance of available data, obtaining usable information about the weather in your local neighbourhood requires complex processing that poses several challenges. This paper discusses a collection of technologies and applications that harvest, refine and process this data, culminating in information that has been tailored toward the user. In this case we are particularly interested in allowing a user to make direct queries about the weather at any location, even when this is not directly instrumented, using interpolation methods. We also consider how the uncertainty that the interpolation introduces can then be communicated to the user of the system, using UncertML, a developing standard for uncertainty representation.
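The interpolation step can be sketched with simple inverse-distance weighting, using the weighted spread of neighbouring observations as a crude stand-in for the uncertainty that UncertML would encode. The station locations and readings below are hypothetical.

```python
import numpy as np

def idw(x_query, x_obs, y_obs, power=2.0):
    """Inverse-distance-weighted estimate at a query location, returning
    (estimate, rough uncertainty) where the uncertainty is the weighted
    standard deviation of the neighbouring observations."""
    d = np.linalg.norm(x_obs - x_query, axis=1)
    if np.any(d == 0):                    # query coincides with a station
        return float(y_obs[np.argmin(d)]), 0.0
    w = 1.0 / d ** power
    w /= w.sum()
    mean = float(w @ y_obs)
    var = float(w @ (y_obs - mean) ** 2)
    return mean, var ** 0.5

# Three hypothetical stations (x, y in km) reporting temperature (deg C).
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
temps = np.array([14.2, 15.1, 13.6])
print(idw(np.array([3.0, 4.0]), stations, temps))
```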
Abstract:
Amongst all the objectives in the study of time series, uncovering the dynamic law of their generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden space models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference or parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach to time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models and we demonstrate its application to estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Probabilistic inference is unfortunately computationally intractable, and we show how to make use of variational techniques to approximate the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models to segmenting a time series into regimes and providing predictive distributions.
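The hidden state inference reviewed in the thesis can be illustrated with the forward (filtering) recursion for a discrete hidden Markov model; this is a minimal sketch of the building block, not the dynamical local models themselves.

```python
import numpy as np

def forward_filter(pi, A, B, obs):
    """Forward algorithm: p(state_t | obs_1..t) for a discrete HMM.
    pi: initial state probs (K,), A: transition matrix (K, K),
    B: emission probs (K, M), obs: sequence of observation indices."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    filtered = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # predict, then weight by likelihood
        alpha /= alpha.sum()              # normalise to avoid underflow
        filtered.append(alpha)
    return np.array(filtered)

# Two hidden regimes, three observation symbols.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
print(forward_filter(pi, A, B, [0, 0, 2, 2, 1]))
```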
Abstract:
This thesis describes the development of an inexpensive and efficient clustering technique for multivariate data analysis. The technique starts from a multivariate data matrix and ends with a graphical representation of the data and a pattern recognition discriminant function. It also yields a distances frequency distribution that may be useful in detecting clustering in the data, or for estimating parameters useful in discriminating between the different populations in the data. The technique can also be used for feature selection. It is essentially aimed at discovering data structure by revealing the component parts of the data. The thesis offers three distinct contributions to cluster analysis and pattern recognition techniques. The first contribution is the introduction of a transformation function into the technique of nonlinear mapping. The second is the use of the distances frequency distribution instead of the distances time-sequence in nonlinear mapping. The third is the formulation of a new generalised and normalised error function, together with its optimal step size formula for gradient-method minimisation. The thesis consists of five chapters. The first chapter is the introduction. The second describes multidimensional scaling as the origin of the nonlinear mapping technique. The third describes the first development step in the nonlinear mapping technique: the introduction of the "transformation function". The fourth describes the second development step: the use of the distances frequency distribution instead of the distances time-sequence; the chapter also includes the formulation of the new generalised and normalised error function. Finally, the fifth chapter, the conclusion, evaluates all the developments and proposes a new program for cluster analysis and pattern recognition that integrates all the new features.
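Multidimensional scaling, which the thesis identifies as the origin of nonlinear mapping, is readily available off the shelf; the sketch below uses scikit-learn's metric MDS on synthetic data as a stand-in for the distance-preserving projection idea. It does not implement the thesis's transformation function or generalised error function.

```python
import numpy as np
from sklearn.manifold import MDS

# Two synthetic 5-D clusters standing in for a multivariate data matrix.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 5)), rng.normal(3, 0.3, (20, 5))])

# Metric MDS (SMACOF) maps points to 2-D while preserving inter-point
# distances, the same objective family as Sammon-style nonlinear mapping.
Y = MDS(n_components=2, random_state=0).fit_transform(X)
print(Y[:3])   # 2-D coordinates suitable for graphical representation
```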
Abstract:
The use of quantitative methods has become increasingly important in the study of neuropathology and especially in neurodegenerative disease. Disorders such as Alzheimer's disease (AD) and the frontotemporal dementias (FTD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This chapter reviews the advantages and limitations of the different methods of quantifying pathological lesions in histological sections including estimates of density, frequency, coverage, and the use of semi-quantitative scores. The sampling strategies by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are described. In addition, data analysis methods commonly used to analyse quantitative data in neuropathology, including analysis of variance (ANOVA), polynomial curve fitting, multiple regression, classification trees, and principal components analysis (PCA), are discussed. These methods are illustrated with reference to quantitative studies of a variety of neurodegenerative disorders.
Abstract:
INTRODUCTION: Bipolar disorder requires long-term treatment but non-adherence is a common problem. Antipsychotic long-acting injections (LAIs) have been suggested to improve adherence, but none is licensed in the UK for bipolar disorder. However, the use of second-generation antipsychotic (SGA) LAIs in bipolar disorder is not uncommon, although systematic reviews in this area are lacking. This study aims to systematically review the safety and efficacy of SGA LAIs in the maintenance treatment of bipolar disorder. METHODS AND ANALYSIS: The protocol is based on Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) and will include only randomised controlled trials comparing SGA LAIs in bipolar disorder. PubMed, EMBASE, CINAHL, Cochrane Library (CENTRAL), PsycINFO, LILACS and http://www.clinicaltrials.gov will be searched, with no language restriction, from 2000 to January 2016, as the first SGA LAIs came to market after 2000. Manufacturers of SGA LAIs will also be contacted. The primary efficacy outcome is relapse rate, delayed time to relapse, or reduction in hospitalisation; the primary safety outcomes are drop-out rates, all-cause discontinuation and discontinuation due to adverse events. Qualitative reporting of evidence will be based on the 21 items listed in the Standards for Reporting Qualitative Research (SRQR), focusing on study quality (assessed using the Jadad score, allocation concealment and data analysis), risk of bias and effect size. Publication bias will be assessed using funnel plots. If sufficient data are available, meta-analysis will be performed with relative risk as the primary effect size, presented with its 95% CI. Sensitivity analyses, conditional on the number of studies and sample size, will be carried out for manic versus depressive symptoms and monotherapy versus adjunctive therapy.
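The planned primary effect size, a relative risk with a 95% CI, is computed from event counts as sketched below; the relapse counts are hypothetical.

```python
import math

def relative_risk(events_tx, n_tx, events_ctl, n_ctl):
    """Relative risk with a 95% CI from the standard error of log(RR)."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctl - 1/n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical relapse counts: 12/100 on an SGA LAI vs 24/100 on comparator.
print(relative_risk(12, 100, 24, 100))
```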
Abstract:
If humans monitor streams of rapidly presented (approximately 100-ms intervals) visual stimuli, which are typically specific single letters of the alphabet, for two targets (T1 and T2), they often miss T2 if it follows T1 within an interval of 200-500 ms. If T2 follows T1 directly (within 100 ms; described as occurring at 'Lag 1'), however, performance is often excellent: the so-called 'Lag-1 sparing' phenomenon. Lag-1 sparing might result from the integration of the two targets into the same 'event representation', which fits with the observation that sparing is often accompanied by a loss of T1-T2 order information. Alternatively, this might point to competition between the two targets (implying a trade-off between performance on T1 and T2) and Lag-1 sparing might solely emerge from conditional data analysis (i.e. T2 performance given T1 correct). We investigated the neural correlates of Lag-1 sparing by carrying out magnetoencephalography (MEG) recordings during an attentional blink (AB) task, by presenting two targets with a temporal lag of either 1 or 2 and, in the case of Lag 2, with a nontarget or a blank intervening between T1 and T2. In contrast to Lag 2, where two distinct neural responses were observed, at Lag 1 the two targets produced one common neural response in the left temporo-parieto-frontal (TPF) area but not in the right TPF or prefrontal areas. We discuss the implications of this result with respect to competition and integration hypotheses, and with respect to the different functional roles of the cortical areas considered. We suggest that more than one target can be identified in parallel in left TPF, at least in the absence of intervening nontarget information (i.e. masks), yet identified targets are processed and consolidated as two separate events by other cortical areas (right TPF and PFC, respectively).
Abstract:
The work is supported by RFBR, grant 04-01-00858-a.
Abstract:
Diabetes patients may suffer from reduced quality of life, long-term treatment, and chronic complications. Reducing the hospitalization rate is a crucial problem for health care centers. This study combines the bagging method, with decision trees as base classifiers, and cost-sensitive analysis to classify diabetes patients. Data from real patients collected at a regional hospital in Thailand were analyzed. Relevant factors were selected and used to construct base-classifier decision tree models to distinguish diabetes from non-diabetes patients. The bagging method was then applied to improve accuracy. Finally, asymmetric misclassification cost matrices were used to provide additional alternative models for diabetes data analysis.
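The pipeline described (decision-tree base classifiers, bagging, asymmetric misclassification costs) can be sketched with scikit-learn (>= 1.2 for the estimator parameter) on synthetic data; the 5:1 cost ratio is illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for the hospital data: an imbalanced two-class problem.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Cost-sensitive base learner: missing a diabetes case (class 1) is weighted
# five times more heavily than a false alarm.
base = DecisionTreeClassifier(class_weight={0: 1, 1: 5})
model = BaggingClassifier(estimator=base, n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```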