10 results for Longitudinal Data Analysis and Time Series

in University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response; subsequent terms are then introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which, given appropriate prior distributions, allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
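
The imputation-within-Gibbs idea can be made concrete with a stripped-down sketch. The toy below uses a plain categorical response with a conjugate Dirichlet prior; the hierarchical wave and subject terms, covariate effects and missing explanatory variables of the actual model are omitted, and all names and values are illustrative.

```python
# Minimal data-augmentation sketch (not the paper's model): categorical
# responses with a conjugate Dirichlet prior; missing values are imputed
# at every Gibbs iteration given the current parameter draw.
import numpy as np

rng = np.random.default_rng(0)
K = 3                                   # toy employment-status categories
y = rng.integers(0, K, size=200).astype(float)
y[rng.random(200) < 0.15] = np.nan      # ~15% missing responses
missing = np.isnan(y)
alpha = np.ones(K)                      # symmetric Dirichlet prior

pi = np.full(K, 1.0 / K)
draws = []
for it in range(2000):
    # 1) impute missing responses from their full conditional, Categorical(pi)
    y[missing] = rng.choice(K, size=missing.sum(), p=pi)
    # 2) update category probabilities from the completed data (Dirichlet posterior)
    counts = np.bincount(y.astype(int), minlength=K)
    pi = rng.dirichlet(alpha + counts)
    if it >= 500:                       # discard burn-in
        draws.append(pi)

print(np.mean(draws, axis=0))           # posterior mean of category probabilities
```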

Relevance:

100.00%

Publisher:

Abstract:

This study explores whether the introduction of selectively trained radiographers reporting Accident and Emergency (A&E) X-ray examinations of the appendicular skeleton affected the availability of reports for A&E and General Practitioner (GP) examinations at a typical district general hospital. This was achieved by analysing monthly data on A&E and GP examinations for 1993–1997 using structural time-series models. Parameters to capture stochastic seasonal effects and stochastic time trends were included in the models. The main outcome measures were changes in the number, proportion and timeliness of A&E and GP examinations reported. Radiographer reporting of X-ray examinations requested by A&E was associated with a 12% (p = 0.050) increase in the number of A&E examinations reported and a 37% (p
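
As a rough sketch of this kind of structural time-series model, the snippet below fits an unobserved-components model with a stochastic trend, a stochastic monthly seasonal and a step intervention regressor in statsmodels; the series, the intervention date and the effect size are simulated assumptions, not the study's data.

```python
# Hedged sketch of a structural (unobserved-components) time-series model
# with a step intervention regressor; all values below are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
idx = pd.date_range("1993-01-01", periods=60, freq="MS")     # monthly data, 1993-1997
month = idx.month.to_numpy()
baseline = 400 + 30 * np.sin(2 * np.pi * month / 12)          # seasonal pattern
step = (idx >= "1996-01-01").astype(float)                    # assumed start of radiographer reporting
reports = pd.Series(baseline * (1 + 0.12 * step) + rng.normal(0, 15, 60), index=idx)

model = sm.tsa.UnobservedComponents(
    reports,
    level="local linear trend",   # stochastic level and slope
    seasonal=12,                  # stochastic monthly seasonal (default)
    exog=step,                    # step intervention variable
)
res = model.fit(disp=False)
print(res.summary())              # the exog coefficient estimates the intervention effect
```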

Relevance:

100.00%

Publisher:

Abstract:

After ingestion of a standardized dose of ethanol, alcohol concentrations were assessed over 3.5 hours from blood (six readings) and breath (10 readings) in a sample of 412 MZ and DZ twins who took part in an Alcohol Challenge Twin Study (ACTS). Nearly all participants were subsequently genotyped on two polymorphic SNPs in the ADH1B and ADH1C loci known to affect in vitro ADH activity. In the DZ pairs, 14 microsatellite markers covering a 20.5 cM region on chromosome 4 that includes the ADH gene family were assessed. Variation in the timed series of autocorrelated blood and breath alcohol readings was studied using a bivariate simplex design. The contribution of a quantitative trait locus (QTL), or QTLs, linked to the ADH region was estimated via a mixture of likelihoods weighted by identity-by-descent probabilities. The effects of allelic substitution at the ADH1B and ADH1C loci were estimated in the means part of the model simultaneously with the effects of sex and age. There was a major contribution to variance in alcohol metabolism due to a QTL, which accounted for about 64% of the additive genetic covariation common to both blood and breath alcohol readings at the first time point. No effects of the ADH1B*47His or ADH1C*349Ile alleles on in vivo metabolism were observed, although these have been shown to have major effects in vitro. This implies that there is a major determinant of variation for in vivo alcohol metabolism in the ADH region that is not accounted for by these polymorphisms. Earlier analyses of these data suggested that alcohol metabolism is related to drinking behavior, implying that this QTL may be protective against alcohol dependence.
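
The core of the linkage step, a likelihood for each DZ pair mixed over identity-by-descent (IBD) states, can be sketched as follows for a single trait; the variance components, trait values and IBD probabilities are hypothetical placeholders rather than estimates from the ACTS data.

```python
# Schematic of a "mixture of likelihoods weighted by IBD probabilities" for
# one DZ pair and one trait; all parameter values are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def pair_loglik(y_pair, ibd_probs, mu, s2_q, s2_a, s2_e):
    """y_pair: the two siblings' trait values; ibd_probs: P(IBD = 0, 1, 2) at the locus."""
    total = 0.0
    for k, p_k in enumerate(ibd_probs):
        # QTL covariance share is k/2; residual additive background shared at 1/2 for DZ pairs
        cov = (k / 2) * s2_q + 0.5 * s2_a
        var = s2_q + s2_a + s2_e
        sigma = np.array([[var, cov], [cov, var]])
        total += p_k * multivariate_normal.pdf(y_pair, mean=[mu, mu], cov=sigma)
    return np.log(total)

print(pair_loglik(np.array([0.06, 0.05]),      # e.g. blood alcohol readings (hypothetical)
                  ibd_probs=[0.2, 0.5, 0.3],   # from marker data (hypothetical)
                  mu=0.055, s2_q=1e-4, s2_a=5e-5, s2_e=5e-5))
```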

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Intervention time series analysis (ITSA) is an important method for analysing the effect of sudden events on time series data. ITSA methods are quasi-experimental in nature and the validity of modelling with these methods depends upon assumptions about the timing of the intervention and the response of the process to it. METHOD: This paper describes how to apply ITSA to analyse the impact of unplanned events on time series when the timing of the event is not accurately known, so that the problems of ITSA methods are magnified by uncertainty about the point of onset of the unplanned intervention. RESULTS: The methods are illustrated using the example of the Australian Heroin Shortage of 2001, which provided an opportunity to study the health and social consequences of an abrupt change in heroin availability in an environment of widespread harm reduction measures. CONCLUSION: Application of these methods enables valuable insights into the consequences of unplanned and poorly identified interventions while minimising the risk of spurious results.
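
One simple way to handle an uncertain onset, sketched below, is to profile a step-intervention model over a window of candidate onset dates and compare the fits by information criterion; this illustrates the general idea only, with a simulated series standing in for the heroin-shortage data.

```python
# Profile a step-intervention ARIMA over candidate onset dates and keep the
# best fit by AIC; series and dates are simulated stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
idx = pd.date_range("1999-01-01", periods=48, freq="MS")
y = pd.Series(100 + rng.normal(0, 5, 48), index=idx)
y[idx >= "2001-01-01"] -= 30            # simulated abrupt drop, onset treated as unknown

fits = {}
for onset in pd.date_range("2000-06-01", "2001-06-01", freq="MS"):
    step = (idx >= onset).astype(float)             # candidate intervention indicator
    res = sm.tsa.SARIMAX(y, exog=step, order=(1, 0, 0), trend="c").fit(disp=False)
    fits[onset] = res.aic

best = min(fits, key=fits.get)
print("best-fitting onset:", best.date(), "AIC:", round(fits[best], 1))
```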

Relevance:

100.00%

Publisher:

Abstract:

Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm: a test of purchasing power parity and a three-variable system involving the stock market.
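
A hedged illustration of the ZNZ idea: fit an unrestricted VECM and inspect which loading and cointegrating entries are near zero as candidates for restriction. This crude thresholding is not the paper's selection algorithm (which treats I(2) series explicitly); the simulated three-variable system and the 0.05 cut-off are arbitrary.

```python
# Fit an unrestricted VECM, then flag near-zero alpha entries as candidates
# for a zero-non-zero (ZNZ) pattern; purely illustrative, simulated data.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=300))          # shared stochastic trend
data = np.column_stack([
    common + rng.normal(scale=0.5, size=300),
    common + rng.normal(scale=0.5, size=300),
    np.cumsum(rng.normal(size=300)),              # unrelated random walk
])

res = VECM(data, k_ar_diff=1, coint_rank=1).fit()
print("alpha (loadings):\n", np.round(res.alpha, 3))
print("beta (cointegrating vector):\n", np.round(res.beta, 3))
# crude ZNZ-style pattern: flag near-zero loadings as candidates for restriction
print("candidate zero pattern in alpha:", (np.abs(res.alpha) < 0.05).astype(int).ravel())
```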

Relevance:

100.00%

Publisher:

Abstract:

In this paper we develop an evolutionary kernel-based time-update algorithm to recursively estimate subset discrete lag models (including full-order models) with a forgetting factor and a constant term, using the exact-windowed case. The algorithm applies to causality detection when the true relationship occurs with a continuous or a random delay. We then demonstrate the use of the proposed evolutionary algorithm to study monthly mutual fund data from the 'CRSP Survivor-bias free US Mutual Fund Database'. The results show that the NAV (net asset value) is an influential player on the international stage of global bond and stock markets.
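
The recursive "time update" at the heart of such estimators can be illustrated with textbook recursive least squares with a forgetting factor and a constant term; the sketch below shows only that generic update, not the paper's evolutionary subset-selection scheme, and the simulated regressors are placeholders.

```python
# Generic recursive least squares with a forgetting factor and a constant
# term; a textbook update sketched to make the "time update" idea concrete.
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=1000.0):
    """X: (T, p) regressors (e.g. lagged predictors); y: (T,) response."""
    T, p = X.shape
    X = np.column_stack([np.ones(T), X])          # constant term
    theta = np.zeros(p + 1)
    P = delta * np.eye(p + 1)                     # large initial covariance
    for t in range(T):
        x = X[t]
        err = y[t] - x @ theta                    # one-step prediction error
        gain = P @ x / (lam + x @ P @ x)          # Kalman-style gain
        theta = theta + gain * err                # coefficient time update
        P = (P - np.outer(gain, x @ P)) / lam     # discount old information
    return theta

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 2))
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)
print(rls_forgetting(X, y))                       # approximately [1.0, 0.5, -0.3]
```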

Relevance:

100.00%

Publisher:

Abstract:

We demonstrate that the process of generating smooth transitions can be viewed as a natural result of the filtering operations implied in the generation of discrete-time series observations from the sampling of data from an underlying continuous-time process that has undergone a process of structural change. In order to focus discussion, we utilize the problem of estimating the location of abrupt shifts in some simple time series models. This approach will permit us to address salient issues relating to distortions induced by the inherent aggregation associated with discrete-time sampling of continuous-time processes experiencing structural change. We also address the issue of how time-irreversible structures may be generated within the smooth transition processes.
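
A few lines of simulation make the aggregation point concrete: an underlying high-frequency process with an abrupt mean shift, once averaged within discrete sampling intervals, yields an observation lying between the two regimes, which a discrete-time model reads as a smooth transition. The grid sizes and break location below are arbitrary.

```python
# Temporal aggregation of an abrupt break: averaging within sampling intervals
# produces an intermediate observation, i.e. an apparent smooth transition.
import numpy as np

rng = np.random.default_rng(5)
n_fine = 10_000                      # "continuous-time" grid points
shift_at = 5_437                     # abrupt break inside a sampling interval
x = rng.normal(0.0, 0.2, n_fine)
x[shift_at:] += 2.0                  # structural change in the mean

m = 500                              # fine points per discrete observation
observed = x.reshape(-1, m).mean(axis=1)   # discrete-time sampling by aggregation

# The sampled series jumps from ~0 to ~2, but the interval containing the
# break takes an intermediate value (~0.25 here), smoothing the transition.
print(np.round(observed, 2))
```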

Relevance:

100.00%

Publisher:

Abstract:

The consensus from published studies is that plasma lipids are each influenced by genetic factors, and that this contributes to genetic variation in risk of cardiovascular disease. Heritability estimates for lipids and lipoproteins are in the range 0.48 to 0.87 when measured once per study participant. However, this ignores the confounding effects of biological variation, measurement error and ageing, and a truer assessment of genetic effects on cardiovascular risk may be obtained from analysis of longitudinal twin or family data. We have analyzed information on plasma high-density lipoprotein (HDL) and low-density lipoprotein (LDL) cholesterol, and triglycerides, from 415 adult twins who provided blood on two to five occasions over 10 to 17 years. Multivariate modeling of genetic and environmental contributions to variation within and across occasions was used to assess the extent to which genetic and environmental factors have long-term effects on plasma lipids. Results indicated that more than one genetic factor influenced HDL and LDL components of cholesterol, and triglycerides, over time in all studies. Nonshared environmental factors did not have significant long-term effects except for HDL. We conclude that when heritability of lipid risk factors is estimated on only one occasion, the existence of biological variation and measurement errors leads to underestimation of the importance of genetic factors as a cause of variation in long-term risk within the population. In addition, our data suggest that different genes may affect the risk profile at different ages.
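
The attenuation argument can be illustrated with a toy twin simulation using Falconer's formula as a quick heritability estimate (the study itself fitted multivariate genetic models); the variance shares, sample size and number of occasions below are invented for illustration.

```python
# Toy illustration: occasion-specific variation makes single-occasion
# heritability lower than the heritability of the stable, long-term level.
import numpy as np

rng = np.random.default_rng(6)
n, occasions = 5000, 4
h2, e_stable, e_occ = 0.7, 0.1, 0.2              # variance shares of one measurement

def twin_pair(genetic_r):
    """Phenotypes for n twin pairs, each measured on several occasions."""
    g_shared = rng.normal(size=n)
    phenos = []
    for _ in range(2):
        g = np.sqrt(genetic_r) * g_shared + np.sqrt(1 - genetic_r) * rng.normal(size=n)
        stable_env = rng.normal(size=n)                    # non-shared, stable over time
        occ_noise = rng.normal(size=(n, occasions))        # biological variation / error
        phenos.append(np.sqrt(h2) * g[:, None]
                      + np.sqrt(e_stable) * stable_env[:, None]
                      + np.sqrt(e_occ) * occ_noise)
    return phenos

mz1, mz2 = twin_pair(1.0)    # MZ pairs share all additive genetic effects
dz1, dz2 = twin_pair(0.5)    # DZ pairs share half on average

def falconer_h2(f):
    r_mz = np.corrcoef(f(mz1), f(mz2))[0, 1]
    r_dz = np.corrcoef(f(dz1), f(dz2))[0, 1]
    return 2 * (r_mz - r_dz)

print(round(falconer_h2(lambda x: x[:, 0]), 2))         # one occasion: ~0.70
print(round(falconer_h2(lambda x: x.mean(axis=1)), 2))  # averaged occasions: ~0.82, closer to
                                                        # the heritability of the stable level (0.875)
```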