9 results for Log-linear Approach
in Collection Of Biostatistics Research Archive
Abstract:
This paper proposes Poisson log-linear multilevel models to investigate population variability in sleep state transition rates. We specifically propose a Bayesian Poisson regression model that is more flexible, more scalable to larger studies, and more easily fit than previous approaches in the literature. We further use hierarchical random effects to account for pairings of individuals and repeated measures within those individuals, since comparing diseased to non-diseased subjects while minimizing bias is of epidemiologic importance. We estimate essentially non-parametric piecewise-constant hazards and smooth them, and we allow for time-varying covariates and comparisons across segments of the night. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic equivalence between the likelihood for Poisson regression with a log(time) offset and that for survival regression assuming piecewise-constant hazards. This relationship allows us to synthesize two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed.
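The classical equivalence invoked above can be stated compactly; the following is a standard rendering in generic notation, not the paper's own derivation. For a subject with covariates x, exposure time t_j, and event indicator d_j in interval j, a piecewise-constant hazard $\lambda_j = \exp(\alpha_j + x^\top\beta)$ yields the survival likelihood contribution
\[
\prod_j \lambda_j^{\,d_j} \exp(-\lambda_j t_j),
\]
which is proportional, up to a factor free of the parameters, to the likelihood of independent counts $d_j \sim \mathrm{Poisson}(\mu_j)$ with
\[
\log \mu_j = \log t_j + \alpha_j + x^\top \beta,
\]
that is, a Poisson log-linear model with a log(time) offset.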
Abstract:
In environmental epidemiology, exposure X and health outcome Y vary in space and time. We present a method to diagnose the possible influence of unmeasured confounders U on the estimated effect of X on Y and propose several approaches to robust estimation. The idea is to use space and time as proxy measures for the unmeasured factors U. We start with the time series case, where X and Y are continuous variables at equally spaced times, and assume a linear model. We define matching estimators b(u) that correspond to pairs of observations separated by a specific lag u. Controlling for a smooth function of time, S(t), using a kernel estimator is roughly equivalent to estimating the association with a linear combination of the b(u), with weights that involve two components: the assumptions about the smoothness of S(t) and the normalized variogram of the X process. When an unmeasured confounder U exists, but the model otherwise correctly controls for measured confounders, excess variation in the b(u) is evidence of confounding by U. We use the plot of b(u) versus lag u, the lagged-estimator plot (LEP), to diagnose the influence of U on the effect of X on Y. We use appropriate linear combinations of the b(u), or extrapolate to b(0), to obtain novel estimators that are more robust to the influence of smooth U. The methods are extended to time series log-linear models and to spatial analyses. The LEP gives a direct view of the magnitude of the estimators at each lag u and provides evidence when the model does not adequately describe the data.
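A minimal sketch of the LEP idea, assuming (as one plausible reading) that b(u) is the slope of lag-u differences of Y on lag-u differences of X; the function name, the simulation, and the exact form of b(u) are illustrative assumptions, not the paper's code:

```python
import numpy as np
import matplotlib.pyplot as plt

def lagged_estimators(x, y, max_lag=30):
    """Matching estimators b(u): slope of lag-u differences of y on
    lag-u differences of x (hypothetical rendering of the paper's b(u))."""
    lags = np.arange(1, max_lag + 1)
    b = np.empty(len(lags))
    for i, u in enumerate(lags):
        dx = x[u:] - x[:-u]
        dy = y[u:] - y[:-u]
        b[i] = np.sum(dx * dy) / np.sum(dx * dx)
    return lags, b

# Simulated example: a smooth unmeasured confounder makes b(u) drift
# with lag u instead of scattering around the true effect.
rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)
u_conf = np.sin(2 * np.pi * t / 365)            # smooth confounder U
x = 0.5 * u_conf + rng.normal(size=n)
y = 1.0 * x + 2.0 * u_conf + rng.normal(size=n)

lags, b = lagged_estimators(x, y)
plt.plot(lags, b, "o-")
plt.axhline(1.0, linestyle="--", label="true effect")
plt.xlabel("lag u"); plt.ylabel("b(u)"); plt.legend(); plt.show()
```

In this simulation, small-lag differencing nearly removes the smooth confounder, so b(u) at small u sits near the true coefficient and drifts away as u grows, which is exactly the excess variation the LEP is meant to reveal.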
Abstract:
Permutation tests are useful for drawing inferences from imaging data because of their flexibility and ability to capture features of the brain that are difficult to capture parametrically. However, most implementations of permutation tests ignore important confounding covariates. To employ covariate control in a nonparametric setting, we have developed a Markov chain Monte Carlo (MCMC) algorithm for conditional permutation testing using propensity scores. We present the first use of this methodology for imaging data. Our MCMC algorithm is an extension of algorithms developed to approximate exact conditional probabilities in contingency tables and in logit and log-linear models. An application of our nonparametric method to remove potential bias due to observed covariates is presented.
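The exact conditional MCMC algorithm is too involved for a short sketch, but the idea it generalizes can be illustrated with a simpler stand-in: permute group labels only within propensity-score strata, so the permutation distribution conditions on the observed covariates. Everything below (names, five strata, the mean-difference statistic) is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stratified_permutation_test(y, z, X, n_strata=5, n_perm=2000, seed=0):
    """Permutation test of the effect of binary group z on outcome y,
    conditioning on covariates X by permuting z only within
    propensity-score strata (simplified stand-in for conditional
    permutation via MCMC)."""
    rng = np.random.default_rng(seed)
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    cuts = np.quantile(ps, np.linspace(0, 1, n_strata + 1)[1:-1])
    strata = np.digitize(ps, cuts)

    def stat(zz):
        return y[zz == 1].mean() - y[zz == 0].mean()

    observed = stat(z)
    null = np.empty(n_perm)
    zp = z.copy()
    for b in range(n_perm):
        for s in np.unique(strata):
            idx = np.flatnonzero(strata == s)
            zp[idx] = rng.permutation(z[idx])
        null[b] = stat(zp)
    p_value = (np.abs(null) >= abs(observed)).mean()
    return observed, p_value
```

Within-stratum permutation preserves the covariate balance encoded in the propensity score, which is the same conditioning the MCMC algorithm achieves exactly.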
Abstract:
Prospective cohort studies have provided evidence on longer-term mortality risks of fine particulate matter (PM2.5), but due to their complexity and costs, only a few have been conducted. By linking monitoring data to the U.S. Medicare system by county of residence, we developed a retrospective cohort study, the Medicare Air Pollution Cohort Study (MCAPS), comprising over 20 million enrollees in the 250 largest counties during 2000-2002. We estimated log-linear regression models having as outcome the age-specific mortality rate for each county and, as the main predictor, the average PM2.5 level for 2000. Area-level covariates were used to adjust for socio-economic status and smoking. We reported results under several degrees of adjustment for spatial confounding and with stratification into eastern, central, and western counties. We estimated that a 10 µg/m3 increase in PM2.5 is associated with a 7.6% increase in mortality (95% CI: 4.4 to 10.8%). We found a stronger association in the eastern counties than nationally, with no evidence of an association in western counties. When adjusted for spatial confounding, the estimated log-relative risks dropped by 50%. We demonstrated the feasibility of using Medicare data to establish cohorts for follow-up for effects of air pollution.

Particulate matter (PM) air pollution is a global public health problem (1). In developing countries, levels of airborne particles still reach concentrations at which serious health consequences are well-documented; in developed countries, recent epidemiologic evidence shows continued adverse effects, even though particle levels have declined in the last two decades (2-6). Increased mortality associated with higher levels of PM air pollution has been of particular concern, creating an imperative for stronger protective regulations (7). Evidence on PM and health comes from studies of acute and chronic adverse effects (6). The London Fog of 1952 provides dramatic evidence of the unacceptable short-term risk of extremely high levels of PM air pollution (8-10); multi-site time-series studies of daily mortality show that far lower levels of particles are still associated with short-term risk (5,11-13). Cohort studies provide complementary evidence on the longer-term risks of PM air pollution, indicating the extent to which exposure reduces life expectancy. The design of these studies involves follow-up of cohorts for mortality over periods of years to decades and an assessment of mortality risk in association with estimated long-term exposure to air pollution (2-4,14-17). Because of the complexity and costs of such studies, only a small number have been conducted. The most rigorously executed, including the Harvard Six Cities Study and the American Cancer Society's (ACS) Cancer Prevention Study II, have provided generally consistent evidence for an association of long-term exposure to particulate matter air pollution with increased all-cause and cardio-respiratory mortality (2,4,14,15). Results from these studies have been used in risk assessments conducted for setting the U.S. National Ambient Air Quality Standard (NAAQS) for PM and for estimating the global burden of disease attributable to air pollution (18,19). Additional prospective cohort studies are necessary, however, to confirm associations between long-term exposure to PM and mortality, to broaden the populations studied, and to refine estimates by regions across which particle composition varies. Toward this end, we have used data from the U.S. Medicare system, which covers nearly all persons 65 years of age and older in the United States. We linked Medicare mortality data to PM2.5 (particulate matter less than 2.5 µm in aerodynamic diameter) air pollution monitoring data to create a new retrospective cohort study, the Medicare Air Pollution Cohort Study (MCAPS), consisting of 20 million persons from 250 counties and representing about 50% of the U.S. elderly population living in urban settings. In this paper, we report on the relationship between longer-term exposure to PM2.5 and mortality risk over the period 2000 to 2002 in the MCAPS.
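A minimal sketch of the kind of log-linear (Poisson) mortality-rate model described, fit with statsmodels on simulated county-level data; the variable names, the person-years offset, and the collapse to a single age stratum are simplifying assumptions, not the study's actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated county-level data: average PM2.5 for 2000, area-level
# covariates for socio-economic status and smoking, person-years at
# risk, and death counts (one age stratum for simplicity).
rng = np.random.default_rng(1)
n = 250
df = pd.DataFrame({
    "pm25": rng.uniform(8, 20, n),               # µg/m3
    "ses_index": rng.normal(size=n),
    "smoking_rate": rng.uniform(0.1, 0.3, n),
    "person_years": rng.integers(20_000, 200_000, n),
})
# Simulate deaths so a 10 µg/m3 increase raises mortality by ~7.6%.
rate = 0.05 * np.exp(np.log(1.076) / 10 * df["pm25"] - 0.02 * df["ses_index"])
df["deaths"] = rng.poisson(rate * df["person_years"])

# Poisson log-linear model for mortality rates: death counts with a
# log(person-years) offset and PM2.5 as the main predictor.
fit = smf.glm(
    "deaths ~ pm25 + ses_index + smoking_rate",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()
pct = 100 * (np.exp(10 * fit.params["pm25"]) - 1)
print(fit.summary())
print(f"Estimated % increase in mortality per 10 µg/m3 PM2.5: {pct:.1f}")
```

The reported spatial-confounding adjustments would add smooth functions of county location to the linear predictor; the sketch omits that step.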
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g., linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
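A numerical sketch of the rotation step, in my notation (in practice the estimated mean and marginal covariance would come from the fitted model):

```python
import numpy as np
from scipy.linalg import cholesky

def rotated_residuals(y, mu_hat, V_hat):
    """Multiply marginal residuals by the (transposed) Cholesky factor
    of the inverse of the estimated marginal covariance matrix; if the
    model is correct, the result is approximately iid standard normal."""
    L = cholesky(np.linalg.inv(V_hat), lower=True)   # V^{-1} = L L'
    return L.T @ (y - mu_hat)                        # Cov = L' V L = I

def ecdf(r):
    """Empirical CDF of the rotated residuals."""
    xs = np.sort(r)
    return xs, np.arange(1, len(r) + 1) / len(r)

# Demo: a correctly specified mean-zero AR(1) series.
rng = np.random.default_rng(0)
n, rho = 200, 0.7
idx = np.arange(n)
V = rho ** np.abs(np.subtract.outer(idx, idx))       # AR(1) covariance
y = np.linalg.cholesky(V) @ rng.normal(size=n)
r = rotated_residuals(y, np.zeros(n), V)
xs, Fn = ecdf(r)   # compare Fn against the standard normal CDF
```

Departures of the ECDF of the rotated residuals from the standard normal CDF flag distributional misfit, which is what the paper's global and directed tests formalize.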
Abstract:
Marginal generalized linear models can be used for clustered and longitudinal data by fitting a model as if the data were independent and using an empirical estimator of the parameter standard errors. We extend this approach to data where the number of observations correlated with a given observation grows with the sample size. We show that the parameter estimates are consistent and asymptotically normal, with a slower convergence rate than for independent data, and that an information sandwich variance estimator is consistent. We present two problems that motivated this work: the modelling of patterns of HIV genetic variation and the behavior of clustered-data estimators when clusters are large.
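For reference, the information sandwich variance estimator mentioned here takes the familiar form (generic notation, not the paper's):
\[
\widehat{\operatorname{Var}}(\hat\beta) \;=\; A_n^{-1} B_n A_n^{-1},
\qquad
A_n = -\sum_i \frac{\partial U_i(\hat\beta)}{\partial \beta^{\top}},
\qquad
B_n = \sum_i U_i(\hat\beta)\, U_i(\hat\beta)^{\top},
\]
where $U_i$ is the score contribution of unit $i$ under the independence working model; the paper's result is that this estimator remains consistent even when the number of observations correlated with any given one grows with the sample size.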
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g., linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
Abstract:
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators, and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement, and it reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case where error-prone replicates are available on the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial in which a genetic marker, c-myc expression level, is subject to measurement error.
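Schematically, in generic notation rather than the paper's, the estimating equation averages Cox partial likelihood scores over K imputed draws of the true covariates:
\[
\frac{1}{K} \sum_{k=1}^{K} U_{\mathrm{PL}}\!\left(\beta;\, X^{(k)}\right) \;=\; 0,
\]
where $U_{\mathrm{PL}}$ is the partial likelihood score and $X^{(k)}$ is the $k$-th imputation of the true covariates given their error-prone measurements, with the baseline hazard modelled by a linear spline for the imputation step. When the measurement error distribution is degenerate, every $X^{(k)}$ equals the observed covariate and the equation reduces to the ordinary partial likelihood score.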
Abstract:
Medical errors originating in health care facilities are a significant source of preventable morbidity, mortality, and healthcare costs. Voluntary error-reporting systems that collect information on the causes and contributing factors of medical errors, regardless of the resulting harm, may be useful for developing effective harm prevention strategies. Some patient safety experts question the utility of data from errors that did not lead to harm to the patient, also called near misses. A near miss (also known as a close call) is an unplanned event that did not result in injury to the patient; only a fortunate break in the chain of events prevented injury. We use data from a large voluntary reporting system of 836,174 medication errors from 1999 to 2005 to provide evidence that the causes and contributing factors of errors that result in harm are similar to those of near misses. We develop Bayesian hierarchical models for estimating the log odds of selecting a given cause (or contributing factor) of error given that harm has occurred and the log odds of selecting the same cause given that harm did not occur. The posterior distribution of the correlation between these two vectors of log odds is used as a measure of the evidence supporting the use of data from near misses, and their causes and contributing factors, to prevent medical errors. In addition, we identify the causes and contributing factors that have the highest or lowest log-odds ratios of harm versus no harm. These causes and contributing factors should also be a focus in the design of prevention strategies. This paper provides important evidence on the utility of data from near misses, which constitute the vast majority of errors in our data.
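One schematic way to write the core of such a model (my notation; the paper's parameterization may differ): for each cause or contributing factor $c$, let $\theta_c^{H}$ and $\theta_c^{N}$ denote the log odds of selecting $c$ given harm and given no harm, with a bivariate normal hierarchical prior
\[
\begin{pmatrix} \theta_c^{H} \\ \theta_c^{N} \end{pmatrix}
\;\sim\;
\mathrm{N}\!\left(
\begin{pmatrix} \mu_H \\ \mu_N \end{pmatrix},\;
\begin{pmatrix} \sigma_H^{2} & \rho\,\sigma_H \sigma_N \\ \rho\,\sigma_H \sigma_N & \sigma_N^{2} \end{pmatrix}
\right).
\]
The posterior of the correlation $\rho$ summarizes how closely the cause profiles of harmful errors and near misses agree, and $\theta_c^{H} - \theta_c^{N}$ ranks causes by their log-odds ratio of harm versus no harm.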