897 results for Cox regression model
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) included in the Cox regression model. We empirically investigated the bias of the estimator of the time-dependent covariate while varying the failure rate, sample size, true parameter values, and number of intervals. We also evaluated how often a time-dependent covariate needs to be collected, and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect.

A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1, so that failure times were exponentially distributed within each interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three true values for each of the non-time-dependent and time-dependent covariate coefficients (1/4, 1/2, 3/4).

The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and number of intervals increased, whereas it increased as failure rate and the true coefficient values increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used only selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20). The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
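As a rough illustration of the simulation design sketched above, the Python snippet below (using the lifelines package) generates piecewise-exponential failure times with one fixed and one interval-updated binary covariate and fits a time-varying Cox model. The parameter values, interval width, independent updating of the covariate, and the administrative (rather than type II) censoring are simplifying assumptions, not the study's exact settings.

```python
# A minimal sketch, not the dissertation's exact design.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(42)
n, k, width = 500, 5, 1.0           # subjects, intervals, interval width (assumed)
beta1, beta2 = 0.5, 0.5             # true log-hazard ratios (illustrative, cf. 1/2)

rows = []
for i in range(n):
    z1 = rng.integers(0, 2)         # fixed binary covariate
    for j in range(k):
        z2 = rng.integers(0, 2)     # time-dependent binary covariate, updated each interval
        rate = np.exp(beta1 * z1 + beta2 * z2)   # baseline hazard 1 within the interval
        t = rng.exponential(1.0 / rate)
        start = j * width
        stop = min(start + t, start + width)
        event = t < width
        rows.append((i, start, stop, int(event), z1, z2))
        if event:
            break                   # subject fails; no further intervals

df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "z1", "z2"])
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()                 # compare estimated coefficients with beta1, beta2
```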
Abstract:
Considering the so-called "multinomial discrete choice" (MDC) model, this paper focuses on the problem of estimating its parameters. In particular, the question arises of how to carry out point and interval estimation of the parameters when the model is mixed, i.e., includes both individual-specific and choice-specific explanatory variables, and a standard MDC computer program is not available. The basic idea behind the solution is to use the Cox proportional hazards method of survival analysis, which is available in any standard statistical package: provided the data structure satisfies certain special requirements, it yields the desired MDC solutions. The paper describes the features of the data set to be analysed.
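The trick rests on the fact that the conditional-logit (MDC) likelihood coincides with the partial likelihood of a Cox model stratified by individual when exactly one alternative per stratum "fails". The sketch below illustrates the required data structure with lifelines; the three-alternative setup, the variable names (cost, age), and the coefficient values are hypothetical, not the paper's example.

```python
# A minimal sketch of the data structure: each individual is a stratum, the
# chosen alternative is an event at t=1, the others are censored later, so the
# stratified Cox partial likelihood reduces to the conditional-logit likelihood.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n_ind, n_alt = 300, 3
rows = []
for i in range(n_ind):
    age = rng.uniform(20, 70)                    # individual-specific variable
    cost = rng.uniform(1, 10, size=n_alt)        # choice-specific variable
    util = -0.3 * cost + np.array([0.0, 0.02, -0.01]) * age   # illustrative coefficients
    p = np.exp(util) / np.exp(util).sum()
    choice = rng.choice(n_alt, p=p)
    for j in range(n_alt):
        rows.append({
            "ind": i,
            "duration": 1 if j == choice else 2, # chosen = event at t=1, rest censored later
            "event": int(j == choice),
            "cost": cost[j],
            # individual-specific variables must be interacted with alternative dummies
            "age_alt1": age * (j == 1),
            "age_alt2": age * (j == 2),
        })

df = pd.DataFrame(rows)
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event", strata=["ind"])
cph.print_summary()   # coefficients estimate the conditional-logit parameters
```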
Abstract:
The standard analyses of survival data involve the assumption that survival and censoring are independent. When censoring and survival are related, the phenomenon is known as informative censoring. This paper examines the effects of an informative censoring assumption on the hazard function and the estimated hazard ratio provided by the Cox model.

The limiting factor in all analyses of informative censoring is the problem of non-identifiability. Non-identifiability implies that it is impossible to distinguish a situation in which censoring and death are independent from one in which there is dependence. However, it is possible that informative censoring occurs. Examination of the literature indicates how others have approached the problem and covers the relevant theoretical background.

Three models are examined in detail. The first model uses conditionally independent marginal hazards to obtain the unconditional survival function and hazards. The second model is based on the Gumbel Type A method for combining independent marginal distributions into bivariate distributions using a dependency parameter. Finally, a formulation based on a compartmental model is presented and its results described. For the latter two approaches, the resulting hazard is used in the Cox model in a simulation study.

The unconditional survival distribution formed from the first model involves dependency, but the crude hazard resulting from this unconditional distribution is identical to the marginal hazard, and inferences based on the hazard are valid. The hazard ratios formed from two distributions following the Gumbel Type A model are biased by a factor dependent on the amount of censoring in the two populations and the strength of the dependency of death and censoring in the two populations. The Cox model estimates this biased hazard ratio. In general, the hazard resulting from the compartmental model is not constant, even if the individual marginal hazards are constant, unless censoring is non-informative. The hazard ratio tends to a specific limit.

Methods of evaluating situations in which informative censoring is present are described, and the relative utility of the three models examined is discussed.
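A minimal simulation sketch of the second setup is given below, assuming the Farlie-Gumbel-Morgenstern form often called Gumbel's Type A family, exponential margins, and illustrative hazards and dependency parameter. It fits a Cox model to two groups so the estimated hazard ratio can be compared with the true marginal ratio.

```python
# A minimal sketch of dependent censoring under an FGM (Gumbel Type A) copula;
# all numeric values are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)

def fgm_pair(alpha, n):
    """Sample (u, v) from the FGM copula C(u,v) = uv[1 + alpha(1-u)(1-v)]."""
    u, w = rng.uniform(size=n), rng.uniform(size=n)
    a = alpha * (1 - 2 * u)
    disc = np.sqrt((1 + a) ** 2 - 4 * a * w)
    v = 2 * w / (1 + a + disc)      # stable root of the conditional-CDF quadratic
    return u, v

def simulate_group(n, lam_death, lam_cens, alpha, group):
    u, v = fgm_pair(alpha, n)
    t = -np.log(u) / lam_death      # death time, exponential margin
    c = -np.log(v) / lam_cens       # censoring time, dependent on t via the copula
    return pd.DataFrame({"time": np.minimum(t, c),
                         "event": (t <= c).astype(int),
                         "group": group})

# True marginal hazard ratio is 2.0; alpha != 0 makes censoring informative.
df = pd.concat([simulate_group(2000, 1.0, 0.5, 0.8, 0),
                simulate_group(2000, 2.0, 0.5, 0.8, 1)], ignore_index=True)
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.hazard_ratios_)           # compare with the true ratio of 2.0 to see the bias
```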
Abstract:
Factors associated with duration of dementia in a consecutive series of 103 Alzheimer's disease (AD) cases were studied using the Kaplan-Meier estimator and Cox regression analysis (proportional hazards model). Mean disease duration was 7.1 years (range: 6 weeks to 30 years, standard deviation = 5.18); 25% of cases died within four years, 50% within 6.9 years, and 75% within 10 years. Familial AD (FAD) cases had a longer duration than sporadic (SAD) cases, especially cases linked to presenilin (PSEN) genes. No significant differences in duration were associated with age, sex, or apolipoprotein E (Apo E) genotype. Duration was reduced in cases with arterial hypertension. Cox regression analysis suggested that longer duration was associated with an earlier disease onset and increased senile plaque (SP) and neurofibrillary tangle (NFT) pathology in the orbital gyrus (OrG), CA1 sector of the hippocampus, and nucleus basalis of Meynert (NBM). The data suggest shorter disease duration in SAD and in cases with hypertensive comorbidity. In addition, overall degree of neuropathology did not influence survival, but spread of SP/NFT pathology into the frontal lobe, hippocampus, and basal forebrain was associated with longer disease duration. © 2014 R. A. Armstrong.
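For readers unfamiliar with this workflow, the snippet below reproduces its shape (Kaplan-Meier quartile estimates followed by a Cox proportional hazards fit) on synthetic stand-in data; the variable names and values are assumptions for illustration only, not the study's data.

```python
# A minimal sketch of the Kaplan-Meier + Cox workflow on synthetic data.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.utils import qth_survival_times

rng = np.random.default_rng(3)
n = 103
df = pd.DataFrame({
    "duration": rng.exponential(7.1, size=n),          # disease duration in years (synthetic)
    "died": (rng.uniform(size=n) < 0.95).astype(int),  # nearly all cases followed to death
    "onset_age": rng.normal(72, 8, size=n),
    "familial": rng.integers(0, 2, size=n),
    "hypertension": rng.integers(0, 2, size=n),
})

kmf = KaplanMeierFitter().fit(df["duration"], event_observed=df["died"])
# quartile survival times, analogous to the 4 / 6.9 / 10-year figures above
print(qth_survival_times([0.75, 0.5, 0.25], kmf.survival_function_))

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="died")
cph.print_summary()
```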
Abstract:
Land-use regression (LUR) is a technique that can improve the accuracy of air pollution exposure assessment in epidemiological studies. Most LUR models are developed for single cities, which limits their applicability to other locations. We sought to develop a model to predict nitrogen dioxide (NO2) concentrations with national coverage of Australia, using satellite observations of tropospheric NO2 columns combined with other predictor variables. We used a generalised estimating equation (GEE) model to predict annual and monthly average ambient NO2 concentrations measured by a national monitoring network from 2006 through 2011. The best annual model explained 81% of spatial variation in NO2 (absolute RMS error = 1.4 ppb), while the best monthly model explained 76% (absolute RMS error = 1.9 ppb). We applied our models to predict NO2 concentrations at the ~350,000 census mesh blocks across the country (a mesh block is the smallest spatial unit in the Australian census). National population-weighted average concentrations ranged from 7.3 ppb (2006) to 6.3 ppb (2011). We found that a simple approach using tropospheric NO2 column data yielded models with slightly better predictive ability than those produced using a more involved approach that required simulation of surface-to-column ratios. The models were capable of capturing within-urban variability in NO2, and offer the ability to estimate ambient NO2 concentrations at monthly and annual time scales across Australia from 2006 to 2011. We are making our model predictions freely available for research.
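A minimal sketch of a GEE land-use regression of this kind is shown below, using synthetic monitor data; the predictor names (sat_no2, road_density) and the exchangeable working correlation are assumptions standing in for the paper's actual specification.

```python
# A minimal GEE sketch: repeated monthly NO2 measurements per monitoring site,
# regressed on site-level predictors. All data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_sites, n_months = 50, 72                  # 2006-2011, monthly
site = np.repeat(np.arange(n_sites), n_months)
sat_no2 = np.repeat(rng.uniform(1, 8, n_sites), n_months) + rng.normal(0, 0.3, n_sites * n_months)
road_density = np.repeat(rng.uniform(0, 1, n_sites), n_months)
no2 = 1.0 + 0.8 * sat_no2 + 3.0 * road_density + rng.normal(0, 1.0, n_sites * n_months)
df = pd.DataFrame({"site": site, "sat_no2": sat_no2,
                   "road_density": road_density, "no2": no2})

# Exchangeable working correlation accounts for repeated measurements per site.
model = smf.gee("no2 ~ sat_no2 + road_density", groups="site", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
res = model.fit()
print(res.summary())
```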
Abstract:
Large multisite efforts (e.g., the ENIGMA Consortium) have shown that neuroimaging traits, including tract integrity (from DTI fractional anisotropy, FA) and subcortical volumes (from T1-weighted scans), are highly heritable and promising phenotypes for discovering genetic variants associated with brain structure. However, the genetic correlations (rg) among measures from these different modalities used to map the human genome to the brain remain unknown. Discovering these correlations can help map genetic and neuroanatomical pathways implicated in development and inherited risk for disease. We use structural equation models and a twin design to find rg between pairs of phenotypes extracted from DTI and MRI scans. When controlling for intracranial volume, the caudate, as well as a related measure from the limbic system, hippocampal volume, showed high rg with cingulum FA. Using an unrelated sample and a seemingly unrelated regression model for bivariate analysis of this connection, we show that a multivariate GWAS approach may be more promising for genetic discovery than a univariate approach applied to each trait separately.
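The following numpy sketch shows what a bivariate seemingly unrelated regression amounts to, via two-step feasible GLS; the SNP, intracranial volume (ICV), and phenotype values are synthetic stand-ins, and adjusting only the volume equation for ICV is an assumption made so that the two equations differ.

```python
# A minimal two-step FGLS sketch of bivariate SUR; all data are synthetic.
import numpy as np

rng = np.random.default_rng(13)
n = 500
snp = rng.integers(0, 3, size=n).astype(float)       # 0/1/2 allele counts (synthetic)
icv = rng.normal(0, 1, size=n)                       # standardised ICV (synthetic)
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)  # correlated residuals
y_fa = 0.20 * snp + e[:, 0]                          # e.g. cingulum FA (synthetic)
y_vol = 0.10 * snp + 0.50 * icv + e[:, 1]            # e.g. hippocampal volume (synthetic)

X1 = np.column_stack([np.ones(n), snp])              # FA equation
X2 = np.column_stack([np.ones(n), snp, icv])         # volume equation, ICV-adjusted

# Step 1: equation-by-equation OLS to estimate the cross-equation residual covariance.
b1 = np.linalg.lstsq(X1, y_fa, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y_vol, rcond=None)[0]
R = np.column_stack([y_fa - X1 @ b1, y_vol - X2 @ b2])
Sigma = (R.T @ R) / n

# Step 2: FGLS on the stacked system; Cov of stacked errors is Sigma kron I_n.
Y = np.concatenate([y_fa, y_vol])
Z = np.zeros((2 * n, X1.shape[1] + X2.shape[1]))     # block-diagonal design
Z[:n, :X1.shape[1]] = X1
Z[n:, X1.shape[1]:] = X2
W = np.kron(np.linalg.inv(Sigma), np.eye(n))         # GLS weight matrix
beta = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ Y)
print(beta)   # [fa: intercept, snp] then [vol: intercept, snp, icv]
```

Borrowing strength across the correlated residuals is what makes the joint (multivariate) test of the SNP potentially more powerful than two univariate fits.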
Abstract:
The chemical composition of rainwater changes from the sea toward inland under the influence of several major factors: the topographic location of the area, its distance from the sea, and the annual rainfall. A model is developed here to quantify the variation in precipitation chemistry under the influence of inland distance and rainfall amount. Various sites in India categorized as 'urban', 'suburban', and 'rural' were considered for model development. pH, HCO3, NO3, and Mg do not change much from coast to inland, while changes in SO4 and Ca are subject to local emissions. Cl and Na originate solely from sea salinity and are the chemistry parameters in the model. Non-linear multiple regressions performed for the various categories revealed that both rainfall amount and precipitation chemistry obeyed a power-law reduction with distance from the sea. Cl and Na decrease rapidly over the first 100 km from the sea, decrease marginally over the next 100 km, and later stabilize. Regression parameters estimated for the different cases were found to be consistent (R² ~ 0.8). Variation in one of the parameters accounted for urbanization. The model was validated using data points from the southern peninsular region of the country; estimates were found to be within the 99.9% confidence interval. Finally, the relationship between the three parameters (rainfall amount, coastline distance, and concentration in terms of Cl and Na) was validated with experiments conducted in a small experimental watershed in south-west India. Chemistry estimated using the model correlated well with observed values, with a relative error of ~5%. Monthly variation in the chemistry is predicted from a downscaling model and then compared with the observed data. Hence, the model developed for rain chemistry is useful for estimating concentrations at different spatio-temporal scales and is especially applicable to the south-west region of India. © 2008 Elsevier Ltd. All rights reserved.
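The power-law form can be fitted with a standard nonlinear least-squares routine, as in the sketch below; the coefficient values and the single-predictor (distance-only) form are illustrative assumptions, not the paper's estimates, which also involve rainfall amount.

```python
# A minimal sketch of a power-law fit of concentration versus distance from the
# sea; synthetic data with illustrative coefficients.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def power_law(d, a, b):
    """Concentration as a power-law decay with distance from the sea (km)."""
    return a * np.power(d, -b)

distance = np.linspace(5, 400, 60)                      # km from the coast
cl_conc = power_law(distance, 120.0, 0.7) * rng.lognormal(0, 0.1, distance.size)

(a, b), cov = curve_fit(power_law, distance, cl_conc, p0=(100.0, 0.5))
print(f"a = {a:.1f}, b = {b:.2f}")                      # decay is steepest in the first ~100 km
```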
Abstract:
Traffic-related air pollution has been associated with a wide range of adverse health effects. One component of traffic emissions that has been receiving increasing attention is ultrafine particles (UFP, <100 nm), which are of concern to human health due to their small diameters. Vehicles are the dominant source of UFP in urban environments. Small-scale variation in ultrafine particle number concentration (PNC) can be attributed to local changes in land use and road abundance. UFPs are also formed as a result of particle formation events. Modelling the spatial patterns in PNC is integral to understanding human UFP exposure and also provides insight into particle formation mechanisms that contribute to air pollution in urban environments. Land-use regression (LUR) is a technique that can be used to improve the prediction of air pollution.
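A minimal LUR sketch is given below: an ordinary least-squares fit of log particle number concentration on typical land-use predictors at synthetic monitoring sites. The predictor names and coefficient values are assumptions for illustration, not this study's variables.

```python
# A minimal land-use regression sketch for PNC; all data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(17)
n_sites = 80
df = pd.DataFrame({
    "road_len_100m": rng.uniform(0, 500, n_sites),   # m of major road in a 100 m buffer
    "industrial_frac": rng.uniform(0, 0.4, n_sites), # industrial land-use fraction
    "dist_to_port": rng.uniform(0.5, 30, n_sites),   # km
})
df["log_pnc"] = (9.0 + 0.002 * df["road_len_100m"] + 1.5 * df["industrial_frac"]
                 - 0.02 * df["dist_to_port"] + rng.normal(0, 0.3, n_sites))

lur = smf.ols("log_pnc ~ road_len_100m + industrial_frac + dist_to_port", data=df).fit()
print(lur.summary())                 # the fitted surface can then be applied city-wide
```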
Abstract:
Climate change in response to a change in external forcing can be understood in terms of a fast response to the imposed forcing and a slow feedback associated with surface temperature change. Previous studies have investigated the characteristics of fast response and slow feedback for different forcing agents. Here we examine to what extent fast response and slow feedback derived from time-mean results of climate model simulations can be used to infer total climate change. To achieve this goal, we develop a multivariate regression model of climate change, in which the change in a climate variable is represented by a linear combination of its sensitivity to CO2 forcing, its sensitivity to solar forcing, and the change in global mean surface temperature. We derive the parameters of the regression model using time-mean results from a set of HadCM3L climate model step-forcing simulations, and then use the regression model to emulate HadCM3L-simulated transient climate change. Our results show that the regression model emulates well the HadCM3L-simulated temporal evolution and spatial distribution of climate change, including surface temperature, precipitation, runoff, soil moisture, cloudiness, and radiative fluxes under transient CO2 and/or solar forcing scenarios. Our findings suggest that the temporal and spatial patterns of total change for the climate variables considered here can be well represented by the sum of fast response and slow feedback. Furthermore, by using a simple 1-D heat-diffusion climate model, we show that the temporal and spatial characteristics of climate change under transient forcing scenarios can be emulated well using information from step-forcing simulations alone.
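The regression model has the form dX = a*F_CO2 + b*F_solar + c*dT: fast responses a and b to the two forcings plus a slow, temperature-mediated feedback c. The sketch below fits this form by least squares on a synthetic transient series (the paper instead derives the coefficients from step-forcing simulations); all series and numeric values are stand-ins for the HadCM3L output.

```python
# A minimal sketch of the regression emulator; synthetic forcing and response
# series, illustrative coefficients.
import numpy as np

rng = np.random.default_rng(19)
t = np.arange(140)                                   # years of a transient run
f_co2 = 3.7 * np.log2(1 + t / 70.0)                  # ramping CO2 forcing (W m^-2)
f_solar = 0.5 * np.sin(2 * np.pi * t / 11.0)         # illustrative solar-cycle forcing
dT = 0.6 * f_co2 + 0.1 * f_solar + rng.normal(0, 0.05, t.size)  # global mean warming

# "Simulated" change in some climate variable (e.g. regional precipitation).
dX = 0.8 * f_co2 - 0.3 * f_solar + 1.2 * dT + rng.normal(0, 0.1, t.size)

# Fit dX = a*F_CO2 + b*F_solar + c*dT by least squares.
A = np.column_stack([f_co2, f_solar, dT])
coef, *_ = np.linalg.lstsq(A, dX, rcond=None)
print(dict(zip(["a_co2", "b_solar", "c_dT"], np.round(coef, 2))))
print("emulation RMSE:", np.sqrt(np.mean((A @ coef - dX) ** 2)))
```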
Abstract:
EXTRACT (SEE PDF FOR FULL ABSTRACT): A local climate model (LCM) has been developed to simulate the modern and 18 ka climate of the southwestern United States. ... LCM solutions indicate summers were about 1°C cooler and winters 11°C cooler at 18 ka. Annual PREC increased 68% at 18 ka, with large increases in spring and fall PREC and diminished summer monsoonal PREC. ... Validation of simulations of 18 ka climate indicates general agreement with proxy estimates of climate for that time. However, the LCM estimates of summer temperatures are about 5 to 10°C higher than estimates from proxy reconstructions.