10 results for CONTINUOUS VARIABLE SYSTEMS
in DigitalCommons@The Texas Medical Center
Abstract:
Background. Risk factors underlying the development of Barrett's esophagus (BE) are poorly understood. Recent studies have examined the association between elevated body mass index (BMI) and BE with conflicting results. A systematic review of the literature was performed to study this association.

Methods. Cross-sectional, case-control, and cohort studies published through April 2007 that met strict inclusion and exclusion criteria were included. Thorough data abstraction, including that of reported crude or adjusted odds ratios or mean BMI, was performed. Crude odds ratios were estimated from available information in 3 studies.

Results. Of 630 publications identified by our search terms, 59 were reviewed in detail and 12 were included in the final analyses. Three studies showed a statistically significant association between obesity and BE (30-32), while 2 studies found a statistically significant association between overweight and BE (31, 32). Two studies that reported BMI as a continuous variable found BMI in cases to be significantly higher than that in the comparison group (30, 32). The other studies failed to show a significant association between elevated BMI and BE.

Conclusions. The data regarding the association between elevated BMI and BE are conflicting. It is important to identify other risk factors that, in combination with elevated BMI, may lead to BE. Further studies are needed to evaluate whether the presence of reflux symptoms or any particular pattern of obesity is independently associated with BE.

Key words. Barrett's esophagus, obesity, body mass index, gastroesophageal reflux disease, meta-analysis
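Where a reviewed study reported only cell counts, a crude odds ratio and its 95% confidence interval can be recovered directly from the 2×2 table. A minimal sketch of that calculation (the counts below are illustrative, not taken from the reviewed studies):

```python
import math

def crude_odds_ratio(a, b, c, d):
    """Crude OR and 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: obese vs. non-obese among BE cases and controls
print(crude_odds_ratio(a=40, b=60, c=25, d=75))
```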
Abstract:
This cross-sectional study was performed to quantify the prevalence of symptomatology in residents of mobile homes as a function of indoor formaldehyde concentration. Formaldehyde concentrations were monitored for a seven-hour period with an automated wet-chemical colorimetric analyzer. The health status of family members was ascertained by administration of questionnaires and physical exams. This is the first investigation to perform clinical assessments on residents undergoing concurrent exposure assessment in the home.

Only 22.8% of households eligible for participation agreed to cooperate. Monitoring data and health evaluations were obtained from 155 households in four Texas counties. A total of 428 residents (86.1%) were available for examination during the sampling hours. The study population included 45 infants, 126 children, and 257 adults.

Formaldehyde concentration was not found to be significantly associated with increased risks for symptoms and signs of ocular irritation, dermal anomalies, or malaise. Three associations were identified that warrant further investigation. A doubling of formaldehyde concentration was significantly associated with parenchymal rales in adults and children; however, this risk was modified by log respirable suspended particulate concentrations. Because of the modification by a continuous variable, prevalence odds ratios (POR) and 95% confidence intervals (95% CI) for these associations are presented in tables. A doubling of formaldehyde concentration was also associated with an increased risk of perceived tightness in the chest in adults; prevalence odds ratios are presented in a table because of effect modification by the average number of hours spent indoors on weekdays. Furthermore, a doubling of formaldehyde concentration was associated with an increased risk of drowsiness in children (POR = 2.60; 95% CI 1.04-6.51) and adults (POR = 1.94; 95% CI 1.20-3.14).
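Odds ratios "per doubling" of exposure, as reported above, follow from entering the exposure on a log base-2 scale in a logistic model, so that exponentiating the slope gives the POR for each doubling of concentration. A minimal sketch with simulated data (all variable names and values are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
hcho = rng.lognormal(mean=-2.0, sigma=0.8, size=n)  # simulated formaldehyde (ppm)
log2_hcho = np.log2(hcho)                           # per-doubling scale

# Simulate a symptom whose log-odds rise with log2(concentration)
p = 1 / (1 + np.exp(-(1.5 + 0.7 * log2_hcho)))
y = rng.binomial(1, p)

X = sm.add_constant(log2_hcho)
fit = sm.Logit(y, X).fit(disp=False)
por = np.exp(fit.params[1])        # POR per doubling of formaldehyde
ci = np.exp(fit.conf_int()[1])     # 95% CI on the same scale
print(f"POR per doubling = {por:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

An exposure-by-modifier product term fit the same way would capture the kind of effect modification that led the study to tabulate its PORs.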
Abstract:
Renal insufficiency is one of the most common comorbidities in heart failure (HF) patients and has a significant impact on mortality and adverse outcomes. Cystatin C has been shown to be a promising marker of renal function. A systematic review of all published studies evaluating the prognostic role of cystatin C in both acute and chronic HF was undertaken. A comprehensive literature search combining variants of the terms 'cystatin C' and 'heart failure' was conducted in the PubMed MEDLINE and EMBASE libraries using the Scopus database. A total of twelve observational studies were selected for detailed assessment in this review. Six studies were performed in acute HF patients and six in chronic HF patients. Cystatin C was used as a continuous variable, as quartiles/tertiles, or as a categorical variable in these studies, and different mortality endpoints were reported. All twelve studies demonstrated a significant association of cystatin C with mortality. This association was found to be independent of other baseline risk factors known to affect HF outcomes. In both acute and chronic HF, cystatin C was not only a strong predictor of outcomes but also a better prognostic marker than creatinine and estimated glomerular filtration rate (eGFR). A combination of cystatin C with other biomarkers, such as N-terminal pro-B-type natriuretic peptide (NT-proBNP) or creatinine, also improved risk stratification. The plausible mechanisms are renal dysfunction, inflammation, or a direct effect of cystatin C on ventricular remodeling. Either alone or in combination, cystatin C is an accurate and reliable biomarker for HF prognosis.
Abstract:
This study analyzed the relationship between fasting blood glucose (FBG) and 8-year mortality in the Hypertension Detection and Follow-up Program (HDFP) population. FBG was examined both as a continuous variable and by specified strata: Normal (FBG 60–100 mg/dL), Impaired (FBG ≥100 and ≤125 mg/dL), and Diabetic (FBG >125 mg/dL or pre-existing diabetes) subgroups. The relationship between type 2 diabetes and all-cause mortality was also examined. This thesis described and compared the characteristics of the FBG strata defined by recognized glucose cut-points, described the mortality rates in the various strata using Kaplan-Meier mortality curves, and compared the mortality risk across strata using Cox regression analysis. Overall, mortality was significantly greater among Referred Care (RC) participants than among Stepped Care (SC) participants (HR = 1.17; 95% CI 1.052–1.309; p = 0.004), as reported by the HDFP investigators in 1979. Compared with SC participants, the RC mortality rate was significantly higher in the Normal FBG group (HR = 1.18; 95% CI 1.029–1.363; p = 0.019) and the Impaired FBG group (HR = 1.34; 95% CI 1.036–1.734; p = 0.026). However, in the Diabetic group, 8-year mortality did not differ significantly between RC and SC after adjusting for race, gender, age, and smoking status (HR = 1.03; 95% CI 0.816–1.303; p = 0.798). This latter finding is possibly due to the lack of a difference in hypertension treatment between the RC and SC Diabetic participants. The largest difference in mortality between RC and SC was in the Impaired subgroup, suggesting that hypertensive patients with FBG between 100 and 125 mg/dL would benefit from aggressive antihypertensive therapy.
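A hedged sketch of the kind of stratified Cox analysis described here, using the `lifelines` package; the file and column names are hypothetical, not the HDFP dataset's:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset; column names are illustrative, not HDFP's.
df = pd.read_csv("hdfp_followup.csv")  # time_yrs, death, rc, age, male, black, smoker, fbg

# Derive the FBG strata from the cut-points used in the thesis
df["stratum"] = pd.cut(df["fbg"], bins=[60, 100, 125, float("inf")],
                       labels=["normal", "impaired", "diabetic"])

# Adjusted RC-vs-SC hazard ratio within one stratum (e.g., Impaired)
impaired = df[df["stratum"] == "impaired"]
cph = CoxPHFitter()
cph.fit(impaired[["time_yrs", "death", "rc", "age", "male", "black", "smoker"]],
        duration_col="time_yrs", event_col="death")
cph.print_summary()  # exp(coef) for `rc` estimates the RC-vs-SC hazard ratio
```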
Abstract:
The purpose of this study is to investigate the effects of predictor-variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average mean, variance, and correlations among the predictor variables are assessed after the multiple imputation process.

For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values for samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation when less than 50% of the continuous predictor data are missing. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, in part because of the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, including a fully observed variable alongside the variables subject to missingness in the multiple imputation process and subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
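A simplified sketch of one cell of the simulation design described above: impute a partially missing continuous predictor under a normal model, fit the logistic regression on each completed dataset, and pool with Rubin's rules. (Proper multiple imputation would also draw the imputation-model parameters from their posterior; this sketch fixes them for brevity.)

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, m = 100, 20  # sample size and number of imputations

# Bivariate normal predictors (correlation 0.4); y depends on x1 only,
# so rejections of x2's coefficient estimate the Type I error rate.
cov = [[1.0, 0.4], [0.4, 1.0]]
x = rng.multivariate_normal([0, 0], cov, size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))

x2 = x[:, 1].copy()
x2[rng.random(n) < 0.3] = np.nan  # 30% MCAR missingness in x2

# Normal-model imputation: draw x2 | x1 from the fitted regression
obs = ~np.isnan(x2)
b1, b0 = np.polyfit(x[obs, 0], x2[obs], 1)
sigma = np.std(x2[obs] - (b0 + b1 * x[obs, 0]))

est, var = [], []
for _ in range(m):
    x2_imp = x2.copy()
    miss = ~obs
    x2_imp[miss] = b0 + b1 * x[miss, 0] + rng.normal(0, sigma, miss.sum())
    X = sm.add_constant(np.column_stack([x[:, 0], x2_imp]))
    fit = sm.Logit(y, X).fit(disp=False)
    est.append(fit.params[2])
    var.append(fit.bse[2] ** 2)

# Rubin's rules: pooled estimate and total variance
qbar = np.mean(est)
t = np.mean(var) + (1 + 1/m) * np.var(est, ddof=1)
print(f"pooled beta_x2 = {qbar:.3f}, z = {qbar/np.sqrt(t):.2f}")
```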
Abstract:
With the observation that stochasticity is important in biological systems, stochastic chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. Continuous-time models, on the other hand, are computationally efficient but fail to capture any variability in the molecular species. In this study, a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency with the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca²⁺ and NMDA receptors.
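The exact stochastic algorithm of Gillespie referred to here draws an exponentially distributed waiting time from the total propensity and then selects a reaction in proportion to its propensity. A minimal sketch for a toy birth-death system (rate constants are illustrative, not from the study's models):

```python
import numpy as np

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=0):
    """Exact SSA for the birth-death system: 0 -> X (k_birth), X -> 0 (k_death*x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])  # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1 / a0)          # exponential waiting time
        # choose a reaction with probability proportional to its propensity
        if rng.random() < a[0] / a0:
            x += 1
        else:
            x -= 1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

t, x = gillespie_birth_death()
print(f"final count: {x[-1]} (deterministic steady state = {10.0/0.1:.0f})")
```

A hybrid scheme of the kind described would route the high-frequency channels to rate equations and reserve steps like these for the low-frequency channels.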
Abstract:
Interaction effects are an important scientific interest in many areas of research. A common approach to investigating the interaction effect of two continuous covariates on a response variable is a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been used to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general method, not specifically on the interaction effect.

In this dissertation, we investigated the validity of both approaches based on their mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant-variance assumptions required by ANOVA are not satisfied in the model in which the continuous covariates are replaced with their discretized levels. Naïve application of the ANOVA method may therefore lead to an incorrect conclusion.

Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when the data come from a trivariate normal distribution. The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
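The cross-product approach mentioned in the opening paragraph amounts to testing the coefficient of the product x1·x2 in a linear model. A minimal sketch with simulated data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
# Response with a genuine x1-by-x2 interaction
df["y"] = 1 + 0.5 * df.x1 + 0.5 * df.x2 + 0.8 * df.x1 * df.x2 + rng.normal(size=n)

# `x1:x2` adds the cross-product of the two covariates to the model
fit = smf.ols("y ~ x1 + x2 + x1:x2", data=df).fit()
print(fit.summary().tables[1])  # the x1:x2 row tests the interaction
```

The dissertation's proposal replaces this single test with per-level tests on the discretized covariates whose p-values are then combined.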
Abstract:
Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that while the effects of the covariates on failure times differ across latent classes, the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a mixture-modeling framework. A joint model is developed that incorporates the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that both the effects of covariates on survival times and the distribution of the covariates vary across latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on the observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and the distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrated the superiority of our joint model over the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking the interactions between covariates into consideration.
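As a stripped-down illustration of the mixture machinery underlying such models (not the dissertation's joint model, which also handles censoring, covariates, and class-specific covariate distributions), an EM fit of a two-component exponential survival mixture without censoring:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two latent classes with different exponential hazards
z = rng.random(500) < 0.4
t = np.where(z, rng.exponential(1.0, 500), rng.exponential(5.0, 500))

pi, lam = 0.5, np.array([0.5, 2.0])  # initial mixing weight and rates
for _ in range(200):
    # E-step: posterior probability of class 1 given the observed time
    f1 = lam[0] * np.exp(-lam[0] * t)
    f2 = lam[1] * np.exp(-lam[1] * t)
    w = pi * f1 / (pi * f1 + (1 - pi) * f2)
    # M-step: update the mixing weight and the class-specific rates
    pi = w.mean()
    lam = np.array([w.sum() / (w * t).sum(),
                    (1 - w).sum() / ((1 - w) * t).sum()])

# Recovers weights 0.4/0.6 and rates 1.0/0.2, up to label switching
print(f"pi = {pi:.2f}, rates = {lam.round(2)}")
```

The posterior weights `w` play the same role as the estimated class-membership probabilities that identify the survival trajectories in the joint model.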
Abstract:
The performance of the Hosmer-Lemeshow global goodness-of-fit statistic for logistic regression models was explored in a wide variety of conditions not previously fully investigated. Computer simulations, each consisting of 500 regression models, were run to assess the statistic in 23 different situations. The factors varied among the situations included the number of observations used in each regression, the number of covariates, the degree of dependence among the covariates, the combinations of continuous and discrete variables, and the generation of the values of the dependent variable for model fit or lack of fit.

The study found that the $\hat{C}_g$ statistic was adequate in tests of significance for most situations. However, when testing data that deviate from a logistic model, the statistic has low power to detect such deviation. Although grouping the estimated probabilities into from 8 to 30 quantiles was studied, the deciles-of-risk approach was generally sufficient. Subdividing the estimated probabilities into more than 10 quantiles when there are many covariates in the model is not necessary, despite theoretical reasons suggesting otherwise. Because it does not follow a $\chi^2$ distribution, the statistic is not recommended for models containing only categorical variables with a limited number of covariate patterns.

The statistic performed adequately when there were at least 10 observations per quantile. Large numbers of observations per quantile did not lead to incorrect conclusions that the model did not fit the data when it actually did. However, the statistic failed to detect lack of fit when it existed and should be supplemented with further tests for the influence of individual observations. Careful examination of the parameter estimates is also essential, since the statistic did not perform as desired when there was moderate to severe collinearity among covariates.

Two methods studied for handling tied values of the estimated probabilities made only a slight difference in conclusions about model fit. Neither method split observations with identical probabilities into different quantiles. Approaches that create equal-size groups by separating ties should be avoided.
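A minimal sketch of the deciles-of-risk statistic evaluated here, referred to a $\chi^2$ distribution with g − 2 degrees of freedom; note that, consistent with the caution above, this version groups by sorted fitted probability and makes no special provision for ties:

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, g=10):
    """Deciles-of-risk Hosmer-Lemeshow statistic: group observations by
    fitted probability, then compare observed and expected event counts."""
    order = np.argsort(p)
    groups = np.array_split(order, g)  # equal-size groups; ties fall as sorted
    stat = 0.0
    for idx in groups:
        obs = y[idx].sum()          # observed events in the group
        exp = p[idx].sum()          # expected events in the group
        n_k = len(idx)
        pbar = exp / n_k
        stat += (obs - exp) ** 2 / (n_k * pbar * (1 - pbar))
    return stat, chi2.sf(stat, df=g - 2)

# Well-calibrated example: fitted probabilities equal the true ones
rng = np.random.default_rng(4)
x = rng.normal(size=1000)
p_true = 1 / (1 + np.exp(-x))
y = rng.binomial(1, p_true)
print(hosmer_lemeshow(y, p_true))  # a large p-value is expected here
```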
Abstract:
In this dissertation, we propose a continuous-time Markov chain model to examine longitudinal data whose outcome variable has three categories. The advantage of this model is that it permits a different number of measurements for each subject, and the duration between two consecutive measurement time points can be irregular. Using the maximum likelihood principle, we can estimate the transition probability between two time points. By using the information provided by the independent variables, the model can also estimate the transition probability for each subject. A Monte Carlo simulation method will be used to investigate the goodness of model fit compared with that obtained from other models. A public health example will be used to demonstrate the application of this method.
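The irregular-spacing property follows from the fact that, for a time-homogeneous continuous-time Markov chain with generator matrix Q, the transition-probability matrix over any gap t is P(t) = exp(Qt), so each subject's own measurement gaps can be plugged in directly. A minimal sketch with an illustrative three-state generator:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator (Q) matrix for a three-category outcome; rows sum
# to 0 and off-diagonal entries are instantaneous transition rates.
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.05, -0.15,  0.10],
              [ 0.02,  0.08, -0.10]])

# P(t) = expm(Q * t): transition probabilities over any measurement gap t
for t in (0.5, 2.0, 7.3):
    P = expm(Q * t)
    print(f"t = {t}: row sums = {P.sum(axis=1).round(6)}")
    print(P.round(3))
```

Covariate effects of the kind described enter such models by letting the rates in Q depend on each subject's independent variables.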