Abstract:
Analysis of recurrent events has been widely discussed in medical, health services, insurance, and engineering areas in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function of recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One advantage of using a nonhomogeneous Yule process for recurrent events is the assumption that the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example concerning recurrent myocardial infarction events, comparing two distinct populations, Mexican-Americans and Non-Hispanic Whites, in the Corpus Christi Heart Project is examined.
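As a rough illustration of the intensity specification above (not the dissertation's actual code), the following Python sketch simulates recurrent events for one subject, assuming the rate given N(t−) = n prior events is (n + 1) · λ_0(t) · exp(x′β); the baseline intensity, covariates, and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_yule_recurrences(x, beta, lam0, lam0_max, T):
    """Simulate one subject's recurrent events on [0, T], assuming the
    intensity given n prior events is (n + 1) * lam0(t) * exp(x' beta).
    Uses Ogata-style thinning with a dominating constant rate."""
    t, events = 0.0, []
    while t < T:
        n = len(events)
        bound = (n + 1) * lam0_max * np.exp(x @ beta)   # dominating rate
        t += rng.exponential(1.0 / bound)
        if t >= T:
            break
        # accept the candidate time with probability (true rate / bound)
        if rng.uniform() < (n + 1) * lam0(t) * np.exp(x @ beta) / bound:
            events.append(t)
    return np.array(events)

# Hypothetical increasing baseline intensity and two covariates
lam0 = lambda t: 0.5 * np.sqrt(t + 1e-9)
events = simulate_yule_recurrences(x=np.array([1.0, 0.0]),
                                   beta=np.array([0.4, -0.2]),
                                   lam0=lam0, lam0_max=0.5 * np.sqrt(5.0), T=5.0)
print(len(events), "events at times", np.round(events, 2))
```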
Abstract:
Markov processes have useful applications to health-care problems. The objective of this study is to provide a structured methodology for forecasting cost by combining a stochastic model of utilization (a Markov chain) with a deterministic cost function. The cost perspective in this study is the reimbursement for the services rendered. The data used are the OneCare database of claim records for enrollees over the two-year period January 1, 1996–December 31, 1997. The model combines a Markov chain, which describes the utilization pattern and its variability and in which the use of resources by risk groups (age, gender, and diagnosis) is considered, with a cost function determined from a fixed schedule based on real costs or charges for those in the OneCare claims database. The cost function is a secondary application of the model. Goodness of fit will be checked for the model against the traditional method of cost forecasting.
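A minimal sketch of this kind of computation, assuming a hypothetical monthly transition matrix over utilization states and a fixed per-state reimbursement schedule (none of these numbers come from the OneCare data): expected cost over a horizon is obtained by propagating the state distribution and applying the deterministic cost function.

```python
import numpy as np

# Hypothetical utilization states: 0 = no service, 1 = outpatient, 2 = inpatient
P = np.array([[0.85, 0.12, 0.03],      # assumed monthly transition matrix
              [0.60, 0.30, 0.10],      # (illustrative values only)
              [0.40, 0.35, 0.25]])
cost = np.array([0.0, 120.0, 2500.0])  # assumed reimbursement per state-month

def expected_cost(p0, P, cost, months):
    """Propagate the state distribution month by month and
    accumulate the deterministic per-state cost."""
    p, total = np.asarray(p0, float), 0.0
    for _ in range(months):
        p = p @ P                      # next month's state probabilities
        total += p @ cost              # expected reimbursement that month
    return total

print(expected_cost([1.0, 0.0, 0.0], P, cost, months=24))  # two-year horizon
```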
Abstract:
Currently there is no general method to study the impact of population admixture within families on the assumptions of random mating and, consequently, Hardy-Weinberg equilibrium (HWE) and linkage equilibrium (LE), or on the inference obtained from traditional linkage analysis. First, through simulation, the effect of admixture of two populations on the log of the odds (LOD) score was assessed, using prostate cancer as the typical disease model. Comparisons between simulated mixed and homogeneous families were performed. LOD scores under both models of admixture (within families and within a data set of homogeneous families) were closest to the homogeneous-family scores of the population with the highest mixing proportion. Random sampling of families or ascertainment of families by disease affection status did not affect this observation, nor did the mode of inheritance (dominant/recessive) or sample size. Second, after establishing the effect of admixture on the LOD score and the inference for linkage, the presence of disequilibria induced by population admixture within families was studied and an adjustment procedure was developed. The adjustment did not force all disequilibria to disappear, but because the families were adjusted for the population admixture, those replicates where the disequilibria exist are no longer affected by the disequilibria in terms of maximization for linkage. Furthermore, the adjustment was able to exclude uninformative families or families that had such a high departure from HWE and/or LE that their LOD scores were not reliable. Together these observations imply that the presence of families of mixed population ancestry impacts linkage analysis in terms of the LOD score and the estimate of the recombination fraction.
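For reference, the LOD score discussed above compares the likelihood of the data at a recombination fraction θ to the likelihood under free recombination (θ = 0.5). A minimal sketch for the simplest fully informative, phase-known case, with hypothetical recombinant/nonrecombinant counts:

```python
import numpy as np

def lod(theta, n_recomb, n_nonrecomb):
    """LOD score for a fully informative, phase-known cross:
    log10 of L(theta) / L(0.5)."""
    loglik = n_recomb * np.log10(theta) + n_nonrecomb * np.log10(1 - theta)
    loglik_null = (n_recomb + n_nonrecomb) * np.log10(0.5)
    return loglik - loglik_null

# Hypothetical counts pooled over families
thetas = np.linspace(0.01, 0.5, 50)
scores = lod(thetas, n_recomb=4, n_nonrecomb=16)
print("max LOD %.2f at theta %.2f" % (scores.max(), thetas[scores.argmax()]))
```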
Abstract:
Despite much research on development in education and psychology, the methodology is not often tested with real data. A major barrier to testing growth models is that such studies involve repeated observations and the nature of the growth is nonlinear. Repeated measurements on a nonlinear model require sophisticated statistical methods. In this study, we present a mixed-effects model based on a negative exponential curve to describe the development of children's reading skills. This model can describe the nature of the growth in children's reading skills and account for intra-individual and inter-individual variation. We also apply simple techniques, including cross-validation, regression, and graphical methods, to determine the most appropriate curve for the data, to find efficient initial values for the parameters, and to select potential covariates. We illustrate with the example that motivated this research: a longitudinal study of academic skills from grade 1 to grade 12 in Connecticut public schools.
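A minimal sketch of the fixed-effects part of such a curve, assuming the negative exponential form y = a − (a − b)·exp(−c·t) and entirely hypothetical reading-score data; the full mixed-effects model would add subject-level random effects on these parameters via nonlinear mixed-model software.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exponential(t, a, b, c):
    """Asymptote a, starting level b, rate c: y = a - (a - b) * exp(-c * t)."""
    return a - (a - b) * np.exp(-c * t)

# Hypothetical reading scores for grades 1-12 (illustrative values only)
grade = np.arange(1, 13, dtype=float)
score = np.array([22, 35, 45, 53, 59, 64, 68, 71, 73, 75, 76, 77], dtype=float)

# Reasonable starting values matter for nonlinear fits; cross-validation and
# graphical checks (as in the study) can guide both the curve form and p0.
params, cov = curve_fit(neg_exponential, grade, score, p0=[80.0, 20.0, 0.3])
print("a=%.1f  b=%.1f  c=%.3f" % tuple(params))
```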
Abstract:
The genetic etiology of stroke likely reflects the influence of multiple loci with small effects, each modulating different pathophysiological processes. This research project utilized three analytical strategies to address the paucity of information related to the identification and characterization of genetic variation associated with stroke in the general population. First, the general contribution of familial factors to stroke susceptibility was evaluated in a population-based sample of unrelated individuals. Increased risk of subclinical cerebral infarction was observed among individuals with a positive parental history of stroke. This association did not appear to be mediated by established stroke risk factors, specifically blood pressure levels or hypertension status. The need to identify specific gene variation associated with stroke in the general population was addressed by evaluating seven candidate gene polymorphisms in a population-based sample of unrelated individuals. Three polymorphisms were significantly associated with increased subclinical cerebral infarction or incident clinical ischemic stroke risk. These relationships include the G-protein β3 subunit 825C/T polymorphism and clinical stroke in Whites, the lipoprotein lipase S/X447 polymorphism and subclinical and clinical stroke in men, and the angiotensin I-converting enzyme Ins/Del polymorphism and subclinical stroke in White men. These associations did not appear to be obfuscated by the stroke risk factors adjusted for in the analysis models, specifically blood pressure levels or antihypertensive medication use. The final research strategy considered, on a genome-wide scale, the idea that genetic variation may contribute to the occurrence of hypertension or stroke through a common etiologic pathway. Genomic regions were identified for which significant evidence of heterogeneity was observed among hypertensive sibpairs stratified by family history of stroke. Regions identified on chromosome 15 in African Americans, and on chromosome 13 in Whites and African Americans, suggest the presence of genes influencing hypertension and stroke susceptibility. Insight into the role of genetics in stroke is useful for the potential early identification of individuals at increased risk for stroke and for improved understanding of the etiology of the disease. The ultimate goal of these endeavors is to guide the development of therapeutic intervention and informed prevention in order to provide a lasting and positive impact on public health.
Abstract:
The main objective of this study was to develop and validate a computer-based statistical algorithm, based on a multivariable logistic model, that can be translated into a simple scoring system in order to ascertain stroke cases from hospital admission medical records data. This algorithm, the Risk Index Score (RISc), was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project. The validity of the RISc was evaluated by estimating the concordance of scoring-system stroke ascertainment with stroke ascertainment accomplished by physician review of hospital admission records. The goal of this study was to develop a rapid, simple, efficient, and accurate method to ascertain the incidence of stroke from routine hospital admission records for epidemiologic investigations.
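A hedged sketch of the general idea of translating a fitted multivariable logistic model into a simple point score; the coefficients and predictors below are hypothetical, not the actual RISc items.

```python
import numpy as np

# Hypothetical fitted logistic regression coefficients (log-odds scale)
coef = {"facial_droop": 1.9, "speech_deficit": 1.4,
        "limb_weakness": 1.1, "age_over_75": 0.6}

def to_points(coef, base=None):
    """Scale coefficients by the smallest one and round to integers,
    a common way to turn a logistic model into a bedside score."""
    base = base or min(abs(v) for v in coef.values())
    return {k: int(round(v / base)) for k, v in coef.items()}

points = to_points(coef)
print(points)                       # e.g. {'facial_droop': 3, ...}

# Scoring a hypothetical admission record (1 = feature present)
record = {"facial_droop": 1, "speech_deficit": 0, "limb_weakness": 1, "age_over_75": 1}
score = sum(points[k] * record[k] for k in points)
print("score =", score)             # compare against a validated cutoff
```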
Abstract:
Hierarchically clustered populations are often encountered in public health research, but the traditional methods used in analyzing this type of data are not always adequate. In the case of survival time data, more appropriate methods have only begun to surface in the last couple of decades. Such methods include multilevel statistical techniques which, although more complicated to implement than traditional methods, are more appropriate. One population that is known to exhibit a hierarchical structure is that of patients who utilize the health care system of the Department of Veterans Affairs, where patients are grouped not only by hospital but also by geographic network (VISN). This project analyzes survival time data sets housed at the Houston Veterans Affairs Medical Center Research Department using two different Cox proportional hazards regression models: a traditional model and a multilevel model. VISNs that exhibit significantly higher or lower survival rates than the rest are identified separately for each model. In this particular case, although there are differences in the results of the two models, they are not large enough to warrant using the more complex multilevel technique. This is shown by the small estimates of variance associated with levels two and three in the multilevel Cox analysis. Much of the difference exhibited in the identification of VISNs with high or low survival rates is attributable to computer hardware difficulties rather than to any significant improvements in the model.
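A rough sketch of the contrast being drawn, using synthetic data: an ordinary Cox model versus one that at least acknowledges VISN-level clustering (here via a cluster-robust variance in lifelines; the full multilevel/shared-frailty model fitted in the dissertation would require more specialized software).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n, n_visn = 500, 20
visn = rng.integers(0, n_visn, n)
frailty = rng.normal(0, 0.4, n_visn)[visn]          # shared VISN-level effect
age = rng.normal(65, 10, n)
haz = 0.02 * np.exp(0.03 * (age - 65) + frailty)
time = rng.exponential(1 / haz)
event = (time < 10).astype(int)                     # administrative censoring at 10 years
df = pd.DataFrame({"time": np.minimum(time, 10), "event": event,
                   "age": age, "visn": visn})

# Traditional Cox model, ignoring clustering
CoxPHFitter().fit(df.drop(columns="visn"), "time", "event").print_summary()

# Same model with VISN-clustered (sandwich) standard errors
CoxPHFitter().fit(df, "time", "event", cluster_col="visn").print_summary()
```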
Abstract:
Do siblings of centenarians tend to have longer life spans? To answer this question, the life spans of 184 siblings of 42 centenarians were evaluated. Two important questions have been addressed in analyzing the sibling data. First, a standard needs to be established to which the life spans of the 184 siblings are compared. In this report, an external reference population is constructed from the U.S. life tables. Its estimated mortality rates are treated as baseline hazards from which the relative mortality of the siblings is estimated. Second, the standard survival models, which assume independent observations, are invalid when correlation within families exists, underestimating the true variance. Three different methods that allow for such correlation are illustrated. First, the cumulative relative excess mortality between the siblings and their comparison group is calculated and used as an effective graphical tool, along with the product-limit estimator of the survival function. The variance estimator of the cumulative relative excess mortality is adjusted for the potential within-family correlation using a Taylor linearization approach. Second, approaches that adjust for the inflated variance are examined: an adjusted one-sample log-rank test using the design effect originally proposed by Rao and Scott in the correlated binomial or Poisson distribution setting, and the robust variance estimator derived from the log-likelihood function of a multiplicative model. Neither of these two approaches provides a correlation estimate within families, but the comparison with the standard remains valid under dependence. Last, using the frailty model concept, the multiplicative model, in which the baseline hazards are known, is extended by adding a random frailty term based on the positive stable or the gamma distribution. Comparisons between the two frailty distributions are performed by simulation. Based on the results from the various approaches, it is concluded that the siblings of centenarians had significantly lower mortality rates compared with their cohorts. The frailty models also indicate significant correlations between the life spans of the siblings.
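A minimal sketch of the first building block above — relative mortality estimated against known baseline hazards from a reference life table — using entirely hypothetical hazards and sibling follow-up data (the ratio of observed to expected deaths is the simplest such estimator).

```python
import numpy as np

# Hypothetical annual mortality hazards by age from a reference life table
ages = np.arange(50, 101)
baseline_hazard = 0.005 * np.exp(0.085 * (ages - 50))   # illustrative Gompertz-like rates

def expected_deaths(entry_age, exit_age):
    """Cumulative baseline hazard accumulated by one sibling between
    entry and exit (death or censoring) age."""
    mask = (ages >= entry_age) & (ages < exit_age)
    return baseline_hazard[mask].sum()

# Hypothetical siblings: (entry age, exit age, died 0/1)
siblings = [(50, 92, 1), (50, 88, 0), (50, 95, 1), (50, 97, 1)]
observed = sum(d for _, _, d in siblings)
expected = sum(expected_deaths(a0, a1) for a0, a1, _ in siblings)
print("relative mortality (O/E) = %.2f" % (observed / expected))
```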
Abstract:
The purpose of this study is to investigate the effects of predictor-variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing data on predictor variables are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Subsequent to the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the average means, variances, and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, which in part is due to the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data. With all data types, a fully observed variable included along with the variables subject to missingness in the multiple imputation process and subsequent statistical analysis produced liberal (larger than nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
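A compact sketch of the type of simulation described, with hypothetical data-generating values: impose missingness on one predictor, impute m times, pool the logistic regression coefficient with Rubin's rules, and count rejections of a true null. The dissertation imputes under the multivariate normal, multinomial, and general location models; a chained-equations imputer is used here only for brevity.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
n, m, reps, alpha = 100, 5, 200, 0.05
rejections = 0

for r in range(reps):
    # Correlated continuous predictors; the outcome depends only on x2,
    # so rejecting the x1 coefficient is a Type I error.
    x = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n)
    y = rng.binomial(1, 1 / (1 + np.exp(-0.5 * x[:, 1])))
    x_miss = x.copy()
    x_miss[rng.uniform(size=n) < 0.3, 0] = np.nan        # 30% MCAR in x1

    est, var = [], []
    for k in range(m):                                   # m imputations
        imp = IterativeImputer(sample_posterior=True, random_state=k)
        xk = imp.fit_transform(np.column_stack([x_miss, y]))[:, :2]
        fit = sm.Logit(y, sm.add_constant(xk)).fit(disp=0)
        est.append(fit.params[1]); var.append(fit.bse[1] ** 2)

    qbar, ubar, b = np.mean(est), np.mean(var), np.var(est, ddof=1)
    t_var = ubar + (1 + 1 / m) * b                        # Rubin's rules
    p = 2 * stats.norm.sf(abs(qbar) / np.sqrt(t_var))
    rejections += p < alpha

print("estimated Type I error: %.3f" % (rejections / reps))
```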
Abstract:
The current study investigated data quality and estimated cancer incidence and mortality rates using data provided by the Pavlodar, Semipalatinsk, and Ust-Kamenogorsk Regional Cancer Registries of Kazakhstan during the period 1996–1998. Assessment of data quality was performed using standard quality indicators, including internal database checks, the proportion of cases verified from death certificates only, the mortality:incidence ratio, data patterns, the proportion of cases with unknown primary site, and the proportion of cases with unknown age. Crude and age-adjusted incidence and mortality rates and 95% confidence intervals were calculated, by gender, for all cancers combined and for 28 specific cancer sites for each year of the study period. The five most frequent cancers were identified and described for every population. The results of the study provide the first simultaneous assessment of data quality and standardized incidence and mortality rates for Kazakh cancer registries.
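As a reference for the rate calculations mentioned above, a minimal sketch of a crude and a directly age-standardized rate with a normal-approximation 95% confidence interval; the case counts, person-years, and standard population weights are hypothetical, not registry data.

```python
import numpy as np

# Hypothetical age bands: case counts, person-years, and standard
# population weights (weights sum to 1)
cases   = np.array([4, 12, 35, 80, 130], dtype=float)
pyears  = np.array([52000, 48000, 41000, 30000, 18000], dtype=float)
weights = np.array([0.30, 0.27, 0.22, 0.14, 0.07])

crude = cases.sum() / pyears.sum() * 1e5
adj   = np.sum(weights * cases / pyears) * 1e5
# Poisson-based variance of the directly standardized rate (per 100,000)
var   = np.sum(weights**2 * cases / pyears**2) * 1e10
ci    = adj - 1.96 * np.sqrt(var), adj + 1.96 * np.sqrt(var)
print("crude %.1f, age-adjusted %.1f (95%% CI %.1f-%.1f) per 100,000"
      % (crude, adj, *ci))
```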
Abstract:
Background. Diabetes places a significant burden on the health care system. Reduction in blood glucose levels (HbA1c) reduces the risk of complications; however, little is known about the impact of disease management programs on medical costs for patients with diabetes. In 2001, economic costs associated with diabetes totaled $100 billion, and indirect costs totaled $54 billion. Objective. To compare outcomes of nurse case management using treatment algorithms with conventional primary care for glycemic control and cardiovascular risk factors in type 2 diabetic patients in a low-income Mexican American community-based setting, and to compare the cost effectiveness of the two programs. Patient compliance was also assessed. Research design and methods. An observational group comparison to evaluate a treatment intervention for type 2 diabetes management was implemented at three outpatient health facilities in San Antonio, Texas. All eligible type 2 diabetic patients attending the clinics during 1994–1996 became part of the study. Data were obtained from the study database, medical records, hospital accounting, and pharmacy cost lists, and entered into a computerized database. Three groups were compared: a Community Clinic Nurse Case Manager (CC-TA) following treatment algorithms, a University Clinic Nurse Case Manager (UC-TA) following treatment algorithms, and Primary Care Physicians (PCP) following conventional care practices at a Family Practice Clinic. The algorithms provided a disease management model, specifically for hyperglycemia, dyslipidemia, hypertension, and microalbuminuria, that progressively moved the patient toward ideal goals through adjustments in medication, self-monitoring of blood glucose, meal planning, and reinforcement of diet and exercise. Cost effectiveness with respect to final hemoglobin A1c endpoints was compared. Results. There were 358 patients analyzed: 106 in the CC-TA group, 170 in the UC-TA group, and 82 in the PCP group. Change in hemoglobin A1c (HbA1c) was the primary outcome measured. HbA1c results are presented at baseline, 6, and 12 months for CC-TA (10.4%, 7.1%, 7.3%), UC-TA (10.5%, 7.1%, 7.2%), and PCP (10.0%, 8.5%, 8.7%). Mean patient compliance was 81%. Levels of cost effectiveness were significantly different between clinics. Conclusion. Nurse case management with treatment algorithms significantly improved glycemic control in patients with type 2 diabetes and was more cost effective.
Abstract:
Genetic anticipation is defined as a decrease in age of onset or an increase in severity as the disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy, and Fragile X Syndrome, was shown to be caused by expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g., Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia), and other complex diseases. Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. Therefore, we have developed family-based likelihood modeling approaches to model the underlying transmission of the disease gene and the penetrance function and hence detect anticipation. These methods can be applied in extended families, thus improving the power to detect anticipation compared with existing methods based only upon parents and children. The first method we propose is based on the regressive logistic hazard model. This approach models anticipation by a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet repeat diseases, in which the disease alleles can become more deleterious as they are transmitted across generations. To evaluate the new methods, we performed extensive simulation studies on data simulated under different conditions to assess the effectiveness of the algorithms in detecting genetic anticipation. Analysis by the first method yielded empirical power greater than 87%, based on the 5% Type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible due to the current formulation of the software. The application of this method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence for a generation effect in both cases.
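A simplified sketch of the first idea: a discrete-time logistic hazard for age at onset with a generational covariate, fitted on person-period records. This is only a stand-in for the regressive logistic hazard model, which additionally models within-family dependence; the data below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
rows = []
for person in range(300):
    gen = rng.integers(1, 4)                      # generation I, II, or III
    for age in range(20, 80):                     # person-period records
        # Anticipation built in: hazard rises with generation (synthetic)
        logit = -9.0 + 0.08 * age + 0.5 * (gen - 1)
        onset = rng.uniform() < 1 / (1 + np.exp(-logit))
        rows.append((person, age, gen, int(onset)))
        if onset:
            break
df = pd.DataFrame(rows, columns=["id", "age", "generation", "onset"])

X = sm.add_constant(df[["age", "generation"]])
fit = sm.Logit(df["onset"], X).fit(disp=0)
print(fit.params)   # a positive generation coefficient suggests anticipation
```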
Abstract:
Coronary artery disease (CAD) is a multifactorial disease process involving behavioral, inflammatory, clinical, thrombotic, and genetic components. Previous epidemiologic studies focused on identifying behavioral and demographic risk factors of CAD, but none focused on platelets. The current platelet literature lacks information on the effects of platelet function and platelet receptor polymorphisms on CAD. This case-control analysis addressed these issues by analyzing data collected for a previous study. Cases were individuals who had undergone CABG and thus had been diagnosed with CAD, while the controls were volunteers presumed to be CAD free. The platelet function variables analyzed included fibrinogen, Von Willebrand Factor activity (VWF), shear-induced platelet aggregation (SIPA), sCD40L, and mean platelet volume; the platelet polymorphisms studied included PIA, α2 807, Ko, Kozak, and VNTR. Univariate analysis found fibrinogen, VWF, SIPA, and PIA to be independent risk factors for CAD. Logistic regression was used to build a predictive model for CAD using the platelet function and platelet polymorphism data, adjusted for age, sex, race, and current smoking status. A model containing only the platelet polymorphisms and their respective receptor densities found polymorphisms within GPIbα to be associated with CAD, yielding an 86% (95% C.I. 0.97–3.55) increased risk in the presence of at least one polymorphism in Ko, Kozak, or VNTR. Another model included both the platelet function and platelet polymorphism data. Fibrinogen, the receptor density of GPIbα, and the polymorphism in GPIa-IIa (α2 807) were all associated with CAD, with odds ratios of 1.10, 1.04, and 2.30 for fibrinogen (10 mg/dl increase), GPIbα receptors (1 MFI increase), and GPIa-IIa, respectively. In addition, risk estimates and 99% confidence intervals adjusted for race were calculated to determine whether the presence of a platelet receptor polymorphism was associated with CAD. The results were as follows: PIA (1.64, 0.74–3.65); α2 807 (1.35, 0.77–2.37); Ko (1.71, 0.70–4.16); Kozak (1.17, 0.54–2.52); and VNTR (1.24, 0.52–2.91). Although not statistically significant, all platelet polymorphisms were associated with an increased risk for CAD. These exploratory findings indicate that platelets do appear to have a role in atherosclerosis and that anti-platelet drugs targeting GPIa-IIa and GPIbα may be better treatment candidates for individuals with CAD.
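A hedged sketch of the kind of adjusted logistic model described above, with synthetic data and hypothetical variable names (the effect sizes here are arbitrary and do not reproduce the study's estimates); exponentiated coefficients give the adjusted odds ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "fibrinogen": rng.normal(300, 60, n),          # mg/dl (synthetic)
    "gp1ba_poly": rng.binomial(1, 0.3, n),         # any Ko/Kozak/VNTR variant
    "age": rng.normal(60, 10, n),
    "male": rng.binomial(1, 0.6, n),
})
logit = -6 + 0.01 * df["fibrinogen"] + 0.6 * df["gp1ba_poly"] + 0.04 * df["age"]
df["cad"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["fibrinogen", "gp1ba_poly", "age", "male"]])
fit = sm.Logit(df["cad"], X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios per unit; use exp(10 * coef) for a 10 mg/dl increase
```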
Abstract:
Improvements in the analysis of microarray images are critical for accurately quantifying gene expression levels. The acquisition of accurate spot intensities directly influences the results and interpretation of statistical analyses. This dissertation discusses the implementation of a novel approach to the analysis of cDNA microarray images. We use a stellar photometric model, the Moffat function, to quantify microarray spots from nylon microarray images. The inherent flexibility of the Moffat shape model makes it ideal for quantifying microarray spots. We apply our novel approach to a Wilms' tumor microarray study and compare our results with a fixed-circle segmentation approach for spot quantification. Our results suggest that different spot feature extraction methods can have an impact on the ability of statistical methods to identify differentially expressed genes. We also used the Moffat function to simulate a series of microarray images under various experimental conditions. These simulations were used to validate the performance of various statistical methods for identifying differentially expressed genes. Our simulation results indicate that tests taking into account the dependency between mean spot intensity and variance estimation, such as the smoothened t-test, can better identify differentially expressed genes, especially when the number of replicates and the mean fold change are low. The analysis of the simulations also showed that, overall, a rank-sum test (Mann-Whitney) performed well at identifying differentially expressed genes. Previous work has suggested the strengths of nonparametric approaches for identifying differentially expressed genes. We also show that multivariate approaches, such as hierarchical and k-means cluster analysis along with principal components analysis, are only effective at classifying samples when replicate numbers and mean fold changes are high. Finally, we show how our stellar shape model approach can be extended to the analysis of 2D-gel images by adapting the Moffat function to take into account the elliptical nature of spots in such images. Our results indicate that stellar shape models offer a previously unexplored approach for the quantification of 2D-gel spots.
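For orientation, a minimal sketch of a circularly symmetric 2D Moffat profile fitted by least squares to a synthetic spot (the image, noise model, and starting values are illustrative only; the dissertation's pipeline and the elliptical 2D-gel extension involve more than this).

```python
import numpy as np
from scipy.optimize import curve_fit

def moffat2d(coords, amp, x0, y0, alpha, beta, background):
    """Circular 2D Moffat profile: amp * (1 + r^2/alpha^2)^(-beta) + background."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return (amp * (1 + r2 / alpha**2) ** (-beta) + background).ravel()

# Synthetic 21x21 spot with Poisson noise (illustrative only)
yy, xx = np.mgrid[0:21, 0:21]
truth = moffat2d((xx, yy), 800, 10.3, 9.7, 3.0, 2.5, 50).reshape(21, 21)
img = np.random.default_rng(5).poisson(truth).astype(float)

p0 = [img.max(), 10, 10, 2.0, 2.0, np.median(img)]
popt, _ = curve_fit(moffat2d, (xx, yy), img.ravel(), p0=p0)
print("fitted amplitude %.0f, alpha %.2f, beta %.2f" % (popt[0], popt[3], popt[4]))
# Spot intensity can then be taken from the fitted, background-subtracted profile.
```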
Abstract:
When conducting a randomized comparative clinical trial, ethical, scientific, or economic considerations often motivate the use of interim decision rules after successive groups of patients have been treated. These decisions may pertain to the comparative efficacy or safety of the treatments under study, cost considerations, the desire to accelerate the drug evaluation process, or the likelihood of therapeutic benefit for future patients. At the time of each interim decision, an important question is whether patient enrollment should continue or be terminated, either due to a high probability that one treatment is superior to the other or a low probability that the experimental treatment will ultimately prove to be superior. The use of frequentist group sequential decision rules has become routine in the conduct of phase III clinical trials. In this dissertation, we present a new Bayesian decision-theoretic approach to the problem of designing a randomized group sequential clinical trial, focusing on two-arm trials with time-to-failure outcomes. Forward simulation is used to obtain optimal decision boundaries for each of a set of possible models. At each interim analysis, we use Bayesian model selection to adaptively choose the model having the largest posterior probability of being correct, and we then make the interim decision based on the boundaries that are optimal under the chosen model. We provide a simulation study to compare this method, which we call Bayesian Doubly Optimal Group Sequential (BDOGS), to corresponding frequentist designs using either O'Brien-Fleming (OF) or Pocock boundaries, as obtained from EaSt 2000. Our simulation results show that, over a wide variety of cases, BDOGS either performs at least as well as both OF and Pocock or, on average, provides a much smaller trial.
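A much-simplified sketch of the Bayesian interim comparison underlying such designs: exponential event times with conjugate Gamma priors on the arm-specific hazards and a single posterior-probability stopping threshold. BDOGS adds forward-simulated optimal boundaries and adaptive model selection, none of which appear here; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def posterior_prob_superior(events_e, exposure_e, events_c, exposure_c,
                            a0=1.0, b0=1.0, draws=20000):
    """P(hazard_E < hazard_C | data) under Gamma(a0, b0) priors and
    exponential event times (Gamma posterior by conjugacy)."""
    lam_e = rng.gamma(a0 + events_e, 1 / (b0 + exposure_e), draws)
    lam_c = rng.gamma(a0 + events_c, 1 / (b0 + exposure_c), draws)
    return np.mean(lam_e < lam_c)

# Hypothetical interim data after each successive patient group:
# (events_E, exposure_E, events_C, exposure_C) in person-years
interim_data = [(4, 60.0, 7, 58.0), (9, 130.0, 16, 125.0), (13, 210.0, 27, 200.0)]
for k, d in enumerate(interim_data, 1):
    p = posterior_prob_superior(*d)
    if p > 0.975:
        print(f"interim {k}: stop, experimental arm superior (p={p:.3f})"); break
    elif p < 0.025:
        print(f"interim {k}: stop for inferiority/futility (p={p:.3f})"); break
    else:
        print(f"interim {k}: continue enrollment (p={p:.3f})")
```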