10 results for variance and coherence
in DigitalCommons@The Texas Medical Center
Abstract:
The electroencephalogram (EEG) is a physiological time series that measures electrical activity at different locations in the brain, and it plays an important role in epilepsy research. Exploring its variance and/or volatility may yield insights for seizure prediction, seizure detection, and seizure propagation/dynamics. Maximal Overlap Discrete Wavelet Transforms (MODWTs) and ARMA-GARCH models were used to determine the variance and volatility characteristics of 66 channels for four states of an epileptic EEG: sleep, awake, sleep-to-awake, and seizure. The wavelet variances, changes in wavelet variances, and volatility half-lives for the four states were compared for possible differences between seizure and non-seizure channels. Based on 95% confidence intervals for the pre-seizure and awake signals, the half-lives of two of the three seizure channels were found to be shorter than those of all of the non-seizure channels. No discernible patterns were found in the wavelet variances at the change points for the different signals.
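The scale-by-scale wavelet variance and the GARCH volatility half-life described above can be sketched in Python. This is a minimal illustration assuming a Haar wavelet and a GARCH(1,1) persistence of alpha + beta; the thesis's actual wavelet filter, decomposition levels, and GARCH orders are not stated in the abstract.

```python
import numpy as np

def haar_modwt_detail(x, level):
    # Level-j MODWT detail coefficients for the Haar wavelet, computed by
    # circular convolution with the equivalent filter: +1/2^j on the first
    # 2^(j-1) taps and -1/2^j on the remaining 2^(j-1) taps.
    n = len(x)
    half = 2 ** (level - 1)
    h = np.concatenate([np.full(half, 1.0 / (2 * half)),
                        np.full(half, -1.0 / (2 * half))])
    idx = (np.arange(n)[:, None] - np.arange(2 * half)[None, :]) % n
    return x[idx] @ h

def wavelet_variance(x, levels=4):
    # Wavelet variance at each level: the sample variance of the detail
    # coefficients. The levels decompose the process variance by time scale.
    return {j: float(np.var(haar_modwt_detail(x, j)))
            for j in range(1, levels + 1)}

def garch_half_life(alpha, beta):
    # GARCH(1,1) volatility half-life: number of periods for a variance
    # shock to decay by half, ln(1/2) / ln(alpha + beta).
    return np.log(0.5) / np.log(alpha + beta)
```

Comparing these per-channel quantities across the four EEG states is then a matter of applying the functions channel by channel.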
Abstract:
Radiotherapy involving the thoracic cavity and chemotherapy with the drug bleomycin are both dose-limited by the development of pulmonary fibrosis. Based on evidence of variation in the population in susceptibility to pulmonary fibrosis, together with animal data, it was hypothesized that individual variation in susceptibility to bleomycin-induced, or radiation-induced, pulmonary fibrosis is in part genetically controlled. In this thesis, a three-generation mouse genetic model of C57BL/6J (fibrosis-prone) and C3Hf/Kam (fibrosis-resistant) mouse strains and F1 and F2 (F1 intercross) progeny derived from the parental strains was developed to investigate the genetic basis of susceptibility to fibrosis. In the bleomycin studies, the mice received 100 mg/kg (125 for females) of bleomycin via mini-osmotic pump. The animals were sacrificed at eight weeks following treatment or when their breathing rate indicated respiratory distress. In the radiation studies, the mice were given a single dose of 14 or 16 Gy (Co-60) to the whole thorax and were sacrificed when moribund. The phenotype was defined as the percent fibrosis area in the left lung, quantified with image analysis of histological sections. Quantitative trait locus (QTL) mapping was used to identify the chromosomal location of genes contributing to the susceptibility to bleomycin-induced pulmonary fibrosis of C57BL/6J mice compared to C3Hf/Kam mice, and to determine whether the QTLs influencing susceptibility to bleomycin-induced lung fibrosis in these progenitor strains could be implicated in susceptibility to radiation-induced lung fibrosis. For bleomycin, a genome-wide scan revealed QTLs on chromosome 17, at the MHC (LOD = 11.7 for males and 7.2 for females), accounting for approximately 21% of the phenotypic variance, and on chromosome 11 (LOD = 4.9), in male mice only, adding 8% of phenotypic variance.
The bleomycin QTL on chromosome 17 was also implicated in susceptibility to radiation-induced fibrosis (LOD = 5.0), contributing 7% of the phenotypic variance in the radiation study. In conclusion, susceptibility to both bleomycin-induced and radiation-induced pulmonary fibrosis is a heritable trait, influenced by a genetic factor that maps to a genomic region containing the MHC.
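As a sketch of the computation underlying the reported LOD scores and percent-variance figures, a single-marker LOD score and the variance explained it implies can be written as follows. The normal-errors genotype-means model and the conversion 1 − 10^(−2·LOD/n) are standard QTL-mapping conventions, not details taken from the thesis.

```python
import numpy as np

def lod_score(pheno, geno):
    # Single-marker QTL LOD score: log10 likelihood ratio of a model with
    # separate genotype means versus a single overall mean (normal errors),
    # expressed via residual sums of squares.
    n = len(pheno)
    rss0 = float(np.sum((pheno - pheno.mean()) ** 2))
    rss1 = sum(float(np.sum((pheno[geno == g] - pheno[geno == g].mean()) ** 2))
               for g in np.unique(geno))
    return (n / 2) * np.log10(rss0 / rss1)

def variance_explained(lod, n):
    # Fraction of phenotypic variance attributable to the QTL implied by a
    # LOD score in a sample of size n: 1 - 10^(-2*LOD/n).
    return 1 - 10 ** (-2 * lod / n)
```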
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance-component approach, to find loci of a given effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs. Nine linkage disequilibrium tests were examined by simulation. Five tests involved selecting isolated unrelated individuals, while four involved selecting parent-child trios (TDT). All nine tests were found to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects. When the trait locus had more than two trait alleles, the power of the tests plateaued below one.
For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 − heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, making the marker uninformative for disequilibrium. The five tests using isolated unrelated individuals showed excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major-gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT (Transmission Disequilibrium Test)-based tests were not subject to any increase in error rates. For all sample ascertainment costs, for recent mutations (<100 generations), linkage disequilibrium tests were less expensive to carry out than the variance-component test. Candidate-gene scans saved even more, and the use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
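The TDT statistic used by the trio-based tests has a very simple closed form; a minimal sketch, where the transmission counts b and c are hypothetical inputs:

```python
def tdt_chi2(b, c):
    # Transmission Disequilibrium Test statistic. Among heterozygous
    # parents, b transmitted the candidate allele to the affected child
    # and c transmitted the other allele; under the null of no linked
    # disequilibrium, (b - c)^2 / (b + c) follows chi-square with 1 df.
    return (b - c) ** 2 / (b + c)
```

Because the statistic conditions on parental transmissions, it stays valid under population admixture, which matches the abstract's finding that the TDT-based tests showed no inflated error rates.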
Abstract:
Coronary heart disease remains the leading cause of death in the United States, and increased blood cholesterol level has been found to be a major risk factor with roots in childhood. Tracking of cholesterol, i.e., the tendency to maintain a particular cholesterol level relative to the rest of the population, and variability in blood lipid levels with increasing age have implications for cholesterol screening and for assessing lipid levels in children to prevent a further rise and, ultimately, adult heart disease. In this study, the pattern of change in plasma lipids over time and their tracking were investigated. Also investigated were within-person variance and retest reliability, defined as the square root of within-person variance, for plasma total cholesterol, HDL-cholesterol, LDL-cholesterol, and triglycerides, and their relation to age, sex, and body mass index among participants from age 8 to 18 years. In Project HeartBeat!, 678 healthy children aged 8, 11, and 14 years at baseline were enrolled and examined at four-month intervals for up to 4 years. We examined the relationship between repeated observations by Pearson's correlations. Age- and sex-specific quintiles were calculated, and the probability that participants remained in the uppermost quintile of their respective distribution was evaluated with life-table methods. Plasma total cholesterol, HDL-C, and LDL-C at baseline were strongly and significantly correlated with measurements at subsequent visits across the sex and age groups. Plasma triglyceride at baseline was also significantly correlated with subsequent measurements, but less strongly than the other plasma lipids. The probability of remaining in the uppermost quintile was also high (60 to 70%) for plasma total cholesterol, HDL-C, and LDL-C.
We used a mixed longitudinal, or synthetic cohort, design with continuous observations from age 8 to 18 years to estimate the within-person variance of plasma total cholesterol, HDL-C, LDL-C, and triglycerides. A total of 5,809 measurements were available for both cholesterol and triglycerides. A multilevel linear model was used. Within-person variance among repeated measures over up to four years of follow-up was estimated separately for total cholesterol, HDL-C, LDL-C, and triglycerides. The relationship of within-person and inter-individual variance with age, sex, and body mass index was evaluated. Likelihood ratio tests were conducted by comparing −2 log(likelihood) between the basic model and alternative models. The square root of within-person variance provided the retest reliability (within-person standard deviation) for plasma total cholesterol, HDL-C, LDL-C, and triglycerides. We found 13.6 percent retest reliability for plasma cholesterol, 6.1 percent for HDL-cholesterol, 11.9 percent for LDL-cholesterol, and 32.4 percent for triglycerides. Retest reliability of plasma lipids was significantly related to age and body mass index, increasing with both. These findings have implications for screening guidelines, as participants in the uppermost quintile tended to maintain their status in each of the age groups during a four-year follow-up. The magnitude of within-person variability of plasma lipids influences the ability to classify children into the risk categories recommended by the National Cholesterol Education Program.
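The within-person variance component, and the within-person standard deviation derived from it, can be sketched with a simple one-way ANOVA-style estimator. This is an illustrative stand-in for the multilevel linear model actually fitted in the study, ignoring the age, sex, and body-mass-index terms.

```python
import numpy as np

def within_person_variance(values, subject):
    # Pooled squared deviations of each measurement around its own
    # subject's mean, divided by the residual degrees of freedom: a
    # one-way estimate of the within-person variance component.
    values, subject = np.asarray(values, float), np.asarray(subject)
    subjects = np.unique(subject)
    ss = sum(float(np.sum((values[subject == s] - values[subject == s].mean()) ** 2))
             for s in subjects)
    sigma2 = ss / (len(values) - len(subjects))
    # Return the variance and its square root, the within-person SD
    # ("retest reliability" in the abstract's terminology).
    return sigma2, float(np.sqrt(sigma2))
```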
Abstract:
The purpose of this study is to investigate the effects of predictor-variable correlations and patterns of missingness with dichotomous and/or continuous data in small samples when missing data are multiply imputed. Missing predictor data are multiply imputed under three different multivariate models: the multivariate normal model for continuous data, the multinomial model for dichotomous data, and the general location model for mixed dichotomous and continuous data. Following the multiple imputation process, Type I error rates of the regression coefficients obtained with logistic regression analysis are estimated under various conditions of correlation structure, sample size, type of data, and pattern of missing data. The distributional properties of the mean, variance, and correlations among the predictor variables are assessed after the multiple imputation process. For continuous predictor data under the multivariate normal model, Type I error rates are generally within the nominal values with samples of size n = 100. Smaller samples of size n = 50 resulted in more conservative estimates (i.e., lower than the nominal value). Correlation and variance estimates of the original data are retained after multiple imputation with less than 50% missing continuous predictor data. For dichotomous predictor data under the multinomial model, Type I error rates are generally conservative, in part because of the sparseness of the data. The correlation structure of the predictor variables is not well retained in multiply imputed data from small samples with more than 50% missing data under this model. For mixed continuous and dichotomous predictor data, the results are similar to those found under the multivariate normal model for continuous data and under the multinomial model for dichotomous data.
With all data types, including a fully observed variable alongside the variables subject to missingness in the multiple imputation process and subsequent statistical analysis produced liberal (larger-than-nominal) Type I error rates under a specific pattern of missing data. It is suggested that future studies focus on the effects of multiple imputation in multivariate settings with more realistic data characteristics and a variety of multivariate analyses, assessing both Type I error and power.
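After imputation, the m completed-data analyses are combined with Rubin's rules before testing. A minimal sketch follows; this pooling step is standard multiple-imputation methodology rather than a detail specific to this thesis.

```python
import numpy as np

def rubin_pool(estimates, variances):
    # Rubin's rules for pooling m completed-data analyses of the same
    # parameter: the pooled point estimate is the mean of the m estimates,
    # and the total variance combines the within-imputation variance W
    # and the between-imputation variance B as T = W + (1 + 1/m) * B.
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    qbar = float(estimates.mean())
    w = float(variances.mean())
    b = float(estimates.var(ddof=1))
    return qbar, w + (1 + 1 / m) * b
```

Type I error rates such as those studied here come from testing qbar against its pooled variance T across many simulated data sets.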
Abstract:
Trauma and severe head injuries are important issues because they are prevalent, because they occur predominantly in the young, and because variations in clinical management may matter. Trauma is the leading cause of death for those under age 40. The focus of this head injury study is to determine whether variations in time from the scene of the accident to a trauma center hospital make a difference in patient outcomes. A trauma registry is maintained in the Houston-Galveston area and includes all patients admitted to any one of three trauma center hospitals with mild or severe head injuries. A study cohort derived from the Registry includes 254 severe head injury cases from 1980 with a Glasgow Coma Score of 8 or less. Multiple influences relate to patient outcomes from severe head injury. Two primary variables and four confounding variables were identified: time to emergency room, time to intubation, patient age, severity of injury, type of injury, and mode of transport to the emergency room. Regression analysis, analysis of variance, and chi-square analysis were the principal statistical methods used. Analysis indicates that within an urban setting, over a four-hour time span, variations in time to emergency room have no strong influence on, or predictive value for, patient outcome. However, the data suggest that longer time periods have a negative influence on outcomes. Age is influential only when the older group (55-64) is included. Mode of transport (helicopter or ambulance) showed no significant difference in outcome. In a multivariate regression model, outcomes are influenced primarily by severity of injury and age, which together explain 36% (R²) of the variance.
Adding time to emergency room, time to intubation, transport mode, and type of injury contributes only an additional 4% (R²) to the explained variation in patient outcome. The research concludes that, since the group most at risk of head trauma is the young adult male involved in automobile/motorcycle accidents, more may be gained by modifying driving habits and other preventive measures. Continuous clinical and evaluative research is required to provide updated clinical wisdom in patient management and trauma treatment protocols. A National Institute of Trauma may be needed to develop a national public policy and evaluate the many medical, behavioral, and social changes required to cope with the country's number 3 killer and the primary killer of young adults.
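The incremental-R² comparison reported above can be sketched by fitting nested ordinary least-squares models and differencing their R² values. This is an illustrative sketch only; the study's actual outcome coding and covariates are not reproduced here.

```python
import numpy as np

def r_squared(X, y):
    # R^2 of an ordinary least-squares fit with an intercept column added.
    # Fitting a base predictor set, then the base set plus extra
    # predictors, and differencing the two R^2 values gives the
    # incremental contribution of the added predictors.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    tss = float((y - y.mean()) @ (y - y.mean()))
    return 1 - float(resid @ resid) / tss
```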
Abstract:
A non-parametric method was developed and tested to compare the partial areas under two correlated Receiver Operating Characteristic (ROC) curves. Based on the theory of generalized U-statistics, mathematical formulas were derived for computing the ROC area and the variance and covariance between portions of two ROC curves. A practical SAS application was also developed to facilitate the calculations. The accuracy of the non-parametric method was evaluated by comparison with other methods. Applied to data from a published ROC analysis of CT images, our method gave results very close to the published ones. A hypothetical example was used to demonstrate the effect of two crossing ROC curves: the two total ROC areas are the same, yet each portion of the area between the two curves was found to be significantly different by the partial ROC curve analysis. For large-scale ROC computations, such as those arising from a logistic regression model, we applied our method to a breast cancer study with Medicare claims data; it yielded the same ROC area as the SAS LOGISTIC procedure. Our method also provides an alternative to the global summary of ROC area comparison by directly comparing the true-positive rates of two regression models and by determining the range of false-positive values where the models differ.
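The ROC area at the heart of this method is a two-sample U-statistic; a minimal sketch of that estimator follows (the partial-area variance and covariance formulas themselves are not reproduced here).

```python
import numpy as np

def auc_u_statistic(pos, neg):
    # Nonparametric ROC area as a generalized U-statistic: the proportion
    # of (diseased, non-diseased) score pairs that are correctly ordered,
    # with ties counted as one half.
    pos = np.asarray(pos, float)
    neg = np.asarray(neg, float)
    diff = pos[:, None] - neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())
```

Restricting the pair comparisons to a band of false-positive rates gives the partial-area analogue the abstract compares between curves.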
Abstract:
Background: Research into methods for recovery from exercise-induced fatigue is a popular topic in sports medicine, kinesiology, and physical therapy. However, both the quantity and quality of studies are lacking, as is a clear solution for recovery. An analysis of the statistical methods in the existing literature on performance recovery can enhance the quality of research and provide guidance for future studies. Methods: A literature review was performed using the SCOPUS, SPORTDiscus, MEDLINE, CINAHL, Cochrane Library, and Science Citation Index Expanded databases to extract studies of performance recovery from exercise in humans. Original studies and their statistical analyses were examined for recovery methods including active recovery, cryotherapy/contrast therapy, massage therapy, diet/ergogenics, and rehydration. Results: The review produced a Research Design and Statistical Method Analysis Summary. Conclusion: Research design and statistical methods can be improved by following this summary table, which lists potential issues and suggested solutions, such as sample-size calculation, consideration of sport-specific and research-design issues, selection of populations and measurement markers, statistical methods for different analytical requirements, equality of variance and normality of data, post hoc analyses, and effect-size calculation.
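One of the recommended quantities, the effect size, can be sketched as Cohen's d with a pooled standard deviation. This is an illustrative choice of effect-size measure; the summary table itself is not reproduced in the abstract.

```python
import numpy as np

def cohens_d(a, b):
    # Cohen's d between two groups (e.g., recovery-method vs. control),
    # using the pooled sample standard deviation: a standardized effect
    # size to report alongside p-values.
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled
```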
Abstract:
A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology were presented. The conditional posterior density of the intraclass correlation coefficient was then derived, and estimation procedures related to this derivation were shown in detail. Three examples applying the conditional posterior density to specific data sets were also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient with more traditional estimators. The non-Bayesian methods of estimation used were analysis of variance and maximum likelihood for balanced data, and MIVQUE (Minimum Variance Quadratic Unbiased Estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project was that Bayesian estimates of the intraclass correlation coefficient can be appropriate, useful, and practical alternatives to traditional methods of estimation.
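The analysis-of-variance comparator mentioned above has a closed form for balanced data; a minimal sketch, with hypothetical group data:

```python
import numpy as np

def icc_anova(groups):
    # Classical one-way ANOVA estimator of the intraclass correlation for
    # balanced data: ICC = (MSB - MSW) / (MSB + (n - 1) * MSW), where MSB
    # and MSW are the between- and within-group mean squares.
    k = len(groups)              # number of groups
    n = len(groups[0])           # observations per group (balanced design)
    grand = np.mean([v for g in groups for v in g])
    msb = n * sum((np.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(float(np.sum((np.asarray(g) - np.mean(g)) ** 2))
              for g in groups) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)
```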
Abstract:
A discussion of nonlinear dynamics, illustrated by the familiar automobile, is followed by the development of a systematic method for analyzing a possibly nonlinear time series using difference equations in the general state-space format. This format allows recursive state-dependent parameter estimation after each observation, thereby revealing the dynamics inherent in the system in combination with random external perturbations. The one-step-ahead prediction errors at each time period, transformed to have constant variance, and the estimated parametric sequences provide the information to (1) formally test whether the time series observations y_t are some linear function of random errors ε_s, for some t and s, or whether the series would be more appropriately described by a nonlinear model such as a bilinear, exponential, or threshold model; (2) formally test whether a statistically significant change in structure/level has occurred, either historically or as it occurs; (3) forecast a nonlinear system with a new and innovative (but numerically very old) technique that uses rational functions to extrapolate individual parameters as smooth functions of time, which are then combined to obtain the forecast of y; and (4) suggest a measure of resilience, i.e., how much perturbation, whether internal or external to the system, a structure/level can tolerate and remain statistically unchanged. Although similar to one-step control, this provides a less rigid way to think about changes affecting social systems. Applications consisting of the analysis of some familiar and some simulated series demonstrate the procedure. Empirical results suggest that this state-space, or modified augmented Kalman filter, approach may provide interesting ways to identify particular kinds of nonlinearities as they occur in structural change via the state trajectory. A computational flow chart detailing the computations and software input and output is provided in the body of the text.
IBM Advanced BASIC program listings to accomplish most of the analysis are provided in the appendix.
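The one-step-ahead prediction errors, standardized to constant variance, can be sketched with a scalar Kalman filter. This assumes an AR(1) state model with illustrative parameters; the thesis's recursive state-dependent parameter estimation is more general than this fixed-parameter sketch.

```python
import numpy as np

def kalman_innovations(y, phi=0.9, q=0.1, r=1.0):
    # Scalar Kalman filter for the state model m_t = phi * m_{t-1} + noise
    # (state noise variance q, observation noise variance r, both
    # illustrative). Returns the one-step-ahead prediction errors divided
    # by their predicted standard deviation, i.e., standardized
    # innovations with constant (unit) variance under the model.
    m, p = 0.0, 1.0                       # state mean and variance
    out = []
    for obs in y:
        m_pred, p_pred = phi * m, phi * phi * p + q
        s = p_pred + r                    # innovation variance
        e = obs - m_pred                  # one-step-ahead prediction error
        out.append(e / np.sqrt(s))
        k = p_pred / s                    # Kalman gain
        m, p = m_pred + k * e, (1 - k) * p_pred
    return np.array(out)
```

Structural-change tests of the kind described above monitor this standardized innovation sequence for departures from white noise.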