8 results for Operation based method
in DigitalCommons@The Texas Medical Center
Abstract:
Genetic anticipation is defined as a decrease in age of onset or an increase in severity as a disorder is transmitted through subsequent generations. Anticipation has been noted in the literature for over a century. Recently, anticipation in several diseases, including Huntington's Disease, Myotonic Dystrophy, and Fragile X Syndrome, was shown to be caused by expansion of triplet repeats. Anticipation effects have also been observed in numerous mental disorders (e.g., Schizophrenia, Bipolar Disorder), cancers (Li-Fraumeni Syndrome, Leukemia), and other complex diseases.

Several statistical methods have been applied to determine whether anticipation is a true phenomenon in a particular disorder, including standard statistical tests and newly developed affected parent/affected child pair methods. These methods have been shown to be inappropriate for assessing anticipation for a variety of reasons, including familial correlation and low power. Therefore, we developed family-based likelihood modeling approaches that model the underlying transmission of the disease gene and the penetrance function and hence detect anticipation. These methods can be applied in extended families, improving the power to detect anticipation compared with existing methods based only on parents and children. The first method we propose is based on the regressive logistic hazard model and models anticipation with a generational covariate. The second method allows alleles to mutate as they are transmitted from parents to offspring and is appropriate for modeling the known triplet-repeat diseases, in which disease alleles can become more deleterious as they are transmitted across generations.

To evaluate the new methods, we performed extensive simulation studies on data simulated under different conditions to assess how effectively the algorithms detect genetic anticipation. Analysis by the first method yielded empirical power greater than 87%, based on the 5% type I error critical value identified in each simulation, depending on the method of data generation and the current-age criteria. Analysis by the second method was not possible because of the current formulation of the software. Application of the first method to Huntington's Disease and Li-Fraumeni Syndrome data sets revealed evidence of a generation effect in both cases.
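To make the modeling idea concrete, the sketch below shows a discrete-time logistic hazard regression with a generational covariate. It captures the spirit of the first method but is not the dissertation's software: it ignores censoring and familial correlation, and the simulated data, variable names, and parameter values are illustrative assumptions.

```python
# Minimal sketch (not the authors' software) of a discrete-time logistic
# hazard model in which anticipation is modeled by a generational covariate.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulate ages at onset that shift earlier in later generations (anticipation).
n = 200
generation = rng.integers(1, 4, size=n)                                  # generations 1..3
onset_age = np.clip(rng.normal(55 - 4 * generation, 6), 20, 80).round().astype(int)

# Expand to person-period (person-year) format: one row per year at risk,
# with 'event' = 1 only in the year of onset.
rows = []
for g, a in zip(generation, onset_age):
    for age in range(20, a + 1):
        rows.append({"age": age, "generation": g, "event": int(age == a)})
pp = pd.DataFrame(rows)

# Logistic hazard: logit P(event at age | at risk) = b0 + b1*age + b2*generation.
# A positive, significant generation coefficient indicates a higher hazard,
# and hence earlier onset, in later generations, i.e. anticipation.
X = sm.add_constant(pp[["age", "generation"]])
fit = sm.Logit(pp["event"], X).fit(disp=False)
print(fit.params, fit.pvalues, sep="\n")
```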
Abstract:
Background. In public health preparedness, disaster preparedness refers to the strategic planning of responses to all types of disasters. Preparation and training for disaster response can be conducted using different teaching modalities, ranging from discussion-based programs such as seminars, drills and tabletop exercises to more complex operation-based programs such as functional exercises and full-scale exercises. Each method of instruction has its advantages and disadvantages. Tabletop exercises are facilitated discussions designed to evaluate programs, policies, and procedures; they are usually conducted in a classroom, often with tabletop props (e.g., models, maps, or diagrams).

Objective. The overall goal of this project was to determine whether tabletop exercises are effective teaching modalities for disaster preparedness, with an emphasis on intentional chemical exposure.

Method. The target audience for the exercise was the Medical Reserve Brigade of the Texas State Guard, a group of volunteer healthcare providers and first responders who prepare to respond to local disasters. A new tabletop exercise was designed to provide information on the complex, interrelated organizations within the national disaster preparedness program that this group would interact with in the event of a local disaster. The educational intervention consisted of a four-hour, multipart program that included a pretest of knowledge, a lecture series, an interactive group discussion using a mock disaster scenario, a posttest of knowledge, and a course evaluation.

Results. Approximately 40 volunteers attended the intervention session; roughly half (n = 21) had previously participated in a full-scale drill. There was an 11% improvement in fund of knowledge between the pre- and post-test scores (p = 0.002). Overall, the tabletop exercise was well received by those with and without prior training, with no significant differences between these two groups in ratings of the relevance and appropriateness of the content. However, the separate components of the tabletop exercise were variably effective, as gauged by written comments on the questionnaire.

Conclusions. Tabletop exercises can be a useful training modality in disaster preparedness, as evidenced by the improvement in knowledge and the qualitative feedback on the exercise's value. Future offerings could incorporate recordings of participant responses during the drill so that better feedback can be provided to participants. Additional research should be conducted, using the same or a similar design, in other populations that are stakeholders in disaster preparedness, so that the generalizability of these findings can be determined.
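For orientation, a minimal sketch of the kind of paired pre/post comparison summarized in the Results is shown below. The abstract does not state which statistical test was used, so the paired t-test is an assumption and the score values are invented.

```python
# Minimal sketch of a paired pre/post knowledge-score comparison
# (illustrative scores, not the study data; the test choice is assumed).
import numpy as np
from scipy import stats

pre  = np.array([62, 70, 55, 80, 66, 73, 58, 75, 69, 61])
post = np.array([70, 78, 60, 86, 74, 80, 65, 82, 75, 70])

t, p = stats.ttest_rel(post, pre)                       # paired comparison
improvement = (post.mean() - pre.mean()) / pre.mean() * 100
print(f"mean improvement: {improvement:.1f}%  (paired t = {t:.2f}, p = {p:.4f})")
```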
Abstract:
Proton radiation therapy is gaining popularity because of the unique characteristics of its dose distribution, e.g., the high dose gradient at the distal end of the percentage-depth-dose curve (known as the Bragg peak). The high dose gradient offers the possibility of delivering a high dose to the target while still sparing critical organs distal to the target. However, the high dose gradient is a double-edged sword: a small shift of the highly conformal high-dose area can cause the target to be substantially under-dosed or the critical organs to be substantially over-dosed. Consequently, large margins are required in treatment planning to ensure adequate dose coverage of the target, which prevents us from realizing the full potential of proton beams. It is therefore critical to reduce uncertainties in proton radiation therapy.

One major uncertainty in a proton treatment is the range uncertainty related to the estimation of the proton stopping power ratio (SPR) distribution inside the patient. The SPR distribution is required to account for tissue heterogeneities when calculating the dose distribution inside the patient. In current clinical practice, the SPR distribution is estimated from the patient's treatment-planning computed tomography (CT) images using a CT number-to-SPR calibration curve. An SPR derived from a single CT number carries large uncertainties in the presence of human tissue composition variations, which is the major drawback of the current SPR estimation method. We propose to solve this problem by using dual energy CT (DECT) and hypothesize that the range uncertainty can be reduced by a factor of two from the currently used value of 3.5%.

A MATLAB program was developed to calculate the electron density ratio (EDR) and effective atomic number (EAN) from two CT measurements of the same object. An empirical relationship was discovered between the mean excitation energies and the EANs of human body tissues. With the MATLAB program and the empirical relationship, a DECT-based method was successfully developed to derive SPRs for human body tissues (the DECT method). The DECT method is more robust against uncertainties in human tissue composition than the current single-CT-based method, because it incorporates both density and elemental composition information in the SPR estimation.

Furthermore, we studied practical limitations of the DECT method. We found that the accuracy of the DECT method using a conventional kV-kV x-ray pair is susceptible to CT number variations, which compromises the theoretical advantage of the DECT method. Our solution to this problem is to use a different x-ray pair for the DECT. The accuracy of the DECT method using different combinations of x-ray energies, i.e., the kV-kV, kV-MV and MV-MV pairs, was compared using the measured imaging uncertainties for each case. The kV-MV DECT was found to be the most robust against CT number variations. In addition, we studied how uncertainties propagate through the DECT calculation and derived general principles for selecting x-ray pairs that minimize the method's sensitivity to CT number variations. The uncertainties in SPRs estimated using the kV-MV DECT were analyzed further and compared with those of the stoichiometric method.
The uncertainties in SPR estimation can be divided into five categories according to their origins: the inherent uncertainty, the DECT modeling uncertainty, the CT imaging uncertainty, the uncertainty in the mean excitation energy, and the variation of SPR with proton energy. Human body tissues were divided into three groups, low-density (lung) tissues, soft tissues, and bone tissues, and the uncertainties were estimated separately for each group because they differed under each condition. An estimate of the composite range uncertainty (2σ) was determined for three tumor sites, prostate, lung, and head-and-neck, by combining the uncertainty estimates of all three tissue groups, weighted by their proportions along a typical beam path for each treatment site.

In conclusion, the DECT method holds theoretical advantages over the current single-CT-based method in estimating SPRs for human tissues. Using existing imaging techniques, the kV-MV DECT approach was capable of reducing the range uncertainty from the currently used value of 3.5% to 1.9%-2.3%, although this falls short of our original goal of reducing the range uncertainty by a factor of two. The dominant source of uncertainty in the kV-MV DECT was the CT imaging uncertainty, especially in MV CT imaging. Further reduction of the beam-hardening effect, scatter, out-of-field objects, etc., would reduce Hounsfield unit variations in CT imaging. The kV-MV DECT therefore still has the potential to reduce the range uncertainty further.
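As a rough illustration of the final DECT-to-SPR step (not the dissertation's MATLAB program or its empirical EAN-to-I relationship), the sketch below computes an SPR from an assumed relative electron density and mean excitation energy via the Bethe-formula ratio; all numeric values are illustrative.

```python
# Minimal sketch: proton stopping power ratio (SPR) relative to water from a
# relative electron density (EDR) and a mean excitation energy I, using the
# ratio of Bethe stopping numbers (shell and density corrections omitted).
import numpy as np

M_E_C2 = 0.511e6          # electron rest energy, eV
M_P_C2 = 938.272e6        # proton rest energy, eV
I_WATER = 75.0            # mean excitation energy of water, eV (commonly used value)

def spr_from_edr_and_I(edr, I_medium_eV, proton_energy_MeV=175.0):
    """SPR relative to water for a proton of the given kinetic energy."""
    E = proton_energy_MeV * 1e6
    gamma = 1.0 + E / M_P_C2
    beta2 = 1.0 - 1.0 / gamma**2
    arg = 2.0 * M_E_C2 * beta2 / (1.0 - beta2)          # 2*m_e*c^2*beta^2*gamma^2
    L_medium = np.log(arg / I_medium_eV) - beta2        # stopping number, medium
    L_water = np.log(arg / I_WATER) - beta2             # stopping number, water
    return edr * L_medium / L_water

# Example: a soft-tissue-like medium with EDR ~1.04 and I ~72 eV (assumed values).
print(spr_from_edr_and_I(1.04, 72.0))
```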
Abstract:
Academic and industrial research in the late 1990s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends, and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, whose major objective was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. It is therefore difficult (even for an expert) to tell a priori which phylogenetic reconstruction method, distance-based, ML, or perhaps maximum parsimony (MP), should be chosen for any particular data set.

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects the proper method automatically. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
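To illustrate the beam-search principle mentioned for Chapter II, here is a generic beam-search skeleton. It is not the dissertation's algorithm (which additionally adjusts the beam width using pre-computed local-topology reliability); the expansion and scoring functions are toy placeholders standing in for topology moves and a distance-based tree criterion.

```python
# Minimal, generic beam-search skeleton: keep the `beam_width` best candidates
# at each step instead of all candidates (exhaustive) or only one (greedy).
import heapq

def beam_search(start, expand, score, beam_width=5, steps=10):
    """Return the best candidate found (lower score = better)."""
    beam = [start]
    for _ in range(steps):
        candidates = [c for state in beam for c in expand(state)]
        if not candidates:
            break
        beam = heapq.nsmallest(beam_width, candidates, key=score)
    return min(beam, key=score)

# Toy usage: "states" are integers, expansion adds +/-1, and we seek the value
# closest to 42. In the phylogenetic setting a state would be a tree topology
# and `score` an optimality criterion such as a least-squares tree length.
best = beam_search(0, lambda s: [s - 1, s + 1], lambda s: abs(s - 42),
                   beam_width=3, steps=50)
print(best)  # -> 42
```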
Abstract:
Renal cell carcinoma (RCC) is the most common malignant tumor of the kidney. Characterization of RCC tumors indicates that the most frequent genetic event associated with the initiation of tumor formation is a loss of heterozygosity or cytogenetic aberration on the short arm of human chromosome 3. A tumor suppressor locus, Nonpapillary Renal Carcinoma-1 (NRC-1, OMIM ID 604442), has previously been mapped to a 5–7 cM region on chromosome 3p12 and shown to induce rapid tumor cell death in vivo, as demonstrated by functional complementation experiments.

To identify the gene that accounts for the tumor suppressor activities of NRC-1, fine-scale physical mapping was conducted with a novel real-time quantitative PCR-based method developed in this study. As a result, NRC-1 was mapped within a 4.6-Mb region defined by two unique sequences within UniGene clusters Hs.41407 and Hs.371835 (78,545 kb–83,172 kb in the NCBI build 31 physical map). The putative tumor suppressor gene Robo1/Dutt1 was excluded as a candidate for NRC-1. Furthermore, a transcript map containing eleven candidate genes was established for the 4.6-Mb region. Analyses of gene expression patterns with real-time quantitative RT-PCR assays showed that one of the eleven candidate genes in the interval (TSGc28) is down-regulated in 15 of 20 tumor samples compared with matched normal samples. Three exons of this gene have been identified by RACE experiments, although additional exon(s) appear to exist. Further gene characterization and functional studies are required to confirm the gene as a true tumor suppressor gene.

To study the cellular functions of NRC-1, gene expression profiles of three tumor-suppressive microcell hybrids, each containing a functional copy of NRC-1, were compared with those of the corresponding parental tumor cell lines using 16K oligonucleotide microarrays, and differentially expressed genes were identified. Analyses based on the Gene Ontology showed that introduction of NRC-1 into tumor cell lines activates genes in multiple cellular pathways, including cell cycle, signal transduction, cytokines, and stress response. NRC-1 is likely to induce cell growth arrest indirectly through WEE1.
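For readers unfamiliar with relative quantification in real-time RT-PCR, the sketch below shows the common 2^-ΔΔCt fold-change calculation. The abstract does not state which quantification method was used, so this is an assumed illustration with made-up Ct values.

```python
# Minimal sketch of the 2^-ddCt relative-expression calculation for
# real-time quantitative RT-PCR (assumed approach; Ct values are illustrative).
def fold_change_ddct(ct_target_tumor, ct_ref_tumor, ct_target_normal, ct_ref_normal):
    """Relative expression of a target gene in tumor vs matched normal tissue."""
    d_ct_tumor = ct_target_tumor - ct_ref_tumor        # normalize to a reference gene
    d_ct_normal = ct_target_normal - ct_ref_normal
    dd_ct = d_ct_tumor - d_ct_normal
    return 2.0 ** (-dd_ct)

# Example: a higher Ct for the candidate gene in tumor gives a fold change < 1,
# i.e. down-regulation relative to the matched normal sample.
print(fold_change_ddct(ct_target_tumor=27.5, ct_ref_tumor=18.0,
                       ct_target_normal=25.0, ct_ref_normal=18.2))
```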
Abstract:
Research has shown that disease-specific health-related quality of life (HRQoL) instruments are more responsive than generic instruments to particular disease conditions. However, only a few studies have used disease-specific instruments to measure HRQoL in hemophilia. The goal of this project was to develop a disease-specific utility instrument that measures patient preferences for various hemophilia health states. The visual analog scale (VAS), a ranking method, and the standard gamble (SG), a choice-based method incorporating risk, were used to measure patient preferences. Study participants (n = 128) were recruited from the UT/Gulf States Hemophilia and Thrombophilia Center and stratified by age: 0–18 years and 19+.

Test-retest reliability was demonstrated for both the VAS and SG instruments: overall within-subject correlation coefficients were 0.91 and 0.79, respectively. Results showed statistically significant differences in responses between pediatric and adult participants when using the SG (p = .045), but no significant differences between these groups when using the VAS (p = .636). When responses to the VAS and SG instruments were compared, statistically significant differences were observed in both the pediatric (p < .0001) and adult (p < .0001) groups. Data from this study also demonstrated that persons with hemophilia of varying severity, as well as those who were HIV infected, were able to evaluate a range of health states for hemophilia. This has important implications for the study of quality of life in hemophilia and the development of disease-specific HRQoL instruments.

The utility measures obtained from this study can be applied in economic evaluations that analyze the cost/utility of alternative hemophilia treatments. Results derived from the SG indicate that age can influence patients' preferences regarding their state of health, which may have implications for considering treatment options based on the mean age of the population under consideration. Although both instruments independently demonstrated reliability and validity, the results indicate that the two measures may not be interchangeable.
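As background on the two elicitation methods, the sketch below shows the standard textbook scoring for VAS and SG responses. It is not the instrument developed in this study, and the example responses are invented.

```python
# Minimal sketch of textbook utility scoring for the two elicitation methods.
def vas_utility(rating, worst=0.0, best=100.0):
    """Visual analog scale: rescale a 0-100 rating to a 0-1 utility."""
    return (rating - worst) / (best - worst)

def sg_utility(p_indifference):
    """Standard gamble: the utility of a health state equals the probability of
    full health at which the respondent is indifferent between the certain
    health state and the gamble (full health with probability p, death with 1-p)."""
    return p_indifference

print(vas_utility(65))    # e.g. a VAS rating of 65/100 -> utility 0.65
print(sg_utility(0.80))   # indifferent at p = 0.80 -> utility 0.80
```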
Abstract:
Hypertension (HT) is mediated by the interaction of many genetic and environmental factors. Previous genome-wide linkage analysis studies have found many loci that show linkage to HT or blood pressure (BP) regulation, but the results have generally been inconsistent. Gene-by-environment interaction is among the potential explanations for these inconsistencies between studies. Here we investigate the influence of gene-by-smoking (GxS) interaction on HT and BP in European American (EA), African American (AA), and Mexican American (MA) families from the GENOA study. A variance-component-based method was used to perform genome-wide linkage analysis of systolic blood pressure (SBP), diastolic blood pressure (DBP), and HT status, as well as bivariate analysis of SBP and DBP, for smokers, non-smokers, and the combined group. The most significant results were found for SBP in MA. The strongest signal was on chromosome 17q24 (LOD = 4.2), which increased to LOD = 4.7 in the bivariate analysis, but there was no evidence of GxS interaction at this locus (p = 0.48). Two signals were identified in only one group: on chromosome 15q26.2 (LOD = 3.37) in non-smokers and on chromosome 7q21.11 (LOD = 1.4) in smokers, both of which had strong evidence for GxS interaction (p = 0.00039 and p = 0.009, respectively). There were also two other signals, one on chromosome 20q12 (LOD = 2.45) in smokers, which became much stronger in the combined sample (LOD = 3.53), and one on chromosome 6p22.2 (LOD = 2.06) in non-smokers; neither peak had very strong evidence for GxS interaction (p = 0.08 and 0.06, respectively). A fine-mapping association study was performed using 200 SNPs in 30 genes located under the linkage signals on chromosomes 15 and 17. Under the chromosome 15 peak, the association analysis identified 6 SNPs accounting for a 7 mmHg increase in SBP in MA non-smokers. For the chromosome 17 linkage peak, the association analysis identified 3 SNPs accounting for a 6 mmHg increase in SBP in MA. However, none of these SNPs remained significant after correcting for multiple testing, and accounting for them in the linkage analysis produced very small reductions in the linkage signal.

The linkage analysis of BP traits accounting for smoking status produced very interesting signals for SBP in the MA population. The fine-mapping association analysis gave some insight into the contribution of some SNPs to two of the identified signals, but since these SNPs did not remain significant after multiple-testing correction and did not explain the linkage peaks, more work is needed to confirm these exploratory results and identify the culprit variations under these linkage peaks.
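For orientation on the LOD scores quoted above, the sketch below converts a LOD score to the underlying likelihood-ratio statistic and a naive pointwise p-value. The variance-component framework used in the GENOA analysis involves boundary conditions that complicate the true null distribution, so this conversion is only the standard textbook approximation, not the study's testing procedure.

```python
# Minimal sketch: LOD score -> likelihood-ratio statistic -> naive pointwise p-value.
import math
from scipy import stats

def lod_to_chi2(lod):
    """LOD is a log10 likelihood ratio, so the LRT statistic is 2*ln(10)*LOD."""
    return 2.0 * math.log(10.0) * lod

def naive_pointwise_p(lod, df=1):
    """Pointwise p-value treating the LRT as chi-square with `df` degrees of freedom."""
    return stats.chi2.sf(lod_to_chi2(lod), df)

for lod in (1.4, 2.45, 3.37, 4.2):
    print(f"LOD {lod}: LRT = {lod_to_chi2(lod):.2f}, pointwise p ~ {naive_pointwise_p(lod):.2e}")
```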
Abstract:
The hierarchical linear growth model (HLGM), a flexible and powerful analytic method, has played an increasingly important role in psychology, public health, and the medical sciences in recent decades. Most researchers who use HLGM are interested in the treatment effect on individual trajectories, which is indicated by the cross-level interaction effects. However, the statistical hypothesis test for the cross-level interaction in HLGM only tells us whether there is a significant group difference in the average rate of change, rate of acceleration, or higher polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes has received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism of, statistical hypothesis testing. Nevertheless, most researchers fail to report these model-implied effect sizes for group-trajectory comparisons, and their corresponding confidence intervals, in HLGM analyses, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories are lacking, as are computing packages in popular statistical software to calculate them automatically.

The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We proposed two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also proposed robust effect sizes to reduce the bias of the estimated effect sizes. We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets, compared three methods of constructing confidence intervals around d and du, and recommended the best one for application. Finally, we constructed 95% confidence intervals, using the most suitable method, for the effect sizes obtained from the three simulated datasets.

The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in HLGM analyses provide additional and meaningful information for assessing the group effect on individual trajectories. In addition, we compared three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated method when they do not.
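To give a concrete sense of a model-implied effect size at a specific time point, here is a minimal sketch of a standardized group-trajectory difference in a linear growth model. It is not the dissertation's proposed functions; the fixed-effect estimates and the standardizing SD are made-up placeholders.

```python
# Minimal sketch: standardized model-implied difference between two group
# trajectories at time t in a linear growth model, d(t) = (group effect on the
# intercept + cross-level interaction * t) / raw-score SD.
def trajectory_effect_size(gamma_group, gamma_interaction, time, sd_raw):
    """d(t): model-implied group difference at `time`, divided by the raw SD."""
    return (gamma_group + gamma_interaction * time) / sd_raw

# Example (placeholder estimates): a group difference of 0.5 at baseline that
# grows by 0.3 per wave, with a raw-score SD of 2.0, gives d = 0.85 at wave 4.
print(trajectory_effect_size(gamma_group=0.5, gamma_interaction=0.3, time=4, sd_raw=2.0))
```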