13 results for Railroad safety, Bayesian methods, Accident modification factor, Countermeasure selection

in DigitalCommons@The Texas Medical Center


Relevance:

100.00%

Publisher:

Abstract:

Complex diseases such as cancer result from multiple genetic changes and environmental exposures. Due to the rapid development of genotyping and sequencing technologies, we are now able to more accurately assess the causal effects of many genetic and environmental factors. Genome-wide association studies have been able to localize many causal genetic variants predisposing to certain diseases. However, these studies explain only a small portion of the heritability of diseases. More advanced statistical models are urgently needed to identify and characterize additional genetic and environmental factors and their interactions, which will enable us to better understand the causes of complex diseases. In the past decade, thanks to increasing computational capabilities and novel statistical developments, Bayesian methods have been widely applied in genetics/genomics research and have demonstrated superiority over standard approaches in certain research areas. Gene-environment and gene-gene interaction studies are among the areas where Bayesian methods can fully exert their advantages. This dissertation focuses on developing new Bayesian statistical methods for data analysis with complex gene-environment and gene-gene interactions, as well as extending some existing methods for gene-environment interactions to other related areas. It includes three parts: (1) deriving a Bayesian variable selection framework for hierarchical gene-environment and gene-gene interactions; (2) developing Bayesian Natural and Orthogonal Interaction (NOIA) models for gene-environment interactions; and (3) extending the applications of two Bayesian statistical methods, originally developed for gene-environment interaction studies, to other related types of studies such as adaptively borrowing historical data.

We propose a Bayesian hierarchical mixture model framework that allows us to investigate genetic and environmental effects, gene-by-gene interactions (epistasis), and gene-by-environment interactions in the same model. It is well known that, in many practical situations, there is a natural hierarchical structure between the main effects and interactions in a linear model. Here we propose a model that incorporates this hierarchical structure into the Bayesian mixture model, so that irrelevant interaction effects can be removed more efficiently, resulting in more robust, parsimonious, and powerful models. We evaluate both the 'strong hierarchical' and 'weak hierarchical' models, which specify that both, or at least one, of the main effects of interacting factors must be present for the interactions to be included in the model. Extensive simulation results show that the proposed strong and weak hierarchical mixture models control the proportion of false positive discoveries and yield a powerful approach for identifying the predisposing main effects and interactions in studies with complex gene-environment and gene-gene interactions. We also compare these two models with the 'independent' model, which does not impose the hierarchical constraint, and observe their superior performance in most of the situations considered. The proposed models are applied to real-data analyses of gene-environment interactions in lung cancer and cutaneous melanoma case-control studies. The Bayesian statistical models can also incorporate useful prior information into the modeling process. Moreover, the Bayesian mixture model outperforms the multivariate logistic model in parameter estimation and variable selection in most cases. Our proposed models impose hierarchical constraints that further improve the Bayesian mixture model by reducing the proportion of false positive findings among the identified interactions while successfully identifying the reported associations. This is practically appealing for studies investigating causal factors among a moderate number of candidate genetic and environmental factors along with a relatively large number of interactions.

The natural and orthogonal interaction (NOIA) models of genetic effects were previously developed to provide an analysis framework in which the estimates of effects for a quantitative trait are statistically orthogonal regardless of whether Hardy-Weinberg equilibrium (HWE) holds within loci. Ma et al. (2012) recently developed a NOIA model for gene-environment interaction studies and showed its advantages for detecting the true main effects and interactions compared with the usual functional model. In this project, we propose a novel Bayesian statistical model that combines the Bayesian hierarchical mixture model with the NOIA statistical model and the usual functional model. The proposed Bayesian NOIA model demonstrates greater power for detecting non-null effects, with higher marginal posterior probabilities. We also review two Bayesian statistical models (a Bayesian empirical shrinkage-type estimator and Bayesian model averaging) that were developed for gene-environment interaction studies. Inspired by these Bayesian models, we develop two novel statistical methods that can handle related problems such as borrowing data from historical studies. The proposed methods are analogous to the gene-environment interaction methods in their success at balancing statistical efficiency and bias within a unified model. Through extensive simulation studies, we compare the operating characteristics of the proposed models with those of existing models, including the hierarchical meta-analysis model. The results show that the proposed approaches adaptively borrow the historical data in a data-driven way. These novel models may have a broad range of statistical applications in both genetic/genomic and clinical studies.
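
As a rough illustration of the hierarchical constraint on interaction terms, the sketch below enumerates candidate logistic models for one gene (G), one exposure (E), and their interaction under the 'strong', 'weak', and 'independent' rules and scores them by BIC. The synthetic data, the BIC-based scoring, and all names are assumptions made for illustration; this is not the dissertation's Bayesian mixture model, only the shape of the constraint it imposes.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500
G = rng.binomial(2, 0.3, n)          # genotype coded 0/1/2
E = rng.binomial(1, 0.5, n)          # binary exposure
logit = -1.0 + 0.8 * G + 0.6 * E + 0.9 * G * E
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
X_full = np.column_stack([np.ones(n), G, E, G * E])
labels = ["G", "E", "GxE"]

def fit_logistic(X, y, iters=50):
    # plain Newton-Raphson fit of a logistic regression
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ beta)))
        H = X.T @ (X * (p * (1 - p))[:, None])
        beta = beta + np.linalg.solve(H + 1e-8 * np.eye(len(beta)), X.T @ (y - p))
    return beta

def bic(include):
    cols = [0] + [i + 1 for i, z in enumerate(include) if z]
    X = X_full[:, cols]
    eta = X @ fit_logistic(X, y)
    neg_loglik = np.sum(np.log1p(np.exp(eta)) - y * eta)
    return 2 * neg_loglik + len(cols) * np.log(n)

def allowed(include, rule):
    g, e, ge = include
    if not ge:
        return True
    if rule == "strong":
        return bool(g and e)    # interaction requires both main effects
    if rule == "weak":
        return bool(g or e)     # interaction requires at least one main effect
    return True                 # independent: no constraint on the interaction

for rule in ["strong", "weak", "independent"]:
    models = [m for m in itertools.product([0, 1], repeat=3) if allowed(m, rule)]
    best = min(models, key=bic)
    print(rule, "->", [lab for lab, z in zip(labels, best) if z])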

Relevance:

100.00%

Publisher:

Abstract:

Introduction. Patient safety culture is the integration of interrelated practices that, once developed, is supported by both the culture and the leadership of the organization (Sagan, 1993). The purpose of this study is to describe and examine the relationship between surgical residents' perception of their leadership and the resulting organizational safety culture within their clinical setting. This assessment is important to understanding the extent to which leadership style affects the perception of the safety culture.

Methods. A secondary dataset was used that included data from 68 surgical residents collected with two survey instruments, the Organizational Description Questionnaire (ODQ) and the Patient Safety Climate in Healthcare Organizations (PSCHO) survey. Multiple regressions, followed by hierarchical regressions introducing the Post Graduate Year (PGY) variable, examined the associations between the leadership styles (Transactional and Transformational) and the organizational safety culture variables (Overall Emphasis on Safety, Senior Management Engagement, and Organizational Resources for Safety). Independent t-tests were conducted to assess whether males and females differed on the organizational safety culture variables and either leadership style.

Results. The surgical residents perceived their organizational leadership as placing greater emphasis on a transformational leadership culture style than on a transactional one. The only significant association found was between Transformational leadership and Organizational Resources for Safety. PGY had no significant effect on the leadership or the safety culture perceived. No significant difference was found between females and males with regard to the safety culture or the leadership style.

Discussion. These results have implications in that they support the premise of the study, namely that surgical residents perceive their existing leadership and organizational culture as more transformational than transactional in nature. A significant association was found between the leadership perceived and one of the safety culture variables, Organizational Resources for Safety. The foundation for this association lies in the fact that surgical residents are the personnel who are part of the organizational resources. Although PGY did not appear to make a difference in the leadership perceived, this could be attributed to the small sample size. No gender differences were found, which supports the assumption that within such a highly specialized group as surgical residents there are no gender differences, since the highly specialized field draws a certain type of person with distinct characteristics. In future research, these survey tools can be used to gauge the survey audience's perceptions, and safety interventions can be developed based on the results.
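
A minimal sketch of the kind of hierarchical regression step described above, run on synthetic data with statsmodels; the variable names, scales, and model form are assumptions for illustration, not the study's actual analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 68
df = pd.DataFrame({
    "transformational": rng.normal(3.5, 0.6, n),   # ODQ-style leadership scores
    "transactional":    rng.normal(2.8, 0.6, n),
    "pgy":              rng.integers(1, 6, n),     # post-graduate year 1-5
})
# synthetic PSCHO-style outcome: organizational resources for safety
df["resources_for_safety"] = 2.0 + 0.5 * df["transformational"] + rng.normal(0, 0.5, n)

# Step 1: leadership styles only
m1 = smf.ols("resources_for_safety ~ transformational + transactional", data=df).fit()
# Step 2: hierarchical step adding the PGY variable
m2 = smf.ols("resources_for_safety ~ transformational + transactional + pgy", data=df).fit()
print(f"R^2 step 1: {m1.rsquared:.3f}, step 2: {m2.rsquared:.3f}, "
      f"change: {m2.rsquared - m1.rsquared:.3f}")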

Relevance:

100.00%

Publisher:

Abstract:

A Bayesian approach to estimating the intraclass correlation coefficient was used for this research project. The background of the intraclass correlation coefficient, a summary of its standard estimators, and a review of basic Bayesian terminology and methodology were presented. The conditional posterior density of the intraclass correlation coefficient was then derived, and estimation procedures related to this derivation were shown in detail. Three examples of applications of the conditional posterior density to specific data sets were also included. Two sets of simulation experiments were performed to compare the mean and mode of the conditional posterior density of the intraclass correlation coefficient with more traditional estimators. The non-Bayesian methods of estimation used were analysis of variance and maximum likelihood for balanced data, and MIVQUE (minimum variance quadratic unbiased estimation) and maximum likelihood for unbalanced data. The overall conclusion of this research project was that Bayesian estimates of the intraclass correlation coefficient can be appropriate, useful, and practical alternatives to traditional methods of estimation.
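
The sketch below contrasts the ANOVA (method-of-moments) estimator with a simple grid-approximated Bayesian posterior mean for a balanced one-way random-effects model; the flat priors on the grand mean and on both variance components are assumptions for illustration and do not reproduce the dissertation's conditional posterior derivation.

import numpy as np

rng = np.random.default_rng(1)
k, n = 15, 8                                 # groups, observations per group
a = rng.normal(0, 1.0, k)                    # between-group effects (sd = 1)
y = 10.0 + a[:, None] + rng.normal(0, 2.0, (k, n))   # within-group sd = 2

group_means = y.mean(axis=1)
SSW = ((y - group_means[:, None]) ** 2).sum()
SSB = n * ((group_means - y.mean()) ** 2).sum()
MSW, MSB = SSW / (k * (n - 1)), SSB / (k - 1)

# ANOVA estimator of the intraclass correlation
icc_anova = (MSB - MSW) / (MSB + (n - 1) * MSW)

# Grid posterior over (sigma_e^2, sigma_a^2) after integrating out the mean
se2 = np.linspace(1e-3, 15, 300)
sa2 = np.linspace(1e-6, 10, 300)
SE2, SA2 = np.meshgrid(se2, sa2, indexing="ij")
V = SE2 + n * SA2
loglik = (-0.5 * k * (n - 1) * np.log(SE2) - SSW / (2 * SE2)
          - 0.5 * (k - 1) * np.log(V) - SSB / (2 * V))
post = np.exp(loglik - loglik.max())
post /= post.sum()
icc_bayes = (SA2 / (SA2 + SE2) * post).sum()   # posterior mean of the ICC

print(f"true ICC = 0.20, ANOVA estimate = {icc_anova:.3f}, posterior mean = {icc_bayes:.3f}")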

Relevance:

100.00%

Publisher:

Abstract:

Background: For most cytotoxic and biologic anti-cancer agents, the response rate of the drug is commonly assumed to be non-decreasing with increasing dose. However, an increasing dose does not always result in an appreciable increase in the response rate. This may be especially true at high doses for a biologic agent. Therefore, in a phase II trial the investigators may be interested in testing the anti-tumor activity of a drug at more than one dose (often two), instead of only at the maximum tolerated dose (MTD). In this way, if the lower dose appears equally effective, it can be recommended for further confirmatory testing in a phase III trial, given potential long-term toxicity and cost considerations. A common approach to designing such a phase II trial has been to use an independent (e.g., Simon's two-stage) design at each dose, ignoring the prior knowledge about the ordering of the response probabilities at the different doses. However, failure to account for this ordering constraint in estimating the response probabilities may result in an inefficient design. In this dissertation, we developed extensions of Simon's optimal and minimax two-stage designs, including both frequentist and Bayesian methods, for two doses under the assumption of ordered response rates between doses.

Methods: Optimal and minimax two-stage designs are proposed for phase II clinical trials in settings where the true response rates at two dose levels are ordered. We borrow strength between doses using isotonic regression and control the joint and/or marginal error probabilities. Bayesian two-stage designs are also proposed under a stochastic ordering constraint.

Results: Compared with Simon's designs, when controlling the power and type I error at the same levels, the proposed frequentist and Bayesian designs reduce the maximum and expected sample sizes. Most of the proposed designs also increase the probability of early termination when the true response rates are poor.

Conclusion: The proposed frequentist and Bayesian designs are superior to Simon's designs in terms of operating characteristics (expected sample size and probability of early termination when the response rates are poor). Thus, the proposed designs lead to more cost-efficient and ethical trials and may consequently improve and expedite the drug discovery process. The proposed designs may be extended to multiple-group trials and drug-combination trials.
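
As a small illustration of borrowing strength between doses, the sketch below applies weighted pool-adjacent-violators isotonic regression to two ordered response rates; the hypothetical stage-1 counts are assumptions, and the full optimal/minimax design search is not reproduced here.

def isotonic_two_dose(x_low, n_low, x_high, n_high):
    # Estimate response rates at a low and a high dose under the constraint
    # p_low <= p_high (weighted pool-adjacent-violators for two points).
    p_low, p_high = x_low / n_low, x_high / n_high
    if p_low > p_high:                       # ordering violated: pool the doses
        pooled = (x_low + x_high) / (n_low + n_high)
        return pooled, pooled
    return p_low, p_high

print(isotonic_two_dose(x_low=7, n_low=20, x_high=5, n_high=20))
# (0.3, 0.3): raw rates 0.35 > 0.25 violate the ordering, so the doses are pooled
print(isotonic_two_dose(x_low=4, n_low=20, x_high=8, n_high=20))
# (0.2, 0.4): already ordered, estimates unchanged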

Relevance:

100.00%

Publisher:

Abstract:

Cytochrome P450 enzyme catalysis requires two electrons transferred from NADPH-cytochrome P450 reductase (reductase) to P450. Electrostatic charge-pairing has been proposed to be one of the major forces in the interaction between P450 and reductase. To obtain further insight into the molecular basis of this protein-protein interaction, I used two methods, chemical modification and specific anti-peptide antibodies, to study the involvement and importance of charged amino acid residues. Acetylation of lysine residues of P450c and P450b by acetic anhydride dramatically inhibited the reductase-supported, P450c-dependent ethoxycoumarin hydroxylation activity, whereas the P450 activity supported by cumene hydroperoxide was relatively unchanged. The modification of lysine residues of P450c and P450b did not grossly disturb the protein conformation, as revealed by several spectral studies. This differential effect of lysine modification on P450 activity in the system reconstituted with reductase versus the system supported by cumene hydroperoxide suggests an important role for P450 lysine residues in the interaction with reductase. Using ¹⁴C-acetic anhydride, P450 lysine residues were labelled and identified on P450c and P450b. These lysine residues are at positions 97, 271, 279, and 407 for P450c, and 251, 384, 422, 433, and 473 for P450b. Alignment of the identified lysine residues on P450c and P450b with amino acid residues identified in other studies indicated that these residues lie in three major sequence regions. Modification of arginine residues of P450b by phenylglyoxal and 2,3-butanedione had no significant effect on P450 activity, whether supported by NADPH and reductase or by cumene hydroperoxide. Further studies using ¹⁴C-phenylglyoxal revealed that phenylglyoxal was not incorporated into P450b. These results demonstrate a predominant role for the lysine residues of P450 in the electrostatic interaction with reductase. To understand the protein binding sites on P450 and reductase, I generated three anti-peptide antibodies against regions of reductase and five anti-peptide antibodies against five putative reductase binding sites on P450c. These anti-peptide antibodies were affinity purified and characterized by ELISA and Western blot analysis. Inhibition experiments using these antibodies demonstrated that regions 109-120 and 204-220 of reductase are probably the two major binding sites for P450. The association of reductase with cytochromes P450 and cytochrome c may rely on different mechanisms. The data from experiments using anti-peptide (P450c) antibodies support an important role for P450c lysine residues 271/279 and 458/460 in the interaction with reductase.

Relevance:

100.00%

Publisher:

Abstract:

This study investigates a theoretical model in which a longitudinal process, which is a stationary Markov chain, and a Weibull survival process share a bivariate random effect. Furthermore, a quality-of-life-adjusted survival is calculated as a weighted sum of survival time. Theoretical values of the population mean adjusted survival under the described model are computed numerically. The parameters of the bivariate random effect significantly affect the theoretical values of the population mean. Maximum-likelihood and Bayesian methods are applied to simulated data to estimate the model parameters. Based on the parameter estimates, the predicted population mean adjusted survival can then be calculated numerically and compared with the theoretical values. The Bayesian and maximum-likelihood methods provide parameter estimates and population-mean predictions with comparable accuracy; however, the Bayesian method suffers from poor convergence due to autocorrelation and inter-variable correlation.
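
A rough simulation sketch of the joint model described above; the two-state quality-of-life process, its transition probabilities, the state weights, and the way the shared bivariate random effect enters each sub-model are all assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(2)

def simulate_subject(weights=(1.0, 0.5), dt=0.1, horizon=20.0):
    # bivariate random effect shared by the longitudinal and survival sub-models
    b = rng.multivariate_normal([0.0, 0.0], [[0.3, 0.1], [0.1, 0.3]])
    # Weibull survival time, with the scale shifted by the first random effect
    t_death = 5.0 * np.exp(b[0]) * rng.weibull(1.5)
    # stationary two-state Markov chain for quality of life; the probability of
    # staying in the current state is shifted by the second random effect
    stay = 1 / (1 + np.exp(-(2.0 + b[1])))
    P = np.array([[stay, 1 - stay], [1 - stay, stay]])
    state, qale, t = 0, 0.0, 0.0
    while t < min(t_death, horizon):
        qale += weights[state] * dt            # weighted (quality-adjusted) time
        state = rng.choice(2, p=P[state])
        t += dt
    return qale

qale = [simulate_subject() for _ in range(2000)]
print(f"simulated population mean QoL-adjusted survival: {np.mean(qale):.2f}")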

Relevance:

100.00%

Publisher:

Abstract:

The motion of lung tumors during respiration makes the accurate delivery of radiation therapy to the thorax difficult because it increases the uncertainty of target position. The adoption of four-dimensional computed tomography (4D-CT) has allowed us to determine how a tumor moves with respiration for each individual patient. Using information acquired during a 4D-CT scan, we can define the target, visualize motion, and calculate dose during the planning phase of the radiotherapy process. One image data set that can be created from the 4D-CT acquisition is the maximum-intensity projection (MIP). The MIP can be used as a starting point to define the volume that encompasses the motion envelope of the moving gross target volume (GTV). Because of the close relationship between the MIP and the final target volume, we investigated four MIP data sets created with different methodologies (three using various 4D-CT sorting implementations and one using all available cine CT images) to compare target delineation. It has been observed that changing the 4D-CT sorting method leads to the selection of a different collection of images; however, the clinical implications of changing the constituent images of the resultant MIP data set are not clear. There has not been a comprehensive study comparing target delineation based on different 4D-CT sorting methodologies in a patient population. We selected patients who had previously undergone thoracic 4D-CT scans at our institution and who had lung tumors that moved at least 1 cm. We then generated the four MIP data sets and automatically contoured the target volumes. In doing so, we identified cases in which the MIP generated from a 4D-CT sorting process under-represented the motion envelope of the target volume by more than 10% relative to the MIP generated from all of the cine CT images. The 4D-CT sorting methods suffered from duplicate image selection and did not always choose the images of maximum tumor extent. Based on our results, we suggest using a MIP generated from the full cine CT data set to ensure a representative, inclusive tumor extent and to avoid geometric miss.
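
A minimal sketch of how a MIP is formed from a stack of cine CT images, and why a sorted subset can under-represent the motion envelope; the array shapes and NumPy-based approach are assumptions for illustration (clinical MIPs come from the scanner or planning-system tools).

import numpy as np

# cine_ct: (n_images, z, y, x) CT volumes acquired over the respiratory cycle
cine_ct = np.random.default_rng(3).normal(-700, 150, size=(20, 8, 64, 64))

# Voxel-wise maximum over all time points: dense structures such as the tumor
# leave a bright trace that covers their full range of motion.
mip_all = cine_ct.max(axis=0)

# A MIP built from a sorted 4D-CT subset (e.g., ten phase bins) uses only some
# of the images, so it can miss extreme positions present in the full cine set.
selected = np.linspace(0, cine_ct.shape[0] - 1, 10).astype(int)
mip_subset = cine_ct[selected].max(axis=0)

print("voxels where the subset MIP is lower than the full-cine MIP:",
      int((mip_subset < mip_all).sum()))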

Relevance:

100.00%

Publisher:

Abstract:

The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapid growth of gene network and protein-protein interaction databases accompanying the genomic era open avenues for enhancing the power to discover genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of discovering the targets of natural selection. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the guilt-by-association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate/weak selection that bridge the genes under strong selection, which helps our understanding of the biology underlying complex diseases and related natural selection phenotypes.
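
A hedged sketch of the general idea: posterior log odds from a two-component normal mixture on a per-gene selection statistic are added to a network guilt-by-association log-likelihood ratio under a naive Bayes independence assumption. The simulated statistics, mixture parameters, and network scores below are made up for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_genes = 1000
is_target = rng.random(n_genes) < 0.05
# per-gene selection statistic: null genes ~ N(0,1), selected genes ~ N(2,1)
z = np.where(is_target, rng.normal(2, 1, n_genes), rng.normal(0, 1, n_genes))

pi1, mu1 = 0.05, 2.0                         # assumed mixture parameters
log_odds_mixture = (np.log(pi1) + stats.norm.logpdf(z, mu1, 1)
                    - np.log(1 - pi1) - stats.norm.logpdf(z, 0, 1))

# guilt-by-association score summarizing how "selected" a gene's network
# neighbors look (simulated directly here), converted to a log-likelihood ratio
gba = rng.normal(np.where(is_target, 1.0, 0.0), 1.0)
log_odds_network = stats.norm.logpdf(gba, 1, 1) - stats.norm.logpdf(gba, 0, 1)

combined = log_odds_mixture + log_odds_network   # naive Bayes combination
for name, score in [("statistic only", log_odds_mixture), ("combined", combined)]:
    top = np.argsort(score)[::-1][:50]
    print(f"true targets in top 50 ({name}): {int(is_target[top].sum())}")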

Relevance:

40.00%

Publisher:

Abstract:

A new technique for the detection of microbiological fecal pollution in drinking water and in raw surface water has been modified and tested against the standard multiple-tube fermentation technique (most probable number, MPN). The performance of the new test in detecting fecal pollution in drinking water has been tested at different incubation temperatures. The basis for the new test is the detection of hydrogen sulfide produced by hydrogen sulfide-producing bacteria, which are usually associated with the coliform group. Positive results are indicated by the appearance of a brown to black color in the contents of the fermentation tube within 18 to 24 hours of incubation at 35 ± 0.5°C. For this study, 158 water samples from different sources were used. The results were analyzed statistically with the paired t-test and one-way analysis of variance. No statistically significant difference was found between the two methods, when tested at 35 ± 0.5°C, in detecting fecal pollution in drinking water. The new test showed more positive results with raw surface water, which could be due to the presence of hydrogen sulfide-producing bacteria of non-fecal origin such as Desulfovibrio and Desulfotomaculum. The survival of the hydrogen sulfide-producing bacteria and the coliforms was also tested over a 7-day period, and the results showed no significant difference. The two methods showed no significant difference when used to detect fecal pollution at a very low coliform density. The results showed that the new test is most effective in detecting fecal pollution in drinking water when used at 35 ± 0.5°C. The new test is effective, simple, and less expensive when used to detect fecal pollution in drinking water and raw surface water at 35 ± 0.5°C. The method can be used for qualitative and/or quantitative analysis of water in the field and in the laboratory.
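
For the statistical comparison mentioned above, a paired t-test on results from the same samples analyzed by both methods can be run as in the sketch below; the counts are hypothetical and stand in for the study's actual data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mpn_positive = rng.poisson(3.0, size=30)                                  # standard MPN technique
h2s_positive = np.clip(mpn_positive + rng.integers(-1, 2, 30), 0, None)   # new H2S test, same samples

t_stat, p_value = stats.ttest_rel(h2s_positive, mpn_positive)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
# a large p-value is consistent with no significant difference between methods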

Relevance:

40.00%

Publisher:

Abstract:

Standard methods for testing safety data are needed to ensure the safe conduct of clinical trials. In particular, objective rules for reliably identifying unsafe treatments need to be put in place to help protect patients from unnecessary harm. Data monitoring committees (DMCs) are uniquely qualified to evaluate accumulating unblinded data and make recommendations about the continuing safe conduct of a trial. However, it is the trial leadership who must make the difficult ethical decision about stopping a trial, and they could benefit from objective statistical rules that help them judge the strength of evidence contained in the blinded data. We design early stopping rules for harm that act as continuous safety screens for randomized controlled clinical trials with blinded treatment information, and that could be used by anyone, including trial investigators and trial leadership. A Bayesian framework, with emphasis on the likelihood function, is used to allow for continuous monitoring without adjusting for multiple comparisons. Close collaboration between the statistician and the clinical investigators will be needed to design safety screens with good operating characteristics. Although the mathematics underlying this procedure may be computationally intensive, implementation of the statistical rules will be straightforward, and the continuous screening provided will give suitably early warning should real problems emerge. Trial investigators and trial leadership need these safety screens to help them effectively monitor the ongoing safe conduct of clinical trials with blinded data.
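
A hedged sketch of a continuous, likelihood-based safety screen on blinded (pooled) data; the specific rule used here (flag when the posterior probability that the pooled adverse-event rate exceeds a tolerable level passes 0.95, under a Beta(1,1) prior) is an illustrative stand-in, not the proposed stopping rule.

from scipy import stats

def safety_screen(events, patients, tolerable_rate=0.10, threshold=0.95):
    # posterior for the pooled adverse-event rate under a Beta(1, 1) prior
    post = stats.beta(1.0 + events, 1.0 + patients - events)
    prob_excess = 1.0 - post.cdf(tolerable_rate)   # P(rate > tolerable | data)
    return prob_excess > threshold, prob_excess

# evaluated continuously as data accrue, without multiplicity adjustment
for n, x in [(20, 2), (40, 6), (60, 11)]:
    flag, prob = safety_screen(x, n)
    print(f"n={n:3d} events={x:2d}  P(rate > 0.10) = {prob:.3f}  flag={flag}")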

Relevance:

40.00%

Publisher:

Abstract:

Early-phase clinical trial designs have long been a focus of interest for clinicians and statisticians working in the oncology field. There are several standard phase I and phase II designs that have been widely implemented in medical practice. For phase I, the most commonly used methods are the 3+3 design and the continual reassessment method (CRM). A newly developed Bayesian model-based design, the modified toxicity probability interval (mTPI) design, is now used by an increasing number of hospitals and pharmaceutical companies. The advantages and disadvantages of these three leading phase I designs are discussed in this work, and their performance is compared using simulated data. The mTPI design exhibited superior performance in most scenarios in comparison with the 3+3 and CRM designs.

The next major part of this work proposes an innovative seamless phase I/II design that allows clinicians to conduct phase I and phase II clinical trials simultaneously. A Bayesian framework is implemented throughout the design. The phase I portion of the design adopts the mTPI method, with the addition of a futility rule that monitors the efficacy of the tested drug. Dose graduation rules are proposed to allow doses to move forward from the phase I portion of the study to the phase II portion without interrupting the ongoing phase I dose-finding scheme. Once a dose has graduated to phase II, adaptive randomization is used to allocate patients to the different treatment arms, with the intention that more patients are assigned to the more promising dose(s). Simulations were again performed to compare the performance of this innovative phase I/II design with a recently published phase I/II design, together with the conventional phase I and phase II designs. The simulation results indicate that the seamless phase I/II design outperforms the two competing methods in most scenarios, with superior trial power and a smaller required sample size. It also significantly reduces the overall study time.

Similar to other early-phase clinical trial designs, the proposed seamless phase I/II design requires that the efficacy and safety outcomes be observable within a short time frame. This limitation can be overcome by using validated surrogate markers for the efficacy and safety endpoints.
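
The core mTPI dosing decision can be sketched as below: with a Beta(1,1) prior on the toxicity probability at the current dose, the action whose interval has the largest unit probability mass (posterior mass divided by interval length) is chosen. The target rate and equivalence interval are arbitrary here, and the safety-exclusion, futility, and dose-graduation rules of the proposed seamless design are not reproduced.

from scipy import stats

def mtpi_decision(n_tox, n_treated, p_target=0.30, eps1=0.05, eps2=0.05):
    post = stats.beta(1 + n_tox, 1 + n_treated - n_tox)
    intervals = {
        "escalate":    (0.0, p_target - eps1),               # under-dosing
        "stay":        (p_target - eps1, p_target + eps2),   # proper dosing
        "de-escalate": (p_target + eps2, 1.0),               # over-dosing
    }
    # unit probability mass = posterior mass of the interval / interval length
    upm = {action: (post.cdf(hi) - post.cdf(lo)) / (hi - lo)
           for action, (lo, hi) in intervals.items()}
    return max(upm, key=upm.get), upm

decision, upm = mtpi_decision(n_tox=2, n_treated=9)
print(decision, {k: round(v, 2) for k, v in upm.items()})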

Relevance:

40.00%

Publisher:

Abstract:

Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, models have been developed using published exposure databases with their corresponding exposure determinants. These models are designed to be applied to reported exposure determinants obtained from study subjects, or to exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained.

In an effort to improve the prediction accuracy and generalizability of these models, and taking into account that the limitations encountered in previous studies might be due to limitations in the applicability of traditional statistical methods and concepts, the use of computer science-derived data analysis methods, predominantly machine learning approaches, was proposed and explored in this study.

The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational outcomes based on literature-derived databases and to compare, using cross-validation and data-splitting techniques, the resulting prediction capacity with that of traditional regression models. Two cases were addressed: the categorical case, where the exposure level was measured as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where the exposure result is expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride, and trichloroethylene were used.

Compared with regression estimates, the results showed better accuracy for decision tree/ensemble techniques in the categorical case, while neural networks were better for estimating continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy. Estimations based on literature-based databases using machine learning techniques might provide an advantage when applied to other methodologies that combine 'expert inputs' with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to more accurately estimate exposures from literature-based exposure databases might represent a starting point for independence from expert judgment.
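
An illustrative cross-validated comparison in the spirit of the study, using synthetic data; the simulated exposure determinants, model settings, and scoring choice are assumptions, and scikit-learn simply stands in for whatever software was actually used.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 400
X = rng.normal(size=(n, 5))                      # synthetic exposure determinants
# nonlinear "true" relation between determinants and (log) concentration
y = 1.5 * X[:, 0] + np.sin(2 * X[:, 1]) + 0.5 * X[:, 2] * X[:, 3] + rng.normal(0, 0.5, n)

models = [("linear regression", LinearRegression()),
          ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]
for name, model in models:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:17s} mean cross-validated R^2 = {r2.mean():.2f}")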

Relevance:

40.00%

Publisher:

Abstract:

Conservative procedures in low-dose risk assessment are used to set safety standards for known or suspected carcinogens. However, the assumptions upon which these methods are based and the effects of these methods are not well understood.

To minimize the number of false negatives and to reduce the cost of bioassays, animals are given very high doses of potential carcinogens. Results must then be extrapolated to much smaller doses to set safety standards for risks such as one per million. There are a number of competing methods that add a conservative safety factor into these calculations.

A method of quantifying the conservatism of these methods was described and tested on eight procedures used in setting low-dose safety standards. The results of these procedures were compared by computer simulation and by the use of data from a large-scale animal study.

The method consisted of determining a "true safe dose" (tsd) according to an assumed underlying model. If one assumes that Y, the probability of cancer, equals P(d), a known mathematical function of the dose, then by setting Y to some predetermined acceptable risk, one can solve for d, the model's "true safe dose".

Simulations were generated, assuming a binomial distribution, for an artificial bioassay. The eight procedures were then used to determine a "virtual safe dose" (vsd) that estimates the tsd, assuming a risk of one per million. A ratio R = (tsd - vsd)/vsd was calculated for each "experiment" (simulation). The mean R over 500 simulations and the probability that R < 0 were used to measure the over- and under-conservatism of each procedure.

The eight procedures included Weil's method, Hoel's method, the Mantel-Bryan method, the improved Mantel-Bryan method, Gross's method, fitting a one-hit model, Crump's procedure, and applying Rai and Van Ryzin's method to a Weibull model.

None of the procedures performed uniformly well for all types of dose-response curves. When the data were linear, the one-hit model, Hoel's method, or the Gross-Mantel method worked reasonably well. However, when the data were non-linear, these same methods were overly conservative. Crump's procedure and the Weibull model performed better in these situations.
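
A worked illustration of the tsd/vsd comparison: under an assumed one-hit model P(d) = 1 - exp(-lambda * d), the "true safe dose" is obtained by inverting the model at the acceptable risk, and R measures how far a procedure's virtual safe dose falls from it. The slope and the hypothetical vsd below are illustrative; no specific regulatory procedure is reproduced.

import numpy as np

def one_hit(d, lam):
    # one-hit model: probability of cancer at dose d
    return 1.0 - np.exp(-lam * d)

def true_safe_dose(lam, acceptable_risk=1e-6):
    # invert the one-hit model at the acceptable risk level
    return -np.log(1.0 - acceptable_risk) / lam

lam = 0.02                       # assumed true slope of the one-hit model
tsd = true_safe_dose(lam)        # "true safe dose" at a one-per-million risk
vsd = 0.6 * tsd                  # hypothetical conservative procedure's estimate
R = (tsd - vsd) / vsd            # R > 0 means the procedure is conservative (vsd < tsd)
print(f"risk at tsd = {one_hit(tsd, lam):.2e}, tsd = {tsd:.3e}, vsd = {vsd:.3e}, R = {R:.2f}")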