7 results for chemical reaction models
in DigitalCommons@The Texas Medical Center
Abstract:
In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions, and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation or importation of very simple or highly detailed cellular structures in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment that mimics cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding, which may impact signaling cascades in small subcellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, and channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. Components of the CDS are defined generically so that the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architectures. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and its design and implementation, and provides an overview of the available features, whose utility is highlighted in demonstrations.
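As a generic illustration of the event-driven pattern the abstract describes (a time-ordered queue of per-molecule events, with follow-up events rescheduled after each one is handled), the sketch below uses hypothetical names and is not taken from the CDS source.

```python
# Minimal illustrative sketch of an event-driven simulation loop: events
# (collisions, reactions, boundary crossings) are kept in a priority queue
# ordered by time, the earliest is processed, and any follow-up events it
# generates are scheduled. Names are hypothetical, not from the CDS code.
import heapq

def run_event_driven(initial_events, t_end, handle_event):
    """initial_events: list of (time, molecule_id, kind) tuples.
    handle_event(event) processes one event and returns any new events
    (e.g. the next predicted collision for the molecules involved)."""
    queue = list(initial_events)
    heapq.heapify(queue)                       # earliest event at the front
    while queue:
        event = heapq.heappop(queue)
        if event[0] > t_end:                   # past the simulation horizon
            break
        for new_event in handle_event(event):  # reschedule affected molecules
            heapq.heappush(queue, new_event)
```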
Abstract:
Introduction: Gene expression is an important process whereby the genotype controls an individual cell's phenotype. However, even genetically identical cells display a variety of phenotypes, which may be attributed to differences in their environment. Yet, even after controlling for these two factors, individual phenotypes still diverge due to noisy gene expression. Synthetic gene expression systems allow investigators to isolate, control, and measure the effects of noise on cell phenotypes. I used mathematical and computational methods to design, study, and predict the behavior of synthetic gene expression systems in S. cerevisiae, which were affected by noise. Methods: I created probabilistic biochemical reaction models from known behaviors of the tetR and rtTA genes, their gene products, and their gene architectures. I then simplified these models to account for the essential behaviors of gene expression systems. Finally, I used these models to predict the behaviors of modified gene expression systems, which were experimentally verified. Results: Cell growth, which is often ignored when formulating chemical kinetics models, was essential for understanding gene expression behavior. Models incorporating growth effects were used to explain unexpected reductions in gene expression noise, design a set of gene expression systems with "linear" dose-responses, and quantify the speed with which cells explored their fitness landscapes due to noisy gene expression. Conclusions: Models incorporating noisy gene expression and cell division were necessary to design, understand, and predict the behaviors of synthetic gene expression systems. The methods and models developed here will allow investigators to more efficiently design new gene expression systems and infer gene expression properties of TetR-based systems.
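As a toy illustration of the role of cell growth noted in the results (parameter names and values are assumptions, not drawn from the thesis models), adding a dilution term at the growth rate to a minimal protein expression model visibly shifts the steady-state expression level:

```python
# Toy deterministic sketch: protein is synthesized at rate k_syn and removed
# both by degradation (k_deg) and by dilution due to cell growth (mu).
# All parameter values are illustrative, not from the thesis.
def dp_dt(p, k_syn=10.0, k_deg=0.1, mu=0.3):
    return k_syn - (k_deg + mu) * p

# Steady state drops from k_syn / k_deg to k_syn / (k_deg + mu)
# once growth-driven dilution is included.
p_ss_no_growth = 10.0 / 0.1            # 100.0
p_ss_with_growth = 10.0 / (0.1 + 0.3)  # 25.0
```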
Abstract:
With the observation that stochasticity is important in biological systems, stochastic chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction-diffusion systems with large populations of molecules. On the other hand, continuous-time deterministic models are computationally efficient, but they fail to capture any variability in the molecular species. In this study, a hybrid stochastic approach is introduced for simulating reaction-diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency are simulated with the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
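The sketch below illustrates one way the adaptive partitioning described above can be organized; the names, the fixed threshold, and the treatment of fast-reaction propensities as constant over each step are simplifying assumptions and do not reproduce the authors' implementation.

```python
# Illustrative sketch of adaptive partitioning: reactions whose propensity
# exceeds a threshold are advanced with deterministic rate equations, the rest
# are fired with Gillespie's exact stochastic algorithm.
import numpy as np

def hybrid_step(x, stoich, propensities, threshold, rng):
    """x: species counts; stoich: (n_reactions, n_species) stoichiometry matrix;
    propensities(x): array of reaction propensities a_j(x)."""
    a = propensities(x)
    slow = a < threshold                       # low frequency -> stochastic
    fast = ~slow                               # high frequency -> deterministic
    a0 = a[slow].sum()
    # time to the next stochastic event (fixed small step if none are pending)
    tau = rng.exponential(1.0 / a0) if a0 > 0 else 0.01
    # deterministic (rate-equation) update of the fast reactions over tau
    x = x + tau * stoich[fast].T @ a[fast]
    if a0 > 0:
        # fire one slow reaction, chosen with probability a_j / a0
        j = rng.choice(np.flatnonzero(slow), p=a[slow] / a0)
        x = x + stoich[j]
    return x, tau
```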
Abstract:
The primary objective of this study has been to investigate the effects at the molecular level of trisomy of mouse chromosome 7 in chemically induced skin tumors. It was previously proposed that the initiation event in the mouse skin carcinogenesis model is a heterozygous mutation of the Ha-ras-1 gene, mapped to chromosome 7. Previous studies in this laboratory identified trisomy 7 as one of the primary nonrandom cytogenetic abnormalities found in the majority of severely dysplastic papillomas and squamous cell carcinomas induced in SENCAR mice by an initiation-promotion protocol. Therefore, the first hypothesis tested was that trisomy 7 occurs by specific duplication of the chromosome carrying a mutated Ha-ras-1 allele. Results of a quantitative analysis of normal/mutated allelic ratios of the Ha-ras-1 gene confirmed this hypothesis, showing that most of the tumors exhibited overrepresentation of the mutated allele in the form of 1/2, 0/3, and 0/2 (normal/mutated) ratios. In addition, histopathological analysis of the tumors showed an apparent association between the degree of malignancy and the dosage of the mutated Ha-ras-1 allele. To determine the mechanism for loss of the normal Ha-ras-1 allele, found in 30% of the tumors, a comparison of constitutional and tumor genotypes was performed at different informative loci of chromosome 7. By combining Southern blot and polymerase chain reaction fragment length polymorphism analyses of DNAs extracted from squamous cell carcinomas, complete loss of heterozygosity was detected in 15 of 20 tumors at the Hbb locus, and in 5 of 5 tumors at the int-2 locus, both distal to Ha-ras-1. In addition, polymerase chain reaction analysis of DNA extracted from papillomas indicated that loss of heterozygosity occurs in late-stage lesions exhibiting a high degree of dysplasia and areas of microinvasion, suggesting that this event may be associated with the acquisition of the malignant phenotype. Allelic dosage analysis of tumors that had become homozygous at Hbb but retained heterozygosity at Ha-ras-1 indicated that loss of heterozygosity on mouse chromosome 7 occurs by a mitotic recombination mechanism. Overall, these findings suggest the presence of a putative tumor suppressor locus in the 7F1-ter region of mouse chromosome 7. Thus, loss of function through homozygosity at this putative suppressor locus may complement activation of the Ha-ras-1 gene during tumor progression, and might be associated with the malignant conversion stage of mouse skin carcinogenesis.
Abstract:
Antibodies (Abs) to autoantigens and foreign antigens (Ags) mediate, respectively, various pathogenic and beneficial effects. Abs express enzyme-like nucleophiles that react covalently with electrophiles. A subpopulation of nucleophilic Abs expresses proteolytic activity, which can inactivate the Ag permanently. This thesis shows how this nucleophilicity can be exploited to inhibit harmful Abs or potentially protect against a virus. Inactivation of pathogenic Abs from Hemophilia A (HA) patients by means of nucleophile-electrophile pairing was studied. Deficient factor VIII (FVIII) in HA subjects impairs blood coagulation. FVIII replacement therapy fails in 20-30% of HA patients due to production of anti-FVIII Abs. FVIII analogs containing an electrophilic phosphonate group (E-FVIII and E-C2) were hypothesized to inactivate the Abs by reacting specifically and covalently with their nucleophilic sites. Anti-FVIII IgGs from HA patients formed immune complexes with E-FVIII and E-C2 that remained irreversibly associated under conditions that disrupt noncovalent Ab-Ag complexes. The reaction induced irreversible loss of Ab anti-coagulant activity. E-FVIII alone displayed limited interference with coagulation. E-FVIII is a prototype reagent suitable for further development as a selective inactivator of pathogenic anti-FVIII Abs. The beneficial function of Abs to human immunodeficiency virus type 1 (HIV-1) was analyzed. HIV-1 eludes the immune system by rapidly changing its coat protein structure. IgAs from noninfected subjects hydrolyzed gp120 and neutralized HIV-1 with modest potency by recognizing the gp120 421-433 epitope, a conserved B cell superantigenic region that is also essential for HIV-1 attachment to host cell CD4 receptors. An adaptive immune response to superantigens is generally prohibited due to their ability to downregulate B cells. IgAs from subjects with prolonged HIV-1 infection displayed improved catalytic hydrolysis of gp120 and exceptionally potent and broad neutralization of diverse CCR5-dependent primary HIV isolates, attributable to recognition of the 421-433 epitope. This indicates that slow immunological bypass of the superantigenic character of gp120 is possible, opening the path to effective HIV vaccination. My research reveals a novel route to inactivate pathogenic nucleophilic Abs using electrophilic antigens. Conversely, naturally occurring nucleophilic Abs may help impede HIV infection, and these Abs could be developed for passive immunotherapy of HIV-infected subjects.
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for qPCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence in the earlier cycle from that in the later cycle, transforming raw data from n cycles into n-1 data points. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and the initial numbers of DNA molecules were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum fluorescence. Three criteria, namely threshold identification, maximum R2, and maximum slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, giving an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of maximum R2 and maximum slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error introduced by subtracting an unknown background and is thus theoretically more accurate and reliable. The method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
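The sketch below illustrates the core of the taking-difference calculation described in the abstract; variable names are ours, and the exponential-phase point selection is deliberately simplified relative to the threshold identification, maximum R2, and maximum slope criteria discussed above.

```python
# Sketch of the taking-difference idea: subtracting consecutive-cycle
# fluorescence cancels the unknown background, and a line fitted to the log
# of the differences yields the amplification efficiency and initial signal.
import numpy as np

def taking_difference_fit(fluorescence):
    """fluorescence: raw readings F_1..F_n from one qPCR run.
    With F_n ~ B + F0 * E**n, the difference D_n = F_n - F_{n-1}
    equals F0 * (E - 1) * E**(n - 1), so ln(D_n) is linear in n."""
    f = np.asarray(fluorescence, dtype=float)
    n = np.arange(2, len(f) + 1)               # cycle index of each difference
    d = np.diff(f)                             # background cancels here
    keep = d > 0                               # crude exponential-phase filter
    slope, intercept = np.polyfit(n[keep], np.log(d[keep]), 1)
    E = np.exp(slope)                          # per-cycle amplification efficiency
    F0 = np.exp(intercept) * E / (E - 1.0)     # initial fluorescence-equivalent amount
    return E, F0
```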
Abstract:
Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, models have been developed using published exposure databases together with their corresponding exposure determinants. These models are designed to be applied to exposure determinants reported by study subjects or to exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained. In an effort to improve the prediction accuracy and generalizability of these models, and considering that the limitations encountered in previous studies might stem from limitations in the applicability of traditional statistical methods and concepts, this study proposed and explored the use of data analysis methods derived from computer science, predominantly machine learning approaches. The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational exposure outcomes based on literature-derived databases, and to compare, using cross-validation and data-splitting techniques, the resulting prediction capacity to that of traditional regression models. Two cases were addressed: the categorical case, where the exposure level was measured as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where the result of the exposure is expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride, and trichloroethylene were used. When compared with regression estimations, the results showed better accuracy of decision tree/ensemble techniques for the categorical case, while neural networks were better for estimation of continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy. Estimations based on literature-based databases using machine learning techniques might provide an advantage when applied to other methodologies that combine expert inputs with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to more accurately estimate exposures from literature-based exposure databases might represent a starting point toward independence from expert judgment.
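For readers unfamiliar with the workflow, the sketch below shows, on synthetic data with hypothetical determinants, how a tree ensemble and a regression model can be compared by cross-validation for a continuous outcome; it illustrates the general approach only and is not the models or databases used in the study.

```python
# Illustrative comparison (synthetic data, hypothetical feature meanings) of
# a tree ensemble and a linear regression evaluated with cross-validation
# on a continuous exposure outcome.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # stand-in exposure determinants
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=200)

for name, model in [("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("linear regression", LinearRegression())]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")
```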