27 results for Computational analysis
in University of Queensland eSpace - Australia
Abstract:
Protein kinases exhibit various degrees of substrate specificity. The large number of different protein kinases in the eukaryotic proteomes makes it impractical to determine the specificity of each enzyme experimentally. To test whether it is possible to discriminate potential substrates from non-substrates by simple computational techniques, we analysed the binding enthalpies of modelled enzyme-substrate complexes and attempted to correlate them with experimental enzyme kinetics measurements. The crystal structures of phosphorylase kinase and cAMP-dependent protein kinase were used to generate models of the enzyme with a series of known peptide substrates and non-substrates, and the approximate enthalpy of binding was assessed following energy minimization. We show that the computed enthalpies do not correlate closely with kinetic measurements, but the method can distinguish good substrates from weak substrates and non-substrates. Copyright (C) 2002 John Wiley & Sons, Ltd.
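The workflow the abstract describes, ranking candidate peptides by a computed binding enthalpy and checking agreement with kinetics, can be sketched as follows. This is a minimal illustration with entirely hypothetical numbers and a hypothetical classification threshold, not the paper's data or protocol.

```python
import numpy as np

def classify_by_enthalpy(delta_h, threshold):
    """Label peptides as likely substrates when the computed binding
    enthalpy (kcal/mol; more negative = stronger binding) falls below
    a chosen threshold. Threshold and values are hypothetical."""
    return ["substrate" if h < threshold else "non-substrate" for h in delta_h]

# Hypothetical computed enthalpies and measured kcat/Km values
enthalpies = np.array([-45.2, -38.7, -21.3, -19.8, -44.1])   # kcal/mol
kcat_km    = np.array([120.0,  85.0,   0.5,   0.1, 101.0])   # 1/(M*s)

# A linear correlation can be checked even when, as the paper finds,
# it turns out to be weak for real systems
r = np.corrcoef(enthalpies, kcat_km)[0, 1]

labels = classify_by_enthalpy(enthalpies, threshold=-30.0)
```

Even when the enthalpy-kinetics correlation is loose, a coarse threshold of this kind can still separate good binders from non-binders, which mirrors the paper's conclusion.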
Abstract:
The base composition pattern (BCP) in the putative promoter regions (PPRs), up to 5 kb in length, of 682 human genes on Chromosome 22 (Chr22) was examined. Two-dimensional (2D) and three-dimensional (3D) functions were designed to delineate the DNA base composition, with four major patterns identified. It was found that 17.6% of the genes contain a TATA box, 28.0% a GC box, 18.9% a CAAT box and 38.4% CpG islands, and approximately 10% of the genes have one of four putative initiator (Inr) motifs. The occurrence of the promoter elements is tightly associated with the base composition features of the promoter regions, and these associations mediate the breadth of tissue expression of the genes in humans. The occurrence of two or more promoter elements in the promoter regions is required for the medium- and wide-range expression profiles of the human genes on Chr22. Thus, the reported data shed light on the characteristics of the PPRs of the human genes on Chr22, which may improve our understanding of the regulatory roles of the PPRs, with their promoter elements, in gene expression.
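The kind of base-composition scan described above can be sketched with a sliding window that reports GC fraction and a CpG observed/expected ratio per window. The window and step sizes here are illustrative choices, not the paper's parameters, and the sequence is a toy example.

```python
from collections import Counter

def window_composition(seq, win=200, step=50):
    """Sliding-window base composition along a promoter sequence.
    Returns (start, GC fraction, CpG observed/expected) per window."""
    out = []
    seq = seq.upper()
    for i in range(0, len(seq) - win + 1, step):
        w = seq[i:i + win]
        counts = Counter(w)
        g, c = counts.get("G", 0), counts.get("C", 0)
        gc = (g + c) / win
        cpg = w.count("CG")
        # Gardiner-Garden/Frommer-style observed/expected CpG ratio
        oe = cpg * win / (c * g) if c and g else 0.0
        out.append((i, gc, oe))
    return out

# Toy sequence: a GC/CpG-rich half followed by an AT-rich half
promoter = ("CGCGCGGCGC" * 30) + ("ATATATTAAT" * 30)
profile = window_composition(promoter, win=100, step=100)
```

A CpG-island-like region then shows up as windows with both high GC fraction and an observed/expected ratio above the conventional 0.6 cut-off.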
Abstract:
We present evidence of complex balancing regulation of HTR1B transcription by common polymorphisms in its promoter. Computational analysis of the HTR1B gene predicted that a 5′ segment, spanning common DNA sequence variations, T-261G, A-161T, and -182INS/DEL-181, contained a putative functional promoter. Using a secreted alkaline phosphatase (SEAP) reporter gene system, we found that the haplotype -261G_-182INS-181_A-161 enhanced transcriptional activity 2.3-fold compared with the haplotype T-261_-182INS-181_A-161. Conversely, -161T reversed this, and the net effect when -261G and -161T were in the same haplotype (-261G_-182INS-181_-161T) was equivalent to the major haplotype (T-261_-182INS-181_A-161). Electrophoretic mobility shift experiments showed that -261G and -161T modify the binding of transcription factors (TFs): -261G generates a new AP2 binding site, while alleles A-161 and -161T exhibit different binding characteristics to AP1. T-261G and A-161T were found to be in linkage disequilibrium (LD) with G861C in a European ancestry population. Interestingly, G861C has been reported to be associated with several psychiatric disorders. Our results indicate that HTR1B is the target of substantial transcriptional genetic regulation by common haplotypes, which are in LD with the HTR1B single-nucleotide polymorphism (SNP) most commonly used in association studies.
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices and has never before been applied to eigenanalysis for power system small signal stability. This paper analyzes the differences between the BR and QR algorithms, with performance compared in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm employs accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time while maintaining a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. Experimental results suggest that the BR algorithm is a more efficient algorithm for large-scale power system small signal stability eigenanalysis.
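The pipeline both algorithms share, reduce the state matrix to upper Hessenberg form, then iterate on that form, can be sketched with standard library routines. A BR implementation is not available in common numerical libraries, so the sketch below uses the LAPACK QR-based solver as the baseline the paper compares against; the random matrix merely stands in for a small-signal state matrix.

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))   # stand-in for a small-signal state matrix

# Reduce to upper Hessenberg form: the shape both QR and BR iterate on
H, Q = hessenberg(A, calc_q=True)

# Sanity check: H is zero below the first subdiagonal
assert np.allclose(np.tril(H, -2), 0.0)

# QR-based eigenvalues (LAPACK); a BR implementation would replace this step
eigs_H = np.linalg.eigvals(H)
eigs_A = np.linalg.eigvals(A)
```

Because the Hessenberg reduction is a similarity transform, the eigenvalues of H and A coincide; the BR algorithm's advantage lies in exploiting the nearly tridiagonal band structure during the subsequent iteration.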
Abstract:
Formaldehyde-derived oxazolidine derivatives 4-7 of the beta-adrenoreceptor antagonists metoprolol 1, atenolol 2 and timolol 3 have been synthesised. Conformational analysis of 1-3 and the oxazolidine derivatives 4-7 has been performed using H-1 NMR spectroscopy and computational methods. The H-1 NMR studies show that for the aryloxypropanolamine beta-adrenoreceptor antagonists there is a predominance of the conformer in which the amine group is approximately antiperiplanar or trans to the aryloxymethylene group. Both H-1 NMR data and theoretical studies indicate that the oxazolidine derivatives 4-7 and the aryloxypropanolamine beta-adrenoreceptor antagonists 1-3 adopt similar conformations around the beta-amino alcohol moiety. Thus, oxazolidine ring formation does not dramatically alter the preferred conformation adopted by the beta-amino alcohol moiety of 1-3. Oxazolidine derivatives of aryloxypropanolamine beta-adrenoreceptor antagonists may therefore be appropriate as prodrugs, or semi-rigid analogues, when greater lipophilicity is required for drug delivery.
Abstract:
We use theoretical and numerical methods to investigate the general pore-fluid flow patterns near geological lenses in hydrodynamic and hydrothermal systems respectively. Analytical solutions have been rigorously derived for the pore-fluid velocity, stream function and excess pore-fluid pressure near a circular lens in a hydrodynamic system. These analytical solutions provide not only a better understanding of the physics behind the problem, but also a valuable benchmark solution for validating any numerical method. Since a geological lens is surrounded by a medium of large extent in nature and the finite element method is efficient at modelling only media of finite size, the determination of the size of the computational domain of a finite element model, which is often overlooked by numerical analysts, is very important in order to ensure both the efficiency of the method and the accuracy of the numerical solution obtained. To highlight this issue, we use the derived analytical solutions to deduce a rigorous mathematical formula for designing the computational domain size of a finite element model. The proposed mathematical formula has indicated that, no matter how fine the mesh or how high the order of elements, the desired accuracy of a finite element solution for pore-fluid flow near a geological lens cannot be achieved unless the size of the finite element model is determined appropriately. Once the finite element computational model has been appropriately designed and validated in a hydrodynamic system, it is used to examine general pore-fluid flow patterns near geological lenses in hydrothermal systems. Some interesting conclusions on the behaviour of geological lenses in hydrodynamic and hydrothermal systems have been reached through the analytical and numerical analyses carried out in this paper.
Abstract:
The explosive growth in biotechnology combined with major advances in information technology has the potential to radically transform immunology in the postgenomics era. Not only do we now have ready access to vast quantities of existing data, but new data with relevance to immunology are being accumulated at an exponential rate. Resources for computational immunology include biological databases and methods for data extraction, comparison, analysis and interpretation. Publicly accessible biological databases of relevance to immunologists number in the hundreds and are growing daily. The ability to efficiently extract and analyse information from these databases is vital for efficient immunology research. Most importantly, a new generation of computational immunology tools enables modelling of peptide transport by the transporter associated with antigen processing (TAP), modelling of antibody binding sites, identification of allergenic motifs and modelling of T-cell receptor serial triggering.
Abstract:
Allergy is a major cause of morbidity worldwide. The number of characterized allergens and the volume of related information are increasing rapidly, creating demands for advanced information storage, retrieval and analysis. Bioinformatics provides useful tools for analysing allergens, and these are complementary to traditional laboratory techniques for the study of allergens. Specific applications include structural analysis of allergens, identification of B- and T-cell epitopes, assessment of allergenicity and cross-reactivity, and genome analysis. In this paper, the most important bioinformatic tools and methods relevant to the study of allergy are reviewed.
Abstract:
In this paper a methodology for integrated multivariate monitoring and control of biological wastewater treatment plants during extreme events is presented. To monitor the process, on-line dynamic principal component analysis (PCA) is performed on the process data to extract the principal components that represent the underlying mechanisms of the process. Fuzzy c-means (FCM) clustering is used to classify the operational state. Clustering the PCA scores reduces the computational burden and increases robustness through noise attenuation. The class-membership information from FCM is used to derive adequate control set points for the local control loops. The methodology is illustrated by a simulation study of a biological wastewater treatment plant, on which disturbances of various types are imposed. The results show that the methodology can be used to determine and co-ordinate control actions in order to shift the control objective and improve the effluent quality.
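The monitoring chain described above, project process data onto principal components, then fuzzy-cluster the scores to identify the operational state, can be sketched in a few lines. The five-variable "process" and the two operating regimes below are synthetic stand-ins, and the FCM loop is a plain textbook implementation rather than the paper's on-line dynamic variant.

```python
import numpy as np

def pca_scores(X, k):
    """Project mean-centred data onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; returns membership matrix U (n x c) and centres."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))    # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))            # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centres

# Two synthetic operating regimes in a 5-variable "process"
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 0.3, size=(50, 5))
upset  = rng.normal(3.0, 0.3, size=(50, 5))
scores = pca_scores(np.vstack([normal, upset]), k=2)
U, centres = fuzzy_cmeans(scores, c=2)
states = U.argmax(axis=1)   # crisp operational-state label per sample
```

In the paper's scheme the fuzzy memberships themselves, not only the crisp labels, would then drive the choice of control set points.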
Abstract:
We compare the performance of two different low-storage filter diagonalisation (LSFD) strategies in the calculation of complex resonance energies of the HO2 radical. The first is carried out within a complex-symmetric Lanczos subspace representation [H. Zhang, S.C. Smith, Phys. Chem. Chem. Phys. 3 (2001) 2281]. The second involves harmonic inversion of a real autocorrelation function obtained via a damped Chebychev recursion [V.A. Mandelshtam, H.S. Taylor, J. Chem. Phys. 107 (1997) 6756]. We find that while the Chebychev approach has the advantage of utilizing real algebra in the time-consuming process of generating the vector recursion, the Lanczos method (using complex vectors) requires fewer iterations, especially for the low-energy part of the spectrum. The overall efficiency in calculating resonances for these two methods is comparable for this challenging system. (C) 2001 Elsevier Science B.V. All rights reserved.
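The core of the first strategy, a Lanczos recursion that builds a small tridiagonal subspace representation of the Hamiltonian, can be sketched for the simpler real-symmetric case. This is only the basic three-term recursion (no complex-symmetric inner product, no filtering, no reorthogonalisation), shown on a random symmetric matrix; the Ritz values of the small tridiagonal matrix approximate the extremal eigenvalues of the full one.

```python
import numpy as np

def lanczos_ritz(A, v0, k):
    """k-step Lanczos recursion for symmetric A. Returns the eigenvalues
    (Ritz values) of the k x k tridiagonal subspace representation.
    No reorthogonalisation; adequate for a short illustrative run."""
    alpha, beta = [], []
    q_prev = np.zeros(len(v0))
    q = v0 / np.linalg.norm(v0)
    b = 0.0
    for _ in range(k):
        w = A @ q - b * q_prev          # three-term recursion
        a = q @ w
        w -= a * q
        b = np.linalg.norm(w)
        alpha.append(a)
        beta.append(b)
        q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(2)
M = rng.standard_normal((60, 60))
A = (M + M.T) / 2                        # random symmetric test matrix
ritz = lanczos_ritz(A, rng.standard_normal(60), k=30)
exact = np.linalg.eigvalsh(A)
```

The storage advantage the abstract refers to is visible here: only two vectors and two scalar sequences are kept, never the full Krylov basis.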
Abstract:
Observations of an insect's movement lead to theory on the insect's flight behaviour and the role of movement in the species' population dynamics. This theory leads to predictions of the way the population changes in time under different conditions. If a hypothesis on movement predicts a specific change in the population, then the hypothesis can be tested against observations of population change. Routine pest monitoring of agricultural crops provides a convenient source of data for studying movement into a region and among fields within a region. Examples of the use of statistical and computational methods for testing hypotheses with such data are presented. The types of questions that can be addressed with these methods and the limitations of pest monitoring data when used for this purpose are discussed. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
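The likelihood structure of such a mixture model can be made concrete in the fully parametric special case: a logistic model for the probability of each failure type and exponential component baselines under proportional hazards. This is a deliberate simplification for illustration (the paper's method leaves the baselines unspecified and fits by ECM); the two-observation example at the end is synthetic.

```python
import numpy as np

def mixture_loglik(params, t, delta, cause, x):
    """Log-likelihood of a two-cause parametric mixture model:
    P(cause 1 | x) is logistic in x; each cause has an exponential
    baseline hazard scaled by exp(gamma_j * x) (proportional hazards).
    delta = 1 if the failure is observed, 0 if censored; cause in {1, 2}
    (ignored when censored). Illustrative simplification only."""
    b0, b1, lam1, g1, lam2, g2 = params
    p1 = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))    # logistic mixing weight
    h1 = lam1 * np.exp(g1 * x)                   # cause-1 hazard rate
    h2 = lam2 * np.exp(g2 * x)                   # cause-2 hazard rate
    S1, S2 = np.exp(-h1 * t), np.exp(-h2 * t)    # exponential survival
    dens = np.where(cause == 1, p1 * h1 * S1, (1 - p1) * h2 * S2)
    surv = p1 * S1 + (1 - p1) * S2               # censored contribution
    return np.sum(np.where(delta == 1, np.log(dens), np.log(surv)))

# Tiny synthetic data set: one observed cause-1 failure, one censored
t = np.array([1.0, 2.0])
delta = np.array([1, 0])
cause = np.array([1, 0])
x = np.array([0.0, 0.0])
ll = mixture_loglik((0.0, 0.0, 1.0, 0.0, 1.0, 0.0), t, delta, cause, x)
```

Maximising this function jointly over the logistic and hazard parameters is the parametric counterpart of the ECM fit described in the abstract.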
Abstract:
Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between N-terminal transmembrane helix and signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies. (C) 2003 Elsevier Inc. All rights reserved.
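The discriminant-function idea, separate the two classes with a linear rule over features of the hydrophobic region, can be sketched with a two-class Fisher discriminant. The three features and both point clouds below are synthetic stand-ins for the paper's descriptors (hydrophobicity, residue frequencies, start position), not its datasets.

```python
import numpy as np

def fisher_discriminant(X1, X2):
    """Two-class Fisher linear discriminant: returns weight vector w and
    threshold c; classify feature vector z as class 1 when w @ z > c."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = np.cov(X1.T) * (len(X1) - 1) + np.cov(X2.T) * (len(X2) - 1)
    w = np.linalg.solve(Sw, m1 - m2)
    c = w @ (m1 + m2) / 2           # midpoint between projected means
    return w, c

rng = np.random.default_rng(3)
# Synthetic "signal peptide" vs "TM helix" feature clouds (3 features:
# mean hydrophobicity, a residue frequency, hydrophobic-region start)
sp = rng.normal([0.4, 0.2, 2.0], 0.1, size=(40, 3))
tm = rng.normal([0.7, 0.1, 8.0], 0.1, size=(40, 3))
w, c = fisher_discriminant(sp, tm)
pred_sp = np.array([w @ z > c for z in sp])
pred_tm = np.array([w @ z > c for z in tm])
```

Shifting the threshold c away from the midpoint trades sensitivity for precision, which is how fixed-sensitivity operating points like the 90% figures in the abstract are obtained.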
Abstract:
Minimum/maximum autocorrelation factors (MAF) is a suitable algorithm for orthogonalization of a vector random field. Orthogonalization avoids the use of multivariate geostatistics during joint stochastic modeling of geological attributes. This manuscript demonstrates in a practical way that computation of MAF is the same as discriminant analysis of the nested structures. Mathematica software is used to illustrate MAF calculations from a linear model of coregionalization (LMC). The limitation of two nested structures in the LMC for MAF is also discussed and linked to the effects of anisotropy and support. The analysis elucidates the matrix properties behind the approach and clarifies relationships that may be useful for model-based approaches. (C) 2003 Elsevier Science Ltd. All rights reserved.
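The standard MAF construction the abstract builds on can be sketched as a generalized eigenproblem: diagonalize the lag-h increment covariance relative to the lag-0 covariance. The bivariate series below (one smooth, one noisy component, mixed together) is synthetic, and this is the textbook data-driven MAF rather than the paper's LMC-based workflow.

```python
import numpy as np
from scipy.linalg import eigh

def maf_factors(Z, lag=1):
    """Min/max autocorrelation factors of a multivariate series Z (n x p):
    solve the generalized eigenproblem  C_d w = lambda C_0 w, where C_0 is
    the covariance of Z and C_d the covariance of the lag-h increments.
    Factors with the smallest lambda have the highest autocorrelation."""
    Zc = Z - Z.mean(axis=0)
    C0 = np.cov(Zc.T)
    D = Zc[lag:] - Zc[:-lag]           # lag-h increments
    Cd = np.cov(D.T)
    lam, W = eigh(Cd, C0)              # generalized symmetric eigenproblem
    return Zc @ W, lam                 # factor scores and eigenvalues

rng = np.random.default_rng(4)
n = 500
smooth = np.cumsum(rng.normal(size=n)) / 10   # highly autocorrelated signal
noise = rng.normal(size=n)                    # nearly uncorrelated signal
Z = np.column_stack([smooth + 0.1 * noise, noise + 0.1 * smooth])
F, lam = maf_factors(Z)
```

Because eigh b-orthonormalizes the eigenvectors against C0, the resulting factors are mutually uncorrelated, which is exactly the property that lets each factor be modeled independently instead of via a full multivariate geostatistical model.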