20 results for Computational methods
in Aston University Research Archive
Abstract:
Traditional Chinese Medicine (TCM) has been actively researched through various approaches, including computational techniques. A review of the basic elements of TCM is provided to illuminate the various challenges and progress in its study using computational methods. Information on various TCM formulations, in particular resources on databases of TCM formulations and their integration with Western medicine, is analyzed in several facets, such as TCM classifications, types of databases, and mining tools. Aspects of computational TCM diagnosis, namely inspection, auscultation and pulse analysis, as well as TCM expert systems, are reviewed in terms of their benefits and drawbacks. Various approaches to exploring relationships among TCM components and to finding genes/proteins related to TCM symptom complexes are also studied. This survey provides a summary of the advances in computational approaches for TCM and will be useful for future knowledge discovery in this area. © 2007 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Crotonaldehyde (2-butenal) adsorption over gold sub-nanometer particles, and the influence of co-adsorbed oxygen, has been systematically investigated by computational methods. Using density functional theory, the adsorption energetics of crotonaldehyde on bare and oxidised gold clusters (Au, d = 0.8 nm) were determined as a function of oxygen coverage and coordination geometry. At low oxygen coverage, sites are available for which crotonaldehyde adsorption is enhanced relative to bare Au clusters by 10 kJ mol⁻¹. At higher oxygen coverage, crotonaldehyde is forced to adsorb in close proximity to oxygen, weakening adsorption by up to 60 kJ mol⁻¹ relative to bare Au. Bonding geometries, density of states plots and Bader analysis are used to elucidate crotonaldehyde bonding to gold nanoparticles in terms of partial electron transfer from Au to crotonaldehyde, and show that donation to gold from crotonaldehyde also becomes significant following metal oxidation. At high oxygen coverage we find that all molecular adsorption sites have a neighbouring, destabilising, oxygen adatom, so that despite enhanced donation, crotonaldehyde adsorption is always weakened by steric interactions. For a larger cluster (Au, d = 1.1 nm) crotonaldehyde adsorption is destabilized in this way even at low oxygen coverage. These findings provide a quantitative framework to underpin the experimentally observed influence of oxygen on the selective oxidation of crotyl alcohol to crotonaldehyde over gold and gold-palladium alloys. © 2014 the Partner Organisations.
Abstract:
In dimensional metrology, the largest source of measurement uncertainty is often thermal variation. Dimensional measurements are currently scaled linearly, using ambient temperature measurements and coefficients of thermal expansion, to ideal metrology conditions at 20 °C. This scaling is particularly difficult to implement with confidence in large volumes, as the temperature is unlikely to be uniform, resulting in thermal gradients. A number of well-established computational methods are used in the design phase of product development for the prediction of thermal and gravitational effects, and these could be used to a greater extent in metrology. This paper outlines the theory of how physical measurements of dimension and temperature can be combined more comprehensively throughout the product lifecycle, from design through to the manufacturing phase. The Hybrid Metrology concept is also introduced: an approach to metrology that promises to improve product and equipment integrity in future manufacturing environments. The Hybrid Metrology System combines various state-of-the-art physical dimensional and temperature measurement techniques with established computational methods to better predict thermal and gravitational effects.
Abstract:
Epitopes mediated by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming; various computational methods have therefore been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations upon them. A system has been produced which allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD. The system has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
Abstract:
The papers resulting from the recent Biochemical Society Focused Meeting 'G-Protein-Coupled Receptors: from Structural Insights to Functional Mechanisms' held in Prato in September 2012 are introduced in the present overview. A number of future goals for GPCR (G-protein-coupled receptor) research are considered, including the need to develop biophysical and computational methods to explore the full range of GPCR conformations and their dynamics, the need to develop methods to take this into account for drug discovery and the importance of relating observations on isolated receptors or receptors expressed in model systems to receptor function in vivo. © 2013 Biochemical Society.
Abstract:
We consider a Cauchy problem for the Laplace equation in a bounded region containing a cut, where the region is formed by removing a sufficiently smooth arc (the cut) from a bounded simply connected domain D. The aim is to reconstruct the solution on the cut from the values of the solution and its normal derivative on the boundary of the domain D. We propose an alternating iterative method which involves solving direct mixed problems for the Laplace operator in the same region. These mixed problems have either a Dirichlet or a Neumann boundary condition imposed on the cut and are solved by a potential approach. Each of these mixed problems is reduced to a system of integral equations of the first kind with logarithmic and hypersingular kernels and at most a square root singularity in the densities at the endpoints of the cut. The full discretization of the direct problems is realized by a trigonometric quadrature method which has super-algebraic convergence. The numerical examples presented illustrate the feasibility of the proposed method.
Abstract:
The twin arginine translocation (TAT) system ferries folded proteins across the bacterial membrane. Proteins are directed into this system by the TAT signal peptide present at the amino terminus of the precursor protein, which contains the twin arginine residues that give the system its name. There are currently only two computational methods for the prediction of TAT-translocated proteins from sequence. Both methods have limitations that make the creation of a new algorithm for TAT-translocated protein prediction desirable. We have developed TATPred, a new sequence-model method, based on a Naïve Bayesian network, for the prediction of TAT signal peptides. In this approach, a comprehensive range of models was tested to identify the most reliable and robust predictor. The best model comprised 12 residues: the three residues prior to the twin arginines, the twin arginines themselves, and the seven residues that follow them. We found a prediction sensitivity of 0.979 and a specificity of 0.942.
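The abstract does not give TATPred's exact parameterisation, so the following is only a minimal sketch of a position-specific Naïve Bayes classifier over a 12-residue window (three residues before the twin arginines, the RR pair, and the seven residues after), with Laplace smoothing. The window-extraction rule, toy sequences and class labels are illustrative assumptions, not the published model.

```python
from collections import defaultdict
import math

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def extract_window(seq):
    """12-residue window around the first 'RR': 3 residues before + RR + 7 after."""
    i = seq.find("RR")
    if i < 3 or i + 9 > len(seq):
        return None
    return seq[i - 3:i + 9]

class PositionalNaiveBayes:
    """Naive Bayes with per-position amino-acid frequencies and Laplace smoothing."""

    def __init__(self, window_len=12, alpha=1.0):
        self.window_len, self.alpha = window_len, alpha
        self.counts, self.n_class, self.log_prior = {}, {}, {}

    def fit(self, windows, labels):
        n = len(labels)
        for w, c in zip(windows, labels):
            if c not in self.counts:
                self.counts[c] = [defaultdict(int) for _ in range(self.window_len)]
                self.n_class[c] = 0
            self.n_class[c] += 1
            for pos, aa in enumerate(w):
                self.counts[c][pos][aa] += 1
        for c, n_c in self.n_class.items():
            self.log_prior[c] = math.log(n_c / n)

    def score(self, window, c):
        """Log posterior (up to a constant) of class c for a 12-residue window."""
        s = self.log_prior[c]
        for pos, aa in enumerate(window):
            num = self.counts[c][pos][aa] + self.alpha
            den = self.n_class[c] + self.alpha * len(AMINO_ACIDS)
            s += math.log(num / den)
        return s

    def predict(self, window):
        return max(self.counts, key=lambda c: self.score(window, c))

# Toy 12-residue training windows: label 1 = TAT-like, 0 = background (invented).
windows = ["NASRRQFLKAYA", "MKKTAIAIAVAL", "TLSRRDFLKASA", "MKQSTIALALLP"]
labels = [1, 0, 1, 0]
model = PositionalNaiveBayes()
model.fit(windows, labels)

query = "MGLNASRRSFLKAAAGTAVLA"      # toy precursor N-terminus
window = extract_window(query)
print(window, "->", model.predict(window))
```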
Abstract:
A survey of crystal structures containing hydantoin, dihydrouracil and uracil derivatives in the Cambridge Structural Database revealed four main types of hydrogen bond motifs when derivatives with extra substituents able to interfere with the main motif are excluded. All these molecules contain two hydrogen bond donors and two hydrogen bond acceptors in the sequence NH, C=O, NH, C=O within a 5-membered ring (hydantoin) or a 6-membered ring (dihydrouracil and uracil). In all cases, both ring NH groups act as donors in the main hydrogen bond motif, but there is an excess of hydrogen bond acceptors (two C=O groups, each able to accept twice), so two possibilities are found: (i) each carbonyl O atom may accept one hydrogen bond, or (ii) one carbonyl O atom may accept two hydrogen bonds while the other does not participate in the hydrogen bonding. We observed different preferences in the type and symmetry of the motifs adopted by the different derivatives, and good agreement is found between motifs observed experimentally and those predicted using computational methods. We identified molecular factors such as chirality, substituent size and the possibility of C-H⋯O interactions as important influences on which motif is observed. © 2012 The Royal Society of Chemistry and the Centre National de la Recherche Scientifique.
Abstract:
Two-stage data envelopment analysis (DEA) efficiency models identify the efficient frontier of a two-stage production process. In some two-stage processes, the inputs to the first stage are also used by the second stage; these are known as shared inputs. This paper proposes a new relational linear DEA model for measuring the efficiency score of two-stage processes with shared inputs under the constant returns-to-scale assumption. Two case studies, from the banking industry and university operations, are used to illustrate the potential applications of the proposed approach.
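For readers unfamiliar with DEA, here is a minimal sketch of the standard single-stage CCR multiplier model under constant returns to scale, solved as a linear programme with SciPy. The relational two-stage formulation and the treatment of shared inputs in the paper add further variables and linking constraints that are not reproduced here, and the data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (CRS) multiplier-model efficiency of DMU o.

    X: (n_dmu, m) inputs, Y: (n_dmu, s) outputs. Returns a value in (0, 1].
    Maximise u.y_o subject to v.x_o = 1 and u.y_j - v.x_j <= 0 for all DMUs j.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [u_1..u_s, v_1..v_m], all non-negative.
    c = np.concatenate([-Y[o], np.zeros(m)])            # minimise -u.y_o
    A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]    # v.x_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Toy data: 5 DMUs, 2 inputs, 2 outputs (values are illustrative only).
X = np.array([[20, 300], [30, 200], [40, 100], [20, 200], [10, 400]], float)
Y = np.array([[100, 90], [120, 110], [90, 120], [80, 60], [110, 80]], float)
for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```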
Abstract:
Background: DNA-binding proteins play a pivotal role in various intra- and extra-cellular activities ranging from DNA replication to gene expression control. Identification of DNA-binding proteins is one of the major challenges in the field of genome annotation. Several computational methods have been proposed in the literature for DNA-binding protein identification; however, most of them do not provide a valuable knowledge base for our understanding of DNA-protein interactions. Results: We first present a new protein sequence encoding method called PSSM Distance Transformation, and then construct a DNA-binding protein identification method (SVM-PSSM-DT) by combining PSSM Distance Transformation with a support vector machine (SVM). First, the PSSM profiles are generated by using the PSI-BLAST program to search the non-redundant (NR) database. Next, the PSSM profiles are transformed into uniform numeric representations by the distance transformation scheme. Lastly, the resulting uniform numeric representations are fed into an SVM classifier for prediction, so that whether or not a sequence binds DNA can be determined. In a benchmark test on 525 DNA-binding and 550 non-DNA-binding proteins using jackknife validation, the present model achieved an ACC of 79.96%, an MCC of 0.622 and an AUC of 86.50%. This performance is considerably better than most of the existing state-of-the-art predictive methods. When tested on the recently constructed independent dataset PDB186, SVM-PSSM-DT also achieved the best performance, with an ACC of 80.00%, an MCC of 0.647 and an AUC of 87.40%, outperforming some existing state-of-the-art methods. Conclusions: The experimental results demonstrate that PSSM Distance Transformation is an effective protein sequence encoding method and that SVM-PSSM-DT is a useful tool for identifying DNA-binding proteins. A user-friendly web server for SVM-PSSM-DT has been constructed and is freely accessible at http://bioinformatics.hitsz.edu.cn/PSSM-DT/.
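The abstract does not specify the PSSM Distance Transformation itself, so the sketch below uses a simplified distance-based encoding (products of PSSM columns a fixed number of positions apart) purely as a stand-in, and random matrices in place of real PSI-BLAST profiles. Only the overall pipeline, PSSM to fixed-length vector to SVM, is illustrated.

```python
import numpy as np
from sklearn.svm import SVC

def pssm_distance_features(pssm, max_dist=3):
    """Fixed-length features from an (L, 20) PSSM.

    For each ordered residue-type pair (a, b) and separation d = 1..max_dist,
    accumulate the product of PSSM scores at positions i and i + d.  This is a
    simplified stand-in for the paper's PSSM Distance Transformation.
    """
    L, A = pssm.shape
    feats = np.zeros((A, A, max_dist))
    for d in range(1, max_dist + 1):
        if L > d:
            # Sum of outer products between PSSM rows d positions apart.
            feats[:, :, d - 1] = pssm[:-d].T @ pssm[d:] / (L - d)
    return feats.ravel()

rng = np.random.default_rng(0)

def random_pssm(length):
    """Placeholder for a PSI-BLAST PSSM; real profiles come from searching NR."""
    return rng.normal(size=(length, 20))

# Toy training set: random profiles standing in for DNA-binding (1) and
# non-binding (0) proteins.
X = np.array([pssm_distance_features(random_pssm(rng.integers(50, 200)))
              for _ in range(40)])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print(clf.predict(X[:5]))
```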
Abstract:
A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
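As a concrete illustration of the simplest of these approximations, the sketch below iterates the naive mean-field and TAP fixed-point equations for a small pairwise binary (Ising-like) model. The couplings, fields and damping schedule are arbitrary choices for the example, not taken from the book.

```python
import numpy as np

def naive_mean_field(J, h, n_iter=200, damping=0.5):
    """Naive mean-field magnetisations m_i ~ <s_i> for a pairwise binary model
    p(s) proportional to exp(s.h + 0.5 s.J.s), with s_i in {-1, +1}."""
    m = np.zeros_like(h)
    for _ in range(n_iter):
        m_new = np.tanh(h + J @ m)
        m = damping * m + (1 - damping) * m_new   # damped fixed-point update
    return m

def tap_mean_field(J, h, n_iter=200, damping=0.5):
    """TAP equations: the mean-field update plus the Onsager reaction term."""
    m = np.zeros_like(h)
    for _ in range(n_iter):
        reaction = m * ((J ** 2) @ (1 - m ** 2))
        m_new = np.tanh(h + J @ m - reaction)
        m = damping * m + (1 - damping) * m_new
    return m

# Small random model; couplings symmetrised, diagonal set to zero.
rng = np.random.default_rng(1)
n = 10
J = rng.normal(scale=0.3 / np.sqrt(n), size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.2, size=n)

print("naive MF:", np.round(naive_mean_field(J, h), 3))
print("TAP     :", np.round(tap_mean_field(J, h), 3))
```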
Abstract:
Visualization of high-dimensional data has always been a challenging task. Here we discuss and propose variants of non-linear data projection methods (Generative Topographic Mapping (GTM) and GTM with simultaneous feature saliency (GTM-FS)) that are adapted to be effective on very high-dimensional data. The adaptations use log-space values at certain steps of the Expectation Maximization (EM) algorithm and during the visualization process. We have tested the proposed algorithms by visualizing electrostatic potential data for Major Histocompatibility Complex (MHC) class-I proteins. The experiments show that the variants of GTM and GTM-FS work successfully with data of more than 2000 dimensions, and we compare the results with other linear/non-linear projection methods: Principal Component Analysis (PCA), Neuroscale (NSC) and the Gaussian Process Latent Variable Model (GPLVM).
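The central adaptation, keeping the EM computations in log space, can be illustrated with a minimal sketch of GTM-style E-step responsibilities normalised with logsumexp so that the per-component Gaussian likelihoods never underflow in thousands of dimensions. The data, number of latent grid points and noise parameter below are placeholders rather than the MHC electrostatic potential data used in the paper.

```python
import numpy as np
from scipy.special import logsumexp

def log_responsibilities(X, Y, beta):
    """E-step responsibilities of GTM latent grid points, computed in log space.

    X: (N, D) data, Y: (K, D) mixture centres mapped from the latent grid,
    beta: inverse noise variance of the isotropic Gaussian components.
    In high dimensions the per-component likelihoods underflow in linear space,
    so everything is kept as log values and normalised with logsumexp.
    """
    # Squared distances between every data point and every centre: (N, K).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    D = X.shape[1]
    log_p = 0.5 * D * np.log(beta / (2 * np.pi)) - 0.5 * beta * d2
    log_R = log_p - logsumexp(log_p, axis=1, keepdims=True)
    return log_R  # exp(log_R) sums to 1 over the K components for each point

# Toy high-dimensional data and a small latent grid of centres.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2500))     # e.g. >2000-dimensional observations
Y = rng.normal(size=(16, 2500))      # 16 latent grid points mapped to data space
log_R = log_responsibilities(X, Y, beta=1.0)
print(np.allclose(np.exp(logsumexp(log_R, axis=1)), 1.0))   # responsibilities normalise
```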
Abstract:
The rapid global loss of biodiversity has led to a proliferation of systematic conservation planning methods. In spite of their utility and mathematical sophistication, these methods only provide approximate solutions to real-world problems where there is uncertainty and temporal change. The consequences of errors in these solutions are seldom characterized or addressed. We propose a conceptual structure for exploring the consequences of input uncertainty and oversimplified approximations to real-world processes for any conservation planning tool or strategy. We then present a computational framework based on this structure to quantitatively model species representation and persistence outcomes across a range of uncertainties. These include factors such as land costs, landscape structure, species composition and distribution, and temporal changes in habitat. We demonstrate the utility of the framework using several reserve selection methods including simple rules of thumb and more sophisticated tools such as Marxan and Zonation. We present new results showing how outcomes can be strongly affected by variation in problem characteristics that are seldom compared across multiple studies. These characteristics include number of species prioritized, distribution of species richness and rarity, and uncertainties in the amount and quality of habitat patches. We also demonstrate how the framework allows comparisons between conservation planning strategies and their response to error under a range of conditions. Using the approach presented here will improve conservation outcomes and resource allocation by making it easier to predict and quantify the consequences of many different uncertainties and assumptions simultaneously. Our results show that without more rigorously generalizable results, it is very difficult to predict the amount of error in any conservation plan. These results imply the need for standard practice to include evaluating the effects of multiple real-world complications on the behavior of any conservation planning method.
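The kind of experiment such a framework automates can be sketched as follows: a greedy rule-of-thumb site selector is applied to a synthetic species-by-site matrix while site costs are perturbed in a Monte Carlo loop, and the spread of species-representation outcomes is reported per error level. Everything here (the selection rule, the landscape, the budget and the noise model) is an illustrative assumption and far simpler than tools such as Marxan or Zonation.

```python
import numpy as np

rng = np.random.default_rng(42)

def greedy_select(presence, costs, budget):
    """Greedy rule of thumb: repeatedly pick the site adding the most
    not-yet-covered species per unit cost, until the budget is spent."""
    n_species, n_sites = presence.shape
    covered = np.zeros(n_species, dtype=bool)
    chosen, spent = [], 0.0
    remaining = set(range(n_sites))
    while remaining:
        gains = {j: presence[~covered, j].sum() / costs[j] for j in remaining}
        best = max(gains, key=gains.get)
        if gains[best] == 0 or spent + costs[best] > budget:
            break
        chosen.append(best)
        spent += costs[best]
        covered |= presence[:, best].astype(bool)
        remaining.discard(best)
    return chosen, covered

# Synthetic landscape: 60 species, 120 candidate sites, patchy occurrences.
n_species, n_sites = 60, 120
presence = (rng.random((n_species, n_sites)) < 0.08).astype(int)
true_cost = rng.uniform(1, 10, n_sites)
budget = 60.0

# Propagate cost uncertainty: plan on noisy costs, report representation outcomes.
for noise in (0.0, 0.25, 0.5):
    reps = []
    for _ in range(200):
        noisy_cost = true_cost * rng.lognormal(mean=0.0, sigma=noise, size=n_sites)
        _, covered = greedy_select(presence, noisy_cost, budget)
        reps.append(covered.sum())
    print(f"cost error sigma={noise:.2f}: species represented "
          f"{np.mean(reps):.1f} +/- {np.std(reps):.1f}")
```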
Computational mechanics reveals nanosecond time correlations in molecular dynamics of liquid systems
Abstract:
Statistical complexity, a measure introduced in computational mechanics, has been applied to MD-simulated liquid water and other molecular systems. It has been found that statistical complexity does not converge in these systems but grows logarithmically without limit. The coefficient of this growth has been introduced as a new molecular parameter, which is invariant for a given liquid system. Using this new parameter, extremely long time correlations in the system, undetectable by traditional methods, are elucidated. The existence of correlations hundreds of picoseconds and even nanoseconds long in bulk water has been demonstrated. © 2008 Elsevier B.V. All rights reserved.
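The growth-coefficient idea can be illustrated with a short sketch of the fitting step only: given complexity estimates at increasing observation windows, fit C(t) ≈ a ln t + b and report a. The numbers below are invented for illustration; real values would come from causal-state reconstruction of an MD trajectory, which is not reproduced here.

```python
import numpy as np

def log_growth_coefficient(window_lengths, complexities):
    """Fit C(t) ~ a*ln(t) + b by least squares and return (a, b);
    a is the proposed invariant molecular parameter."""
    a, b = np.polyfit(np.log(window_lengths), complexities, 1)
    return a, b

# Illustrative (made-up) complexity estimates at increasing observation windows,
# in picoseconds.
windows_ps = np.array([10, 30, 100, 300, 1000, 3000])
complexity = np.array([2.1, 2.9, 3.8, 4.6, 5.5, 6.3])

a, b = log_growth_coefficient(windows_ps, complexity)
print(f"logarithmic growth coefficient a = {a:.2f} (C ~ a ln t + b)")
```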
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computing power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve-fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real transputer network.
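A modern stand-in for the transputer network is a pool of worker processes, with one task per response channel. The sketch below distributes a placeholder per-channel "fit" (simple peak-picking on a synthetic FRF) across a multiprocessing pool; the Rational Fraction Polynomial and Ibrahim Time Domain fits themselves are not implemented here, only the parallel decomposition over channels.

```python
import numpy as np
from multiprocessing import Pool

def synthetic_frf(freqs, fn, zeta):
    """Receptance FRF of a single-DOF system (stand-in for measured data)."""
    wn, w = 2 * np.pi * fn, 2 * np.pi * freqs
    return 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)

def fit_channel(args):
    """Placeholder per-channel 'curve fit': peak-picking the natural frequency.
    In the thesis this slot is occupied by the Rational Fraction Polynomial or
    Ibrahim Time Domain fit; only the parallel decomposition is illustrated."""
    freqs, H = args
    return freqs[np.argmax(np.abs(H))]

if __name__ == "__main__":
    freqs = np.linspace(1, 100, 2000)
    # Several hundred response channels sharing the same underlying mode.
    channels = [(freqs, synthetic_frf(freqs, fn=25.0, zeta=0.02)
                 + 1e-5 * np.random.randn(freqs.size))
                for _ in range(256)]
    with Pool() as pool:                       # one task per response channel
        estimates = pool.map(fit_channel, channels)
    print(f"mean natural frequency estimate: {np.mean(estimates):.2f} Hz")
```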