46 results for Computer Science, Interdisciplinary Applications
Abstract:
A generic method for the estimation of parameters of Stochastic Ordinary Differential Equations (SODEs) is introduced and developed. This algorithm, called the GePERs method, utilises a genetic optimisation algorithm to minimise a stochastic objective function based on the Kolmogorov-Smirnov (KS) statistic, which is formed from numerical simulations. Further, some of the factors that improve the precision of the estimates are examined. The method is used to estimate parameters of diffusion equations and jump-diffusion equations, and is also applied to the problem of model selection for the Queensland electricity market. (C) 2003 Elsevier B.V. All rights reserved.
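The estimation loop described in this abstract (simulate the SODE numerically, score a candidate parameter by the KS distance between simulated and observed samples, and search the parameter space with a genetic algorithm) can be sketched as follows. This is a minimal illustration under assumptions made only for the example, not the GePERs method itself: the Ornstein-Uhlenbeck process, all parameter values, and the simple truncation-selection/mutation scheme are assumed, not taken from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

def simulate_ou(theta, sigma=0.5, x0=1.0, T=1.0, n_steps=50, n_paths=2000, seed=0):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW; returns X_T samples."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = x - theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

# "Observed" data, generated with a true parameter the estimator does not see.
observed = simulate_ou(theta=1.0, seed=42)

def ks_objective(theta, seed):
    """Stochastic objective: KS distance between simulated and observed samples."""
    return ks_2samp(simulate_ou(theta, seed=seed), observed).statistic

def genetic_estimate(pop_size=20, n_gen=15, elite=5, seed=1):
    """Truncation-selection genetic algorithm minimising the simulated KS objective."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.05, 4.0, size=pop_size)  # candidate theta values
    for gen in range(n_gen):
        # Common random numbers within a generation reduce objective noise.
        scores = np.array([ks_objective(t, seed=gen) for t in pop])
        parents = pop[np.argsort(scores)[:elite]]            # keep the best
        children = rng.choice(parents, size=pop_size - elite) \
            + rng.normal(0.0, 0.2, size=pop_size - elite)    # Gaussian mutation
        pop = np.clip(np.concatenate([parents, children]), 0.01, 5.0)
    scores = np.array([ks_objective(t, seed=n_gen) for t in pop])
    return float(pop[np.argmin(scores)])

theta_hat = genetic_estimate()
```

With these settings the estimate lands near the true theta = 1.0; tightening it further is exactly the "precision of the estimates" question the abstract raises, since more simulated paths make the KS objective less noisy.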
Abstract:
Scorpion toxins are common experimental tools for studies of biochemical and pharmacological properties of ion channels. The number of functionally annotated scorpion toxins is steadily growing, but the number of identified toxin sequences is increasing at a much faster pace. With an estimated 100,000 different variants, bioinformatic analysis of scorpion toxins is becoming a necessary tool for their systematic functional analysis. Here, we report a bioinformatics-driven system involving scorpion toxin structural classification, functional annotation, database technology, sequence comparison, nearest neighbour analysis, and decision rules which produces highly accurate predictions of scorpion toxin functional properties. (c) 2005 Elsevier Inc. All rights reserved.
Abstract:
The new Australian Computational Earth Systems Simulator research facility provides a virtual laboratory for studying the solid earth and its complex system behavior. The facility's capabilities complement those developed by overseas groups, thereby creating the infrastructure for an international computational solid earth research virtual observatory.
Abstract:
We introduce a unified Gaussian quantum operator representation for fermions and bosons. The representation extends existing phase-space methods to Fermi systems as well as the important case of Fermi-Bose mixtures. It enables simulations of the dynamics and thermal equilibrium states of many-body quantum systems from first principles. As an example, we numerically calculate finite-temperature correlation functions for the Fermi Hubbard model, with no evidence of the Fermi sign problem. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
In this paper, we introduce and study a new system of variational inclusions involving (H, eta)-monotone operators in Hilbert spaces. Using the resolvent operator associated with (H, eta)-monotone operators, we prove the existence and uniqueness of solutions for this new system of variational inclusions. We also construct a new algorithm for approximating the solution of this system and discuss the convergence of the sequence of iterates generated by the algorithm. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
For repairable items, the manufacturer has the option to either repair or replace a failed item that is returned under warranty. In this paper, we look at a new warranty servicing strategy for items sold with two-dimensional warranty where the failed item is replaced by a new one when it fails for the first time in a specified region of the warranty and all other failures are repaired minimally. The region is characterised by two parameters and we derive the optimal values for these to minimise the total expected warranty servicing cost. We compare the results with other repair-replace strategies reported in the literature. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
In this paper, a new control design method is proposed for stable processes which can be described using Hammerstein-Wiener models. The internal model control (IMC) framework is extended to accommodate multiple IMC controllers, one for each subsystem. The concept of passive systems is used to construct the IMC controllers which approximate the inverses of the subsystems to achieve dynamic control performance. The Passivity Theorem is used to ensure the closed-loop stability. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Acetohydroxyacid synthase (AHAS; EC 2.2.1.6) catalyzes the first common step in branched-chain amino acid biosynthesis. The enzyme is inhibited by several chemical classes of compounds and this inhibition is the basis of action of the sulfonylurea and imidazolinone herbicides. The commercial sulfonylureas contain a pyrimidine or a triazine ring that is substituted at both meta positions, thus obeying the initial rules proposed by Levitt. Here we assess the activity of 69 monosubstituted sulfonylurea analogs and related compounds as inhibitors of pure recombinant Arabidopsis thaliana AHAS and show that disubstitution is not absolutely essential as exemplified by our novel herbicide, monosulfuron (2-nitro-N-(4'-methyl-pyrimidin-2'-yl) phenyl-sulfonylurea), which has a pyrimidine ring with a single meta substituent. A subset of these compounds was tested for herbicidal activity and it was shown that their effect in vivo correlates well with their potency in vitro as AHAS inhibitors. Three-dimensional quantitative structure-activity relationships were developed using comparative molecular field analysis and comparative molecular similarity indices analysis. For the latter, the best result was obtained when steric, electrostatic, hydrophobic and H-bond acceptor factors were taken into consideration. The resulting fields were mapped on to the published crystal structure of the yeast enzyme and it was shown that the steric and hydrophobic fields are in good agreement with sulfonylurea-AHAS interaction geometry.
Abstract:
Motivation: Targeting peptides direct nascent proteins to their specific subcellular compartment. Knowledge of targeting signals enables informed drug design and reliable annotation of gene products. However, due to the low similarity of such sequences and the dynamical nature of the sorting process, the computational prediction of the subcellular localization of proteins is challenging. Results: We contrast the use of feed-forward models, as employed by the popular TargetP/SignalP predictors, with a sequence-biased recurrent network model. The models are evaluated in terms of performance at the residue level and at the sequence level, and we demonstrate that recurrent networks improve the overall prediction performance. Compared to the original results reported for TargetP, an ensemble of the tested models increases the accuracy by 6% and 5% on non-plant and plant data, respectively.
Abstract:
beta-turns are important topological motifs for biological recognition of proteins and peptides. Organic molecules that sample the side chain positions of beta-turns have shown broad binding capacity to multiple different receptors, for example benzodiazepines. beta-turns have traditionally been classified into various types based on the backbone dihedral angles (phi 2, psi 2, phi 3 and psi 3). Indeed, 57-68% of beta-turns are currently classified into eight backbone families (Type I, Type II, Type I', Type II', Type VIII, Type VIa1, Type VIa2 and Type VIb), with Type IV representing unclassified beta-turns. Although this classification of beta-turns has been useful, the resulting beta-turn types are not ideal for the design of beta-turn mimetics as they do not reflect the topological features of the recognition elements, the side chains. To overcome this, we have extracted beta-turns from a data set of non-homologous and high-resolution protein crystal structures. The side chain positions, as defined by C-alpha-C-beta vectors, of these turns have been clustered using the kth nearest neighbor clustering and filtered nearest centroid sorting algorithms. Nine clusters were obtained that together cluster 90% of the data, and the average intra-cluster RMSD of the four C-alpha-C-beta vectors is 0.36. The nine clusters therefore represent the topology of the side chain scaffold architecture of the vast majority of beta-turns. The mean structures of the nine clusters are useful for the development of beta-turn mimetics and as biological descriptors for focusing combinatorial chemistry towards biologically relevant topological space.
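The side-chain clustering step described above lends itself to a compact sketch. The code below is a toy stand-in, not the paper's pipeline: it uses plain Lloyd's k-means (a form of nearest-centroid sorting) on synthetic 12-dimensional points, where each "turn" is four C-alpha-C-beta vectors flattened together; the data, k = 3, and the noise scale are all assumptions made for illustration.

```python
import numpy as np

def nearest_centroid_labels(points, centroids):
    """Assign each point to its nearest centroid (squared Euclidean distance)."""
    return np.argmin(((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)

def lloyd_kmeans(points, init, n_iter=20):
    """Plain Lloyd's k-means, used here as a stand-in for the paper's
    filtered nearest centroid sorting step (illustration only)."""
    centroids = init.copy()
    for _ in range(n_iter):
        labels = nearest_centroid_labels(points, centroids)
        centroids = np.array([points[labels == j].mean(axis=0)
                              for j in range(len(centroids))])
    return nearest_centroid_labels(points, centroids), centroids

# Synthetic stand-in for beta-turn geometry: each turn is four C-alpha-C-beta
# vectors, flattened to one 12-dimensional point; three artificial families.
rng = np.random.default_rng(1)
true_centres = rng.normal(size=(3, 12))
turns = np.concatenate([c + 0.1 * rng.normal(size=(100, 12)) for c in true_centres])

# One seed point from each family keeps the toy example stable.
labels, centroids = lloyd_kmeans(turns, init=turns[[0, 100, 200]])

def mean_intra_cluster_rmsd(points, labels, centroids):
    """Average, over turns, of the RMSD of the four 3-D vectors from the cluster mean."""
    diffs = (points - centroids[labels]).reshape(-1, 4, 3)
    return float(np.sqrt((diffs ** 2).sum(-1).mean(-1)).mean())

rmsd = mean_intra_cluster_rmsd(turns, labels, centroids)
```

The intra-cluster RMSD computed this way is the same style of quality measure the abstract quotes (0.36 for the nine real clusters).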
Abstract:
Purpose - In many scientific and engineering fields, large-scale heat transfer problems with temperature-dependent pore-fluid densities are commonly encountered; heat transfer from the mantle into the upper crust of the Earth is a typical example. The main purpose of this paper is to develop and present a new combined methodology to solve large-scale heat transfer problems with temperature-dependent pore-fluid densities at the lithospheric and crustal scales. Design/methodology/approach - The theoretical approach is used to determine the thickness and the related thermal boundary conditions of the continental crust on the lithospheric scale, so that some important information can be provided accurately for establishing a numerical model on the crustal scale. The numerical approach is then used to simulate the detailed structures and complicated geometries of the continental crust on the crustal scale. The main advantage of the proposed combination of theoretical and numerical approaches is that, if the thermal distribution in the crust is of primary interest, the use of a reasonable numerical model on the crustal scale can result in a significant reduction in computational effort. Findings - From the ore body formation and mineralization points of view, the present analytical and numerical solutions have demonstrated that the conductive-and-advective lithosphere with variable pore-fluid density is the most favorable, because it may result in the thinnest lithosphere, so that the temperature near the surface of the crust can be high enough to generate shallow ore deposits there. The upward throughflow (i.e. mantle mass flux) can have a significant effect on the thermal structure within the lithosphere. In addition, the emplacement of hot materials from the mantle may further reduce the thickness of the lithosphere.
Originality/value - The present analytical solutions can be used to: validate numerical methods for solving large-scale heat transfer problems; provide correct thermal boundary conditions for numerically solving ore body formation and mineralization problems on the crustal scale; and investigate the fundamental issues related to thermal distributions within the lithosphere. The proposed finite element analysis can be effectively used to consider the geometrical and material complexities of large-scale heat transfer problems with temperature-dependent fluid densities.
Abstract:
Online communities have evolved beyond a purely social phenomenon to become important knowledge-sharing media with real economic consequences. However, the sharing of knowledge and the communication of meaning through Internet technology presents many difficulties. This is particularly so for online finance forums, where market-sensitive information and disinformation about exchange-traded stocks are regularly disseminated. The development of trust and the effect of misinformation in this environment are important in the growth of this communication medium. Forum administrators need to better understand and handle the development of trust. In this article, we analyze and discuss the communicative practices of a group of investors and members of an online community of interest. We found that conflict as a driver of knowledge sharing is an important consideration for forum administrators and designers.
Abstract:
Motivation: An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations due to the minimal assumptions made or, where more specific assumptions are made, are computationally intensive. Results: By converting the value of the test statistic used to test the significance of each gene to a z-score, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
Abstract:
Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data.
In these examples, relevant clusters of genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.