982 results for Computational Biology
Abstract:
Several gene regulatory network models containing concepts of directionality at the edges have been proposed; however, only a few provide an interpretable definition of directionality. Here, in contrast to the standard causality concept defined by Pearl, we introduce the concept of contagion in order to infer directionality at the edges, i.e., asymmetries in the gene expression dependences of regulatory networks. Moreover, we present a bootstrap algorithm to test the contagion concept. The technique was applied to simulated data and also to a large sample of real biological data. A literature review confirmed that some genes identified by contagion indeed belong to the TP53 pathway.
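The abstract does not spell out the bootstrap algorithm, so the following is only an illustrative sketch of a generic bootstrap hypothesis test for dependence between two expression profiles; the statistic, resampling scheme, and all names here are assumptions, not the authors' method:

```python
import numpy as np

def bootstrap_pvalue(x, y, statistic, n_boot=1000, seed=0):
    """Generic bootstrap test of dependence between two expression
    profiles: resampling x and y independently breaks any dependence,
    giving a null distribution for the observed statistic."""
    rng = np.random.default_rng(seed)
    observed = statistic(x, y)
    null = np.empty(n_boot)
    for b in range(n_boot):
        # independent resampling destroys the x-y pairing under H0
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        null[b] = statistic(xb, yb)
    # two-sided p-value: fraction of null resamples at least as extreme
    return float(np.mean(np.abs(null) >= np.abs(observed)))
```

With a correlation statistic, strongly dependent profiles yield a small p-value because the observed statistic sits far in the tail of the resampled null distribution.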
Abstract:
We consider a Bayesian approach to the nonlinear regression model in which the normal distribution on the error term is replaced by skewed distributions that account either for both skewness and heavy tails or for skewness alone. The data considered in this paper are repeated measurements taken over time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes, so our model additionally allows for correlation between observations from the same individual. We illustrate the procedure using a data set on the growth curves of a clinical measurement in a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all of these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. This study leads to the conclusion that the DIC criterion is not trustworthy for this kind of complex model.
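As a minimal illustration of comparing error models with a proper scoring rule, the sketch below uses the mean log predictive score on simulated heavy-tailed data, with closed-form densities standing in for the posterior predictive distributions; the data, densities, and degrees of freedom are assumptions for illustration only:

```python
import numpy as np
from scipy.stats import norm, t as student_t

def mean_log_score(y, logpdf):
    """Mean log predictive score of held-out data y under a candidate
    predictive density; higher values indicate a better model."""
    return float(np.mean(logpdf(y)))

# Heavy-tailed data: a Student-t predictive density should outscore a
# normal density fitted by moments, mirroring the skew-t vs normal choice.
y = student_t.rvs(df=3, size=2000, random_state=42)
score_norm = mean_log_score(y, lambda v: norm.logpdf(v, loc=y.mean(), scale=y.std()))
score_t = mean_log_score(y, lambda v: student_t.logpdf(v, df=3))
```

Because the log score is a proper scoring rule, the density closest to the true data-generating process attains the higher mean score in expectation.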
Abstract:
Regression models for the mean quality-adjusted survival (QAS) time are specified from the hazard functions of transitions between two states, and the mean QAS time may be a complex function of covariates. We discuss a regression model for the mean QAS time based on pseudo-observations, which has the advantage of directly modeling the effect of covariates on the QAS time. Both Monte Carlo simulations and a real data set are studied. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
In many epidemiological studies it is common to resort to regression models relating the incidence of a disease to its risk factors. The main goal of this paper is to consider inference for such models with error-prone observations and with variances of the measurement errors that change across observations. We suppose that the observations follow a bivariate normal distribution and that the measurement errors are normally distributed. Aggregate data allow the estimation of the error variances. Maximum likelihood estimates are computed numerically via the EM algorithm. Consistent estimation of the asymptotic variance of the maximum likelihood estimators is also discussed. Test statistics are proposed for testing hypotheses of interest. Further, we implement a simple graphical device that enables an assessment of the model's goodness of fit. Results of simulations concerning the properties of the test statistics are reported. The approach is illustrated with data from the WHO MONICA Project on cardiovascular disease. Copyright (C) 2008 John Wiley & Sons, Ltd.
Abstract:
We analyze data obtained from a study designed to evaluate the effects of training on the performance of certain motor activities of Parkinson's disease patients. Maximum likelihood methods were used to fit beta-binomial/Poisson regression models tailored to evaluate the effects of training on the numbers of attempted and successful specified manual movements in 1-min periods, controlling for disease stage and use of the preferred hand. We extend models previously considered by other authors in univariate settings to account for the repeated-measures nature of the data. The results suggest that the expected numbers of attempts and successes increase with training, except for patients in advanced stages of the disease using the non-preferred hand. Copyright (c) 2008 John Wiley & Sons, Ltd.
Abstract:
We propose a likelihood ratio test (LRT) with Bartlett correction in order to identify Granger causality between sets of time series of gene expression data. The performance of the proposed test is compared to a previously published bootstrap-based approach. The LRT is shown to be significantly faster and statistically powerful even under non-normal distributions. An R package named gGranger, containing implementations of both Granger causality identification tests, is also provided.
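The Bartlett-corrected test itself is provided in the authors' gGranger R package; purely as a sketch of the underlying idea, an uncorrected likelihood-ratio test of Granger causality for a single lag can be written as follows (the single-lag restriction and all names are my simplifications, not the paper's method):

```python
import numpy as np
from scipy.stats import chi2

def granger_lrt(x, y, lag=1):
    """LRT of whether past values of x help predict y beyond y's own
    past (Granger causality), comparing restricted vs unrestricted
    autoregressions fitted by least squares."""
    n = len(y) - lag
    Y = y[lag:]
    # restricted model: y_t regressed on an intercept and its own past
    Xr = np.column_stack([np.ones(n), y[:-lag]])
    # unrestricted model: additionally include lagged x
    Xu = np.column_stack([Xr, x[:-lag]])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    # LR statistic for Gaussian errors; asymptotically chi-square(1)
    stat = n * np.log(rss(Xr) / rss(Xu))
    return stat, chi2.sf(stat, df=1)
```

When y is strongly driven by lagged x, the unrestricted residual sum of squares drops sharply and the test rejects the no-causality null.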
Abstract:
We describe AMIN (Amidase N-terminal domain), a novel protein domain found specifically in bacterial periplasmic proteins. AMIN domains are widely distributed among peptidoglycan hydrolases and transporter protein families. Based on experimental data, contextual information and phyletic profiles, we suggest that AMIN domains mediate the targeting of periplasmic or extracellular proteins to specific regions of the bacterial envelope.
Abstract:
The hemeprotein myeloperoxidase (MPO) participates in innate immune defense through its ability to generate potent microbicidal oxidants. However, these oxidants are also key mediators of the tissue damage associated with many inflammatory diseases. Thus, there is considerable interest in developing therapeutically useful MPO inhibitors. Here, we used structure-based drug design (SBDD) and ligand-based drug design (LBDD) to search for potentially new and selective MPO inhibitors. A pharmacophore model was developed based on the crystal structure of human MPO in complex with salicylhydroxamic acid (SHA), a known inhibitor of the enzyme. The pharmacophore model was used to screen the ZINC database for potential ligands, which were further filtered on the basis of their physicochemical properties and docking scores. The filtered compounds were visually inspected, and nine were purchased for experimental studies. Surprisingly, almost all of the selected compounds belonged to the aromatic hydrazide class, which had previously been described as MPO inhibitors. The compounds selected by virtual screening were shown to inhibit the chlorinating activity of MPO; the top four compounds displayed IC50 values ranging from 1.0 to 2.8 mM. MPO inactivation by the most effective compound was shown to be irreversible. Overall, our results show that SBDD and LBDD may be useful for the rational development of new MPO inhibitors.
Abstract:
A myriad of methods are available for virtual screening of small-organic-compound databases. In this study we successfully applied a quantitative model of consensus measurements, combining 3D similarity searches (ROCS and EON), Hologram Quantitative Structure-Activity Relationships (HQSAR) and docking (FRED, FlexX, Glide and AutoDock Vina), to retrieve cruzain inhibitors from collected databases. All methods were assessed individually and then combined in Ligand-Based Virtual Screening (LBVS) and Target-Based Virtual Screening (TBVS) consensus scoring, using Receiver Operating Characteristic (ROC) curves to evaluate their performance. Three consensus strategies were used: scaled-rank-by-number, rank-by-rank and rank-by-vote. The scaled-rank-by-number strategy performed best: its steep early ROC curve indicated higher enrichment power at early retrieval of active compounds from the database. The ligand-based approach yielded a robust and predictive HQSAR model that showed superior discrimination between active and inactive compounds, better than the ROCS and EON procedures. Overall, the integration of fast computational techniques based on ligand and target structures resulted in a more efficient retrieval of cruzain inhibitors with the desired pharmacological profiles, which may be useful to advance the discovery of new trypanocidal agents.
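As a rough sketch of the consensus idea, the code below combines several methods' scores by min-max scaling and averaging (one plausible reading of "scaled-rank-by-number") and evaluates the result with a rank-based ROC AUC; the scaling choice and toy scores are assumptions, not the paper's exact protocol:

```python
import numpy as np

def scaled_scores(scores):
    """Min-max scale one method's scores to [0, 1] so that scores from
    different methods become comparable before averaging."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def consensus_rank_by_number(score_matrix):
    """Consensus score: average the min-max scaled scores across methods
    (rows = compounds, columns = scoring methods)."""
    return np.mean([scaled_scores(col) for col in score_matrix.T], axis=0)

def roc_auc(labels, scores):
    """Rank-based AUC: probability that a random active compound is
    ranked above a random inactive one (Mann-Whitney form)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 1.0 means every active compound outscores every inactive one; enrichment at early retrieval is usually judged from the initial portion of the ROC curve rather than the full AUC.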
Abstract:
Molecular orbital calculations were carried out on a set of 28 non-imidazole H(3) antihistamine compounds using the Hartree-Fock method in order to investigate possible relationships between electronic structural properties and binding affinity for H(3) receptors (pK(i)). It was observed that the frontier effective-for-reaction molecular orbital (FERMO) energies correlated better with pK(i) values than the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies. Exploratory data analysis through hierarchical cluster analysis (HCA) and principal component analysis (PCA) showed a separation of the compounds into two sets, one grouping the molecules with high pK(i) values, the other gathering the low-pK(i) compounds. This separation was obtained with the following descriptors: FERMO energies (epsilon(FERMO)), charges derived from the electrostatic potential on the nitrogen atom (N(1)), electronic density indices for the FERMO on the N(1) atom (Sigma(FERMO)c(i)(2)), and electrophilicity (omega). These electronic descriptors were used to construct a quantitative structure-activity relationship (QSAR) model through the partial least-squares (PLS) method with three principal components. The model gave Q(2) = 0.88 and R(2) = 0.927, obtained from a training set of 23 molecules and an external validation set of 5 molecules, respectively. Analysis of the PLS regression equation and of the values of the selected electronic descriptors suggests that high values of the FERMO energies and of Sigma(FERMO)c(i)(2), together with low electrophilicity and pronounced negative charges on N(1), are desirable properties for the design of new molecules with high binding affinity. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Practitioners of bioinformatics require increasing sophistication from their software tools to take into account the particular characteristics that make their domain complex. For example, researchers vary greatly in experience, from novices who would like expert guidance on the best resources to use, to experts who wish to take greater management control of the tools used in their experiments. Also, the range of available, and conflicting, data formats is growing, and there is a desire to automate the many trivial manual stages of in-silico experiments. Agent-oriented software development is one approach to tackling the design of complex applications. In this paper, we argue that agent-oriented development is in fact particularly well suited to developing bioinformatics tools that take the wider domain characteristics into account. To illustrate this, we design a data curation tool that manages the format of experimental data, extend it to better account for the extra requirements imposed by the domain characteristics, and show how those characteristics lead to a system well suited to an agent-oriented view.