34 results for Millionaire Problem, Efficiency, Verifiability, Zero Test, Batch Equation
Abstract:
Water pollution caused by toxic cyanobacteria is a worldwide problem that increases with eutrophication. Given its biological significance, genotoxicity should be a focus of pollution biomonitoring, owing to the increasing complexity of the toxicological environment to which organisms are exposed. Cyanobacteria produce a large number of bioactive compounds, most of which lack toxicological data. Microcystins comprise a class of potent cyclic heptapeptide toxins produced mainly by Microcystis aeruginosa. Other natural products can also be synthesized by cyanobacteria, such as the protease inhibitor aeruginosin. The hepatotoxicity of microcystins has been well documented, but information on the genotoxic effects of aeruginosins is relatively scarce. In this study, the genotoxicity and ecotoxicity of methanolic extracts from two strains of M. aeruginosa were evaluated: NPLJ-4, containing high levels of microcystin, and NPCD-1, containing high levels of aeruginosin. Four endpoints were assessed using plant assays in Allium cepa: rootlet growth inhibition, chromosomal aberrations, mitotic divisions, and micronucleus assays. The microcystin content of M. aeruginosa NPLJ-4 was confirmed through ELISA, while M. aeruginosa NPCD-1 did not produce microcystins. Extracts of M. aeruginosa NPLJ-4 were diluted to 0.01, 0.1, 1, and 10 ppb of microcystins; the same procedure was applied to M. aeruginosa NPCD-1 extracts for comparison, and water was used as the control. The results demonstrated that both strains inhibited root growth and induced rootlet abnormalities. The aeruginosin-rich strain was more genotoxic, altering the cell cycle, while microcystins were more mitogenic. These findings indicate the need for future research on non-microcystin-producing cyanobacterial strains. Understanding the genotoxicity of M. aeruginosa extracts can help determine a possible link between contamination by aquatic cyanobacteria and the high risk of primary liver cancer found in some areas, as well as help establish limits for levels in water of compounds not yet studied.
Abstract:
Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability of the model characterizing the sequence family of interest is compared to that of an alternative probability model; a null model can serve as this alternative. This is the scoring technique used by sequence analysis tools such as HMMER, SAM, and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution, and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final classification results. In particular, we are interested in minimizing the number of false predictions, a crucial issue for reducing the cost of biological validation.

Results: In all tests on random sequences, the target null model produced the fewest false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also hold for protein sequences. To broaden the applicability of the results, the study used randomly generated sequences; previous studies were performed on amino acid sequences, used only one probabilistic model (HMM) and a specific benchmark, and therefore lacked general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results.

Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model exhibits a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases, the target model is more dependable for biological validation due to its higher specificity.
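To make the scoring scheme concrete, below is a minimal Python sketch of log-odds classification with interchangeable null models. A toy position weight matrix stands in for the family model (a real tool like HMMER would use a profile HMM), and the uniform and target nulls follow the definitions above; all names, the profile, and the example sequence are illustrative assumptions, not the paper's data.

```python
import math
from collections import Counter

DNA = "ACGT"

def null_logprob(seq, dist):
    """Log P(seq | null) for a position-independent residue distribution."""
    return sum(math.log(dist[c]) for c in seq)

def uniform_null(seq):
    """Uniform null: every residue has probability 1/4."""
    return {c: 0.25 for c in DNA}

def target_null(seq):
    """Target null: residue frequencies of the scored sequence itself
    (add-one smoothing avoids log(0))."""
    counts = Counter(seq)
    total = len(seq) + len(DNA)
    return {c: (counts.get(c, 0) + 1) / total for c in DNA}

def family_logprob(seq, pwm):
    """Log P(seq | family) under a toy position-specific profile model."""
    return sum(math.log(col[c]) for c, col in zip(seq, pwm))

def log_odds(seq, pwm, null_fn):
    """Score used for binary classification: accept if above a threshold."""
    return family_logprob(seq, pwm) - null_logprob(seq, null_fn(seq))

if __name__ == "__main__":
    # Hypothetical GC-rich profile of length 8.
    gc_col = {"A": 0.05, "C": 0.45, "G": 0.45, "T": 0.05}
    pwm = [gc_col] * 8
    candidate = "GCGCGCGC"  # extreme compositional bias
    print("uniform null:", log_odds(candidate, pwm, uniform_null))
    print("target  null:", log_odds(candidate, pwm, target_null))
    # The uniform null yields a much higher score for GC-rich input,
    # illustrating the GC bias (and extra false positives) the study reports;
    # the target null absorbs the composition and is more conservative.
```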
Abstract:
Rare variants are becoming the new candidates in the search for genetic variants that predispose individuals to a phenotype of interest. Their low prevalence in a population requires the development of dedicated detection and analytical methods. A family-based approach could greatly enhance their detection and interpretation, because rare variants are nearly family-specific. In this report, we test several distinct approaches for analyzing the information provided by rare and common variants and how it can be used effectively to pinpoint putative candidate genes for follow-up studies. The analyses were performed on the mini-exome data set provided by Genetic Analysis Workshop 17. Eight approaches were tested, four using the trait's heritability estimates and four using QTDT models. These methods were compared on sensitivity, specificity, and positive and negative predictive values in light of the simulation parameters. Our results highlight important limitations of current methods for dealing with rare and common variants: all methods showed reduced specificity and were consequently prone to false-positive associations. Methods analyzing common-variant information showed enhanced sensitivity compared to rare-variant methods. Furthermore, our limited knowledge of how to use biological databases for gene annotation, possibly as covariates in regression models, imposes a barrier to further research.
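For reference, the four comparison metrics reduce to simple ratios over confusion-matrix counts. The sketch below, with made-up counts for a hypothetical candidate-gene screen, shows how reduced specificity (many false positives) drags down the positive predictive value, the limitation the abstract highlights.

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

if __name__ == "__main__":
    # Hypothetical counts: 12 true hits, 30 false alarms among 200 genes.
    print(classification_metrics(tp=12, fp=30, tn=150, fn=8))
    # Even moderate false-positive counts push the PPV below 0.3,
    # so most flagged genes would fail biological follow-up.
```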
Abstract:
In this paper, a general scheme for generating extra cuts during the execution of a Benders decomposition algorithm is presented. These cuts are based on feasible and infeasible master-problem solutions generated by means of a heuristic. The article includes general guidelines and a case study with a fixed-charge network design problem. Computational tests with instances of this problem show the efficiency of the strategy. The most important aspect of the proposed ideas is their generality, which allows them to be used in virtually any Benders decomposition implementation.
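As a rough illustration of the idea, the sketch below runs textbook Benders decomposition on a toy fixed-charge (facility-location-style) instance and, at each iteration, adds extra cuts from a simple flip-one heuristic applied to the incumbent master solution. A penalty term keeps the subproblem feasible for any master solution, so even "infeasible" ones yield valid cuts. The instance data, the flip heuristic, and the penalty reformulation are assumptions for illustration, not the paper's exact scheme; it requires scipy >= 1.9.

```python
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint, Bounds

f = np.array([5.0, 7.0])        # fixed cost of opening facility j
u = np.array([12.0, 12.0])      # capacity of facility j
d = np.array([4.0, 5.0, 3.0])   # demand of customer i
c = np.array([[1.0, 3.0],       # unit shipping cost c[i, j]
              [2.0, 1.0],
              [3.0, 2.0]])
M = 100.0                       # penalty for unmet demand: keeps the subproblem
                                # feasible for ANY y, so infeasible master
                                # solutions still produce valid cuts
ni, nj = c.shape
nx = ni * nj                    # x[i, j] flattened row-major, then shortfalls s[i]

def subproblem(y):
    """LP for fixed y: min c.x + M.s  s.t. demand met (with shortfall),
    facility outflow <= u[j]*y[j]. Returns value and a Benders cut
    theta >= pi.d + sum_j mu[j]*u[j]*y[j] from the LP duals (pi, mu)."""
    obj = np.concatenate([c.ravel(), M * np.ones(ni)])
    A_eq = np.zeros((ni, nx + ni))
    for i in range(ni):
        A_eq[i, i * nj:(i + 1) * nj] = 1.0   # shipments into customer i
        A_eq[i, nx + i] = 1.0                # plus shortfall s[i]
    A_ub = np.zeros((nj, nx + ni))
    for j in range(nj):
        A_ub[j, j:nx:nj] = 1.0               # shipments out of facility j
    res = linprog(obj, A_ub=A_ub, b_ub=u * y, A_eq=A_eq, b_eq=d, method="highs")
    pi = res.eqlin.marginals                 # duals of demand constraints
    mu = res.ineqlin.marginals               # duals of capacity constraints
    return res.fun, (d @ pi, mu * u)         # cut: theta >= const + coeff.y

def solve_master(cuts):
    """MILP over (y, theta): min f.y + theta s.t. all accumulated cuts."""
    obj = np.concatenate([f, [1.0]])
    cons = [LinearConstraint(np.concatenate([-coeff, [1.0]])[np.newaxis, :],
                             lb=const) for const, coeff in cuts]
    res = milp(obj, constraints=cons,
               integrality=np.concatenate([np.ones(nj), [0]]),
               bounds=Bounds(np.zeros(nj + 1),
                             np.concatenate([np.ones(nj), [np.inf]])))
    return np.round(res.x[:nj]), res.fun     # incumbent y, lower bound

def flip_neighbors(y):
    """Heuristic: flip each facility open/closed to get nearby solutions,
    some of which may be infeasible for the original (unpenalized) problem."""
    return [np.abs(y - e) for e in np.eye(nj)]

y, cuts, ub = np.ones(nj), [], np.inf
for it in range(20):
    # Standard cut at the incumbent, plus extra cuts at heuristic neighbors.
    for z in [y] + flip_neighbors(y):
        val, cut = subproblem(z)
        cuts.append(cut)
        ub = min(ub, f @ z + val)            # upper bound from evaluated y's
    y, lb = solve_master(cuts)
    if ub - lb < 1e-6:
        break
print("open pattern:", y, " total cost:", ub)
```

The extra cuts cost one LP solve per heuristic neighbor but tighten the master earlier, which is the effect the abstract reports; in this toy run the bounds typically close within two or three master solves.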