113 results for Developed applications
Abstract:
Background: High-throughput SNP genotyping has become an essential requirement for molecular breeding and population genomics studies in plant species. Large-scale SNP developments have been reported for several mainstream crops. A growing interest now exists to expand the speed and resolution of genetic analysis to outbred species with highly heterozygous genomes. When nucleotide diversity is high, a refined diagnosis of the target SNP sequence context is needed to convert queried SNPs into high-quality genotypes using the Golden Gate Genotyping Technology (GGGT). This issue becomes exacerbated when attempting to transfer SNPs across species, a scarcely explored topic in plants, and one likely to become significant for population genomics and interspecific breeding applications in less domesticated and less funded plant genera. Results: We have successfully developed the first set of 768 SNPs assayed by the GGGT for the highly heterozygous genome of Eucalyptus from a mixed Sanger/454 database with 1,164,695 ESTs and the preliminary 4.5X draft genome sequence for E. grandis. A systematic assessment of in silico SNP filtering requirements showed that stringent constraints on the SNP surrounding sequences have a significant impact on SNP genotyping performance and polymorphism. SNP assay success was high for the 288 SNPs selected with more rigorous in silico constraints; 93% of them provided high-quality genotype calls and 71% of them were polymorphic in a diverse panel of 96 individuals of five different species. SNP reliability was high across nine Eucalyptus species belonging to three sections within subgenus Symphyomyrtus and still satisfactory across species of two additional subgenera, although polymorphism declined as phylogenetic distance increased. Conclusions: This study indicates that the GGGT performs well both within and across species of Eucalyptus notwithstanding its nucleotide diversity of >= 2%.
The development of a much larger array of informative SNPs across multiple Eucalyptus species is feasible, although strongly dependent on having a representative and sufficiently deep collection of sequences from many individuals of each target species. A higher density SNP platform will be instrumental to undertake genome-wide phylogenetic and population genomics studies and to implement molecular breeding by Genomic Selection in Eucalyptus.
Abstract:
Propolis possesses various biological activities such as antibacterial, antifungal, anti-inflammatory, anesthetic and antioxidant properties. A topically applied product based on Brazilian green propolis was developed for the treatment of burns. For this substance to be used more safely in future clinical applications, the present study evaluated the mutagenic potential of topical formulations supplemented with green propolis extract (1.2, 2.4 and 3.6%) based on the analysis of chromosomal aberrations and of micronuclei. In the in vitro studies, 3-h pulse (G(1) phase of the cell cycle) and continuous (20 h) treatments were performed. In the in vivo assessment, the animals were injured on the back and then submitted to acute (24 h), subacute (7 days) and subchronic (30 days) treatments consisting of daily dermal applications of gels containing different concentrations of propolis. Similar frequencies of chromosomal aberrations were observed for cultures submitted to 3-h pulse and continuous treatment with gels containing different propolis concentrations and cultures not submitted to any treatment. However, in the continuous treatment, cultures treated with the 3.6% propolis gel presented significantly lower mitotic indices than the negative control. No statistically significant differences in the frequencies of micronuclei were observed between animals treated with gels containing different concentrations of propolis and the negative control for the three treatment times. Under the present conditions, topical formulations containing different concentrations of green propolis used for the treatment of burns showed no mutagenic effect in either test system, but the 3.6% propolis gel was found to be cytotoxic in the in vitro test.
Abstract:
Hardy-Weinberg Equilibrium (HWE) is an important genetic property that populations should exhibit whenever they are not subject to adverse conditions such as a complete lack of panmixia, an excess of mutations, or excessive selection pressure. HWE has been evaluated for decades; both frequentist and Bayesian methods are in use today. While historically the HWE formula was developed to examine the transmission of alleles in a population from one generation to the next, the use of HWE concepts has expanded in human disease studies to detect genotyping error and disease susceptibility (association); see Ryckman and Williams (2008). Most analyses focus on trying to answer the question of whether a population is in HWE; they do not try to quantify how far from equilibrium the population is. In this paper, we propose the use of a simple disequilibrium coefficient for a locus with two alleles. Based on the posterior density of this disequilibrium coefficient, we show how one can conduct a Bayesian analysis to verify how far from HWE a population is. Other coefficients have been introduced in the literature; the advantage of the one introduced in this paper is that, just like standard correlation coefficients, its range is bounded and it is symmetric around zero (equilibrium) when comparing positive and negative values. To test the hypothesis of equilibrium, we use a simple Bayesian significance test, the Full Bayesian Significance Test (FBST); see Pereira, Stern and Wechsler (2008) for a complete review. The proposed disequilibrium coefficient provides an easy and efficient way to perform the analyses, especially if one uses Bayesian statistics. An R routine (R Development Core Team, 2009) that implements the calculations is provided for readers.
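A minimal sketch of the kind of Bayesian computation described, in Python rather than R: it draws from the posterior of the plain disequilibrium coefficient D = p_AA - p_A^2 under a flat Dirichlet prior on the genotype proportions. The bounded, symmetric coefficient actually proposed in the paper has a different (normalized) form not reproduced here, and all names below are illustrative.

```python
import numpy as np

def disequilibrium_posterior(n_aa, n_ab, n_bb, n_draws=10000, seed=0):
    """Posterior draws of a simple HWE disequilibrium coefficient
    D = p_AA - p_A^2 for a biallelic locus, using a flat
    Dirichlet(1, 1, 1) prior on the genotype proportions.
    (Illustrative only: the paper's coefficient is a bounded,
    symmetric variant.)"""
    rng = np.random.default_rng(seed)
    draws = rng.dirichlet([n_aa + 1, n_ab + 1, n_bb + 1], size=n_draws)
    p_aa, p_ab = draws[:, 0], draws[:, 1]
    p_a = p_aa + 0.5 * p_ab        # allele frequency of A
    return p_aa - p_a ** 2         # D = 0 under exact HWE

# Genotype counts close to HWE proportions (p = q = 0.5):
d = disequilibrium_posterior(25, 50, 25)
print(round(float(np.mean(d)), 3))  # posterior mean near 0
```

A credible interval from these draws (e.g. `np.quantile(d, [0.025, 0.975])`) then quantifies how far from equilibrium the population plausibly is.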
Abstract:
Background Data and Objective: There is anecdotal evidence that low-level laser therapy (LLLT) may affect the development of muscular fatigue, minor muscle damage, and recovery after heavy exercises. Although manufacturers claim that cluster probes (LEDT) may be more effective than single-diode lasers in clinical settings, there is a lack of head-to-head comparisons in controlled trials. This study was designed to compare the effect of single-diode LLLT and cluster LEDT before heavy exercise. Materials and Methods: This was a randomized, placebo-controlled, double-blind cross-over study. Young male volleyball players (n = 8) were enrolled and asked to perform three Wingate cycle tests after 4 x 30 sec LLLT or LEDT pretreatment of the rectus femoris muscle with either (1) an active LEDT cluster-probe (660/850 nm, 10/30 mW), (2) a placebo cluster-probe with no output, or (3) a single-diode 810-nm 200-mW laser. Results: The active LEDT group had significantly decreased post-exercise creatine kinase (CK) levels (-18.88 +/- 41.48 U/L), compared to the placebo cluster group (26.88 +/- 15.18 U/L) (p < 0.05) and the active single-diode laser group (43.38 +/- 32.90 U/L) (p < 0.01). None of the pre-exercise LLLT or LEDT protocols enhanced performance on the Wingate tests or reduced post-exercise blood lactate levels. However, a non-significant tendency toward lower post-exercise blood lactate levels in the treated groups should be explored further. Conclusion: In this experimental set-up, only the active LEDT probe decreased post-exercise CK levels after the Wingate cycle test. Neither performance nor blood lactate levels were significantly affected by this protocol of pre-exercise LEDT or LLLT.
Abstract:
In this work an iterative strategy is developed to tackle the problem of coupling dimensionally-heterogeneous models in the context of fluid mechanics. The procedure proposed here makes use of a reinterpretation of the original problem as a nonlinear interface problem, to which classical nonlinear solvers can be applied. Strong coupling of the partitions is achieved while using a different code for each partition, each code treated in black-box mode. The main application for which this procedure is envisaged arises when modeling hydraulic networks in which complex and simple subsystems are treated using detailed and simplified models, respectively. The potentialities and the performance of the strategy are assessed through several examples involving transient flows and complex network configurations.
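The coupling idea can be illustrated with a toy sketch (the two models and all parameters below are invented for the example, not the authors' code): each sub-model is called as a black box mapping interface data to interface data, and the coupled solution is found by an under-relaxed fixed-point iteration on the interface unknown.

```python
def model_detailed(p):
    """Hypothetical detailed model: flow rate for a given interface pressure p."""
    return 2.0 * p ** 0.5

def model_simplified(q):
    """Hypothetical simplified model: interface pressure for a given flow q."""
    return 4.0 - 0.5 * q

def couple(q0=1.0, relax=0.5, tol=1e-10, max_it=200):
    """Solve the nonlinear interface problem q = F(q) by relaxed Picard iteration,
    calling each sub-model only through its black-box interface."""
    q = q0
    for _ in range(max_it):
        p = model_simplified(q)                # black-box call 1
        q_new = model_detailed(p)              # black-box call 2
        if abs(q_new - q) < tol:
            break
        q = (1 - relax) * q + relax * q_new    # under-relaxation for stability
    return q

q = couple()
# At convergence q satisfies q = 2*sqrt(4 - 0.5*q); check the interface residual:
print(abs(q - 2.0 * (4.0 - 0.5 * q) ** 0.5) < 1e-8)  # True
```

In practice the Picard update would be replaced by a quasi-Newton (e.g. Broyden) update on the interface residual when convergence is slow, which is the role the "classical nonlinear solvers" play in the strategy.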
Abstract:
Thanks to recent advances in molecular biology, allied to an ever increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning high-throughput new experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for validation of gene network modeling and identification which comprises the following three main aspects: (1) Artificial Gene Networks (AGNs) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, which is founded on a feature selection approach where a target gene is fixed and the expression profile is observed for all other genes in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used in order to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly-random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG).
The experimental results indicate that the inference method was sensitive to variation of the average degree k, with its network recovery rate decreasing as k increased. The signal size was important for the accuracy of the network identification, with the method presenting very good results even for small expression profiles. However, the adopted inference method was not able to recognize distinct structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
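The validation step (aspect 3 above) can be sketched as follows, assuming networks are compared as sets of directed edges; the ER generator and the metrics below are illustrative stand-ins, not the framework's actual code.

```python
import random

def erdos_renyi(n, p, seed=42):
    """Directed ER(n, p) edge set: each ordered pair is an edge with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(n)
            if i != j and rng.random() < p}

def validate(true_edges, inferred_edges):
    """Compare an inferred network against the known artificial network."""
    tp = len(true_edges & inferred_edges)          # correctly recovered edges
    precision = tp / len(inferred_edges) if inferred_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    return precision, recall

truth = erdos_renyi(20, 0.1)        # the AGN used to simulate expression data
perfect = set(truth)                # an inference that recovers every edge
prec, rec = validate(truth, perfect)
print(prec, rec)                    # 1.0 1.0 for a perfect recovery
```

In the framework, `perfect` would be replaced by the edge set returned by the feature-selection-based inference method, and the precision/recall pair tracked as the average degree k and signal size vary.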
Abstract:
Background: Feature selection is a pattern recognition approach to choose important variables according to some criterion in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). There are many genomic and proteomic applications that rely on feature selection to answer questions such as selecting signature genes which are informative about some biological state, e.g., normal tissues and several types of cancer, or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions and graphic visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions and graphic visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in different pattern recognition applications, although the main concern regards bioinformatics tasks.
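As a hedged illustration of the kind of algorithm/criterion pairing such an environment supports, the sketch below runs greedy sequential forward selection with the conditional entropy of the label as the criterion function, on a tiny discrete data set; the data and function names are invented for the example.

```python
from collections import Counter
from math import log2

def cond_entropy(samples, labels, subset):
    """H(Y | X_subset) estimated from discrete samples."""
    groups = {}
    for x, y in zip(samples, labels):
        key = tuple(x[i] for i in subset)
        groups.setdefault(key, []).append(y)
    n = len(labels)
    h = 0.0
    for ys in groups.values():
        w = len(ys) / n
        counts = Counter(ys)
        # weighted entropy of the label within this feature-value group
        h -= w * sum(c / len(ys) * log2(c / len(ys)) for c in counts.values())
    return h

def forward_selection(samples, labels, k):
    """Greedy SFS: add, one at a time, the feature that most lowers H(Y | X)."""
    selected, remaining = [], set(range(len(samples[0])))
    for _ in range(k):
        best = min(remaining,
                   key=lambda f: cond_entropy(samples, labels, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Feature 1 determines the label; features 0 and 2 are noise.
X = [(0, 0, 1), (1, 0, 0), (0, 1, 1), (1, 1, 0)]
y = [0, 0, 1, 1]
print(forward_selection(X, y, 1))  # [1]
```

The small-sample estimation problem the abstract mentions shows up directly here: with few samples, many feature-value groups contain a single observation and the estimated conditional entropy collapses to zero, which is why criterion functions with penalization are often preferred.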
Abstract:
An (n, d)-expander is a graph G = (V, E) such that for every X ⊆ V with |X| <= 2n - 2 we have |Γ_G(X)| >= (d + 1)|X|. A tree T is small if it has at most n vertices and maximum degree at most d. Friedman and Pippenger (1987) proved that any (n, d)-expander contains every small tree. However, their elegant proof does not seem to yield an efficient algorithm for obtaining the tree. In this paper, we give an alternative result that does admit a polynomial-time algorithm for finding the embedding of any small tree in subgraphs G of (N, D, lambda)-graphs Lambda, as long as G contains a positive fraction of the edges of Lambda and lambda/D is small enough. In several applications of the Friedman-Pippenger theorem, including the ones in the original paper of those authors, the (n, d)-expander G is a subgraph of an (N, D, lambda)-graph as above. Therefore, our result suffices to provide efficient algorithms for such previously non-constructive applications. As an example, we discuss a recent result of Alon, Krivelevich, and Sudakov (2007) concerning the embedding of nearly spanning bounded-degree trees, the proof of which makes use of the Friedman-Pippenger theorem. We shall also show a construction, inspired by Wigderson-Zuckerman expander graphs, for which any sufficiently dense subgraph contains all trees of sizes and maximum degrees achieving essentially optimal parameters. Our algorithmic approach is based on a reduction of the tree embedding problem to a certain on-line matching problem for bipartite graphs, solved by Aggarwal et al. (1996).
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMC) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in polynomial time in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson related processes.
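The max-flow step can be illustrated with the standard Edmonds-Karp variant of Ford-Fulkerson (a generic sketch on a toy graph, not the paper's actual reduction from the tree-space supremum):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp (BFS-based Ford-Fulkerson) on a dict-of-dicts capacity graph."""
    # Build residual capacities, including zero-capacity reverse arcs.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                       # no augmenting path left
        # Find the bottleneck capacity along the path and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

g = {'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(g, 's', 't'))  # 4
```

With BFS augmentation the algorithm runs in O(V E^2), so once the supremum computation is encoded as a flow network whose size is polynomial in the number of tree nodes, the whole test statistic becomes computable in polynomial time.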
Abstract:
The application of laser-induced breakdown spectrometry (LIBS) aimed at the direct analysis of plant materials is a great challenge that still requires efforts for its development and validation. To this end, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared to univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, these models showed similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts dealing with sample presentation and the fitness of standards for LIBS analysis are needed in order to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
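A minimal sketch of the kind of PLS calibration compared here, with centered synthetic data standing in for real LIBS spectra (a one-response NIPALS PLS1, illustrative only; the calibration code used in the work is not reproduced):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns regression coefficients b so that
    y ~ X @ b for centered data.  Sketch only, not a validated routine."""
    Xr, yr = X.copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                 # weight vector from the covariance with y
        w /= np.linalg.norm(w)
        t = Xr @ w                    # score vector
        tt = t @ t
        p = Xr.T @ t / tt             # X loading
        qk = yr @ t / tt              # y loading
        Xr -= np.outer(t, p)          # deflate X and y
        yr -= qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))          # 20 "spectra", 3 emission-line intensities
X -= X.mean(axis=0)
y = X @ np.array([1.0, 2.0, 0.0])     # noise-free synthetic concentrations
b = pls1_fit(X, y, n_comp=3)
print(np.allclose(X @ b, y))          # exact fit with full-rank components
```

Restricting the columns of X to a spectral region containing only lines of the analyte of interest, as the abstract describes, amounts to building one such model per analyte on a reduced X.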
Abstract:
Nitrogen is the nutrient most absorbed by the corn crop, has the most complex management, and accounts for the highest share of the cost of corn production. The objective of this work was to evaluate the economic viability of different rates and split applications of nitrogen fertilization, using urea, in the corn crop in a eutrophic Red Latosol (Oxisol). The study was carried out in the Experimental Station of the Regional Pole of the Sao Paulo Northwest Agribusiness Development (APTA), in Votuporanga, State of Sao Paulo, Brazil. The experimental design was randomized complete blocks with nine treatments and four replications, consisting of five N rates: 0, 55, 95, 135 and 175 kg ha(-1), with 15 kg ha(-1) applied at seeding and the remainder in top dressing: 40 and 80 kg ha(-1) N at forty days after seeding (DAS), or 1/2 + 1/2 at 20 and 40 DAS; 120 kg ha(-1) N split in 1/2 + 1/2 or 1/3 + 1/3 + 1/3 at 20, 40 or 60 DAS; 160 kg ha(-1) N split in 1/4 + 3/8 + 3/8 or 1/4 + 1/4 + 1/4 + 1/4 at 20, 40, 60 and 80 DAS. The application of 135 kg ha(-1) of N split in three times provided the best benefit/cost ratio. The non-application of N provided the lowest economic return, proving to be unviable.
Abstract:
A long-term field experiment was carried out at the experimental farm of the Sao Paulo State University, Brazil, to evaluate the phytoavailability of Zn, Cd and Pb in a Typic Eutrorthox soil treated with sewage sludge for nine consecutive years, using sequential extraction and organic matter fractionation methods. During 2005-2006, maize (Zea mays L.) was used as the test plant and the experimental design was in randomized complete blocks with four treatments and five replicates. The treatments consisted of four sewage sludge rates (on a dry basis): 0.0 (control, with mineral fertilization), 45.0, 90.0 and 127.5 t ha(-1), applied annually for nine years. Before maize sowing, the sewage sludge was manually applied to the soil and incorporated to a depth of 10 cm. Soil samples (0-20 cm layer) for Zn, Cd and Pb analysis were collected 60 days after sowing. The successive applications of sewage sludge to the soil did not affect the heavy metal (Cd and Pb) fractions in the soil, with the exception of the Zn fractions. The Zn, Cd and Pb distributions in the soil were strongly associated with the humin and residual fractions, which are characterized by stable chemical bonds. Zinc, Cd and Pb in the soil showed low phytoavailability after the nine years of successive sewage sludge applications.
Abstract:
The analysis of one-, two-, and three-dimensional coupled map lattices is developed here under a statistical and dynamical perspective. We show that the three-dimensional CML exhibits low-dimensional behavior with long-range correlation, and that its power spectrum follows 1/f noise. This approach leads to an integrated understanding of the most important properties of these universal models of spatiotemporal chaos. We perform a complete time series analysis of the model and investigate the dependence of the signal properties on the lattice dimension. (c) 2008 Elsevier Ltd. All rights reserved.
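A one-dimensional CML of diffusively coupled logistic maps, the standard construction behind such models, can be simulated in a few lines; the coupling strength eps and map parameter r below are illustrative choices, not the values used in the paper.

```python
import numpy as np

def cml_step(x, eps=0.3, r=4.0):
    """One update of a 1D diffusively coupled logistic-map lattice with
    periodic boundaries:
    x_i <- (1 - eps) f(x_i) + (eps/2) (f(x_{i-1}) + f(x_{i+1}))."""
    f = r * x * (1.0 - x)                       # local logistic map
    return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(0)
x = rng.random(64)                              # random initial lattice state
series = []
for _ in range(1000):
    x = cml_step(x)
    series.append(float(x.mean()))              # scalar signal for time series analysis
print(0.0 <= min(series) and max(series) <= 1.0)  # state stays in [0, 1]
```

The recorded scalar signal (here the spatial mean) is the kind of time series whose power spectrum and correlations are analyzed in the study; two- and three-dimensional lattices follow by replacing the two `np.roll` neighbors with the four or six nearest neighbors.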
Abstract:
Introduction: Internet users are increasingly using the worldwide web to search for information relating to their health. This situation makes it necessary to create specialized tools capable of supporting users in their searches. Objective: To apply and compare strategies that were developed to investigate the use of the Portuguese version of Medical Subject Headings (MeSH) for constructing an automated classifier for Brazilian Portuguese-language web-based content within or outside of the field of healthcare, focusing on the lay public. Methods: 3658 Brazilian web pages were used to train the classifier and 606 Brazilian web pages were used to validate it. The strategies proposed were constructed using content-based vector methods for text classification, such that Naive Bayes was used for the task of classifying vector patterns with characteristics obtained through the proposed strategies. Results: A strategy named InDeCS was developed specifically to adapt MeSH to the problem that was put forward. This approach achieved better accuracy for this pattern classification task (0.94 for sensitivity, specificity and area under the ROC curve). Conclusions: Because of the significant results achieved by InDeCS, this tool has been successfully applied to the Brazilian healthcare search portal known as Busca Saude. Furthermore, it was shown that MeSH yields important results when used for the task of classifying web-based content aimed at the lay public. This study also showed that MeSH was able to map out mutable, non-deterministic characteristics of the web. (c) 2010 Elsevier Inc. All rights reserved.
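A toy multinomial Naive Bayes classifier over bag-of-words counts, the model family used by the strategies described; the four-document corpus below is invented for the example and unrelated to the actual MeSH-derived training data.

```python
from collections import Counter
from math import log

def train_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace smoothing over bag-of-words counts."""
    vocab = {w for d in docs for w in d.split()}
    counts = {c: Counter() for c in set(labels)}
    priors = Counter(labels)
    for d, c in zip(docs, labels):
        counts[c].update(d.split())
    return vocab, counts, priors, len(labels)

def classify(text, model):
    """Pick the class maximizing log P(class) + sum log P(word | class)."""
    vocab, counts, priors, n = model
    scores = {}
    for c, cnt in counts.items():
        total = sum(cnt.values())
        s = log(priors[c] / n)
        for w in text.split():
            if w in vocab:
                s += log((cnt[w] + 1) / (total + len(vocab)))  # Laplace smoothing
        scores[c] = s
    return max(scores, key=scores.get)

# Tiny illustrative corpus: health-related vs. other Portuguese-language pages.
docs = ["dor febre sintoma tratamento", "receita bolo futebol jogo",
        "medico hospital exame sangue", "filme musica show ingresso"]
labels = ["saude", "outro", "saude", "outro"]
model = train_nb(docs, labels)
print(classify("exame medico febre", model))  # saude
```

In the actual system, the vector of word characteristics fed to the classifier comes from the MeSH-based strategies (such as InDeCS) rather than from raw token counts.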
Abstract:
Several high temperature superconductor (HTS) tapes have been developed since the late eighties. Due to the new techniques applied in their production, HTS tapes are becoming feasible and practical for many applications. In this work, we present the test results of five commercial HTS tapes from the BSCCO and YBCO families (short samples of 200 mm). We have measured and analyzed their intrinsic and extrinsic properties and compared their behaviors for fault current limiter (FCL) applications. Electrical measurements were performed to determine the critical current and the n value through the V-I relationship under DC and AC magnetic fields. The resistance per unit length was determined as a function of temperature. The magnetic characteristics were analyzed through susceptibility curves as a function of temperature. As the transport current generates a magnetic field surrounding the HTS material, the magnetic measurements indicate the magnetic field supported by the tapes under a peak current 1.5 times higher than the critical current, I(c). Pulsed current tests were used to analyze the recovery time and the energy per volume during a current fault. These results are in agreement with the data found in the literature, indicating the most appropriate conductor for an FCL device (I(peak) = 4 kA) to be used in a 220 V-60 Hz grid.