24 results for Bioinformatics
at Universidade do Minho
Abstract:
Traffic Engineering (TE) approaches are increasingly important in network management to allow an optimized configuration and resource allocation. In link-state routing, the task of setting appropriate weights to the links is both an important and a challenging optimization task. A number of different approaches have been put forward towards this aim, including the successful use of Evolutionary Algorithms (EAs). In this context, this work addresses the evaluation of three distinct EAs, one single-objective and two multi-objective EAs, in two tasks related to weight setting optimization towards optimal intra-domain routing, knowing the network topology and aggregated traffic demands and seeking to minimize network congestion. In both tasks, the optimization considers scenarios where there is a dynamic alteration in the state of the system, the first considering changes in the traffic demand matrices and the latter considering the possibility of link failures. The methods thus need to simultaneously optimize for both conditions, the normal and the altered one, following a preventive TE approach towards robust configurations. Since this can be formulated as a bi-objective function, the use of multi-objective EAs such as SPEA2 and NSGA-II came naturally, and these are compared to a single-objective EA. The results show a remarkable behavior of NSGA-II in all proposed tasks, scaling well for harder instances and thus presenting itself as the most promising option for TE in these scenarios.
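To make the bi-objective formulation concrete, the sketch below evaluates one weight vector under both the normal and an altered (single link failure) condition on a toy network; the topology, demands, and congestion measure are illustrative assumptions, not the instances used in the work.

    # Bi-objective fitness sketch for preventive TE weight setting.
    # Toy model: each demand may use one of two fixed candidate paths;
    # traffic follows the path whose total link weight is smaller.
    # Topology, demands, and the congestion measure are assumptions.

    LINKS = ["a", "b", "c", "d"]
    CAPACITY = {l: 10.0 for l in LINKS}
    # Each demand: (volume, path_1, path_2), paths as tuples of link ids.
    DEMANDS = [(6.0, ("a", "b"), ("c", "d")),
               (5.0, ("a", "d"), ("b", "c"))]

    def congestion(weights, failed=None):
        """Max link utilization after routing each demand on its cheaper path."""
        load = {l: 0.0 for l in LINKS}
        for volume, p1, p2 in DEMANDS:
            paths = [p for p in (p1, p2) if failed not in p]
            best = min(paths, key=lambda p: sum(weights[l] for l in p))
            for l in best:
                load[l] += volume
        return max(load[l] / CAPACITY[l] for l in LINKS if l != failed)

    def fitness(weights):
        """Two objectives: normal operation and worst single-link failure."""
        normal = congestion(weights)
        failure = max(congestion(weights, failed=l) for l in LINKS)
        return normal, failure   # minimized jointly, e.g., by NSGA-II

    print(fitness({"a": 1, "b": 3, "c": 2, "d": 1}))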
Abstract:
Earthworks tasks aim at levelling the ground surface of a target construction area and precede any kind of structural construction (e.g., road and railway construction). They comprise sequential tasks, such as excavation, transportation, spreading and compaction, and rely strongly on heavy mechanical equipment and repetitive processes. In this context, it is essential to optimize the usage of all available resources under two key criteria: the cost and duration of earthwork projects. In this paper, we present an integrated system that uses two artificial intelligence based techniques: data mining and evolutionary multi-objective optimization. The former is used to build data-driven models capable of providing realistic estimates of resource productivity, while the latter is used to optimize resource allocation considering the two main earthwork objectives (duration and cost). Experiments conducted using real-world data from a construction site have shown that the proposed system is competitive when compared with current manual earthwork design.
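A minimal sketch of how the two components can fit together follows: a stand-in for a learned productivity model feeds a bi-objective (duration, cost) evaluation of one candidate allocation; all equipment types, rates, and costs are invented for illustration.

    # Two-stage sketch: a data-driven productivity estimate feeds a
    # bi-objective (duration, cost) evaluation of a candidate allocation.
    # The linear productivity model and all figures are assumptions.

    def productivity(equipment_type, soil_factor):
        """Stand-in for a learned regression model (m3/h per machine)."""
        base = {"excavator": 80.0, "truck": 45.0}[equipment_type]
        return base * soil_factor

    def evaluate(allocation, volume_m3, soil_factor, hourly_cost):
        """Objectives for one earthwork front: (duration in h, cost)."""
        rate = sum(n * productivity(kind, soil_factor)
                   for kind, n in allocation.items())
        duration = volume_m3 / rate
        cost = duration * sum(n * hourly_cost[kind]
                              for kind, n in allocation.items())
        return duration, cost  # candidates for multi-objective optimization

    print(evaluate({"excavator": 2, "truck": 3}, 12000.0, 0.9,
                   {"excavator": 120.0, "truck": 60.0}))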
Abstract:
Microbiology as a scientific discipline recognised the need to preserve microorganisms for scientific studies, establishing research culture collections (CC) from its very beginning. Later on, to better serve different scientific fields and bioindustries with the increasing number of strains of scientific, medical, ecological and biotechnological importance, public service CC were established with the specific aim of supporting their user communities. Currently, the more developed public service CC are recognised as microBiological Resources Centres (mBRC). mBRC are considered one of the key elements of a sustainable international scientific infrastructure, which is necessary to underpin the successful delivery of the benefits of biotechnology, whether within the health sector, the industrial sector or other sectors, and in turn to ensure that these advances help drive economic growth. In more detail, mBRCs are defined by the Organisation for Economic Co-operation and Development (OECD) as service providers and repositories of living cells, genomes of organisms, and information relating to heredity and the functions of biological systems. mBRCs contain collections of culturable organisms (e.g., microorganisms, plant and animal cells), replicable parts of these (e.g., genomes, plasmids, viruses, cDNAs), viable but not yet culturable organisms, cells and tissues, as well as databases containing molecular, physiological and structural information relevant to these collections and related bioinformatics. Thus mBRCs are fundamental to harnessing and preserving the world's microbial biodiversity and genetic resources, and serve as an essential element of the infrastructure for research and development. mBRCs serve a multitude of functions and assume a range of shapes and forms. Some are large national centres performing a comprehensive role, providing access to diverse organisms. Other centres play much narrower, yet important, roles, supplying limited but crucial specialised resources. In the era of the knowledge-based bio-economy, mBRCs are recognised as a vital element underpinning biotechnology.
Abstract:
In this study, a metabolomics characterization focusing on the carotenoid composition of ten cassava (Manihot esculenta) genotypes cultivated in southern Brazil was performed by UV-visible scanning spectrophotometry and reverse-phase high-performance liquid chromatography. Cassava roots rich in β-carotene are an important staple food for populations at risk of vitamin A deficiency. Cassava genotypes with high pro-vitamin A activity have been identified as a strategy to reduce the prevalence of deficiency of this vitamin. The data set was used for the construction of a descriptive model by chemometric analysis. The genotypes of yellow-fleshed roots were clustered by their higher concentrations of cis-β-carotene and lutein. Inversely, cream-fleshed root genotypes were grouped precisely due to their lower concentrations of these pigments, while samples rich in lycopene (red-fleshed) differed among the studied genotypes. The analytical approach used (UV-Vis, HPLC, and chemometrics) proved efficient for understanding the chemodiversity of cassava genotypes, allowing them to be classified according to features important for human health and nutrition.
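The chemometric step can be illustrated with a small PCA over a genotype-by-pigment matrix, checking whether yellow- and cream-fleshed genotypes separate along the principal components; the concentration values below are invented, not the study's data.

    # Minimal chemometric sketch: PCA on a (genotype x pigment) matrix to
    # see whether flesh-color groups separate. Values are invented.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Columns: cis-beta-carotene, lutein, lycopene (arbitrary units).
    X = np.array([[9.1, 4.2, 0.3],   # yellow-fleshed
                  [8.7, 3.9, 0.2],   # yellow-fleshed
                  [1.2, 0.8, 0.1],   # cream-fleshed
                  [1.0, 0.7, 0.2],   # cream-fleshed
                  [2.1, 0.9, 7.5]])  # red-fleshed (lycopene-rich)

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    for label, pc in zip(["yellow", "yellow", "cream", "cream", "red"], scores):
        print(f"{label:>6}: PC1={pc[0]:+.2f} PC2={pc[1]:+.2f}")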
Abstract:
Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with the addition of salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified for the donor plants' resin, but a major challenge for the standardization of the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on its chemical constituents. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by (a)biotic factors over the seasons, unravelling the effect of the harvest season on the propolis chemical profile is an issue of recognized importance. For that purpose, fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control processes in the most demanding markets, e.g., human health applications. Accordingly, UV-Visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons in 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in Southern Brazil, was adopted. Machine learning and chemometrics techniques were then applied to the UV-Vis dataset, aiming to gain insights into the effect of seasonality on the claimed chemical heterogeneity of propolis samples determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e., principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles associated with chemometric analysis allowed the identification of a typical pattern in propolis samples collected in the summer. Importantly, the discrimination based on PCA could be improved by using the dataset of the fingerprint region of phenolic compounds (λ = 280-400 nm), suggesting that besides the biological activities of those secondary metabolites, they also play a relevant role in the discrimination and classification of that complex matrix through bioinformatics tools. Finally, a series of machine learning approaches, e.g., partial least squares-discriminant analysis (PLS-DA), k-Nearest Neighbors (kNN), and Decision Trees, proved complementary to PCA and HCA, allowing relevant information to be obtained regarding sample discrimination.
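As a sketch of the classification stage, the example below applies kNN to the 280-400 nm fingerprint region of synthetic UV-Vis spectra; the original analysis was scripted in R, and the Gaussian-shaped spectra here are invented stand-ins for real absorbance profiles.

    # Sketch of the classification step: kNN on the 280-400 nm fingerprint
    # region of UV-Vis spectra to predict harvest season. Spectra invented.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    wavelengths = np.arange(280, 401, 10)   # phenolic fingerprint region

    def fake_spectrum(peak_nm):
        """Invented Gaussian-shaped absorbance profile around one peak."""
        return (np.exp(-((wavelengths - peak_nm) / 30.0) ** 2)
                + rng.normal(0, 0.02, wavelengths.size))

    X = np.array([fake_spectrum(300) for _ in range(10)] +   # "summer"-like
                 [fake_spectrum(350) for _ in range(10)])    # "winter"-like
    y = ["summer"] * 10 + ["winter"] * 10

    model = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(model.predict([fake_spectrum(305)]))   # expected: ['summer']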
Abstract:
To better understand the dynamic behavior of metabolic networks under a wide variety of conditions, the field of Systems Biology has increased its interest in the use of kinetic models. The different databases available these days do not contain enough data regarding this topic. Given that a significant part of the relevant information for the development of such models is still widespread in the literature, it becomes essential to develop specific and powerful text mining tools to collect these data. In this context, the main objective of this work is the development of a text mining tool to extract, from the scientific literature, kinetic parameters, their respective values, and their relations with enzymes and metabolites. The proposed approach integrates the development of a novel plug-in for the text mining framework @Note2. Finally, the pipeline developed was validated with a case study on Kluyveromyces lactis, spanning the analysis and results of 20 full-text documents.
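A highly simplified illustration of the extraction step is given below: regular expressions that pick up kinetic parameters (Km, Vmax, kcat), their values, and units from a sentence. The real system is implemented as an @Note2 plug-in; the pattern and the example sentence are invented for illustration.

    # Simplified extraction sketch: regexes that capture kinetic
    # parameters, values, and units from free text. Not the @Note2 plug-in.

    import re

    PATTERN = re.compile(
        r"(?P<param>Km|Vmax|kcat)\s*(?:of|=|:)?\s*"
        r"(?P<value>\d+(?:\.\d+)?)\s*"
        r"(?P<unit>mM|uM|s-1|min-1|U/mg)",
        re.IGNORECASE)

    text = ("The beta-galactosidase of Kluyveromyces lactis showed a "
            "Km of 2.5 mM for lactose and a kcat of 95 s-1.")

    for m in PATTERN.finditer(text):
        print(m.group("param"), m.group("value"), m.group("unit"))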
Abstract:
DNA microarrays are one of the most used technologies for gene expression measurement. However, there are several distinct microarray platforms, from different manufacturers, each with its own measurement protocol, resulting in data that can hardly be compared or directly integrated. Data integration from multiple sources aims to improve the assertiveness of statistical tests, reducing the data dimensionality problem. The integration of heterogeneous DNA microarray platforms comprises a set of tasks that range from the re-annotation of the features used in gene expression to data normalization and batch effect elimination. In this work, a complete methodology for gene expression data integration and application is proposed, which comprises a transcript-based re-annotation process and several methods for batch effect attenuation. The integrated data will be used to select the best feature set and learning algorithm for a brain tumor classification case study. The integration will consider data from heterogeneous Agilent and Affymetrix platforms, collected from public gene expression databases such as The Cancer Genome Atlas and Gene Expression Omnibus.
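One of the simplest batch-effect attenuation methods, per-batch mean-centering, can be sketched as follows; richer empirical-Bayes methods (e.g., ComBat) are common in practice, and the expression matrix below is invented.

    # Per-batch mean-centering sketch: subtract each batch's per-gene
    # mean so platform-specific offsets cancel out. Data invented.

    import numpy as np

    def center_by_batch(expr, batches):
        """Subtract each batch's per-gene mean. expr: samples x genes."""
        out = expr.copy()
        for b in set(batches):
            idx = [i for i, lab in enumerate(batches) if lab == b]
            out[idx] -= expr[idx].mean(axis=0)
        return out

    expr = np.array([[5.0, 2.0], [6.0, 3.0],     # Affymetrix-like batch
                     [9.0, 8.0], [10.0, 9.0]])   # Agilent-like batch
    print(center_by_batch(expr, ["affy", "affy", "agil", "agil"]))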
Abstract:
Transcriptional Regulatory Networks (TRNs) are a powerful tool for representing several of the interactions that occur within a cell. Recent studies have provided information to help researchers in the tasks of building and understanding these networks. One of the major sources of information for building TRNs is the biomedical literature. However, due to the rapidly increasing number of scientific papers, it is quite difficult to analyse the large number of papers that have been published on this subject. This fact has heightened the importance of Biomedical Text Mining approaches in this task. Also, owing to the lack of adequate standards, as the number of databases increases, several inconsistencies concerning gene and protein names and identifiers are common. In this work, we developed an integrated approach for the reconstruction of TRNs that retrieves the relevant information from important biological databases and inserts it into a unique repository named KREN. We then applied text mining techniques over this integrated repository to build TRNs. To do so, it was necessary to create a dictionary of names and synonyms associated with these entities, and also to develop an approach that retrieves all the abstracts of the related scientific papers stored in PubMed, in order to create a corpus of data about genes. Furthermore, these tasks were integrated into @Note, a software system that provides several methods from the Biomedical Text Mining field, including algorithms for Named Entity Recognition (NER), the extraction of all relevant terms from publication abstracts, and the extraction of relationships between biological entities (genes, proteins and transcription factors). Finally, this tool was extended to allow the reconstruction of Transcriptional Regulatory Networks from the scientific literature.
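The dictionary-based lookup that such a pipeline relies on can be sketched as follows: mentions of gene/protein names and synonyms are matched in an abstract and normalized to a single identifier. The dictionary entries and the sentence are invented examples, not KREN contents.

    # Dictionary-based NER sketch: match names and synonyms in text and
    # normalize each mention to one identifier. Entries are invented.

    SYNONYMS = {
        "lexA": "b4043", "exonuclease III": "b1749", "xthA": "b1749",
    }

    def tag_entities(text):
        """Return (mention, normalized_id) pairs found in the text."""
        hits = []
        low = text.lower()
        for name, ident in SYNONYMS.items():
            pos = low.find(name.lower())
            if pos >= 0:
                hits.append((text[pos:pos + len(name)], ident))
        return hits

    abstract = "LexA represses xthA, which encodes exonuclease III."
    print(tag_entities(abstract))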
Abstract:
Liver diseases have severe consequences for patients, being one of the main causes of premature death. These facts reveal the centrality of one's daily habits and how important the early diagnosis of this kind of illness is, not only to the patients themselves, but also to society in general. Therefore, this work focuses on the development of a diagnosis support system for these kinds of maladies, built under a formal framework based on Logic Programming for its knowledge representation and reasoning procedures, complemented with an approach to computing grounded on Artificial Neural Networks.
Abstract:
About 90% of breast cancers do not cause death if detected at an early stage and treated properly. Indeed, a specific cause for the illness is still not known. There may be not a single origin, but rather a set of associations that determine the onset of the disease. Undeniably, there are some factors that seem to be associated with an increased risk of the malady. In the present study, different breast cancer risk assessment models were considered. It is our intention to develop a hybrid decision support system under a formal framework based on Logic Programming for knowledge representation and reasoning, complemented with an approach to computing centered on Artificial Neural Networks, to evaluate the risk of developing breast cancer and the respective Degree-of-Confidence that one has in such a happening.
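As an illustration of the neural component of such a hybrid system, the sketch below trains a small classifier over invented risk-factor rows and reads its predicted probability as a degree-of-confidence; the Logic Programming layer is not represented, and all features and figures are assumptions.

    # ANN component sketch: a small classifier whose predicted probability
    # serves as a degree-of-confidence. Features and rows are invented.

    from sklearn.neural_network import MLPClassifier

    # Features: [age_decade, family_history, previous_biopsy]
    X = [[4, 0, 0], [5, 1, 0], [6, 1, 1], [3, 0, 0], [7, 1, 1], [5, 0, 1]]
    y = [0, 1, 1, 0, 1, 0]  # 1 = elevated risk (toy labels)

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X, y)
    risk = net.predict([[6, 1, 0]])[0]
    confidence = net.predict_proba([[6, 1, 0]])[0][risk]
    print(f"risk class: {risk}, degree of confidence: {confidence:.2f}")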
Abstract:
Master's dissertation in Plant Molecular Biology, Biotechnology and Bioentrepreneurship
Abstract:
This research aims to advance blink detection in the context of work activity. Rather than patients having to attend a clinic, blinking videos can be acquired in a work environment and then automatically analyzed. Therefore, this paper presents a methodology to perform the automatic detection of eye blinks in consumer videos acquired with low-cost web cameras. This methodology includes the detection of the face and eyes of the recorded person; it then analyzes low-level features of the eye region to create a quantitative vector. Finally, this vector is classified into one of the two categories considered, open and closed eyes, by using machine learning algorithms. The effectiveness of the proposed methodology was demonstrated, since it provides unbiased results with classification errors under 5%.
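A minimal sketch of this kind of pipeline is given below, assuming OpenCV's stock Haar cascades for face and eye detection and a normalized intensity histogram as a stand-in for the paper's low-level features; the classifier `eye_state_clf` is hypothetical.

    # Pipeline sketch: detect face and eye regions with OpenCV Haar
    # cascades, summarize each eye patch as a feature vector, and let a
    # trained classifier decide open vs. closed. The histogram feature
    # and `eye_state_clf` are assumptions, not the paper's exact setup.

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def eye_feature_vectors(frame_gray):
        """Yield one feature vector (normalized histogram) per detected eye."""
        for (x, y, w, h) in face_cascade.detectMultiScale(frame_gray, 1.3, 5):
            face = frame_gray[y:y + h, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):
                patch = face[ey:ey + eh, ex:ex + ew]
                hist = cv2.calcHist([patch], [0], None, [16], [0, 256]).ravel()
                yield hist / (hist.sum() + 1e-9)

    # Usage with a pre-trained classifier (hypothetical):
    #   for vec in eye_feature_vectors(gray_frame):
    #       state = eye_state_clf.predict([vec])[0]   # "open" or "closed"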
Abstract:
Architectural (bad) smells are design decisions found in software architectures that degrade the ability of systems to evolve. This paper presents an approach to verify that a software architecture is smell-free using the Archery architectural description language. The language provides a core for modelling software architectures and an extension for specifying constraints. The approach consists in precisely specifying architectural smells as constraints, and then verifying that software architectures do not satisfy any of them. The constraint language is based on a propositional modal logic with recursion that includes: a converse operator for relations among architectural concepts, graded modalities for describing the cardinality of such relations, and nominals referencing architectural elements. Four architectural smells illustrate the approach.
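As a schematic illustration of how a smell can be stated in such a logic (this is generic graded modal logic notation, not Archery's concrete syntax), consider a hypothetical "hub" smell that holds at any element with at least n successors through an attachment relation R:

    \[ \mathit{Hub}_n \;=\; \langle R \rangle_{\geq n} \top \]

An architecture is then verified to be smell-free by checking that every element satisfies \(\neg\mathit{Hub}_n\); the converse operator would support the dual check on elements that are attached to by too many others.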
Abstract:
Text Mining has opened a vast array of possibilities concerning automatic information retrieval from large amounts of text documents. A variety of themes and types of documents can be easily analyzed. More complex features, such as those used in Forensic Linguistics, can gather a deeper understanding of the documents, making it possible to perform difficult tasks such as author identification. In this work we explore the capabilities of simpler Text Mining approaches for the author identification of unstructured documents, in particular the ability to distinguish poetic works from two of Fernando Pessoa's heteronyms: Álvaro de Campos and Ricardo Reis. Several processing options were tested and accuracies of 97% were reached, which encourages further developments.
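To make the "simpler Text Mining approaches" concrete, here is a minimal authorship-classification sketch with TF-IDF features and a Naive Bayes classifier; the four "poems" are invented placeholders, not Pessoa's verses, and the pipeline is an assumption rather than the exact setup tested.

    # Authorship pipeline sketch: TF-IDF over poem text plus a simple
    # classifier. Training texts are invented placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    poems = ["placeholder verse one in a restless engineering tone",
             "placeholder verse two calm and classical in form",
             "another restless machine-age placeholder verse",
             "another serene odic placeholder verse"]
    authors = ["Campos", "Reis", "Campos", "Reis"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    clf.fit(poems, authors)
    print(clf.predict(["a restless placeholder verse"]))  # expected: ['Campos']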
Abstract:
Large scale distributed data stores rely on optimistic replication to scale and remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow for concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle Trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
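The core causality check that such frameworks refine can be sketched as a comparison of plain version vectors, classifying two writes as ordered or concurrent; Bitmapped Version Vectors and Dotted Causal Containers are richer, more compact structures, so the sketch below only illustrates the underlying idea.

    # Version-vector comparison: the basic causality test underlying
    # conflict detection in eventually consistent stores.

    def compare(vv_a, vv_b):
        """Return 'equal', 'a<=b', 'b<=a', or 'concurrent'."""
        nodes = set(vv_a) | set(vv_b)
        a_le_b = all(vv_a.get(n, 0) <= vv_b.get(n, 0) for n in nodes)
        b_le_a = all(vv_b.get(n, 0) <= vv_a.get(n, 0) for n in nodes)
        if a_le_b and b_le_a:
            return "equal"
        if a_le_b:
            return "a<=b"     # write a happened before write b
        if b_le_a:
            return "b<=a"
        return "concurrent"   # neither dominates: a true write conflict

    print(compare({"n1": 2, "n2": 1}, {"n1": 2, "n2": 3}))  # a<=b
    print(compare({"n1": 3}, {"n2": 1}))                    # concurrent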