959 results for Ontology Alignments Negotiations
Abstract:
This work analyzes how Semantic MediaWiki operates. It is built on MediaWiki, the traditional wiki engine known worldwide as the platform on which Wikipedia was developed. Beforehand, the main Semantic Web technologies are identified and the purpose of semantic wikis is defined.
Abstract:
Design and implementation of an ontology to represent the services that an airport offers to its passengers.
Abstract:
Gene expression patterns are a key feature in understanding gene function, notably in development. Comparing gene expression patterns between animals is a major step in the study of gene function as well as of animal evolution. It also provides a link between genes and phenotypes. Thus we have developed Bgee, a database designed to compare expression patterns between animals, by implementing ontologies describing anatomies and developmental stages of species, and then designing homology relationships between anatomies and comparison criteria between developmental stages. To define homology relationships between anatomical features we have developed the software Homolonto, which uses a modified ontology alignment approach to propose homology relationships between ontologies. Bgee then uses these aligned ontologies, onto which heterogeneous expression data types are mapped. These already include microarrays and ESTs.
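Homolonto's actual algorithm is not detailed in the abstract; as an illustration of the general idea of proposing homology relationships between two anatomical ontologies, here is a minimal sketch based on lexical similarity of term names. The term lists and the similarity threshold are hypothetical placeholders, not Bgee data.

```python
# Illustrative sketch only: propose homology candidates between two
# species-specific anatomical ontologies by lexical similarity of term names.
from difflib import SequenceMatcher

def propose_homologies(ontology_a, ontology_b, threshold=0.8):
    """Return candidate (term_a, term_b, score) pairs whose names are similar."""
    candidates = []
    for term_a in ontology_a:
        for term_b in ontology_b:
            score = SequenceMatcher(None, term_a.lower(), term_b.lower()).ratio()
            if score >= threshold:
                candidates.append((term_a, term_b, round(score, 2)))
    return candidates

# Hypothetical anatomy terms from two species-specific ontologies
zebrafish = ["pectoral fin", "heart", "swim bladder"]
mouse = ["forelimb", "heart", "lung"]
print(propose_homologies(zebrafish, mouse))
```

In practice such lexical matches would only seed a proposal list to be refined with ontology structure and curator input, which is closer to what a modified alignment approach entails.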
Abstract:
Background: Two genes are called synthetic lethal (SL) if mutation of either alone is not lethal, but mutation of both leads to death or a significant decrease in the organism's fitness. The detection of SL gene pairs constitutes a promising alternative for anti-cancer therapy. As cancer cells exhibit a large number of mutations, the identification of these mutated genes' SL partners may provide specific anti-cancer drug candidates, with minor perturbations to healthy cells. Since existing SL data is mainly restricted to yeast screenings, the road towards human SL candidates is limited to inference methods. Results: In the present work, we use phylogenetic analysis and database manipulation (BioGRID for interactions, Ensembl and NCBI for homology, Gene Ontology for GO attributes) to reconstruct the phylogenetically inferred SL gene network for human. In addition, available data on cancer-mutated genes (COSMIC and Cancer Gene Census databases) as well as on existing approved drugs (DrugBank database) supports our selection of cancer-therapy candidates. Conclusions: Our work provides a complementary alternative to current methods for drug discovery and gene-target identification in anti-cancer research. Novel SL screening analyses and the use of highly curated databases would help improve the results of this methodology.
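The core inference step described above can be sketched as projecting yeast SL pairs onto pairs of human homologs. This is a minimal sketch, not the authors' pipeline; the gene pairings and the orthology table below are illustrative placeholders standing in for BioGRID, Ensembl and NCBI data, not curated results.

```python
# Minimal sketch: infer candidate human SL pairs from yeast SL screens by
# mapping each yeast gene to its human homologs. Pairings are illustrative.
yeast_sl_pairs = [("SGS1", "TOP3"), ("RAD27", "POL32")]

# yeast gene -> human homologs (in practice from Ensembl/NCBI homology data)
orthologs = {
    "SGS1": ["BLM", "WRN"],
    "TOP3": ["TOP3A"],
    "RAD27": ["FEN1"],
    "POL32": ["POLD3"],
}

def infer_human_sl(sl_pairs, orthologs):
    """Project each yeast SL pair onto all pairs of human homologs."""
    inferred = set()
    for a, b in sl_pairs:
        for ha in orthologs.get(a, []):
            for hb in orthologs.get(b, []):
                inferred.add(tuple(sorted((ha, hb))))
    return inferred

print(sorted(infer_human_sl(yeast_sl_pairs, orthologs)))
```

A real pipeline would then intersect the inferred pairs with COSMIC/Cancer Gene Census mutation lists and DrugBank targets to select therapy candidates, as the abstract describes.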
Abstract:
We address the problem of comparing and characterizing the promoter regions of genes with similar expression patterns. This remains a challenging problem in sequence analysis, because the promoter regions of co-expressed genes often do not show discernible sequence conservation. Thus, in our approach we have not directly compared the nucleotide sequences of promoters. Instead, we have obtained predictions of transcription factor binding sites, annotated the predicted sites with the labels of the corresponding binding factors, and aligned the resulting sequences of labels, to which we refer here as transcription factor maps (TF-maps). To obtain the global pairwise alignment of two TF-maps, we have adapted an algorithm initially developed to align restriction enzyme maps. We have optimized the parameters of the algorithm on a small but well-curated collection of human–mouse orthologous gene pairs. Results on this dataset, as well as on an independent, much larger dataset from the CISRED database, indicate that TF-map alignments are able to uncover conserved regulatory elements that cannot be detected by typical sequence alignments.
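The adapted restriction-enzyme-map algorithm itself is not given in the abstract. As a simplified stand-in for the idea of aligning label sequences rather than nucleotides, here is a plain global (Needleman-Wunsch) alignment over binding-site labels; the scoring values and the two promoter maps are hypothetical, and the real method also scores the distances between sites.

```python
# Simplified sketch of aligning two TF-maps (sequences of binding-site labels)
# with a plain global alignment; illustrative match/mismatch/gap scores.
def align_tf_maps(map_a, map_b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch over TF labels; returns the optimal alignment score."""
    n, m = len(map_a), len(map_b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if map_a[i-1] == map_b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

# Hypothetical promoter maps labeled with predicted binding factors
human = ["SP1", "TATA", "NFKB"]
mouse = ["SP1", "NFKB"]
print(align_tf_maps(human, mouse))  # shared SP1 and NFKB, one gap
```

The point of the label alphabet is that two sites match when they are bound by the same factor, even if their underlying nucleotide sequences have diverged.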
Abstract:
Background: Different regions in a genome evolve at different rates depending on structural and functional constraints. Some genomic regions are highly conserved during metazoan evolution, while other regions may evolve rapidly, either in all species or in a lineage-specific manner. A strong or even moderate change in constraints in functional regions, for example in coding regions, can have significant evolutionary consequences. Results: Here we discuss a novel framework, 'BaseDiver', to classify groups of genes in humans based on the patterns of evolutionary constraints on polymorphic positions in their coding regions. Comparing the nucleotide-level divergence among mammals with the extent of deviation from the ancestral base in the human lineage, we identify patterns of evolutionary pressure on nonsynonymous base-positions in groups of genes belonging to the same functional category. Focussing on groups of genes in functional categories, we find that transcription factors contain a significant excess of nonsynonymous base-positions that are conserved in other mammals but changed in human, while immunity-related genes harbour mutations at base-positions that evolve rapidly in all mammals including humans due to strong preference for advantageous alleles. Genes involved in olfaction also evolve rapidly in all mammals, and in humans this appears to be due to weak negative selection. Conclusion: While recent studies have identified genes under positive selection in humans, our approach identifies evolutionary constraints on Gene Ontology groups, revealing changes in humans relative to some of the other mammals.
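BaseDiver's implementation is not described in the abstract. As a minimal sketch of the underlying classification idea, the following flags nonsynonymous positions that are conserved across other mammals but carry a changed base in the human lineage; all data structures and example values below are made up for illustration.

```python
# Illustrative sketch: flag positions conserved in other mammals but changed
# in human. 'mammal_bases' holds the base observed in each non-human mammal.
def classify_positions(positions):
    """positions: list of dicts with 'mammal_bases' and 'human_base' keys."""
    flagged = []
    for i, pos in enumerate(positions):
        mammal_bases = set(pos["mammal_bases"])
        conserved = len(mammal_bases) == 1
        if conserved and pos["human_base"] not in mammal_bases:
            flagged.append(i)  # conserved in mammals, changed in human
    return flagged

example = [
    {"mammal_bases": ["A", "A", "A"], "human_base": "G"},  # flagged
    {"mammal_bases": ["A", "A", "A"], "human_base": "A"},  # conserved, unchanged
    {"mammal_bases": ["A", "C", "T"], "human_base": "G"},  # fast-evolving site
]
print(classify_positions(example))  # -> [0]
```

Aggregating such flags per Gene Ontology category would then expose the group-level patterns the abstract reports, e.g. the excess among transcription factors.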
Abstract:
Background: Peach fruit undergoes a rapid softening process that involves a number of metabolic changes. Storing fruit at low temperatures has been widely used to extend its postharvest life. However, this leads to undesired changes, such as mealiness and browning, which affect the quality of the fruit. In this study, a 2-D DIGE approach was designed to screen for differentially accumulated proteins in peach fruit during normal softening as well as under conditions that led to fruit chilling injury. Results: The analysis allowed us to identify 43 spots (representing about 18% of the total number analyzed) that showed statistically significant changes. Thirty-nine of the proteins could be identified by mass spectrometry. Some of the proteins that changed during postharvest had previously been related to peach fruit ripening and cold stress. However, we identified other proteins that had not been linked to these processes. A graphical display of the relationship between the differentially accumulated proteins was obtained using pairwise average-linkage cluster analysis and principal component analysis. Proteins such as endopolygalacturonase, catalase, NADP-dependent isocitrate dehydrogenase, pectin methylesterase and dehydrins were found to be very important for distinguishing between healthy and chill-injured fruit. A categorization of the differentially accumulated proteins was performed using Gene Ontology annotation. The results showed that the 'response to stress', 'cellular homeostasis', 'metabolism of carbohydrates' and 'amino acid metabolism' biological processes were affected the most during postharvest storage. Conclusions: Using a comparative proteomic approach with 2-D DIGE allowed us to identify proteins that showed stage-specific changes in their accumulation pattern. Several proteins that are related to response to stress, cellular homeostasis, cellular component organization and carbohydrate metabolism were detected as being differentially accumulated.
Finally, a significant proportion of the proteins identified had not previously been associated with softening, cold storage or chilling injury; thus, comparative proteomics has proven to be a valuable tool for understanding fruit softening and postharvest storage.
Abstract:
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data distributed across multiple sites in a secure way, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.
Abstract:
Tumor necrosis factor (TNF) ligand and receptor superfamily members play critical roles in diverse developmental and pathological settings. In search for novel TNF superfamily members, we identified a murine chromosomal locus that contains three new TNF receptor-related genes. Sequence alignments suggest that the ligand binding regions of these murine TNF receptor homologues, mTNFRH1, -2 and -3, are most homologous to those of the tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) receptors. By using a number of in vitro ligand-receptor binding assays, we demonstrate that mTNFRH1 and -2, but not mTNFRH3, bind murine TRAIL, suggesting that they are indeed TRAIL receptors. This notion is further supported by our demonstration that both mTNFRH1:Fc and mTNFRH2:Fc fusion proteins inhibited mTRAIL-induced apoptosis of Jurkat cells. Unlike the only other known murine TRAIL receptor mTRAILR2, however, neither mTNFRH2 nor mTNFRH3 has a cytoplasmic region containing the well characterized death domain motif. Coupled with our observation that overexpression of mTNFRH1 and -2 in 293T cells neither induces apoptosis nor triggers NFkappaB activation, we propose that the mTnfrh1 and mTnfrh2 genes encode the first described murine decoy receptors for TRAIL, and we renamed them mDcTrailr1 and -r2, respectively. Interestingly, the overall sequence structures of mDcTRAILR1 and -R2 are quite distinct from those of the known human decoy TRAIL receptors, suggesting that the presence of TRAIL decoy receptors represents a more recent evolutionary event.
Abstract:
This paper links different political liberal theories, considered from the perspective of their moral ontology, with federal democracies. After giving a brief description of these theories, I discuss their relationship with the theoretical and institutional models of federalism. As methodological tools, the paper introduces some of Hegel's political concepts and deals with their potential application to the analysis of federalism, taking into account the case of minorities in multinational democracies. I postulate the need for a moral and institutional refinement of liberal-democratic patterns that is better able to accommodate national pluralism than has so far been achieved by traditional constitutionalism.
Abstract:
Particle physics studies highly complex processes which cannot be directly observed. Scientific realism claims that we are nevertheless warranted in believing that these processes really occur and that the objects involved in them really exist. This dissertation defends a version of scientific realism, called causal realism, in the context of particle physics. I start by introducing the central theses and arguments in the recent philosophical debate on scientific realism (chapter 1), with a special focus on an important presupposition of the debate, namely common sense realism. Chapter 2 then discusses entity realism, which introduces a crucial element into the debate by emphasizing the importance of experiments in defending scientific realism. Most of the chapter is concerned with Ian Hacking's position, but I also argue that Nancy Cartwright's version of entity realism is ultimately preferable as a basis for further development. In chapter 3, I take a step back and consider the question whether the realism debate is worth pursuing at all. Arthur Fine has given a negative answer to that question, proposing his natural ontological attitude as an alternative to both realism and antirealism. I argue that the debate (in particular the realist side of it) is in fact less vicious than Fine presents it. The second part of my work (chapters 4-6) develops, illustrates and defends causal realism. The key idea is that inference to the best explanation is reliable in some cases, but not in others. Chapter 4 characterizes the difference between these two kinds of cases in terms of three criteria which distinguish causal from theoretical warrant. In order to flesh out this distinction, chapter 5 then applies it to a concrete case from the history of particle physics, the discovery of the neutrino. This case study shows that the distinction between causal and theoretical warrant is crucial for understanding what it means to "directly detect" a new particle.
But the distinction is also an effective tool against what I take to be the presently most powerful objection to scientific realism: Kyle Stanford's argument from unconceived alternatives. I respond to this argument in chapter 6, and I illustrate my response with a discussion of Jean Perrin's experimental work concerning the atomic hypothesis. In the final part of the dissertation, I turn to the specific challenges posed to realism by quantum theories. One of these challenges comes from the experimental violations of Bell's inequalities, which indicate a failure of locality in the quantum domain. I show in chapter 7 how causal realism can further our understanding of quantum non-locality by taking account of some recent experimental results. Another challenge to realism in quantum mechanics comes from delayed-choice experiments, which seem to imply that certain aspects of what happens in an experiment can be influenced by later choices of the experimenter. Chapter 8 analyzes these experiments and argues that they do not warrant the antirealist conclusions which some commentators draw from them. It pays particular attention to the case of delayed-choice entanglement swapping and the corresponding question whether entanglement is a real physical relation. In chapter 9, I finally address relativistic quantum theories. It is often claimed that these theories are incompatible with a particle ontology, and this calls into question causal realism's commitment to localizable and countable entities. I defend the commitments of causal realism against these objections, and I conclude with some remarks connecting the interpretation of quantum field theory to more general metaphysical issues confronting causal realism.
Abstract:
In the past, research in ontology learning from text has mainly focused on entity recognition, taxonomy induction and relation extraction. In this work we approach a challenging research issue: detecting semantic frames from texts and using them to encode web ontologies. We exploit a new generation Natural Language Processing technology for frame detection, and we enrich the frames acquired so far with argument restrictions provided by a super-sense tagger and domain specializations. The results are encoded according to a Linguistic MetaModel, which allows a complete translation of lexical resources and data acquired from text, enabling custom transformations of the enriched frames into modular ontology components.
Abstract:
Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this exploitation has resulted in the rapid development of the Internet. Nowadays, the Internet is the biggest container of resources. Information databases such as Wikipedia, Dmoz and the open data available on the net are a great informational resource for mankind. Easy and free web access is one of the major features characterizing the Internet culture. Ten years ago, the web was completely dominated by English. Today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We thus present the theoretical foundations as well as the ontology itself, named Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aiming at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval.
VIKI is a support system for the Linguistic Meta-Model; its main task is to provide some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be exhaustive.
Abstract:
The InterPro database (http://www.ebi.ac.uk/interpro/) is a freely available resource that can be used to classify sequences into protein families and to predict the presence of important domains and sites. Central to the InterPro database are predictive models, known as signatures, from a range of different protein family databases that have different biological focuses and use different methodological approaches to classify protein families and domains. InterPro integrates these signatures, capitalizing on the respective strengths of the individual databases, to produce a powerful protein classification resource. Here, we report on the status of InterPro as it enters its 15th year of operation, and give an overview of new developments with the database and its associated Web interfaces and software. In particular, the new domain architecture search tool is described and the process of mapping Gene Ontology terms to InterPro is outlined. We also discuss the challenges faced by the resource given the explosive growth in sequence data in recent years. InterPro (version 48.0) contains 36 766 member database signatures integrated into 26 238 InterPro entries, an increase of 3993 entries (5081 signatures) since 2012.
Abstract:
The sequence profile method (Gribskov M, McLachlan AD, Eisenberg D, 1987, Proc Natl Acad Sci USA 84:4355-4358) is a powerful tool to detect distant relationships between amino acid sequences. A profile is a table of position-specific scores and gap penalties, providing a generalized description of a protein motif, which can be used for sequence alignments and database searches instead of an individual sequence. A sequence profile is derived from a multiple sequence alignment. We have found 2 ways to improve the sensitivity of sequence profiles: (1) Sequence weights: Usage of individual weights for each sequence avoids bias toward closely related sequences. These weights are automatically assigned based on the distance of the sequences using a published procedure (Sibbald PR, Argos P, 1990, J Mol Biol 216:813-818). (2) Amino acid substitution table: In addition to the alignment, the construction of a profile also needs an amino acid substitution table. We have found that in some cases a new table, the BLOSUM45 table (Henikoff S, Henikoff JG, 1992, Proc Natl Acad Sci USA 89:10915-10919), is more sensitive than the original Dayhoff table or the modified Dayhoff table used in the current implementation. Profiles derived by the improved method are more sensitive and selective in a number of cases where previous methods have failed to completely separate true members from false positives.
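The construction described above, a position-specific profile built from a weighted multiple alignment and a substitution table, can be sketched as follows. The per-sequence weights and the tiny two-letter substitution table are illustrative placeholders only, not the Sibbald-Argos weights or the BLOSUM45 table.

```python
# Minimal sketch of building a position-specific profile from a multiple
# alignment with per-sequence weights and a substitution table.
def build_profile(alignment, weights, subst):
    """Return, per column, a dict of profile scores for each alphabet letter."""
    alphabet = sorted(subst)
    total = sum(weights)
    profile = []
    for col in zip(*alignment):
        scores = {}
        for letter in alphabet:
            # weighted average substitution score against the observed residues
            scores[letter] = sum(w * subst[letter][res]
                                 for w, res in zip(weights, col)) / total
        profile.append(scores)
    return profile

# Toy two-letter substitution table (stand-in for BLOSUM45/Dayhoff)
subst = {"A": {"A": 2, "G": -1}, "G": {"A": -1, "G": 2}}
alignment = ["AG", "AG", "GG"]
weights = [0.5, 0.5, 1.0]  # down-weight the two closely related sequences
profile = build_profile(alignment, weights, subst)
print(profile)
```

Down-weighting the two identical sequences keeps them from dominating column 1, which is exactly the bias toward closely related sequences that individual sequence weights are meant to avoid.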