983 results for Domain Ontology


Relevance:

40.00%

Publisher:

Abstract:

This paper presents a preliminary version of a support system in the air transport passenger domain. The system relies on an underlying ontological structure representing a normative framework to facilitate the provision of contextualized, relevant legal information. This information includes the passenger's rights, and it enhances self-litigation and the decision-making process of passengers. Our contribution attempts to render user-centric legal information grounded on case scenarios of the most pronounced incidents related to consumer complaints in the EU. A number of advantages with respect to current state-of-the-art services are discussed, and a case study illustrates a possible technological application.
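A minimal sketch of the kind of contextualized rule such a system could encode, assuming the well-known long-delay compensation tiers of Regulation (EC) No 261/2004. The function and its simplifications are illustrative, not the paper's actual ontology:

```python
# Illustrative sketch: mapping a delay incident to the relevant passenger
# right. Tiers follow Regulation (EC) No 261/2004 (simplified: the 50%
# reduction cases for long-haul delays are omitted). Not the paper's system.

def compensation_eur(delay_hours: float, distance_km: float) -> int:
    """Return the fixed compensation amount (EUR) for a long delay."""
    if delay_hours < 3:
        return 0  # delays under three hours carry no fixed compensation
    if distance_km <= 1500:
        return 250
    if distance_km <= 3500:
        return 400
    return 600

# Example: a 4-hour delay on a 2000 km flight
print(compensation_eur(4, 2000))  # 400
```

A full system would attach such rules to incident case-scenarios in the ontology rather than hard-coding them.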

Relevance:

30.00%

Publisher:

Abstract:

In practical terms, conceptual modeling is at the core of systems analysis and design. The plurality of available modeling methods has, however, been regarded as detrimental, and as a strong indication that a common view or theoretical grounding of modeling is wanting. This theoretical foundation must universally address all potential matters to be represented in a model, which consequently suggested ontology as the point of departure for theory development. The Bunge–Wand–Weber (BWW) ontology has become a widely accepted modeling theory. Its application has simultaneously led to the recognition that, although suitable as a meta-model, the BWW ontology needs to be enhanced regarding its expressiveness in empirical domains. In this paper, a first step in this direction is made by revisiting Bunge's ontology and by proposing the integration of a "hierarchy of systems" into the BWW ontology to accommodate domain-specific conceptualizations.

Relevance:

30.00%

Publisher:

Abstract:

Historically, asset management focused primarily on the reliability and maintainability of assets; organisations have since accepted the notion that a much larger array of processes governs the life and use of an asset. Accordingly, asset management's new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets. A growing number of organisations now seek to develop integrated asset management frameworks and bodies of knowledge. This research seeks to complement the existing outputs of these organisations through the development of an asset management ontology. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain. A by-product of ontology development is the realisation of a process architecture, of which there is also no evidence in the published literature. To develop the ontology and the subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, definition and classification of terms, and visualisation through an appropriate tool (in this case, the Protégé application). The result of this research is a first attempt at developing an asset management ontology and process architecture.

Relevance:

30.00%

Publisher:

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Image semantics extraction is therefore of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can represent the content of images. The major research challenges in image semantic annotation are: What is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next generation of the Web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still worth investigating. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below.

1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction, as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region-merging framework.

2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In this phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts.

3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. Then, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts.

4) Scene semantic annotation. This phase determines the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graphical model, and probabilistic inference is employed to calculate the scene type given an annotated image.

To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images, including a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
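The contextual disambiguation step of phase 3 can be sketched as a joint relabelling: candidate labels from a classifier are re-scored with pairwise co-occurrence knowledge so that a mutually consistent labelling wins. The scores and co-occurrence weights below are hypothetical stand-ins for the SVM outputs and ontology knowledge:

```python
# Sketch of contextual label disambiguation: pick the joint labelling that
# maximises classifier confidence plus pairwise co-occurrence support.
# All numbers below are invented for illustration.

from itertools import product

candidates = [                      # classifier scores per salient object
    {"sky": 0.6, "sea": 0.5},
    {"sand": 0.4, "road": 0.45},
]
cooccur = {                         # contextual weights from the ontology
    frozenset(("sky", "sand")): 0.9, frozenset(("sky", "road")): 0.3,
    frozenset(("sea", "sand")): 0.8, frozenset(("sea", "road")): 0.1,
}

def best_labelling(candidates, cooccur):
    """Exhaustively choose the joint labelling with the highest total score."""
    best, best_score = None, float("-inf")
    for combo in product(*(c.items() for c in candidates)):
        labels = [label for label, _ in combo]
        score = sum(s for _, s in combo)
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                score += cooccur.get(frozenset((labels[i], labels[j])), 0.0)
        if score > best_score:
            best_score, best = score, labels
    return best

print(best_labelling(candidates, cooccur))  # ['sky', 'sand']
```

Note how "road" loses despite its higher individual score, because it co-occurs weakly with both "sky" and "sea".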

Relevance:

30.00%

Publisher:

Abstract:

Due to the explosive growth of the Web, the domain of Web personalization has gained great momentum in both research and commercial areas. Among the most popular Web personalization systems are recommender systems, for which choosing the user information used to build profiles is crucial. In Web 2.0, one facility that helps users organize Web resources of interest is the user tagging system. Exploring user tagging behavior provides a promising way to understand users' information needs, since tags are given directly by users. However, the free and relatively uncontrolled vocabulary leaves user self-defined tags unstandardized and semantically ambiguous. The rich relationships among tags also need to be explored, since they could provide valuable information for better understanding users. In this paper, we propose a novel approach for learning a tag ontology based on the widely used lexical database WordNet, capturing the semantics and the structural relationships of tags. We present personalization strategies that disambiguate the semantics of tags by combining the opinions of WordNet lexicographers with users' tagging behavior. To personalize further, users are clustered to generate a more accurate ontology for a particular group of users. To evaluate the usefulness of the tag ontology, we use it in a pilot tag recommendation experiment, exploiting the semantic information in the tag ontology to improve recommendation performance. The initial result shows that the personalized information improves the accuracy of the tag recommendation.
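The tag-disambiguation idea can be sketched with a Lesk-style heuristic: choose the sense of an ambiguous tag whose gloss overlaps most with the user's co-occurring tags. The mini sense inventory below is a hypothetical stand-in for WordNet; the paper itself combines WordNet senses with tagging behavior:

```python
# Lesk-style tag disambiguation sketch. SENSES is an invented stand-in for
# WordNet synsets and glosses, not actual WordNet data.

SENSES = {
    "apple": [
        {"id": "apple.fruit", "gloss": {"fruit", "tree", "food", "red"}},
        {"id": "apple.company", "gloss": {"computer", "company", "mac", "iphone"}},
    ],
}

def disambiguate(tag, cooccurring_tags):
    """Return the sense id whose gloss best overlaps the co-occurring tags."""
    best = max(SENSES[tag],
               key=lambda s: len(s["gloss"] & set(cooccurring_tags)))
    return best["id"]

print(disambiguate("apple", ["mac", "computer", "design"]))  # apple.company
```

Clustering users first, as the paper proposes, would let each group carry its own sense preferences rather than a single global choice.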

Relevance:

30.00%

Publisher:

Abstract:

This article discusses the prospects of quantum psychiatry from a Bohmian point of view, which provides an ontological interpretation of quantum theory and extends that ontology to include mind. We first discuss the more general relevance of quantum theory to psychopathology. The basic idea is that because quantum theory emphasizes the role of wholeness, it might be relevant to psychopathology, where breakdown of unity in the mental domain is a key feature. We then discuss the role of information in psychopathology, and consider the connections with quantum theory in this area. In particular, we discuss David Bohm's notion of active information, which arises in the ontological interpretation of quantum theory, and is suggested to play a fundamental role as the bridge between mind and matter. Some such bridge is needed if we are to understand how subtle mental properties are able to influence more manifest physical properties in the brain (all the way to the molecular and possibly microtubular level), and how changes in those possibly quantum-level physical processes are able to influence higher cognitive functions. We also consider the implications of the notion of active information for psychopathology. The prospects of implementing the Bohmian scheme in neuroquantal terms are then briefly considered. Finally, we discuss some possible therapeutic implications of Bohm's approach to information and the relation of mind and matter.

Relevance:

30.00%

Publisher:

Abstract:

Establishing functional relationships between multi-domain protein sequences is a non-trivial task. Traditionally, delineating the functional assignment and relationships of proteins requires domain assignments as a prerequisite. This process is sensitive to alignment quality and domain definitions. In multi-domain proteins, the quality of alignments is often poor, for multiple reasons. We report the correspondence between the classification of proteins represented as full-length gene products and their functions. Our approach differs fundamentally from traditional methods in that it does not perform classification at the level of domains. Our method is based on alignment-free local matching score (LMS) computation at the amino-acid sequence level, followed by hierarchical clustering. As there are no gold standards for full-length protein sequence classification, we resorted to Gene Ontology and domain-architecture-based similarity measures to assess our classification. The final clusters obtained using LMS show high functional and domain-architectural similarities. Comparison of the current method with alignment-based approaches at both the domain and full-length protein levels showed the superiority of the LMS scores. Using this method, we have recreated objective relationships among different protein kinase sub-families and also classified immunoglobulin-containing proteins, where sub-family definitions do not currently exist. This method can be applied to any set of protein sequences and hence will be instrumental in the analysis of large numbers of full-length protein sequences.
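An alignment-free similarity in the spirit of the LMS idea can be sketched by scoring two sequences on their shared k-mers and clustering greedily. The scoring function, threshold, and toy sequences are simplifications, not the authors' exact method:

```python
# Alignment-free sequence clustering sketch: Jaccard similarity over k-mer
# sets, followed by greedy single-linkage-style clustering. The toy
# sequences and the threshold are invented for illustration.

def kmer_score(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity of the two sequences' k-mer sets."""
    ka = {a[i:i + k] for i in range(len(a) - k + 1)}
    kb = {b[i:i + k] for i in range(len(b) - k + 1)}
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

def cluster(seqs, threshold=0.2, k=3):
    """Assign each sequence to the first cluster it resembles, else a new one."""
    clusters = []
    for s in seqs:
        for c in clusters:
            if any(kmer_score(s, t, k) >= threshold for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

print(cluster(["MKVLAAGIL", "MKVLAAGIV", "TTTPPPQQQ"]))
```

Because no alignment is computed, the score is insensitive to domain order and boundaries, which is the property the paper exploits for multi-domain proteins.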

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: A hierarchical taxonomy of organisms is a prerequisite for semantic integration of biodiversity data. Ideally, there would be a single, expansive, authoritative taxonomy that includes extinct and extant taxa, information on synonyms and common names, and monophyletic supraspecific taxa that reflect our current understanding of phylogenetic relationships. DESCRIPTION: As a step towards development of such a resource, and to enable large-scale integration of phenotypic data across vertebrates, we created the Vertebrate Taxonomy Ontology (VTO), a semantically defined taxonomic resource derived from the integration of existing taxonomic compilations, and freely distributed under a Creative Commons Zero (CC0) public domain waiver. The VTO includes both extant and extinct vertebrates and currently contains 106,947 taxonomic terms, 22 taxonomic ranks, 104,736 synonyms, and 162,400 cross-references to other taxonomic resources. Key challenges in constructing the VTO included (1) extracting and merging names, synonyms, and identifiers from heterogeneous sources; (2) structuring hierarchies of terms based on evolutionary relationships and the principle of monophyly; and (3) automating this process as much as possible to accommodate updates in source taxonomies. CONCLUSIONS: The VTO is the primary source of taxonomic information used by the Phenoscape Knowledgebase (http://phenoscape.org/), which integrates genetic and evolutionary phenotype data across both model and non-model vertebrates. The VTO is useful for inferring phenotypic changes on the vertebrate tree of life, which enables queries for candidate genes for various episodes in vertebrate evolution.
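Challenge (1) above, extracting and merging names, synonyms, and identifiers from heterogeneous sources, can be sketched as a simple record merge keyed on the canonical name. The records and source identifiers below are illustrative examples, not actual VTO data:

```python
# Sketch of merging taxonomic records from multiple sources into one term
# per taxon, unioning synonyms and keeping cross-references. The records
# below are invented examples; source ids are illustrative.

def merge_sources(records):
    """Merge records sharing a canonical name; union synonyms and xrefs."""
    merged = {}
    for rec in records:
        term = merged.setdefault(rec["name"],
                                 {"name": rec["name"],
                                  "synonyms": set(), "xrefs": set()})
        term["synonyms"].update(rec.get("synonyms", []))
        term["xrefs"].add(rec["source_id"])
    return merged

records = [
    {"name": "Danio rerio", "synonyms": ["zebrafish"], "source_id": "SRC_A:7955"},
    {"name": "Danio rerio", "synonyms": ["zebra danio"], "source_id": "SRC_B:163855"},
]
terms = merge_sources(records)
print(sorted(terms["Danio rerio"]["synonyms"]))  # ['zebra danio', 'zebrafish']
```

The hard part the abstract alludes to, reconciling records whose names differ across sources, would need a synonym-aware key rather than the exact-name key used here.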

Relevance:

30.00%

Publisher:

Abstract:

Integrating evidence from multiple domains is useful in prioritizing disease candidate genes for subsequent testing. We ranked all known human genes (n = 3819) under linkage peaks in the Irish Study of High-Density Schizophrenia Families using three different evidence domains: 1) a meta-analysis of microarray gene expression results using the Stanley Brain collection, 2) a schizophrenia protein-protein interaction network, and 3) a systematic literature search. Each gene was assigned a domain-specific p-value and ranked after evaluating the evidence within each domain. For comparison with this ranking process, a large-scale candidate gene hypothesis was also tested by including genes with Gene Ontology terms related to neurodevelopment. Subsequently, genotypes of 3725 SNPs in 167 genes from a custom Illumina iSelect array were used to evaluate the top-ranked vs. hypothesis-selected genes. Seventy-three genes were both highly ranked and involved in neurodevelopment (category 1), while 42 and 52 genes were exclusive to neurodevelopment (category 2) or highly ranked (category 3), respectively. The most significant associations were observed in the genes PRKG1, PRKCE, and CNTN4, but no individual SNPs were significant after correction for multiple testing. Comparison of the approaches showed an excess of significant tests using the hypothesis-driven neurodevelopment category. Random selection of similarly sized genes from two independent genome-wide association studies (GWAS) of schizophrenia showed the excess was unlikely to occur by chance. In a further meta-analysis of three GWAS datasets, four candidate SNPs reached nominal significance. Although gene ranking using integrated sources of prior information did not enrich for significant results in the current experiment, gene selection using an a priori hypothesis (neurodevelopment) was superior to random selection. As such, further development of gene ranking strategies using more carefully selected sources of information is warranted.
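The cross-domain ranking step can be illustrated with Fisher's method for combining per-domain p-values, a standard technique; the genes and values below are invented, and the study's own combination scheme may differ:

```python
# Combine one p-value per evidence domain with Fisher's method and rank
# genes by the combined statistic. Gene names and p-values are invented.

import math

def fisher_statistic(pvals):
    """Fisher's combined statistic: -2 * sum(ln p). Larger = stronger."""
    return -2.0 * sum(math.log(p) for p in pvals)

genes = {                       # p-values for expression / PPI / literature
    "GENE_A": [0.01, 0.20, 0.03],
    "GENE_B": [0.40, 0.50, 0.60],
}
ranking = sorted(genes, key=lambda g: fisher_statistic(genes[g]), reverse=True)
print(ranking[0])  # GENE_A
```

Under the null, the statistic follows a chi-squared distribution with 2k degrees of freedom for k domains, which is what makes the combined ranking comparable across genes.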

Relevance:

30.00%

Publisher:

Abstract:

This paper contributes a new approach for developing UML software designs from Natural Language (NL), making use of a meta-domain oriented ontology, well established software design principles and Natural Language Processing (NLP) tools. In the approach described here, banks of grammatical rules are used to assign event flows from essential use cases. A domain specific ontology is also constructed, permitting semantic mapping between the NL input and the modeled domain. Rules based on the widely-used General Responsibility Assignment Software Principles (GRASP) are then applied to derive behavioral models.
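A toy illustration of the rule-bank idea: one grammatical pattern that turns a use-case step into an (actor, action, object) event-flow entry. The pattern and sentence are invented; the paper itself relies on NLP tools and a meta-domain ontology for this mapping:

```python
# One invented grammatical rule from a hypothetical rule bank: match
# "The <actor> <action> the <object>." and emit an event-flow triple.

import re

PATTERN = re.compile(r"^The (?P<actor>\w+) (?P<action>\w+) the (?P<object>[\w ]+)\.$")

def event_flow(sentence):
    """Return (actor, action, object) for a matching use-case step, else None."""
    m = PATTERN.match(sentence)
    return (m["actor"], m["action"], m["object"]) if m else None

print(event_flow("The customer submits the order form."))
# ('customer', 'submits', 'order form')
```

A real rule bank would hold many such patterns, and the domain ontology would then map "customer" and "order form" onto modeled concepts before GRASP rules assign responsibilities.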

Relevance:

30.00%

Publisher:

Abstract:

Ontologies have been established for knowledge sharing and are widely used as a means for conceptually structuring domains of interest. With the growing usage of ontologies, the problem of overlapping knowledge in a common domain becomes critical. In this short paper, we address two methods for merging ontologies based on Formal Concept Analysis: FCA-Merge and ONTEX. --- FCA-Merge is a method for merging ontologies following a bottom-up approach that offers a structural description of the merging process. The method is guided by application-specific instances of the given source ontologies. We apply techniques from natural language processing and formal concept analysis to derive a lattice of concepts as a structural result of FCA-Merge. The generated result is then explored and transformed into the merged ontology with human interaction. --- ONTEX is a method for systematically structuring the top-down level of ontologies. It is based on an interactive, top-down knowledge-acquisition process, which ensures that the knowledge engineer considers all possible cases while avoiding redundant acquisition. The method is especially suited for creating or merging the top part(s) of ontologies, where high accuracy is required, and for supporting the merging of two (or more) ontologies at that level.
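The Formal Concept Analysis core that FCA-Merge builds on can be sketched by enumerating the formal concepts (closed extent/intent pairs) of a small instance-by-term context. The toy context is invented, and the naive enumeration below is exponential, unlike the algorithms used in practice:

```python
# Naive FCA kernel: enumerate formal concepts of a tiny context mapping
# instances (documents) to ontology terms. Context is invented.

from itertools import combinations

context = {
    "doc1": {"vehicle", "car"},
    "doc2": {"vehicle", "bike"},
    "doc3": {"vehicle", "car", "sportscar"},
}

def intent(objs):
    """Attributes shared by all given objects (all attributes if none given)."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else {a for s in context.values() for a in s}

def extent(attrs):
    """Objects having all given attributes."""
    return {o for o, s in context.items() if attrs <= s}

def concepts():
    """All (extent, intent) pairs where each side determines the other."""
    found = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for combo in combinations(objs, r):
            a = intent(set(combo))
            found.add((frozenset(extent(a)), frozenset(a)))
    return found

for e, i in sorted(concepts(), key=lambda c: len(c[0])):
    print(sorted(e), sorted(i))
```

Ordering these concepts by extent inclusion yields the concept lattice that FCA-Merge presents to the knowledge engineer.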

Relevance:

30.00%

Publisher:

Abstract:

It is problematic to use standard ontology tools when describing vague domains. Standard ontologies are designed to formally define one view of a domain, and although it is possible to define disagreeing statements, it is not advisable, as the resulting inferences could be incorrect. Two different solutions to this problem, in two different vague domains, have been developed and are presented. The first domain is the knowledge base of conversational agents (chatbots). An ontological scripting language has been designed to access ontology data from within chatbot code. The solution developed is based on reifications of user statements. It enables a new layer of logic based on the different views of the users, enabling the body of knowledge to grow automatically. The second domain is competencies and competency frameworks. An ontological framework has been developed to model different competencies using the emergent standards. It enables the comparison of competencies using a mix of linguistic logics and description logics. The comparison results are non-binary rather than simple yes/no answers, highlighting the vague nature of the comparisons. The solution has been developed with small ontologies which can be added to and modified so that the competency user can build a total picture that fits the user's purpose. Finally, these two approaches are viewed in the light of how they could aid future work in vague domains; further work is described in both domains and also in others such as the Semantic Web. This demonstrates two different approaches to achieving inferences using standard ontology tools in vague domains.
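A non-binary competency comparison of the kind described can be sketched as a graded score combining skill overlap and level attainment. The framework fields, weights, and example values are hypothetical, not taken from the thesis:

```python
# Graded competency matching sketch: returns a score in [0, 1] instead of
# yes/no, combining skill overlap with level attainment. All fields and
# values are invented illustrations.

def compare(required, held):
    """Score how well a held competency satisfies a required one."""
    skills_r, skills_h = set(required["skills"]), set(held["skills"])
    overlap = len(skills_r & skills_h) / len(skills_r) if skills_r else 1.0
    level = min(held["level"] / required["level"], 1.0)
    return overlap * level

required = {"skills": {"sql", "modelling", "etl"}, "level": 4}
held = {"skills": {"sql", "modelling"}, "level": 3}
print(round(compare(required, held), 2))  # 0.5
```

The point of the graded result is exactly what the abstract stresses: a 0.5 match communicates vagueness that a binary answer would hide.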

Relevance:

30.00%

Publisher:

Abstract:

Purpose: This document identifies the challenges and opportunities in applying the ontology technology in the Human Resources domain. Target users: A reference for both the HR and the ontology communities. Also, to be used as a roadmap for the OOA itself, within the HR domain. Background: During the discussion panel at the OOA kick-off workshop, which was attended by more than 50 HR and ontology experts, the need for this roadmap was realized. It was obvious that the current understanding of the problem of semantics in HR is fragmented and only partial solutions exist. People from both the HR and the ontology communities speak different languages, have different understandings, and are not aware of existing solutions.

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Increasing costs of health care, fuelled by the demand for high-quality, cost-effective healthcare, have driven hospitals to streamline their patient care delivery systems. One such systematic approach is the adoption of Clinical Pathways (CP) as a tool to increase the quality of healthcare delivery. However, most organizations still rely on paper-based pathway guidelines or specifications, which have limitations in process management and as a result can influence patient safety outcomes. In this paper, we present a method for generating clinical pathways based on organizational semiotics, by capturing knowledge from the syntactic, semantic and pragmatic levels up to the social level. Design/methodology/approach: The proposed modeling approach to the generation of CPs adopts organizational semiotics and enables the generation of a semantically rich representation of CP knowledge. The Semantic Analysis Method (SAM) is applied to explicitly represent the semantics of the concepts, their relationships and patterns of behavior in terms of an ontology chart. The Norm Analysis Method (NAM) is adopted to identify and formally specify patterns of behavior and the rules that govern the actions identified on the ontology chart. Information collected during semantic and norm analysis is integrated to guide the generation of CPs using best practice represented in BPMN, thus enabling the automation of CPs. Findings: This research confirms the necessity of taking social aspects into consideration when designing information systems and automating CPs. The complexity of healthcare processes can best be tackled by analyzing the stakeholders, which we treat as social agents, their goals, and their patterns of action within the agent network. Originality/value: Current modeling methods describe CPs from a structural aspect comprising activities, properties and interrelationships. However, these methods lack a mechanism to describe possible patterns of human behavior and the conditions under which the behavior will occur. To overcome this weakness, a semiotic approach to the generation of clinical pathways is introduced. The CP generated from SAM together with norms enriches the knowledge representation of the domain through ontology modeling, which allows the recognition of human responsibilities and obligations and, more importantly, the ultimate power of decision making in exceptional circumstances.
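A NAM-style norm pairs a triggering condition with an agent, a deontic operator, and an action, and is evaluated against the current case state. This can be sketched as follows; the norm content is a hypothetical clinical example, not taken from the paper:

```python
# Sketch of operationalising NAM-style norms: each norm has a condition,
# a responsible agent, a deontic operator, and an action. Norms and state
# are invented clinical illustrations.

norms = [
    {"condition": lambda s: s["temperature"] >= 38.5,
     "agent": "nurse", "deontic": "obliged", "action": "notify physician"},
    {"condition": lambda s: s["allergy_check_done"] is False,
     "agent": "pharmacist", "deontic": "forbidden", "action": "dispense drug"},
]

def active_norms(state):
    """Return (agent, deontic, action) for every norm whose condition holds."""
    return [(n["agent"], n["deontic"], n["action"])
            for n in norms if n["condition"](state)]

state = {"temperature": 39.1, "allergy_check_done": True}
print(active_norms(state))  # [('nurse', 'obliged', 'notify physician')]
```

In the paper's pipeline such norms, extracted by NAM, are what turn the static ontology chart into executable pathway behavior in BPMN.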