902 results for ontology-based analysis


Relevance: 100.00%

Abstract:

Combining goal-oriented and use case modeling has proven to be an effective method in requirements elicitation and elaboration. To ensure the quality of the modeled artifacts, a detailed model analysis needs to be performed. However, current requirements engineering approaches generally lack reliable support for the automated analysis of consistency, correctness and completeness (the 3Cs problems) between and within goal models and use case models. In this paper, we present a goal–use case integration framework with tool support to automatically identify such 3Cs problems. Our framework relies on ontologies of domain knowledge and semantics, together with our goal–use case integration meta-model. Moreover, functional grammar is employed to enable the semi-automated transformation of natural language specifications into Manchester OWL Syntax for automated reasoning. The evaluation of our tool support shows that, for representative example requirements, our approach achieves over 85% soundness and completeness and detects more problems than the benchmark applications.
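
As a concrete illustration of the kind of reasoning the abstract refers to, the sketch below uses the owlready2 library and its bundled HermiT reasoner to flag a contrived consistency problem between a goal and a use case concept. The ontology IRI and class names are invented for illustration; this is not the authors' framework.

```python
# A minimal sketch (not the authors' GUITAR framework) of OWL-based
# consistency checking. Requires the owlready2 package and a Java runtime
# (owlready2 ships the HermiT reasoner). IRI and class names are invented.
from owlready2 import Thing, AllDisjoint, get_ontology, sync_reasoner, default_world

onto = get_ontology("http://example.org/req.owl")

with onto:
    class Goal(Thing): pass
    class UseCase(Thing): pass
    AllDisjoint([Goal, UseCase])            # an artifact cannot be both at once

    class RefundOrder(Goal, UseCase): pass  # a modelling error to be detected

sync_reasoner()                             # classify with HermiT
print(list(default_world.inconsistent_classes()))   # expect: [req.RefundOrder]
```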

Relevance: 100.00%

Abstract:

Combining goal-oriented and use case modeling has been proven to be an effective method in requirements elicitation and elaboration. However, current requirements engineering approaches generally lack reliable support for automated analysis of such modeled artifacts. To address this problem, we have developed GUITAR, a tool which delivers automated detection of incorrectness, incompleteness and inconsistency between artifacts. GUITAR is based on our goal-use case integration meta-model and ontologies of domain knowledge and semantics. GUITAR also provides comprehensive explanations for detected problems and can suggest resolution alternatives.
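
The abstract does not show GUITAR's output, but a detected problem with an explanation and resolution alternatives might be represented along these lines (a purely hypothetical data shape, not the tool's real format):

```python
# A purely hypothetical data shape (not GUITAR's real output format) showing
# how a detected problem, its explanation and resolution alternatives could
# be reported, as the abstract describes.
problem_report = {
    "kind": "inconsistency",
    "artifacts": ("Goal: fast checkout", "Use case step: manual card entry"),
    "explanation": "The use case step conflicts with the goal's response-time target.",
    "resolutions": [
        "Relax the goal's response-time target",
        "Automate the card-entry step in the use case",
    ],
}
print(problem_report["explanation"])
for option in problem_report["resolutions"]:
    print(" -", option)
```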

Relevance: 100.00%

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Image semantics extraction is therefore of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation?

In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of any type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, remain worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below.

1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction, as it captures the common visual properties of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region-merging framework.

2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts.

3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. Then, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts.

4) Scene semantic annotation. The scene semantic extraction phase derives the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.

To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images, including a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
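
Phase 3's candidate labelling can be sketched as follows; the feature dimensions, concept labels and random stand-in data are assumptions, and the ontology-based contextual disambiguation step is omitted.

```python
# A minimal sketch of phase 3's candidate labelling, with random stand-in data:
# an SVM trained on salient-object features proposes mid-level concept labels;
# the ontology-based contextual disambiguation step is omitted here.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(60, 16))      # 16-dim feature vectors (stand-ins)
y_train = rng.integers(0, 3, size=60)    # mid-level concepts: 0=sky 1=water 2=sand

clf = SVC(probability=True).fit(X_train, y_train)

obj = rng.normal(size=(1, 16))           # features of one detected salient object
proba = clf.predict_proba(obj)[0]
candidates = np.argsort(proba)[::-1][:2] # top-2 candidate labels, passed on to
print(candidates, proba[candidates])     # the ontology for disambiguation
```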

Relevance: 100.00%

Abstract:

More and more traditional manufacturing companies form or join inter-organizational networks to bundle their physical products with related services and thus offer superior value propositions to their customers. Some of these product-related services can be digitized completely and thus delivered fully electronically. Other services require the physical integration of external factors but can still be coordinated electronically. In both cases, companies and consumers face the problem of discovering appropriate product-related service offerings in the network or market. Based on ideas from the web service discovery discipline, we propose a meet-in-the-middle approach between heavy-weight semantic technologies and simple boolean search to address this issue. Our approach is able to consider semantic relations in service descriptions and queries and thus delivers better results than syntax-based search. However, unlike most semantic approaches, it does not require the use of any formal language for semantic markup and thus demands fewer resources and skills from both service providers and consumers. To fully realize the potential of the proposed approach, a domain ontology is needed. In this research-in-progress paper we construct such an ontology for the domain of product-service bundles through analysis and synthesis of related work on service description. This will serve as an anchor for future research to iteratively improve and evaluate the ontology through collaborative design efforts and practical application.
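
One way to realise the meet-in-the-middle idea, sketched here under assumptions (WordNet via NLTK as the source of semantic relations; toy service descriptions), is to expand a plain keyword query with synonyms and hypernyms before simple matching, so providers never have to write formal markup.

```python
# A minimal sketch of the meet-in-the-middle idea: expand a plain keyword query
# with WordNet relations before simple matching. Assumes nltk is installed and
# the corpus fetched via nltk.download('wordnet'); services are toy examples.
from nltk.corpus import wordnet as wn

def expand(term):
    related = {term}
    for syn in wn.synsets(term):
        related.update(l.name().replace("_", " ") for l in syn.lemmas())
        for hyper in syn.hypernyms():              # one level of generalisation
            related.update(l.name().replace("_", " ") for l in hyper.lemmas())
    return related

services = ["machine repair and overhaul", "remote equipment monitoring"]
terms = expand("repair")                # also picks up terms such as "fix", "mend"
print([s for s in services if any(t in s for t in terms)])
```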

Relevance: 100.00%

Abstract:

Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision that meets their own specific purposes, goals, and expectations.

Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming those requirements into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose.

Method: An experiment was conducted based on the qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement.

Result: The research has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications.

Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering users' requirements.
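
A minimal sketch of the profiling idea, assuming RDF as the representation: a user's requirements are captured as triples and matched by a query. The rdflib library and the ex: vocabulary below are illustrative assumptions, not the paper's actual ontology model.

```python
# A minimal sketch, assuming RDF as the representation: a user's requirements
# profile as triples, matched by a query. The ex: vocabulary is invented and
# is not the paper's ontology model. Requires the rdflib package.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/learn#")
g = Graph()
g.add((EX.alice, RDF.type, EX.Learner))
g.add((EX.alice, EX.hasGoal, Literal("pass the ontology-engineering module")))
g.add((EX.alice, EX.priorKnowledge, Literal("basic RDF")))

# A provision rule phrased as a query: which learners need which content?
q = "SELECT ?who ?goal WHERE { ?who a ex:Learner ; ex:hasGoal ?goal . }"
for who, goal in g.query(q, initNs={"ex": EX}):
    print(who, "->", goal)
```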

Relevance: 100.00%

Abstract:

This paper presents an application of an ontology-based system for automated text analysis, using a sample of drilling reports to demonstrate how the methodology works. The methodology consists of organizing the knowledge related to the drilling process by elaborating an ontology of some typical problems. The whole process was carried out with the assistance of a drilling expert, and with software support to collect the knowledge from the texts. Finally, a sample of drilling reports was used to test the system, evaluating its performance on automated text classification.
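
A minimal sketch of the classification step, with an invented mini-ontology of drilling problems standing in for the expert-built one:

```python
# A minimal sketch of the classification step, with an invented mini-ontology
# of drilling problems standing in for the expert-built one.
problem_ontology = {                     # concept -> indicative terms (assumed)
    "stuck_pipe":       {"stuck", "overpull", "jarring"},
    "lost_circulation": {"losses", "lost circulation", "thief zone"},
    "kick":             {"kick", "influx", "shut in"},
}

def classify(report: str) -> str:
    text = report.lower()
    scores = {concept: sum(term in text for term in terms)
              for concept, terms in problem_ontology.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "no_known_problem"

print(classify("Observed overpull at 3,200 m; pipe stuck while jarring down."))
# -> stuck_pipe
```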

Relevance: 100.00%

Abstract:

The main idea of our approach is that the domain ontology is not only an instrument of learning but also an object for examining student skills. We propose that students build the domain ontology of the examined discipline and then compare it with a reference ("etalon") one. Analysis of student mistakes allows us to offer them personalized recommendations and to improve the course materials in general. For knowledge interoperability, we apply Semantic Web technologies. The application of agent-based technologies in e-learning provides personification for students and tutors and frees all users from routine operations.
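
A minimal sketch of the comparison step, reducing both ontologies to sets of subclass links; the precision/recall scoring is an illustrative choice, not necessarily the authors' metric.

```python
# A minimal sketch of comparing a student ontology against the reference
# ("etalon") one, reducing both to sets of subclass links. The precision/
# recall scoring is an illustrative choice, not necessarily the authors'.
reference = {("Car", "Vehicle"), ("Truck", "Vehicle"), ("Wheel", "Part")}
student   = {("Car", "Vehicle"), ("Wheel", "Vehicle")}       # one wrong link

correct   = student & reference
precision = len(correct) / len(student)
recall    = len(correct) / len(reference)
mistakes  = student - reference     # basis for personalized recommendations
print(f"precision={precision:.2f} recall={recall:.2f} review={sorted(mistakes)}")
```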

Relevance: 100.00%

Abstract:

Enterprise Application Integration (EAI) is a challenging area that is attracting growing attention from the software industry and the research community. A landscape of languages and techniques for EAI has emerged and is continuously being enriched with new proposals from different software vendors and coalitions. However, little or no effort has been dedicated to systematically evaluating and comparing these languages and techniques. The work reported in this paper is a first step in this direction. It presents an in-depth analysis of a language specifically developed for EAI, namely the Business Modeling Language. The framework used for this analysis is based on a number of workflow and communication patterns, and it provides a basis for evaluating the advantages and drawbacks of EAI languages with respect to recurrent problems and situations.
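
The pattern-based evaluation can be pictured as a simple support matrix; the pattern names below follow the workflow-patterns literature, but the support values are invented for illustration and are not the paper's findings.

```python
# A sketch of pattern-based evaluation as a support matrix. Pattern names
# follow the workflow-patterns literature; the support values are invented
# for illustration and are not the paper's findings.
support = {
    "sequence":          "direct",
    "parallel split":    "direct",
    "exclusive choice":  "direct",
    "deferred choice":   "partial",
    "request/reply":     "direct",
    "publish/subscribe": "none",
}
direct = sum(v == "direct" for v in support.values())
print(f"directly supported: {direct}/{len(support)} patterns")
```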

Relevance: 100.00%

Abstract:

Realistic estimates of short- and long-term (strategic) budgets for the maintenance and rehabilitation of road assets should consider the stochastic characteristics of asset conditions across road networks, so that the overall variability of road asset condition data is taken into account. Probability theory has been used for assessing life-cycle costs for bridge infrastructure by Kong and Frangopol (2003), Zayed et al. (2002), Liu and Frangopol (2004), Noortwijk and Frangopol (2004), and Novick (1993). Salem et al. (2003) cited the importance of collecting and analysing existing data on total costs for all life-cycle phases of existing infrastructure, including bridges and roads, and of using realistic methods for calculating the probable useful life of these infrastructures. Zayed et al. (2002) reported conflicting results in life-cycle cost analysis using deterministic and stochastic methods. Frangopol et al. (2001) suggested that additional research was required to develop better life-cycle models and tools to quantify the risks and benefits associated with infrastructure. It is evident from the review of the literature that there is very limited information on methodologies that use the stochastic characteristics of asset condition data for assessing budgets/costs for road maintenance and rehabilitation (Abaza 2002; Salem et al. 2003; Zhao et al. 2004). Given this gap, this report describes and summarises the methodologies presented in each publication and suggests a methodology for the current research project funded under the Cooperative Research Centre for Construction Innovation (CRC CI), project no. 2003-029-C.
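
A minimal sketch of the stochastic idea, with invented figures: sampling uncertain network condition and unit costs yields a budget distribution rather than a single deterministic estimate.

```python
# A minimal sketch of the stochastic idea, with invented figures: sampling
# uncertain network condition and unit costs gives a budget distribution
# rather than a single deterministic estimate. Requires numpy.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
km_to_rehab = rng.normal(120, 25, n).clip(min=0)     # uncertain condition data
cost_per_km = rng.lognormal(np.log(0.9), 0.2, n)     # $M/km, uncertain unit cost
budget = km_to_rehab * cost_per_km                   # $M in each scenario

print(f"mean ${budget.mean():.0f}M, "
      f"90th percentile ${np.percentile(budget, 90):.0f}M")
```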