902 results for Web Service Modelling Ontology (WSMO)
Abstract:
Conceptual modelling approaches for the web need extensions to specify dynamic personalization properties in order to design more powerful web applications. Current approaches provide techniques to support dynamic personalization, but they usually focus on implementation details. This article presents an extension of the OO-H conceptual modelling approach to address the particulars associated with the design and specification of dynamic personalization. The main benefit is that this specification can be modified without recompiling the rest of the application modules. We describe how conventional navigation and presentation diagrams are influenced by personalization properties. In order to model the variable part of the interface logic, OO-H has a personalization architecture that leans on a rule engine. Rules are defined based on a User Model and a Reference Model.
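As a rough illustration of the architecture described above, the sketch below shows an event-condition-action rule engine operating over a user model, so that personalization rules live outside the compiled navigation and presentation modules. All names (UserModel, Rule, RuleEngine) are hypothetical placeholders, not OO-H's actual API.

```python
# Minimal sketch of rule-based personalization decoupled from the
# application: rules are data, so they can be changed without
# recompiling navigation or presentation modules.
# All names below are illustrative, not OO-H's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UserModel:
    """Runtime facts about the current user (the User Model)."""
    attributes: dict = field(default_factory=dict)

@dataclass
class Rule:
    """Event-condition-action rule evaluated by the engine."""
    event: str
    condition: Callable[[UserModel], bool]
    action: Callable[[UserModel], None]

class RuleEngine:
    def __init__(self, rules: list[Rule]):
        self.rules = rules  # in practice loaded from an external file

    def fire(self, event: str, user: UserModel) -> None:
        for rule in self.rules:
            if rule.event == event and rule.condition(user):
                rule.action(user)

# Example: switch to expert navigation shortcuts after ten visits.
rules = [Rule(
    event="page.visited",
    condition=lambda u: u.attributes.get("visits", 0) > 10,
    action=lambda u: u.attributes.update(nav_mode="expert"),
)]
engine = RuleEngine(rules)
user = UserModel({"visits": 11})
engine.fire("page.visited", user)
print(user.attributes["nav_mode"])  # -> "expert"
```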
Abstract:
Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement of complex data models is difficult due to the lack of a way of defining constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints. These constraints lead to a set of design guidelines which maintain the wide-scale constraints under increasing detail. Previous methods of refinement are essentially local, and the proposed method interferes very little with these local methods. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints on systems, which must be implemented and therefore refined by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force.
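A minimal mathematical sketch of why such constraints survive refinement, assuming refinement is modelled as a functor R from the abstract model to the refined model (this framing is an assumption for illustration, not the paper's exact construction):

```latex
% Functors preserve composition, so constraints stated as commuting
% diagrams in the abstract model A hold in every refinement B.
\[
  g \circ f = h \ \text{in}\ \mathcal{A}
  \quad\Longrightarrow\quad
  R(g) \circ R(f) = R(g \circ f) = R(h) \ \text{in}\ \mathcal{B}
\]
```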
Abstract:
An interoperable web processing service (WPS) for the automatic interpolation of environmental data has been developed within the framework of the INTAMAP project. In order to assess the performance of the interpolation method implemented, a validation WPS has also been developed. This validation WPS can be used to perform leave-one-out and K-fold cross-validation: a full dataset is submitted, and a range of validation statistics and diagnostic plots (e.g. histograms, variograms of residuals, mean errors) is received in return. This paper presents the architecture of the validation WPS and uses a case study to briefly illustrate its use in practice. We conclude with a discussion of the current limitations of the system and proposals for further developments.
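To make the validation statistics concrete, here is a minimal leave-one-out sketch in Python. The interpolate() function is a placeholder inverse-distance interpolator, not the INTAMAP service's actual method, and the statistics shown (mean error, RMSE) are only two of the diagnostics such a service might return.

```python
# Leave-one-out cross-validation: refit without each point in turn,
# predict it, and summarise the residuals.
import math

def interpolate(train, target_xy):
    """Placeholder: inverse-distance-weighted prediction at target_xy."""
    num = den = 0.0
    for (x, y), z in train:
        d2 = (x - target_xy[0]) ** 2 + (y - target_xy[1]) ** 2
        w = 1.0 / (d2 + 1e-12)
        num += w * z
        den += w
    return num / den

def leave_one_out(samples):
    """samples: list of ((x, y), value). Returns (mean error, RMSE)."""
    residuals = []
    for i, (xy, z) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        residuals.append(interpolate(train, xy) - z)
    n = len(residuals)
    me = sum(residuals) / n
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    return me, rmse

data = [((0, 0), 1.0), ((1, 0), 2.0), ((0, 1), 2.0), ((1, 1), 3.0)]
print(leave_one_out(data))
```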
Abstract:
Component-based development (CBD) has become an important emerging topic in the software engineering field. It promises long-sought-after benefits such as increased software reuse, reduced development time to market and, hence, reduced software production cost. Despite this huge potential, the lack of reasoning support and of a development environment for component modeling and verification may hinder its development. Methods and tools that can support component model analysis are highly appreciated by industry. Such tool support should be fully automated as well as efficient. At the same time, the reasoning tool should scale well, as it may need to handle the hundreds or even thousands of components that a modern software system may have. Furthermore, a distributed environment that can effectively manage and compose components is also desirable. In this paper, we present an approach to the modeling and verification of a newly proposed component model using Semantic Web languages and their reasoning tools. We use the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL) to precisely capture the inter-relationships and constraints among the entities in a component model. Semantic Web reasoning tools are deployed to perform automated analysis of the component models. Moreover, we also propose a service-oriented architecture (SOA)-based Semantic Web environment for CBD. The adoption of Semantic Web services and SOA makes our component environment more reusable, scalable, dynamic and adaptive.
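The sketch below illustrates, using the rdflib library, how inter-relationships among component-model entities can be captured as OWL classes and properties for a reasoner to check. The cm: vocabulary (Component, Interface, provides, requires) is illustrative, not the paper's actual ontology.

```python
# Capturing a component-model constraint in OWL with rdflib:
# components provide and require interfaces, with domain/range
# axioms a Semantic Web reasoner can enforce.
from rdflib import Graph, Namespace, RDF, RDFS, OWL

CM = Namespace("http://example.org/component-model#")
g = Graph()
g.bind("cm", CM)

# Classes: components expose and require interfaces.
for cls in (CM.Component, CM.Interface):
    g.add((cls, RDF.type, OWL.Class))

# Object properties with domain/range constraints.
for prop in (CM.provides, CM.requires):
    g.add((prop, RDF.type, OWL.ObjectProperty))
    g.add((prop, RDFS.domain, CM.Component))
    g.add((prop, RDFS.range, CM.Interface))

# Instance data: a Logger component that provides an ILog interface.
g.add((CM.Logger, RDF.type, CM.Component))
g.add((CM.ILog, RDF.type, CM.Interface))
g.add((CM.Logger, CM.provides, CM.ILog))

print(g.serialize(format="turtle"))
```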
Internet banking service quality: an investigation of interrelationships between construct dimensions
Abstract:
Service quality measurement in Internet banking services is an area of growing interest to researchers and managers. This research investigates the interrelationships between the dimensions comprising the Internet banking service quality construct through structural equation modelling. Five Internet service quality dimensions are identified: access, web interface, trust, attention and credibility. Credibility is modelled as an outcome of the causal variables of access, web interface, trust and attention. Trust and attention emerge as the key dimensions in explaining the credibility dimension. Access is found to be a common antecedent of the trust, attention and web interface dimensions. Implications of the findings are offered.
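A hedged sketch of how the hypothesized paths could be specified for estimation with the semopy Python package (lavaan-style syntax). The indicator names (acc1 ... cre3) and the data frame are illustrative, not the study's actual instrument.

```python
# Illustrative SEM specification with semopy.
# Measurement model: three hypothetical indicators per dimension.
# Structural model: access as common antecedent; credibility as outcome.
import semopy

MODEL_DESC = """
access =~ acc1 + acc2 + acc3
web_interface =~ web1 + web2 + web3
trust =~ tru1 + tru2 + tru3
attention =~ att1 + att2 + att3
credibility =~ cre1 + cre2 + cre3
trust ~ access
attention ~ access
web_interface ~ access
credibility ~ access + web_interface + trust + attention
"""

model = semopy.Model(MODEL_DESC)
# model.fit(df)           # df: pandas DataFrame of item responses
# print(model.inspect())  # path estimates and fit statistics
```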
Abstract:
Over the past forty years the corporate identity literature has developed to a point of maturity where it currently contains many definitions and models of the corporate identity construct at the organisational level. The literature has evolved by developing models of corporate identity or by considering corporate identity in relation to new and developing themes, e.g. corporate social responsibility. It has become a multidisciplinary domain, recently incorporating constructs from other literatures to further its development. However, the literature has a number of limitations. An overarching and universally accepted definition of corporate identity remains elusive, potentially leaving the construct without a clear definition. Only a few corporate identity definitions and models, at the corporate level, have been empirically tested. The corporate identity construct is overwhelmingly defined and theoretically constructed at the corporate level, leaving the literature without a detailed understanding of its influence at the individual stakeholder level.

Front-line service employees (FLEs) form a component in a number of corporate identity models developed at the organisational level. FLEs deliver the services of an organisation to its customers and represent the organisation by communicating and transporting its core defining characteristics to customers through continual customer contact and interaction. This person-to-person contact between an FLE and the customer is termed a service encounter, and service encounters influence a customer's perception of both the service delivered and the associated level of service quality. Therefore this study, for the first time, defines, theoretically models and empirically tests corporate identity at the individual FLE level, termed FLE corporate identity.

The study uses the services marketing literature to characterise an FLE's operating environment, arriving at five potential dimensions of the FLE corporate identity construct. These are scrutinised against existing corporate identity definitions and models to arrive at a definition for the construct. Drawing on the corporate identity, services marketing, branding and organisational psychology literatures, a theoretical model is developed for FLE corporate identity and is empirically and quantitatively tested with FLEs in seven stores of a major national retailer. Following rigorous construct reliability and validity testing, the 601 usable responses are used to estimate a confirmatory factor analysis and structural equation model for the study. The results for the individual hypotheses and the structural model are very encouraging: they fit the data well and support a definition of FLE corporate identity. This study makes contributions to the branding, services marketing and organisational psychology literatures, but its principal contribution is to extend the corporate identity literature into a new area of discourse and research, that of FLE corporate identity.
Abstract:
This work investigates the process of selecting, extracting and reorganizing content from Semantic Web information sources to produce an ontology meeting the specifications of a particular domain and/or task. The process is combined with traditional text-based ontology learning methods to achieve tolerance to knowledge incompleteness. The paper describes the approach and presents experiments in which an ontology was built for a diet evaluation task. Although the example presented concerns the specific case of building a nutritional ontology, the methods employed are domain independent and transferable to other use cases.
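As a hedged illustration of the selection/extraction step, the snippet below pulls candidate food concepts from a public Semantic Web source (DBpedia) with SPARQLWrapper. The query and endpoint are illustrative of the general idea, not the paper's actual pipeline.

```python
# Extracting candidate concepts for a nutrition ontology from DBpedia.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    SELECT ?food ?label WHERE {
        ?food a dbo:Food ;
              rdfs:label ?label .
        FILTER (lang(?label) = "en")
    } LIMIT 20
""")

# Each binding is a candidate class for the domain ontology.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["food"]["value"], "-", row["label"]["value"])
```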
Abstract:
The semantic web vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic markup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which takes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). We say that AquaLog is portable because the configuration time required to customize the system for a particular ontology is negligible. AquaLog presents an elegant solution in which different strategies are combined in a novel way. It makes use of the GATE NLP platform, string metric algorithms, WordNet and a novel ontology-based relation similarity service to make sense of user queries with respect to the target KB. Moreover, it includes a learning component, which ensures that the performance of the system improves over time, in response to the particular community jargon used by end users.
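The following sketch shows the flavour of such lexical matching: WordNet synonym expansion (via NLTK) combined with a simple string metric to map a user's phrasing onto ontology relation names. The relation names are illustrative, and difflib's ratio stands in for AquaLog's actual string metric algorithms.

```python
# Map a user's term onto ontology relation names using WordNet
# synonym expansion plus a simple string-similarity score.
import difflib
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def lexical_variants(term):
    """The term itself plus its WordNet synonyms, if any."""
    names = {term}
    for synset in wn.synsets(term):
        names.update(l.name().replace("_", " ") for l in synset.lemmas())
    return names

def best_relation(user_term, ontology_relations):
    """Pick the ontology relation closest to any variant of the term."""
    scored = (
        (difflib.SequenceMatcher(None, v, rel).ratio(), rel)
        for v in lexical_variants(user_term)
        for rel in ontology_relations
    )
    return max(scored)

relations = ["worksFor", "collaboratesWith", "authorOf"]
print(best_relation("works for", relations))  # -> (score, "worksFor")
```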
Abstract:
The usability of research papers on the Web would be enhanced by a system that explicitly modelled the rhetorical relations between claims in related papers. We describe ClaiMaker, a system for modelling readers’ interpretations of the core content of papers. ClaiMaker provides tools to build a Semantic Web representation of the claims in research papers using an ontology of relations. We demonstrate how the system can be used to make inter-document queries.
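A minimal sketch of the idea using rdflib: claims are RDF resources, rhetorical relations are properties, and inter-document queries become graph queries. The claim: vocabulary and the relation name (challenges) are illustrative, not ClaiMaker's published ontology of relations.

```python
# Claims as RDF resources; rhetorical relations as RDF properties;
# an inter-document query as a graph lookup.
from rdflib import Graph, Namespace, RDF, Literal

CLAIM = Namespace("http://example.org/claims#")
g = Graph()
g.bind("claim", CLAIM)

g.add((CLAIM.c1, RDF.type, CLAIM.Claim))
g.add((CLAIM.c1, CLAIM.text, Literal("Ontologies aid reuse")))
g.add((CLAIM.c2, RDF.type, CLAIM.Claim))
g.add((CLAIM.c2, CLAIM.text, Literal("Ontology mappings decay over time")))
g.add((CLAIM.c2, CLAIM.challenges, CLAIM.c1))  # relation between papers

# Inter-document query: which claims challenge claim c1?
for challenger in g.subjects(CLAIM.challenges, CLAIM.c1):
    print(g.value(challenger, CLAIM.text))
```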
Abstract:
Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as supporting ongoing knowledge engineering, including the evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the Web Ontology Language, OWL, as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can play an important role in managing complex knowledge domains for systems based on human expertise without impeding end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains, and the accompanying OWL specification facilitates its implementation.
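A hedged sketch of the kind of XML-to-OWL translation the paper discusses: each node of a GRiST-like hierarchy becomes an OWL class, and nesting becomes rdfs:subClassOf. The XML snippet and namespace are illustrative, not GRiST's actual knowledge base.

```python
# Translate a hierarchical XML classification into an OWL class tree.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, RDF, RDFS, OWL

XML = """
<concept name="risk">
  <concept name="suicide">
    <concept name="ideation"/>
  </concept>
</concept>
"""

G = Namespace("http://example.org/grist#")
g = Graph()

def translate(node, parent=None):
    """Each XML node becomes an OWL class; nesting becomes subClassOf."""
    cls = G[node.get("name")]
    g.add((cls, RDF.type, OWL.Class))
    if parent is not None:
        g.add((cls, RDFS.subClassOf, parent))
    for child in node:
        translate(child, cls)

translate(ET.fromstring(XML))
print(g.serialize(format="turtle"))
```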
Abstract:
Objectives: To develop a decision support system (DSS), myGRaCE, that integrates service user (SU) and practitioner expertise about mental health and associated risks of suicide, self-harm, harm to others, self-neglect, and vulnerability. The intention is to help SUs assess and manage their own mental health collaboratively with practitioners. Methods: An iterative process involving interviews, focus groups, and agile software development with 115 SUs, to elicit and implement myGRaCE requirements. Results: Findings highlight shared understanding of mental health risk between SUs and practitioners that can be integrated within a single model. However, important differences were revealed in SUs' preferred process of assessing risks and safety, which are reflected in the distinctive interface, navigation, tool functionality and language developed for myGRaCE. A challenge was how to provide flexible access without overwhelming and confusing users. Conclusion: The methods show that practitioner expertise can be reformulated in a format that simultaneously captures SU expertise, to provide a tool highly valued by SUs. A stepped process adds necessary structure to the assessment, each step with its own feedback and guidance. Practice Implications: The GRiST web-based DSS (www.egrist.org) links and integrates myGRaCE self-assessments with GRiST practitioner assessments for supporting collaborative and self-managed healthcare.
Abstract:
The sharing of product and process information plays a central role in coordinating supply chain operations and is a key driver of their success. "Linked pedigrees" - linked datasets that encapsulate event-based traceability information about artifacts as they move along the supply chain - provide a scalable mechanism to record and facilitate the sharing of track-and-trace knowledge among supply chain partners. In this paper we present "OntoPedigree", a content ontology design pattern for the representation of linked pedigrees, which can be specialised and extended to define domain-specific traceability ontologies. Events captured within the pedigrees are specified using EPCIS - a GS1 standard for the specification of traceability information within and across enterprises - while certification information is described using PROV - a vocabulary for modelling the provenance of resources. We exemplify the utility of OntoPedigree through linked pedigrees generated for supply chains in the perishable goods and pharmaceuticals sectors.
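To give a feel for a linked pedigree, the sketch below builds one as RDF with rdflib, linking an upstream partner's pedigree, an EPCIS event reference and PROV attribution. The ped: vocabulary and all IRIs are placeholders, not OntoPedigree's published terms.

```python
# A linked pedigree as RDF: links to the upstream pedigree, an EPCIS
# event reference, and PROV-based certification provenance.
from rdflib import Graph, Namespace, RDF, URIRef
from rdflib.namespace import PROV

PED = Namespace("http://example.org/pedigree#")
EX = Namespace("http://manufacturer.example/")

g = Graph()
g.bind("ped", PED)
g.bind("prov", PROV)

pedigree = EX["pedigree/batch-42"]
g.add((pedigree, RDF.type, PED.Pedigree))
# Link to the upstream partner's pedigree for the same batch.
g.add((pedigree, PED.hasReceivedPedigree,
       URIRef("http://supplier.example/pedigree/batch-42")))
# Event-based traceability: reference an EPCIS shipping event.
g.add((pedigree, PED.hasEvent, EX["epcis/event/1001"]))
# Certification provenance expressed with PROV.
g.add((pedigree, PROV.wasAttributedTo, EX["organisation"]))

print(g.serialize(format="turtle"))
```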
Abstract:
In 2004 and 2005 we collected samples of phytoplankton, zooplankton and macroinvertebrates in a small artificial pond in Budapest. We set up a simulation model predicting the abundance of cyclopoids, Eudiaptomus zachariasi and Ischnura pumilio by considering only temperature as it affects the abundance of the previous day's population. Phytoplankton abundance was simulated by considering not only temperature but also the abundance of the three groups mentioned. This discrete-deterministic model could generate patterns similar to those observed, and testing it on historical data was successful. However, because the model overpredicted the abundances of Ischnura pumilio and Cyclopoida at the end of the year, these results were not considered. Running the model with the data series of climate change scenarios gave us an opportunity to predict individual numbers for the period around 2050. If the model is run with the data series of the two scenarios UKHI and UKLO, which predict drastic global warming, we observe a decrease in abundance and a shift in the date at which maximum abundance occurs (excluding Ischnura pumilio, whose maximum abundance increases and occurs later), whereas under unchanged climatic conditions (BASE scenario) the change in abundance is negligible. According to the scenarios GFDL 2535, GFDL 5564 and UKTR, a transition could be noticed.
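A minimal sketch of a discrete-deterministic update of this general kind, in which the next day's abundance depends on the current abundance and that day's temperature. The bell-shaped temperature response and all parameter values are illustrative, not the study's fitted model.

```python
# Daily update: N[t+1] = N[t] + r * g(T[t]) * N[t] * (1 - N[t]/K),
# i.e. logistic growth scaled by a temperature response g(T).

def g(temp_c, t_opt=20.0, width=8.0):
    """Illustrative bell-shaped temperature response (max 1 at t_opt)."""
    return max(0.0, 1.0 - ((temp_c - t_opt) / width) ** 2)

def simulate(n0, daily_temps, r=0.2, capacity=1e4):
    """Run the discrete-deterministic model over a temperature series."""
    n = [n0]
    for temp in daily_temps:
        growth = r * g(temp) * n[-1] * (1 - n[-1] / capacity)
        n.append(max(0.0, n[-1] + growth))
    return n

# One synthetic season: temperatures rising to ~22 degC, then cooling.
temps = [10 + 12 * (1 - abs(d - 90) / 90) for d in range(180)]
series = simulate(n0=50, daily_temps=temps)
peak = max(series)
print(f"peak abundance {peak:.0f} on day {series.index(peak)}")
```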
Abstract:
Acknowledgements and funding: We would like to thank the GPs who took part in this study. We would also like to thank Marie Pitkethly and Gail Morrison for their help and support in recruiting GPs to the study. WIME was funded by the Chief Scientist Office, grant number CZH/4/610. The Health Services Research Unit, University of Aberdeen, is core funded by the Chief Scientist Office of the Scottish Government Health Directorates.