921 results for pacs: C6170K knowledge engineering techniques
Abstract:
This paper describes a number of techniques for facilitating reflective critical analysis as a means of eliciting in-depth reflections on practice. The authors have previously used similar techniques in the research context to assist practitioners in identifying and analysing the basis of their work with clients. The techniques presented in this paper have been adapted for use in social work education, in both class-based and field education contexts, and in professional supervision.
Abstract:
Hierarchical knowledge structures are frequently used within clinical decision support systems as part of the model for generating intelligent advice. The nodes in the hierarchy inevitably have varying influence on the decision-making processes, which needs to be reflected by parameters. If the model has been elicited from human experts, it is not feasible to ask them to estimate the parameters because there will be so many in even moderately sized structures. This paper describes how the parameters could be obtained from data instead, using only a small number of cases. The original method [1] is applied to a particular web-based clinical decision support system called GRiST, which uses its hierarchical knowledge to quantify the risks associated with mental-health problems. The knowledge was elicited from multidisciplinary mental-health practitioners, but the tree has several thousand nodes, all requiring an estimation of their relative influence on the assessment process. The method described in the paper shows how they can be obtained from about 200 cases instead. It greatly reduces the experts' elicitation tasks and has the potential to be generalised to similar knowledge-engineering domains where relative weightings of node siblings are part of the parameter space.
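The original method [1] is not reproduced here. As a minimal sketch of the underlying idea, recovering relative sibling weights from a small set of rated cases, the following fits non-negative weights by least squares and normalises them; all data, sizes, and names below are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical data: each row is one assessed case, each column the score
# of one child node in [0, 1]; y holds the experts' overall ratings of the
# parent node for the same cases.
X = np.array([
    [0.9, 0.2, 0.4],
    [0.1, 0.8, 0.3],
    [0.5, 0.5, 0.9],
    [0.7, 0.1, 0.6],
])
y = np.array([0.62, 0.41, 0.66, 0.55])

# Non-negative least squares recovers raw influence estimates for the
# siblings; negative values are excluded because influence is non-negative.
raw, _ = nnls(X, y)

# Normalise so the sibling weights sum to one, i.e. relative influences.
weights = raw / raw.sum()
print(weights)
```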
Abstract:
This dissertation investigates the important and current problem of modelling human expertise, an issue that arises in any computer system emulating human decision making. It is prominent in Clinical Decision Support Systems (CDSS) because of the complexity of the induction process and, in most cases, the vast number of parameters. Other issues such as human error and missing or incomplete data present further challenges. In this thesis, the Galatean Risk Screening Tool (GRiST) is used as an example of modelling clinical expertise and parameter elicitation. The tool is a mental health clinical record management system with a top layer of decision support capabilities. It is currently being deployed by several NHS mental health trusts across the UK. The aim of the research is to investigate the problem of parameter elicitation by inducing the parameters from real clinical data rather than from the human experts who provided the decision model. The induced parameters provide insight into both the relationships in the data and how the experts themselves make decisions. The outcomes further the understanding of human decision making and, in particular, help GRiST provide more accurate emulations of risk judgements. Although the algorithms and methods presented in this dissertation are applied to GRiST, they can be adopted in other human knowledge engineering domains.
Abstract:
Linked Data semantic sources, in particular DBpedia, can be used to answer many user queries. PowerAqua is an open multi-ontology Question Answering (QA) system for the Semantic Web (SW). However, the emergence of Linked Data, characterized by its openness, heterogeneity and scale, introduces a new dimension to the Semantic Web scenario, in which exploiting the relevant information to extract answers for Natural Language (NL) user queries is a major challenge. In this paper we discuss the issues and lessons learned from our experience of integrating PowerAqua as a front-end for DBpedia and a subset of Linked Data sources. As such, we go one step beyond the state of the art in end-user interfaces for Linked Data by introducing the mapping and fusion techniques needed to translate a user query across multiple sources. Our first informal experiments probe whether it is in fact feasible to obtain answers to user queries by composing information across semantic sources and Linked Data, even in its current form, where the strength of Linked Data is more a by-product of its size than of its quality. We believe our experiences can be extrapolated to a variety of end-user applications that wish to scale up to, open up, exploit and re-use what is possibly the greatest wealth of data about everything in the history of Artificial Intelligence.
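PowerAqua's mapping and fusion machinery is not shown here. As a hedged illustration of the target representation only, the sketch below runs the kind of SPARQL query that a natural-language question such as "Who was born in Berlin?" might be translated into, against the public DBpedia endpoint via the SPARQLWrapper library.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Illustrative only: the structured form a QA system might produce for
# "Who was born in Berlin?", executed against the live DBpedia endpoint.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?person WHERE {
        ?person dbo:birthPlace dbr:Berlin .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["person"]["value"])
```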
Abstract:
Most of the existing work on information integration in the Semantic Web concentrates on resolving schema-level problems. Specific issues of data-level integration (instance coreferencing, conflict resolution, handling uncertainty) are usually tackled by applying the same techniques as for ontology schema matching or by reusing solutions produced in the database domain. However, data structured according to OWL ontologies has specific features: classes are organized into a hierarchy, properties are inherited, and data constraints differ from those defined by a database schema. This paper describes how these features are exploited in our architecture KnoFuss, designed to support data-level integration of semantic annotations.
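The KnoFuss algorithms themselves are not reproduced here. As a hedged sketch of why the class hierarchy matters for data-level integration, the toy code below only compares instances whose classes are compatible, which prunes the coreferencing space; the hierarchy, instances, and threshold are all hypothetical.

```python
from difflib import SequenceMatcher

# Toy class hierarchy: each class maps to its superclasses. In OWL data
# this would come from the ontology itself.
SUPERCLASSES = {"ex:Researcher": {"ex:Person"}, "ex:Person": set()}

def compatible(cls_a, cls_b):
    """Classes are compatible if equal or one subsumes the other."""
    return (cls_a == cls_b
            or cls_b in SUPERCLASSES.get(cls_a, set())
            or cls_a in SUPERCLASSES.get(cls_b, set()))

def coreference_candidates(src, tgt, threshold=0.85):
    """Yield likely-coreferent pairs: compatible classes, similar labels."""
    for uri_a, cls_a, label_a in src:
        for uri_b, cls_b, label_b in tgt:
            if not compatible(cls_a, cls_b):
                continue  # the hierarchy prunes this comparison entirely
            if SequenceMatcher(None, label_a, label_b).ratio() >= threshold:
                yield uri_a, uri_b

src = [("ex:a1", "ex:Researcher", "J. Smith")]
tgt = [("ex:b7", "ex:Person", "J Smith"), ("ex:b9", "ex:Person", "K. Jones")]
print(list(coreference_candidates(src, tgt)))  # [('ex:a1', 'ex:b7')]
```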
Abstract:
The research is partially supported by the Russian Foundation for Basic Research (grants 06-01-81005 and 07-01-00053).
Abstract:
The work is partially supported by the Russian Foundation for Basic Studies (grant 02-01-00466).
Abstract:
The sharing of near real-time traceability knowledge in supply chains plays a central role in coordinating business operations and is a key driver of their success. However, before traceability datasets received from external partners can be integrated with datasets generated internally within an organisation, they need to be validated against information recorded for the physical goods received, as well as against bespoke rules defined to ensure uniformity, consistency and completeness within the supply chain. In this paper, we present a knowledge-driven framework for the runtime validation of critical constraints on incoming traceability datasets encapsulated as EPCIS event-based linked pedigrees. Our constraints are defined using SPARQL queries and SPIN rules. We present a novel validation architecture based on the integration of the Apache Storm framework for real-time, distributed computation with popular Semantic Web/Linked Data libraries, and exemplify our methodology on an abstraction of the pharmaceutical supply chain.
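The paper's actual SPIN rules and Storm topology are not reproduced here. As a hedged sketch of the validation step alone, the following checks one completeness constraint on a hypothetical pedigree fragment using rdflib and a SPARQL ASK query; the vocabulary is invented for illustration.

```python
from rdflib import Graph

# Hypothetical fragment of an EPCIS-style linked pedigree; the real
# vocabularies and constraints used in the paper differ.
PEDIGREE = """
@prefix ex: <http://example.org/pedigree#> .
ex:event1 a ex:ShippingEvent ;
    ex:epc "urn:epc:id:sgtin:0614141.107346.2017" ;
    ex:bizStep ex:shipping .
"""

# Completeness constraint: every shipping event must carry both an EPC
# and a business step. ASK returns true when a violating event exists.
VIOLATION = """
PREFIX ex: <http://example.org/pedigree#>
ASK {
    ?e a ex:ShippingEvent .
    FILTER NOT EXISTS { ?e ex:epc ?epc ; ex:bizStep ?step . }
}
"""

g = Graph()
g.parse(data=PEDIGREE, format="turtle")
print("constraint violated:", g.query(VIOLATION).askAnswer)  # False
```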
Abstract:
This research aimed at developing a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers; developing the framework therefore made use of systems theory and IDEF methodologies. This study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise for its life cycle. The proposed ESE classification scheme breaks down an enterprise system into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline. It also helps to apply engineering and systems tools to this emerging field, harnesses the relationships among various enterprise aspects, and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model, with each activity defined by its input, output, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
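The element and facet names below follow the abstract; representing the classification scheme as an enumerable grid is purely an illustrative sketch, not part of the framework itself.

```python
from itertools import product

# The abstract's scheme: four enterprise elements, each specified along
# four system facets, each cell subject to the same engineering process.
ELEMENTS = ("work", "resources", "decision", "information")
FACETS = ("strategy", "competency", "capacity", "structure")
PROCESS = ("specification", "analysis", "design", "implementation")

# Enumerating the 16 element-facet combinations, each a potential unit
# of ESE analysis, is one way to surface research voids systematically.
for element, facet in product(ELEMENTS, FACETS):
    print(f"{element}/{facet}: {' -> '.join(PROCESS)}")
```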
Abstract:
This paper presents a new hyper-heuristic method using Case-Based Reasoning (CBR) for solving course timetabling problems. The term hyper-heuristics has recently been employed to refer to 'heuristics that choose heuristics' rather than heuristics that operate directly on given problems. One of the overriding motivations of hyper-heuristic methods is the attempt to develop techniques that can operate with greater generality than is currently possible. The basic idea is that we maintain a case base of information about the most successful heuristics for a range of previous timetabling problems, and use that knowledge to predict the best heuristic for the new problem at hand. Knowledge discovery techniques are used to train the CBR system and improve its prediction performance. Initial results presented in this paper are encouraging, and we conclude by discussing the considerable promise of future work in this area.
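A minimal sketch of the 'heuristics that choose heuristics' idea, not the paper's system: each case stores problem features and the heuristic that worked best on that problem, and a new problem reuses the heuristic of its nearest case. Features, heuristic names, and distances are all hypothetical.

```python
import math

# Hypothetical case base: timetabling-problem features (events, rooms,
# constraint density) paired with the best-performing heuristic.
CASE_BASE = [
    ((120, 10, 0.30), "largest-degree-first"),
    ((400, 25, 0.70), "saturation-degree"),
    ((80, 8, 0.15), "largest-enrolment-first"),
]

def choose_heuristic(features):
    """Nearest-neighbour retrieval: reuse the heuristic of the most
    similar previously solved problem (features would be normalised
    in a real system so no dimension dominates the distance)."""
    return min(CASE_BASE, key=lambda case: math.dist(case[0], features))[1]

print(choose_heuristic((350, 22, 0.65)))  # -> saturation-degree
```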
Abstract:
The business philosophy of Mass Customisation (MC) implies rapid response to customer requests, high efficiency, and limited cost overheads of customisation. Furthermore, it implies that the quality benefits of the mass production paradigm are guaranteed. However, traditional quality science in manufacturing is premised on volume production of uniform products rather than on the differentiated products associated with MC. This creates quality challenges and raises questions over the suitability of standard quality engineering techniques. From an analysis of the relevant MC and quality literature, it is argued that the aims of MC are aligned with contemporary thinking on quality and that quality concepts provide insights into MC. Quality issues are considered along three dimensions: product development, order fulfilment, and customer interaction. The applicability and effectiveness of conventional quality engineering techniques are discussed, and a framework is presented which identifies key issues with respect to quality for a spectrum of MC strategies.
Abstract:
There are several tools in the literature that support innovation in organizations. Among the most cited are the so-called technology roadmapping methods, also known as TRM. However, these methods are designed primarily for organizations that adopt the market pull strategy of technology-product integration; organizations that adopt the technology push integration strategy are neglected in the literature. Furthermore, with the advent of open innovation, there is a need to consider the adoption of partnerships in the innovation process. Thus, this study proposes a technology roadmapping method, identified as the method for technology push (MTP), applicable to organizations that adopt the technology push integration strategy, such as SMEs and independent research centers in an open-innovation environment. The method was developed through action research and was assessed from two analytical standpoints: externally, via a specific literature review of its theoretical contributions, and internally, through the analysis of potential users' perceptions of the feasibility of applying MTP. The results indicate both the unique character of the method and its perceived implementation feasibility. Future research is suggested in order to validate the method in different types of organizations.
Abstract:
The objective of this research was to examine how various factors in Icelandic cod fishing can influence the quality of the raw material, using traceability systems to link these factors, and how to transfer that knowledge and those techniques to the Brazilian seafood industry. Data were collected in 2007 and analysed to find functional relationships between various quality factors. The analysis showed that there is a correlation between the number of parasites in the fillets and the location of the fishing ground. It also showed that fishing ground and haul volume can influence gaping, and that fillet yield differs between fishing grounds. These conclusions could only be drawn because of the ability to trace the fish from catch all the way through processing. Recommendations drawn from this research for the Brazilian Competent Authority are to revise the country's fisheries legislation in order to enable the implementation of a traceability system that could be used as a tool to improve the quality of the raw material.
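The 2007 Icelandic dataset is not available here. As a hedged sketch of the kind of analysis the traceability links make possible, the following computes per-ground summaries and a volume-yield correlation on invented records with pandas.

```python
import pandas as pd

# Invented traceability records standing in for the study's 2007 data.
records = pd.DataFrame({
    "fishing_ground": ["A", "A", "B", "B", "C", "C"],
    "haul_volume_kg": [800, 950, 1500, 1600, 600, 700],
    "parasites_per_fillet": [2, 3, 7, 8, 1, 2],
    "fillet_yield_pct": [46.0, 45.5, 42.0, 41.5, 47.0, 46.5],
})

# Per-ground means: the comparison that traceability-linked data enables.
print(records.groupby("fishing_ground")[
    ["parasites_per_fillet", "fillet_yield_pct"]].mean())

# Correlation between haul volume and fillet yield across all records.
print(records["haul_volume_kg"].corr(records["fillet_yield_pct"]))
```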