61 results for knowledge system
Abstract:
Classification of metamorphic rocks is normally carried out using a poorly defined, subjective classification scheme, making this an area in which many undergraduate geologists experience difficulties. An expert system to assist in such classification is presented which is capable of classifying rocks and also giving further details about a particular rock type. A mixed knowledge representation is used, with frame, semantic and production rule systems available. Classification in the domain requires that different facets of a rock be classified. To implement this, rocks are represented by 'context' frames with slots representing each facet. Slots are satisfied by calling a pre-defined ruleset to carry out the necessary inference. The inference is handled by an interpreter which uses a dependency graph representation for the propagation of evidence. Uncertainty is handled by the system using a combination of the MYCIN certainty factor system and the Dempster-Shafer range mechanism. This allows for positive and negative reasoning, with rules capable of representing necessity and sufficiency of evidence, whilst also allowing the implementation of an alpha-beta pruning algorithm to guide question selection during inference. The system also utilizes a semantic net type structure to allow the expert to encode simple relationships between terms, enabling rules to be written at a sensible level of abstraction. Using frames to represent rock types where subclassification is possible allows the knowledge base to be built in a modular fashion, with subclassification frames only defined once the higher level of classification is functioning. Rulesets can similarly be added in modular fashion, with the individual rules being essentially declarative, allowing for simple updating and maintenance. The knowledge base so far developed for metamorphic classification serves to demonstrate the performance of the interpreter design whilst also moving some way towards providing a useful assistant to the non-expert metamorphic petrologist. The system demonstrates the possibilities for a fully developed knowledge base to handle the classification of igneous, sedimentary and metamorphic rocks. The current knowledge base and interpreter have been evaluated by potential users and experts. The results of the evaluation show that the system performs to an acceptable level and should be of use as a tool for both undergraduates and researchers from outside the metamorphic petrography field.
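As a rough illustration of the kind of machinery this abstract describes, the sketch below shows a 'context' frame whose slots are filled by rulesets and whose evidence is combined with MYCIN-style certainty factors. All names, rules and numbers are invented for illustration; this is not the thesis implementation.

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

class ContextFrame:
    """A rock is represented by slots (facets), each satisfied by a ruleset."""
    def __init__(self, name, slots):
        self.name = name
        self.slots = slots                    # slot name -> list of rules
        self.values = {}                      # slot name -> (conclusion, certainty)

    def classify(self, evidence):
        for slot, ruleset in self.slots.items():
            conclusion, cf = None, 0.0
            for predicate, concl, rule_cf in ruleset:
                if predicate(evidence):       # rule fires on the available evidence
                    conclusion = concl
                    cf = combine_cf(cf, rule_cf)
            self.values[slot] = (conclusion, cf)
        return self.values

# Invented ruleset for one facet of a hypothetical 'schist' frame.
texture_rules = [
    (lambda e: e.get("foliated"), "schistose", 0.8),
    (lambda e: e.get("grain_size") == "medium", "schistose", 0.3),
]
schist = ContextFrame("schist", {"texture": texture_rules})
print(schist.classify({"foliated": True, "grain_size": "medium"}))
# -> {'texture': ('schistose', ~0.86)}
```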
Abstract:
In recent years, freshwater fish farmers have come under increasing pressure from the Water Authorities to control the quality of their farm effluents. This project aimed to investigate methods of treating aquacultural effluent in an efficient and cost-effective manner, and to incorporate the knowledge gained into an Expert System which could then be used in an advice service to farmers. From the results of this research it was established that sedimentation and the use of low pollution diets are the only cost effective methods of controlling the quality of fish farm effluents. Settlement has been extensively investigated and it was found that the removal of suspended solids in a settlement pond is only likely to be effective if the inlet solids concentration is in excess of 8 mg/litre. The probability of good settlement can be enhanced by keeping the ratio of length/retention time (a form of mean fluid velocity) below 4.0 metres/minute. The removal of BOD requires inlet solids concentrations in excess of 20 mg/litre to be effective, and this is seldom attained on commercial fish farms. Settlement, generally, does not remove appreciable quantities of ammonia from effluents, but algae can absorb ammonia by nutrient uptake under certain conditions. The use of low pollution, high performance diets gives pollutant yields which are low when compared with published figures obtained by many previous workers. Two Expert Systems were constructed, both of which diagnose possible causes of poor effluent quality on fish farms and suggest solutions. The first system uses knowledge gained from a literature review and the second employs the knowledge obtained from this project's experimental work. Consent details for over 100 fish farms were obtained from the public registers kept by the Water Authorities. Large variations in policy from one Authority to the next were found. These data have been compiled in a computer file for ease of comparison.
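The settlement thresholds reported above could be encoded as simple advisory rules, as the hedged sketch below illustrates. The function and message wording are invented; the project's actual Expert Systems are not reproduced here.

```python
# Hedged sketch: the report's settlement findings expressed as simple rules.
def assess_settlement(inlet_solids_mg_l, pond_length_m, retention_time_min,
                      inlet_bod_mg_l=None):
    """Return advisory messages based on the thresholds reported above."""
    advice = []
    if inlet_solids_mg_l < 8:
        advice.append("Inlet solids below 8 mg/l: solids removal unlikely to be effective.")
    mean_velocity = pond_length_m / retention_time_min   # length/retention time, m/min
    if mean_velocity > 4.0:
        advice.append("Length/retention ratio above 4.0 m/min: reduce flow or enlarge pond.")
    if inlet_bod_mg_l is not None and inlet_bod_mg_l < 20:
        advice.append("Inlet BOD below 20 mg/l: settlement will not remove appreciable BOD.")
    return advice or ["Settlement likely to be worthwhile under the reported criteria."]

print(assess_settlement(inlet_solids_mg_l=6, pond_length_m=30,
                        retention_time_min=6, inlet_bod_mg_l=15))
```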
Abstract:
The work described was carried out as part of a collaborative Alvey software engineering project (project number SE057). The project collaborators were the Inter-Disciplinary Higher Degrees Scheme of the University of Aston in Birmingham, BIS Applied Systems Ltd. (BIS) and the British Steel Corporation. The aim of the project was to investigate the potential application of knowledge-based systems (KBSs) to the design of commercial data processing (DP) systems. The work was primarily concerned with BIS's Structured Systems Design (SSD) methodology for DP systems development and how users of this methodology could be supported using KBS tools. The problems encountered by users of SSD are discussed and potential forms of computer-based support for inexpert designers are identified. An architecture for a support environment for SSD, the Intellipse system, is proposed, based on the integration of KBS and non-KBS tools for individual design tasks within SSD. The Intellipse system has two modes of operation - Advisor and Designer. The design, implementation and user-evaluation of Advisor are discussed. The results of a Designer feasibility study, the aim of which was to analyse major design tasks in SSD to assess their suitability for KBS support, are reported. The potential role of KBS tools in the domain of database design is discussed. The project involved extensive knowledge engineering sessions with expert DP systems designers. Some practical lessons in relation to KBS development are derived from this experience. The nature of the expertise possessed by expert designers is discussed. The need for operational KBSs to be built to the same standards as other commercial and industrial software is identified. A comparison between current KBS and conventional DP systems development is made. On the basis of this analysis, a structured development method for KBSs is proposed - the POLITE model. Some initial results of applying this method to KBS development are discussed. Several areas for further research and development are identified.
Abstract:
Initially this thesis examines the various mechanisms by which technology is acquired within anodizing plants. In so doing, the history of the evolution of anodizing technology is recorded, with particular reference to the growth of major markets and to the contribution of the marketing efforts of the aluminium industry. The business economics of various types of anodizing plants are analyzed. Consideration is also given to the impact of developments in anodizing technology on production economics and market growth. The economic costs associated with work rejected for process defects are considered. Recent changes in the industry have created conditions whereby information technology has a potentially important role to play in retaining existing knowledge. One such contribution is exemplified by the expert system which has been developed for the identification of anodizing process defects. Instead of using a 'rule-based' expert system, a commercial neural networks program has been adapted for the task. The advantage of neural networks over 'rule-based' systems is that they are better suited to production problems, since the actual conditions prevailing when the defect was produced are often not known with certainty. In using the expert system, the user first identifies the process stage at which the defect probably occurred and is then directed to a file enabling the actual defects to be identified. After making this identification, the user can consult a database which gives a more detailed description of the defect, advises on remedial action and provides a bibliography of papers relating to the defect. The database uses a proprietary hypertext program, which also provides rapid cross-referencing to similar types of defect. Additionally, a graphics file can be accessed which (where appropriate) will display a graphic of the defect on screen. A total of 117 defects are included, together with 221 literature references, supplemented by 48 cross-reference hyperlinks. The main text of the thesis contains 179 literature references. (DX186565)
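The sketch below illustrates the general idea of a neural-network defect classifier trained on process conditions, in the spirit of the adapted commercial program described above. Features, defect labels and data are invented, and scikit-learn stands in for the proprietary software.

```python
# Illustrative only: a small feed-forward network mapping invented process
# conditions to invented defect classes.
from sklearn.neural_network import MLPClassifier

# Each row: [bath_temp_C, current_density_A_dm2, anodizing_time_min, agitation_ok]
X = [
    [20, 1.5, 30, 1],
    [28, 1.5, 30, 1],   # high bath temperature
    [20, 3.0, 30, 0],   # high current density, poor agitation
    [20, 1.5, 30, 1],
    [29, 1.6, 35, 1],
    [21, 3.2, 25, 0],
]
y = ["no_defect", "soft_coating", "burning", "no_defect", "soft_coating", "burning"]

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[27, 1.5, 30, 1]]))   # e.g. ['soft_coating'] on this toy data
```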
Abstract:
Hierarchical knowledge structures are frequently used within clinical decision support systems as part of the model for generating intelligent advice. The nodes in the hierarchy inevitably have varying influence on the decision-making processes, which needs to be reflected by parameters. If the model has been elicited from human experts, it is not feasible to ask them to estimate the parameters because there will be so many in even moderately-sized structures. This paper describes how the parameters could be obtained from data instead, using only a small number of cases. The original method [1] is applied to a particular web-based clinical decision support system called GRiST, which uses its hierarchical knowledge to quantify the risks associated with mental-health problems. The knowledge was elicited from multidisciplinary mental-health practitioners but the tree has several thousand nodes, all requiring an estimation of their relative influence on the assessment process. The method described in the paper shows how they can be obtained from about 200 cases instead. It greatly reduces the experts' elicitation tasks and has the potential to be generalised to similar knowledge-engineering domains where relative weightings of node siblings are part of the parameter space.
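The sketch below illustrates the general idea of estimating relative sibling weights from a modest number of cases by regression; it is not the method of [1], and the data are simulated.

```python
# Sketch of the general idea: regress a parent's rated score on its children's
# scores across cases, then normalise the coefficients into relative weights.
import numpy as np

def estimate_sibling_weights(child_scores, parent_scores):
    """child_scores: (n_cases, n_children) array; parent_scores: (n_cases,) array."""
    coef, *_ = np.linalg.lstsq(child_scores, parent_scores, rcond=None)
    coef = np.clip(coef, 0, None)             # influence weights assumed non-negative
    return coef / coef.sum()                  # relative weights of the siblings

rng = np.random.default_rng(0)
children = rng.uniform(0, 1, size=(200, 3))          # ~200 cases, 3 sibling nodes
true_w = np.array([0.6, 0.3, 0.1])
parent = children @ true_w + rng.normal(0, 0.02, 200)
print(estimate_sibling_weights(children, parent))    # approximately [0.6, 0.3, 0.1]
```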
Abstract:
Mental-health risk assessment practice in the UK is mainly paper-based, with little standardisation in the tools that are used across the Services. The tools that are available tend to rely on minimal sets of items and unsophisticated scoring methods to identify at-risk individuals. This means the reasoning by which an outcome has been determined remains uncertain. Consequently, there is little provision for: including the patient as an active party in the assessment process, identifying underlying causes of risk, and effecting shared decision-making. This thesis develops a tool-chain for the formulation and deployment of a computerised clinical decision support system for mental-health risk assessment. The resultant tool, GRiST, will be based on consensual domain expert knowledge that will be validated as part of the research, and will incorporate a proven psychological model of classification for risk computation. GRiST will have an ambitious remit of being a platform that can be used over the Internet, by both the clinician and the layperson, in multiple settings, and in the assessment of patients with varying demographics. Flexibility will therefore be a guiding principle in the development of the platform, to the extent that GRiST will present an assessment environment that is tailored to the circumstances in which it finds itself. XML and XSLT will be the key technologies that help deliver this flexibility.
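A minimal sketch of how XML plus XSLT might tailor an assessment to its audience, as the abstract proposes, is shown below; the element and attribute names are invented rather than GRiST's.

```python
# Hedged sketch: filter an assessment tree by audience with an XSLT parameter.
from lxml import etree

tree_xml = etree.XML(b"""
<assessment>
  <question audience="clinician" id="q1">Detailed risk formulation</question>
  <question audience="layperson" id="q2">How are you feeling today?</question>
</assessment>""")

xslt = etree.XML(b"""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="audience"/>
  <xsl:template match="/assessment">
    <form><xsl:copy-of select="question[@audience=$audience]"/></form>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
tailored = transform(tree_xml, audience=etree.XSLT.strparam("layperson"))
print(tailored)   # a <form> containing only the layperson question
```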
Abstract:
This dissertation investigates the very important and current problem of modelling human expertise. This is an apparent issue in any computer system emulating human decision making. It is prominent in Clinical Decision Support Systems (CDSS) due to the complexity of the induction process and the vast number of parameters in most cases. Other issues such as human error and missing or incomplete data present further challenges. In this thesis, the Galatean Risk Screening Tool (GRiST) is used as an example of modelling clinical expertise and parameter elicitation. The tool is a mental health clinical record management system with a top layer of decision support capabilities. It is currently being deployed by several NHS mental health trusts across the UK. The aim of the research is to investigate the problem of parameter elicitation by inducing the parameters from real clinical data rather than from the human experts who provided the decision model. The induced parameters provide an insight into both the data relationships and how experts make decisions themselves. The outcomes help further understand human decision making and, in particular, help GRiST provide more accurate emulations of risk judgements. Although the algorithms and methods presented in this dissertation are applied to GRiST, they can be adopted for other human knowledge engineering domains.
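The hedged sketch below shows the style of hierarchical, weighted risk aggregation that a galatean-type model implies; the tree structure, node names and weights are purely illustrative, not the deployed model.

```python
# Illustrative only: each branch node combines its children by relative weights.
def node_risk(node, answers):
    """node: ('leaf', name) or ('branch', [(child, weight), ...]); answers: name -> value in [0, 1]."""
    kind = node[0]
    if kind == "leaf":
        return answers[node[1]]
    return sum(weight * node_risk(child, answers) for child, weight in node[1])

suicide = ("branch", [
    (("leaf", "current_intention"), 0.5),
    (("leaf", "past_attempts"), 0.3),
    (("branch", [(("leaf", "hopelessness"), 0.6), (("leaf", "insomnia"), 0.4)]), 0.2),
])
print(node_risk(suicide, {"current_intention": 0.7, "past_attempts": 0.4,
                          "hopelessness": 0.8, "insomnia": 0.2}))   # 0.582
```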
Abstract:
Automated negotiation is widely applied in various domains. However, the development of such systems is a complex knowledge and software engineering task, so a methodology to guide it would be helpful. Unfortunately, none of the existing methodologies can offer sufficient, detailed support for such system development. To remove this limitation, this paper develops a new methodology made up of: (1) a generic framework (architectural pattern) for the main task, and (2) a library of modular and reusable design patterns (templates) for subtasks. Thus, it is much easier to build a negotiating agent by assembling these standardised components rather than reinventing the wheel each time. Moreover, since these patterns are identified from a wide variety of existing negotiating agents (especially high-impact ones), they can also improve the quality of the final systems developed. In addition, our methodology reveals what types of domain knowledge need to be input into the negotiating agents. This in turn provides a basis for developing techniques to acquire the domain knowledge from human users. This is important because negotiation agents act faithfully on behalf of their human users and thus the relevant domain knowledge must be acquired from them. Finally, our methodology is validated with one high-impact system.
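The sketch below illustrates the flavour of the methodology: a generic negotiating-agent framework into which reusable strategy templates are plugged. Class names and strategies are invented, not taken from the paper.

```python
# Illustrative skeleton: generic framework plus pluggable sub-task templates.
from dataclasses import dataclass

@dataclass
class Offer:
    price: float

class ConcessionStrategy:
    """Reusable sub-task template: decide the next counter-offer."""
    def __init__(self, start, reserve, step):
        self.current, self.reserve, self.step = start, reserve, step
    def next_offer(self):
        self.current = max(self.reserve, self.current - self.step)
        return Offer(self.current)

class AcceptanceStrategy:
    """Reusable sub-task template: decide whether to accept an opponent offer."""
    def __init__(self, threshold):
        self.threshold = threshold
    def accept(self, offer):
        return offer.price >= self.threshold

class NegotiatingAgent:
    """Generic framework: the negotiation loop, independent of the strategies used."""
    def __init__(self, concession, acceptance):
        self.concession, self.acceptance = concession, acceptance
    def respond(self, opponent_offer):
        if self.acceptance.accept(opponent_offer):
            return ("accept", opponent_offer)
        return ("counter", self.concession.next_offer())

seller = NegotiatingAgent(ConcessionStrategy(start=100, reserve=70, step=5),
                          AcceptanceStrategy(threshold=80))
print(seller.respond(Offer(price=75)))   # counters with Offer(price=95)
```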
Abstract:
With the growth of the multi-national corporation (MNC) has come the need to understand how parent companies transfer knowledge to, and manage the operations of, their subsidiaries. This is of particular interest to manufacturing companies transferring their operations overseas. Japanese companies in particular have been pioneering in the development of techniques such as Kaizen, and elements of the Toyota Production System (TPS) such as Kanban, which can be useful tools for transferring the ethos of Japanese manufacturing and maintaining quality and control in overseas subsidiaries. Much has been written about the process of transferring Japanese manufacturing techniques, but much less is understood about how the subsidiaries themselves – which are required to make use of such techniques – actually acquire and incorporate them into their operations. This research therefore takes the perspective of the subsidiary in examining how knowledge of manufacturing techniques is transferred from the parent company and absorbed within the subsidiary. A particularly relevant theme is how subsidiaries both replicate and adapt knowledge from parents, and the circumstances in which replication or adaptation occurs. However, it is shown that there is a lack of research which takes an in-depth look at these processes from the perspective of the participants themselves. This is particularly important, as much of the knowledge literature argues that knowledge is best viewed as enacted and learned in practice – and therefore transferred in person – rather than by the transfer of abstract and de-contextualised information. What is needed, therefore, is further research which makes an in-depth examination of what happens at the subsidiary level for this transfer process to occur. There is clearly a need to take a practice-based view to understanding how local managers and operatives incorporate knowledge about manufacturing techniques into their working practices. In-depth qualitative research was therefore conducted in the subsidiary of a Japanese multinational, Gambatte Corporation, involving three main manufacturing initiatives (or philosophies), namely 'TPS', 'TPM' and 'TS'. The case data were derived from 52 in-depth interviews with project members, moderate-participant observations, and documentation, and are presented and analysed in episode format. This study contributes to our understanding of knowledge transfer in relation to the approaches and circumstances of adaptation and replication of knowledge within the subsidiary, how the whole process is developed, and also how 'innovation' takes place. The study further suggests that the process of knowledge transfer can be explained as a process of Reciprocal Provider-Learner Exchange, which can be linked to Experiential Learning Theory.
Abstract:
An applied psychological framework for coping with performance uncertainty in sport and work systems is presented. The theme of personal control serves to integrate ideas prevalent in industrial and organisational psychology, the stress literature and labour process theory. These commonly focus on the promotion of tacit knowledge and learned resourcefulness in individual performers. Finally, data from an empirical evaluation of a development training programme to facilitate self-regulation skills in professional athletes are briefly highlighted.
Abstract:
Procedural knowledge is the knowledge required to perform certain tasks, and it forms an important part of expertise. A major source of procedural knowledge is natural language instructions. While these readable instructions have been useful learning resources for humans, they are not interpretable by machines. Automatically acquiring procedural knowledge in machine-interpretable formats from instructions has become an increasingly popular research topic due to its potential applications in process automation, yet it remains insufficiently addressed. This paper presents an approach and an implemented system to assist users in automatically acquiring procedural knowledge in structured forms from instructions. We introduce a generic semantic representation of procedures for analysing instructions, using which natural language processing techniques are applied to automatically extract structured procedures from instructions. The method is evaluated in three domains to justify the generality of the proposed semantic representation as well as the effectiveness of the implemented automatic system.
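The sketch below shows one possible structured form for an extracted procedure, together with a deliberately naive extraction step; the representation is a simplification and is not the paper's semantic representation.

```python
# Illustrative only: a toy structured-procedure format and a crude extractor.
from dataclasses import dataclass, field
import re

@dataclass
class Step:
    action: str                       # main verb of the instruction
    target: str                       # object acted upon
    modifiers: list = field(default_factory=list)

def naive_extract(instruction_text):
    """Very rough verb-object split per sentence; a real system would use NLP parsing."""
    steps = []
    for sentence in re.split(r"[.;]\s*", instruction_text.strip(".")):
        words = sentence.split()
        if not words:
            continue
        steps.append(Step(action=words[0].lower(),
                          target=" ".join(words[1:3]),
                          modifiers=words[3:]))
    return steps

for step in naive_extract("Preheat the oven to 180 degrees. Whisk the eggs until fluffy."):
    print(step)
```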
Abstract:
The semantic web vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic markup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which takes queries expressed in natural language and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). We say that AquaLog is portable because the configuration time required to customize the system for a particular ontology is negligible. AquaLog presents an elegant solution in which different strategies are combined in a novel way. It makes use of the GATE NLP platform, string metric algorithms, WordNet and a novel ontology-based relation similarity service to make sense of user queries with respect to the target KB. Moreover, it also includes a learning component, which ensures that the performance of the system improves over time, in response to the particular community jargon used by end users.
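A much-simplified sketch of the term-matching idea follows: a string metric plus a small synonym table (standing in for WordNet) maps a query term onto an ontology relation. The relation names are invented.

```python
# Illustrative only: lexical lookup first, then fall back to a string metric.
from difflib import SequenceMatcher

ONTOLOGY_RELATIONS = ["works-for", "has-author", "located-in", "collaborates-with"]
SYNONYMS = {"wrote": "has-author", "employed-by": "works-for"}   # stand-in for WordNet

def match_relation(query_term):
    query_term = query_term.lower()
    if query_term in SYNONYMS:
        return SYNONYMS[query_term], 1.0
    scored = [(SequenceMatcher(None, query_term, rel).ratio(), rel)
              for rel in ONTOLOGY_RELATIONS]
    score, best = max(scored)
    return best, score

print(match_relation("wrote"))       # ('has-author', 1.0) via the synonym table
print(match_relation("location"))    # picks 'located-in' by string similarity
```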
Abstract:
The semantic web (SW) vision is one in which rich, ontology-based semantic markup will become widely available. The availability of semantic markup on the web opens the way to novel, sophisticated forms of question answering. AquaLog is a portable question-answering system which takes queries expressed in natural language (NL) and an ontology as input, and returns answers drawn from one or more knowledge bases (KBs). AquaLog presents an elegant solution in which different strategies are combined in a novel way. Its novel ontology-based relation similarity service makes sense of user queries.
Abstract:
This paper describes the knowledge elicitation and knowledge representation aspects of a system being developed to help with the design and maintenance of relational databases. The domain is large and contains few purely algorithmic components. In addition, the domain contains multiple experts, but any given expert's knowledge of this large domain is only partial. The paper discusses the methods and techniques used for knowledge elicitation, which was based on a "broad and shallow" approach at first, moving to a "narrow and deep" one later, and describes the models used for knowledge representation, which were based on a layered "generic and variants" approach.
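The layered "generic and variants" idea can be pictured as a base model that holds shared design knowledge, with variants overriding only what differs, as in the hedged sketch below (attributes invented for illustration).

```python
# Illustrative only: a generic layer of shared knowledge plus variant overrides.
class GenericModel:
    """Base layer: knowledge common to all database designs of this kind."""
    defaults = {"normal_form": "3NF", "index_primary_keys": True, "partitioning": None}
    def __init__(self, **overrides):
        self.settings = {**self.defaults, **overrides}

class HighVolumeVariant(GenericModel):
    """Variant layer: only the deviations from the generic model are stated."""
    defaults = {**GenericModel.defaults,
                "partitioning": "by_date", "denormalise_hot_tables": True}

print(GenericModel().settings)
print(HighVolumeVariant().settings)
```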
Abstract:
This paper starts from the viewpoint that enterprise risk management (ERM) is a specific application of knowledge in order to control deviations from strategic objectives, shareholders' values and stakeholders' relationships. The study looks for insights into how the application of knowledge management processes can improve the implementation of ERM. The article presents the preliminary results of a survey on this topic carried out in the financial services sector, extending a previous pilot study that covered retail banking only. Five hypotheses about the relationship of knowledge management variables to the perceived value of ERM implementation were considered. The survey results show that the two people-related variables, perceived quality of communication among groups and perceived quality of knowledge sharing, were positively associated with the perceived value of ERM implementation. However, the results did not support a positive association for the three variables more related to technology, namely network capacity for connecting people (which was marginally significant), risk management information system functionality and perceived integration of the information systems. Perceived quality of communication among groups appeared to be clearly the most significant of these five factors in affecting the perceived value of ERM implementation.