853 results for Expert Systems Building Tools
Abstract:
The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm that can simultaneously satisfy many design goals thanks to the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
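As a rough illustration of the kind of trade-off search described above, here is a minimal Python sketch of Pareto-front selection over hardened software variants. The candidate configurations and their numbers are hypothetical, not taken from the paper, and the paper itself uses NSGA-II rather than this plain dominance filter:

```python
# Minimal sketch: keep only the Pareto-optimal hardened variants over
# three objectives (fault coverage up, code overhead and slowdown down).
# All candidate names and figures below are invented for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class Variant:
    name: str              # which parts of the code were protected
    fault_coverage: float  # estimated fraction of faults detected (maximize)
    code_overhead: float   # code-size overhead vs. unhardened (minimize)
    slowdown: float        # execution-time overhead (minimize)

def dominates(a: Variant, b: Variant) -> bool:
    """True if `a` is at least as good as `b` on every objective and
    strictly better on at least one."""
    ge = (a.fault_coverage >= b.fault_coverage and
          a.code_overhead <= b.code_overhead and
          a.slowdown <= b.slowdown)
    gt = (a.fault_coverage > b.fault_coverage or
          a.code_overhead < b.code_overhead or
          a.slowdown < b.slowdown)
    return ge and gt

def pareto_front(variants: List[Variant]) -> List[Variant]:
    return [v for v in variants
            if not any(dominates(u, v) for u in variants if u is not v)]

candidates = [
    Variant("no hardening",          0.10, 0.00, 0.00),
    Variant("duplicate all vars",    0.92, 0.80, 0.65),
    Variant("protect AES key path",  0.75, 0.25, 0.15),
    Variant("protect control flow",  0.60, 0.30, 0.20),
]
for v in pareto_front(candidates):
    print(v.name, v.fault_coverage, v.code_overhead, v.slowdown)
```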
Abstract:
"Contract number 99-7-4646-04-142-01."
Abstract:
These are the full proceedings of the conference.
Abstract:
Expert systems, and artificial intelligence more generally, can provide a useful means of representing decision-making processes. By linking expert systems software to simulation software, these decision-making processes can be effectively incorporated into a simulation model. This paper demonstrates how a commercial off-the-shelf simulation package (Witness) can be linked to an expert systems package (XpertRule) through a Visual Basic interface. The methodology adopted could be used for models, and possibly software, other than those presented here.
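Witness and XpertRule are commercial packages, so the following is only a generic sketch of the linkage pattern described: a simulation loop that delegates decision points to a small production-rule component through a thin interface. All names and rules here are hypothetical.

```python
# Generic sketch of linking a simulation loop to a rule-based decision
# component, in the spirit of the Witness/XpertRule/Visual Basic bridge
# described in the abstract. All names and rules are hypothetical.

def expert_decision(facts: dict) -> str:
    """Stand-in for the expert system: a few production rules mapping
    the current simulation state to a routing decision."""
    if facts["queue_length"] > 10:
        return "open_second_station"
    if facts["machine_down"]:
        return "reroute_to_backup"
    return "continue"

def simulate(steps: int) -> None:
    state = {"queue_length": 0, "machine_down": False}
    for t in range(steps):
        state["queue_length"] += 1       # toy arrival process
        action = expert_decision(state)  # the "interface" call
        if action == "open_second_station":
            state["queue_length"] = max(0, state["queue_length"] - 5)
        # ... other actions would update the model state here

simulate(50)
```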
Abstract:
Health and safety policies may be regarded as the cornerstone of positive prevention of occupational accidents and diseases. The Health and Safety at Work etc. Act 1974 makes it a legal duty for employers to prepare and revise a written statement of a general policy with respect to the health and safety at work of employees, as well as the organisation and arrangements for carrying out that policy. Despite their importance and the legal requirement to prepare them, health and safety policies have been found, in a large number of plastics processing companies (particularly small companies), to be poorly prepared and inadequately implemented and monitored. An important cause of these inadequacies is the lack of the health and safety knowledge and expertise necessary to prepare, implement, and monitor policies. One possible remedy is to investigate the feasibility of using computers to develop expert system programs that simulate the health and safety (HS) experts' task of preparing the policies and assist companies in implementing and monitoring them. Such programs use artificial intelligence (AI) techniques to solve this sort of problem, which is heuristic in nature and requires symbolic reasoning. Expert systems have been used successfully in a variety of fields such as medicine and engineering. An important phase in assessing the feasibility of developing such systems is knowledge engineering, which consists of identifying the knowledge required, then eliciting, structuring, and representing it in an appropriate computer programming language.
Abstract:
This paper explores the use of the optimization procedures in SAS/OR software with application to the ordered weighted averaging (OWA) operators of decision-making units (DMUs). The OWA operator, originally introduced by Yager (IEEE Trans Syst Man Cybern 18(1):183-190, 1988), has gained much interest among researchers, and many applications have been proposed in areas such as decision making, expert systems, data mining, approximate reasoning, and fuzzy systems and control. SAS, in turn, is powerful software capable of running various optimization tools, such as linear and non-linear programming with all types of constraints. To facilitate the use of the OWA operator by SAS users, code was implemented. The SAS macro developed in this paper selects the criteria and alternatives from a SAS dataset and calculates a set of OWA weights. An example is given to illustrate the features of the SAS/OWA software. © Springer-Verlag 2009.
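The OWA aggregation itself is easy to state: sort the argument values in descending order and take a weighted sum with a fixed weight vector that sums to one. The paper implements this as a SAS macro; the sketch below shows the same computation in Python for concreteness, with illustrative weights:

```python
# Yager's OWA operator: reorder the arguments in descending order,
# then take the dot product with a fixed weight vector summing to 1.

def owa(values, weights):
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("OWA weights must sum to 1")
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

scores = [0.6, 0.9, 0.3, 0.7]   # one alternative's criteria scores
weights = [0.4, 0.3, 0.2, 0.1]  # illustrative "optimistic" weights
print(owa(scores, weights))     # 0.4*0.9 + 0.3*0.7 + 0.2*0.6 + 0.1*0.3 = 0.72
```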
Abstract:
In recent years, freshwater fish farmers have come under increasing pressure from the Water Authorities to control the quality of their farm effluents. This project aimed to investigate methods of treating aquacultural effluent in an efficient and cost-effective manner, and to incorporate the knowledge gained into an Expert System which could then be used in an advice service to farmers. From the results of this research it was established that sedimentation and the use of low-pollution diets are the only cost-effective methods of controlling the quality of fish farm effluents. Settlement was investigated extensively, and it was found that the removal of suspended solids in a settlement pond is only likely to be effective if the inlet solids concentration is in excess of 8 mg/litre. The probability of good settlement can be enhanced by keeping the ratio of length/retention time (a form of mean fluid velocity) below 4.0 metres/minute. The removal of BOD requires inlet solids concentrations in excess of 20 mg/litre to be effective, and this is seldom attained on commercial fish farms. Settlement generally does not remove appreciable quantities of ammonia from effluents, but algae can absorb ammonia by nutrient uptake under certain conditions. The use of low-pollution, high-performance diets gives pollutant yields which are low compared with the published figures of many previous workers. Two Expert Systems were constructed, both of which diagnose possible causes of poor effluent quality on fish farms and suggest solutions. The first system uses knowledge gained from a literature review and the second employs the knowledge obtained from this project's experimental work. Consent details for over 100 fish farms were obtained from the public registers kept by the Water Authorities. Large variations in policy from one Authority to the next were found. These data have been compiled in a computer file for ease of comparison.
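The settlement findings reduce to a few numeric thresholds, which is exactly the kind of knowledge such an expert system encodes. Below is a minimal sketch of diagnostic rules built only from the figures quoted in this abstract; the function and its interface are invented for illustration:

```python
# Minimal diagnostic rules using the thresholds quoted in the abstract.
# A real expert system of this kind would also chain rules and explain
# its reasoning; this sketch just checks the three quoted conditions.

def assess_settlement(inlet_solids_mg_l: float,
                      length_over_retention_m_min: float) -> list:
    advice = []
    if inlet_solids_mg_l <= 8.0:
        advice.append("Solids removal unlikely to be effective "
                      "(inlet solids must exceed 8 mg/litre).")
    if inlet_solids_mg_l <= 20.0:
        advice.append("BOD removal unlikely (needs inlet solids above "
                      "20 mg/litre, seldom attained on commercial farms).")
    if length_over_retention_m_min >= 4.0:
        advice.append("Keep length/retention time below 4.0 metres/minute "
                      "to improve the chance of good settlement.")
    return advice or ["Settlement conditions look favourable."]

print(assess_settlement(inlet_solids_mg_l=12.0,
                        length_over_retention_m_min=5.2))
```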
Abstract:
Ignorance of user factors can be seen as one of the non-technical issues contributing to expert system failure. An expert advisory system is built for non-expert users, so user acceptance is a very important factor in its successful implementation. Even if an expert advisory system satisfactorily represents the expertise in the domain, there still remains the question: "Will the end-users use the system?" This paper addresses users' issues by analysing their reactions to an expert advisory system called ADGAME, developed to help its users make better decisions in playing a competitive business game. Two experiments with ADGAME have been carried out. The results show that, when use of the expert advisory system is optional, there is considerable reluctance to use it, particularly amongst the "worst" potential users. Users also doubt the potential benefits in terms of improved learning and confidence in the decisions made. Strangely, the one positive expectation users had, that the system would save them time, proved not to be the case in practice; ADGAME appears to improve users' effectiveness rather than their efficiency. © 1995.
Abstract:
Traditional Chinese Medicine (TCM) has been actively researched through various approaches, including computational techniques. A review of the basic elements of TCM is provided to illuminate the various challenges and advances in its study using computational methods. Information on various TCM formulations, in particular resources on databases of TCM formulations and their integration with Western medicine, is analyzed in several facets, such as TCM classifications, types of databases, and mining tools. Aspects of computational TCM diagnosis, namely inspection, auscultation, and pulse analysis, as well as TCM expert systems, are reviewed in terms of their benefits and drawbacks. Various approaches to exploring relationships among TCM components and to finding genes/proteins related to TCM symptom complexes are also studied. This survey provides a summary of the advances in computational approaches for TCM and will be useful for future knowledge discovery in this area. © 2007 Elsevier Ireland Ltd. All rights reserved.
Abstract:
This paper discusses demand and supply chain management and examines how artificial intelligence techniques and RFID technology can enhance the responsiveness of the logistics workflow. The proposed system is expected to have a significant impact on the performance of logistics networks by virtue of its capability to adapt to unexpected supply and demand changes in a volatile marketplace, with its responsiveness enabled by Radio Frequency Identification (RFID) technology. Recent studies have found that RFID and artificial intelligence techniques drive the development of total solutions in the logistics industry. Apart from tracking the movement of goods, RFID can play an important role in reflecting the inventory levels of various distribution areas. In today's globalized industrial environment, the physical logistics operations and the associated flow of information are the essential elements for companies to realize an efficient logistics workflow. Basically, a flexible logistics workflow, characterized by fast responsiveness in dealing with customer requirements through the integration of various value chain activities, is fundamental to leveraging the business performance of enterprises. The significance of this research is its demonstration of the synergy of combining advanced technologies into an integrated system that helps achieve a lean and agile logistics workflow.
Abstract:
Optimizing the processes of designing, creating, operating, and maintaining expert systems is an important problem in artificial intelligence theory and decision-making methods. This paper offers an approach to solving it using a technology based on the methodology of systems analysis, the ontology of the subject domain, and the principles and methods of self-organisation. Aspects of realizing this approach, based on constructing a correspondence between the hierarchical structure of the ontology and the sequence of questions in automated examination systems, are expounded.
Abstract:
The thesis discusses problems in the cognitive development of the subject of "perception": from the object being studied and the means of action to the single system "subject – modus operandi of the subject – object". Problems of increasing the adequacy of models of "live" nature are analyzed. The evolution of decision-making support systems is discussed: from expert systems to decision-making support systems as the personal device of a decision-maker. The experience of developing qualitative prediction on the basis of polyvalent dependences, represented by a decision tree that realizes the concept of "plural subjective determinism", is analyzed. Examples of applied systems for the prediction of ecological-economic and social processes are given, and ways of developing them are discussed.
Abstract:
The purpose is to develop expert systems in which reasoning by analogy is used. Knowledge "closeness" problems are known to emerge frequently in such systems when knowledge is represented by different production rules. To determine a degree of closeness for production rules, a distance between predicates is introduced. Different types of distances between two predicate value distribution functions are considered for the case when the predicates are "true". Asymptotic features and interrelations of the distances are studied. The predicate value distribution functions are estimated by empirical distribution functions, and a procedure is proposed for this purpose. The adequacy of the obtained distribution functions is tested on the basis of the statistical χ² criterion, and a testing mechanism is discussed. For parametric families of distribution functions, a theorem is proved by which the predicate closeness determination procedure can be replaced by a simple procedure of measuring Euclidean distances between distribution function parameters. The proposed distance measurement apparatus may be applied in expert systems where reasoning is created by analogy.
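To make the procedure concrete: given samples of a predicate's values from two rule bases, one can form empirical distribution functions, measure a distance between them (the sup distance below is one standard choice; the paper considers several), and test a fitted parametric family with the χ² criterion. The following Python sketch is only illustrative and uses synthetic data:

```python
# Illustrative only: sup (Kolmogorov) distance between two empirical
# distribution functions of predicate values, plus a chi-squared
# adequacy check of a fitted normal family for one of the samples.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=200)  # predicate values, rule base A
y = rng.normal(loc=0.3, scale=1.0, size=200)  # predicate values, rule base B

# Sup distance between the two empirical CDFs.
d, p = stats.ks_2samp(x, y)
print(f"sup distance = {d:.3f}, p-value = {p:.3f}")

# Chi-squared adequacy check of a fitted normal for sample x.
counts, edges = np.histogram(x, bins=8)
cdf = stats.norm(loc=x.mean(), scale=x.std(ddof=1)).cdf
expected = len(x) * np.diff(cdf(edges))
expected *= counts.sum() / expected.sum()   # normalize to observed total
chi2, pval = stats.chisquare(counts, expected, ddof=2)  # 2 fitted params
print(f"chi2 = {chi2:.2f}, p-value = {pval:.3f}")
```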
Abstract:
The paper presents a study focusing on how to support educational experts in choosing the right combination of educational methodology and technology tools when designing training and learning programs. It is based on research in the field of adaptive intelligent e-learning systems. The object of study is the professional growth of teachers in technology, and in particular that part of their qualification which is achieved by organizing targeted training for teachers. The article presents the process of creating and testing a decision-support system for the design of teacher training, leading to more effective implementation of technology in education and integration in diverse educational contexts. ACM Computing Classification System (1998): H.4.2, I.2.1, I.2, I.2.4, F.4.1.
Abstract:
Aberrant behavior of biological signaling pathways has been implicated in diseases such as cancers. Therapies have been developed to target proteins in these networks in the hope of curing the illness or bringing about remission. However, identifying targets for drug inhibition that exhibit a good therapeutic index has proven challenging, since signaling pathways have a large number of components and many interconnections such as feedback, crosstalk, and divergence. Unfortunately, some characteristics of these pathways, such as redundancy, feedback, and drug resistance, reduce the efficacy of single-target drug therapy and necessitate the employment of more than one drug to target multiple nodes in the system. However, choosing multiple targets with a high therapeutic index poses more challenges, since the combinatorial search space can be huge. To cope with the complexity of these systems, computational tools such as ordinary differential equations have been used to successfully model some of these pathways. Regrettably, building these models requires experimentally measured initial concentrations of the components and rates of reactions, which are difficult to obtain and, in very large networks, may not be available at present. Fortunately, there exist other modeling tools, though not as powerful as ordinary differential equations, which do not need the rates and initial conditions to model signaling pathways; Petri nets and graph theory are among these tools. In this thesis, we introduce a methodology based on Petri net siphon analysis and graph network centrality measures for identifying prospective targets for single and multiple drug therapies. In this methodology, potential targets are first identified in the Petri net model of a signaling pathway using siphon analysis. Then, graph-theoretic centrality measures are employed to prioritize the candidate targets. An algorithm is also developed to check whether the candidate targets are able to disable the intended outputs in the graph model of the system. We implement structural and dynamical models of the ErbB1-Ras-MAPK pathways and use them to assess and evaluate this methodology. The identified drug targets, single and multiple, correspond to clinically relevant drugs. Overall, the results suggest that this methodology, using siphons and centrality measures, shows promise in identifying and ranking drug targets. Since the methodology uses only the structural information of the signaling pathways and does not need initial conditions or dynamical rates, it can be utilized in larger networks.
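A small sketch of the ranking stage of this pipeline, assuming siphon analysis has already produced a candidate set; the graph below is a hypothetical toy fragment, not the thesis's actual ErbB1-Ras-MAPK model:

```python
# Toy sketch of ranking candidate drug targets by a centrality measure,
# the second stage of the pipeline described above. The graph is a
# hypothetical fragment; siphon analysis of the Petri net is assumed
# to have already proposed the candidate set.

import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("EGF", "EGFR"), ("EGFR", "Ras"), ("Ras", "Raf"),
    ("Raf", "MEK"), ("MEK", "ERK"), ("ERK", "proliferation"),
    ("ERK", "Raf"),  # toy feedback edge
])

candidates = ["EGFR", "Ras", "MEK"]  # e.g. output of siphon analysis
bc = nx.betweenness_centrality(g)
for node in sorted(candidates, key=bc.get, reverse=True):
    print(node, round(bc[node], 3))

# A candidate "disables" the output if removing it cuts every path
# from the stimulus to the output node in the graph model.
for node in candidates:
    h = g.copy()
    h.remove_node(node)
    print(node, "disables output:",
          not nx.has_path(h, "EGF", "proliferation"))
```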