58 results for Integrated Expert Systems
Abstract:
Results of complementary surveys of foreign and Chinese manufacturing enterprises with respect to their objectives and expectations regarding technology transfer into China show that the major strategic objective of foreign enterprises, to gain access to the Chinese market, fits well with Chinese enterprises' main objective of improving domestic competitiveness, but less well with their objective of accessing world markets through technology transfer. Foreign firms rate highly the capability of Chinese enterprises to learn new technologies, and also find the Chinese macro environment for business favourable. The survey results will help managers negotiate technology-transfer co-operation with prospective partners, and will assist policy makers who wish to facilitate more effective transfer arrangements.
Abstract:
Financial prediction has attracted a lot of interest due to the financial implications that the accurate prediction of financial markets can have. A variety of data-driven modelling approaches have been applied, but their performance has produced mixed results. In this study we apply both parametric (neural networks with active neurons) and nonparametric (analog complexing) self-organising modelling methods to the daily prediction of the exchange-rate market. We also propose a combined approach in which the parametric and nonparametric self-organising methods are applied sequentially, exploiting the advantages of the individual methods with the aim of improving their performance. The combined method is found to produce promising results and to outperform the individual methods when tested on two exchange rates: the American Dollar and the Deutsche Mark against the British Pound.
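A rough sketch of the sequential parametric/nonparametric combination described above. The study's self-organising GMDH-style networks and analog complexing are replaced here by a plain least-squares autoregression and a k-nearest-pattern analog for brevity; the function names and the 50/50 weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: a nonparametric "analog" forecast combined
# sequentially with a simple parametric autoregression.
import numpy as np

def analog_forecast(series, window=5, k=3):
    """Nonparametric step: find the k historical windows most similar
    to the latest one and average the values that followed them."""
    latest = series[-window:]
    candidates = []
    for i in range(len(series) - window):
        past = series[i:i + window]
        candidates.append((np.linalg.norm(past - latest), series[i + window]))
    candidates.sort(key=lambda c: c[0])
    return np.mean([nxt for _, nxt in candidates[:k]])

def combined_forecast(series, window=5, k=3):
    """Parametric step (a linear autoregression fitted by least
    squares) averaged with the analog forecast."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    parametric = np.r_[series[-window:], 1.0] @ coef
    return 0.5 * parametric + 0.5 * analog_forecast(series, window, k)

# Synthetic random-walk "exchange rate" for demonstration only
rate = np.cumsum(np.random.default_rng(0).normal(0, 0.01, 500)) + 1.5
print(combined_forecast(rate))
```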
Abstract:
Since much knowledge is tacit, eliciting knowledge is a common bottleneck during the development of knowledge-based systems. Visual interactive simulation (VIS) has been proposed as a means of eliciting experts' decision-making by getting them to interact with a visual simulation of the real system in which they work. In order to explore the effectiveness and efficiency of VIS-based knowledge elicitation, an experiment was carried out with decision-makers in a Ford Motor Company engine assembly plant. The model properties under investigation were the level of visual representation (2-dimensional, 2½-dimensional and 3-dimensional) and the model parameter settings (unadjusted, and adjusted to represent more uncommon and extreme situations). The conclusion from the experiment is that a 2-dimensional representation with adjusted parameter settings provides the most effective simulation-based means of eliciting knowledge, at least for the case modelled.
Abstract:
The global market has become increasingly dynamic, unpredictable and customer-driven. This has led to rising rates of new product introduction and turbulent demand patterns across product mixes. As a result, manufacturing enterprises face mounting challenges to be agile and responsive in coping with market changes, so as to produce and deliver products to the market in a timely and cost-effective manner. This paper introduces a currency-based iterative agent bidding mechanism that integrates, effectively and cost-efficiently, the activities associated with production planning and control, so as to achieve an optimised process plan and schedule. The aim is to enhance the agility of manufacturing systems to accommodate dynamic changes in the market and in production. The iterative bidding mechanism is executed on the basis of currency-like metrics: each operation to be performed is assigned a virtual currency value, and agents bid for the operation if they can make a virtual profit against this value. The currency values are optimised iteratively, and the bidding process is repeated with each new set of values, yielding progressively better production plans that approach optimality. A genetic algorithm is proposed to optimise the currency values at each iteration. The implementation of the mechanism and test case simulation results are also discussed.
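To make the mechanism concrete, here is a minimal sketch of currency-based iterative bidding under an assumed toy cost model: each agent bids for an operation only if the offered virtual currency value exceeds its cost, and a small evolutionary loop (standing in for the paper's genetic algorithm) adjusts the currency values to shorten the resulting schedule. All names, the cost model, and the fitness measure are illustrative, not the paper's implementation.

```python
# Illustrative currency-based iterative bidding with an evolutionary
# loop over the currency values.
import random
random.seed(1)

N_OPS, N_AGENTS = 8, 3
# cost[a][o]: what it costs agent a to perform operation o
cost = [[random.uniform(1, 10) for _ in range(N_OPS)] for _ in range(N_AGENTS)]

def schedule(currency):
    """One bidding round: each operation goes to the profitable bidder
    with the lowest cost plus current load; returns (assignment, makespan)."""
    load = [0.0] * N_AGENTS
    assign = []
    for o, value in enumerate(currency):
        bidders = [a for a in range(N_AGENTS) if cost[a][o] < value]
        a = min(bidders or range(N_AGENTS), key=lambda a: cost[a][o] + load[a])
        load[a] += cost[a][o]
        assign.append(a)
    return assign, max(load)

def evolve(pop_size=20, gens=50):
    """Tiny genetic loop over currency vectors (mutation plus
    truncation selection), minimising the makespan."""
    pop = [[random.uniform(1, 10) for _ in range(N_OPS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: schedule(c)[1])
        survivors = pop[:pop_size // 2]
        pop = survivors + [[v + random.gauss(0, 0.5) for v in c]
                           for c in survivors]
    pop.sort(key=lambda c: schedule(c)[1])
    return pop[0]

best = evolve()
print(schedule(best))
```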
Abstract:
The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need for software that exhibits a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented, and a review is given of some of the systems developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert systems approach. The thesis then proposes an approach based on the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique is semantically sound, i.e. whether the results obtained would be meaningful. Current systems, in contrast, can only perform what may be considered syntactic checks. The prototype system implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data, and identifying sets of requirements that must be met for the application of a statistical technique to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
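A toy illustration of the semantic-check idea: record statistically relevant metadata about each variable (here just its measurement scale) and refuse a technique whose semantic requirements are not met. The scale hierarchy and validity rules are textbook-standard; the API is invented for illustration and is not the prototype described in the thesis.

```python
# Semantic validity check for statistical techniques based on
# measurement scale metadata.
from dataclasses import dataclass

@dataclass
class Variable:
    name: str
    scale: str  # 'nominal', 'ordinal', 'interval' or 'ratio'

# Minimum scale each technique needs to give a meaningful answer
REQUIREMENTS = {
    "pearson_correlation": "interval",
    "spearman_correlation": "ordinal",
    "chi_squared_association": "nominal",
}
ORDER = ["nominal", "ordinal", "interval", "ratio"]

def semantically_valid(technique, *variables):
    """True if every variable meets the technique's minimum scale."""
    needed = ORDER.index(REQUIREMENTS[technique])
    return all(ORDER.index(v.scale) >= needed for v in variables)

colour = Variable("eye_colour", "nominal")
height = Variable("height_cm", "ratio")
print(semantically_valid("pearson_correlation", colour, height))  # False
print(semantically_valid("chi_squared_association", colour))      # True
```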
Abstract:
This paper explores the use of the optimization procedures in SAS/OR software with application to the ordered weighted averaging (OWA) operators of decision-making units (DMUs). OWA, originally introduced by Yager (IEEE Trans Syst Man Cybern 18(1):183-190, 1988), has gained much interest among researchers, and many applications have been proposed in areas such as decision making, expert systems, data mining, approximate reasoning, fuzzy systems and control. SAS, for its part, is powerful software capable of running various optimization tools, such as linear and non-linear programming with all types of constraints. To facilitate the use of the OWA operator by SAS users, a code was implemented. The SAS macro developed in this paper selects the criteria and alternatives from a SAS dataset and calculates a set of OWA weights. An example is given to illustrate the features of the SAS/OWA software.
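The OWA aggregation itself can be stated in a few lines: sort the criteria values in descending order and take the weighted sum with a fixed weight vector, so that weights attach to rank positions rather than to particular criteria (Yager 1988). The weights below are illustrative; the paper's SAS macro derives its weights from a SAS dataset.

```python
# Ordered weighted averaging (OWA) aggregation.
def owa(values, weights):
    """Weights attach to rank positions, not to particular criteria."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ranked = sorted(values, reverse=True)
    return sum(w * x for w, x in zip(weights, ranked))

# Weights skewed towards the top-ranked values ("or-like" behaviour):
print(owa([0.6, 0.9, 0.3], [0.5, 0.3, 0.2]))
# 0.9*0.5 + 0.6*0.3 + 0.3*0.2 = 0.69
```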
Abstract:
The aims of the project were twofold: 1) to investigate classification procedures for remotely sensed digital data, in order to develop modifications to existing algorithms and propose novel classification procedures; and 2) to investigate and develop algorithms for contextual enhancement of classified imagery in order to increase classification accuracy. The following classifiers were examined: box, decision tree, minimum distance and maximum likelihood. In addition to these, the following algorithms were developed during the course of the research: deviant distance, look-up table, and an automated decision tree classifier using expert systems technology. Clustering techniques for unsupervised classification were also investigated. The contextual enhancements investigated were mode filters, small-area replacement and Wharton's CONAN algorithm. Additionally, methods for noise- and edge-based declassification and contextual reclassification, non-probabilistic relaxation, and relaxation based on Markov chain theory were developed. The advantages of per-field classifiers and Geographical Information Systems were investigated. The conclusions presented suggest suitable combinations of classifier and contextual enhancement, given user accuracy requirements and time constraints. These were then tested for validity using a different data set. A brief examination of the utility of the recommended contextual algorithms for reducing the effects of data noise was also carried out.
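Of the classifiers compared, the minimum-distance-to-means rule is the simplest to sketch: assign each pixel to the class whose training mean is nearest in spectral space. The band values and class means below are made up for illustration.

```python
# Minimum-distance-to-means classification of multispectral pixels.
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (n, bands) array; class_means: dict name -> (bands,) mean."""
    names = list(class_means)
    means = np.stack([class_means[n] for n in names])          # (k, bands)
    d = np.linalg.norm(pixels[:, None, :] - means[None], axis=2)  # (n, k)
    return [names[i] for i in d.argmin(axis=1)]

means = {"water": np.array([30.0, 20.0, 10.0]),
         "vegetation": np.array([40.0, 90.0, 35.0]),
         "urban": np.array([85.0, 80.0, 75.0])}
pixels = np.array([[32.0, 22.0, 12.0], [80.0, 78.0, 70.0]])
print(minimum_distance_classify(pixels, means))  # ['water', 'urban']
```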
Abstract:
The primary objective of this research was to understand what kinds of knowledge and skills people use in 'extracting' relevant information from text, and to assess the extent to which expert systems techniques could be applied to automate the process of abstracting. The approach adopted in this thesis is based on research in cognitive science, information science, psycholinguistics and text linguistics. The study addressed the significance of domain knowledge and heuristic rules by developing an information extraction system, called INFORMEX. This system, implemented partly in SPITBOL and partly in PROLOG, used a set of heuristic rules to analyse five scientific papers of expository type, to interpret their content in relation to the key abstract elements, and to extract a set of sentences recognised as relevant for abstracting purposes. The analysis of these extracts revealed that an adequate abstract could be generated. Furthermore, INFORMEX showed that a rule-based system is a suitable computational model for representing experts' knowledge and strategies. This computational technique provided the basis for a new approach to the modelling of cognition, showing how experts tackle the task of abstracting by integrating formal knowledge with experiential learning. The thesis demonstrates that empirical and theoretical knowledge can be effectively combined in expert systems technology to provide a valuable starting point for automatic abstracting.
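A minimal cue-phrase extractor in the spirit of INFORMEX's heuristic rules (the original combined SPITBOL and PROLOG; the patterns below are invented stand-ins, not the rules of the thesis):

```python
# Rule-based sentence extraction using cue-phrase patterns.
import re

CUE_PATTERNS = [
    r"\b(we|this paper|this study)\b.*\b(propose|present|show|conclude)",
    r"\b(results?|findings?)\b.*\b(indicate|suggest|demonstrate)",
    r"\bin conclusion\b",
]

def extract_relevant(text):
    """Keep only sentences that match at least one cue pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences
            if any(re.search(p, s, re.IGNORECASE) for p in CUE_PATTERNS)]

sample = ("The corpus was assembled in 1989. This paper presents a "
          "rule-based extractor. Results indicate that cue phrases "
          "select most abstract-worthy sentences.")
print(extract_relevant(sample))  # keeps the second and third sentences
```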
Substances hazardous to health: the nature of the expertise associated with competent risk assessment
Abstract:
This research investigated expertise in hazardous substance risk assessment (HSRA). Competent, pro-active risk assessment is needed to prevent future occupational ill-health caused by exposure to hazardous substances. In recent years there has been a strong demand for HSRA expertise and a shortage of expert practitioners. The discipline of Occupational Hygiene was identified as the key repository of knowledge and skills for HSRA, and one objective of this research was to develop a method to elicit this expertise from experienced occupational hygienists. In the study of generic expertise, many methods of knowledge elicitation (KE) have been investigated, since KE is central to the development of 'expert systems' (thinking computers): knowledge must be elicited from human experts, and this stage has often been a bottleneck in system development, since experts cannot always explain the basis of their expertise. At an intermediate stage, the information collected was used to structure a basic model of hazardous substance risk assessment activity (HSRA Model B), which formed the basis of tape transcript analysis in the main study, with derivation of a 'classification' and a 'performance matrix'. The study aimed to elicit the expertise of occupational hygienists and compare their performance, as evaluated using the matrix, with that of other health and safety professionals (occupational health physicians, occupational health nurses, health and safety practitioners and trainee health and safety inspectors). As a group, the hygienists performed best in the exercise, and were particularly good at process elicitation and at recommending specific control measures, although the other groups also performed well on selected aspects of the matrix. From the research, two models of HSRA and an HSRA aid have been derived, together with a novel videotape KE technique. The implications are discussed with respect to the future training of health and safety professionals and the wider application of the videotape KE method.
Towards a web-based progressive handwriting recognition environment for mathematical problem solving
Abstract:
The emergence of pen-based mobile devices such as PDAs and tablet PCs provides a new way to input mathematical expressions to a computer: handwriting, which is much more natural and efficient for entering mathematics. This paper proposes a web-based handwriting mathematics system, called WebMath, for supporting mathematical problem solving. The proposed WebMath system is based on a client-server architecture. It comprises four major components: a standard web server, a handwriting mathematical expression editor, a computation engine, and a web browser with an Ajax-based communicator. The handwriting mathematical expression editor adopts a progressive recognition approach for dynamic recognition of handwritten mathematical expressions. The computation engine supports mathematical functions such as algebraic simplification and factorization, and integration and differentiation. The web browser provides a user-friendly interface for accessing the system using advanced Ajax-based communication. In this paper, we describe the different components of the WebMath system and present its performance analysis.
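A bare-bones sketch of the client-server round trip described above: the browser posts pen strokes as they arrive, and the server returns its current best reading of the expression. Flask and the route and field names are stand-ins chosen for illustration; the real WebMath recogniser and Ajax communicator are far more involved.

```python
# Toy progressive-recognition endpoint: one stroke per request.
from flask import Flask, request, jsonify

app = Flask(__name__)
sessions = {}  # session id -> list of strokes received so far

@app.post("/strokes")
def receive_strokes():
    data = request.get_json()
    strokes = sessions.setdefault(data["session"], [])
    strokes.append(data["points"])  # progressive: strokes arrive one by one
    return jsonify(latex=recognise(strokes))

def recognise(strokes):
    """Placeholder: a real system updates its parse of the expression
    incrementally as each stroke arrives."""
    return f"<expression with {len(strokes)} strokes>"

if __name__ == "__main__":
    app.run()
```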
Abstract:
Data envelopment analysis (DEA), as introduced by Charnes, Cooper, and Rhodes (1978), is a linear programming technique that has been widely used to evaluate the relative efficiency of a set of homogeneous decision making units (DMUs). In many real applications, the input-output variables cannot be precisely measured. This is particularly important when assessing the efficiency of DMUs using DEA, since the efficiency scores of inefficient DMUs are very sensitive to possible data errors. Hence, several approaches have been proposed to deal with imprecise data. Perhaps the most popular fuzzy DEA model is based on the α-cut. One drawback of the α-cut approach is that it cannot include all information about uncertainty. This paper aims to introduce an alternative linear programming model that can include some uncertainty information from the intervals within the α-cut approach. We introduce the concept of the "local α-level" to develop a multi-objective linear programming model to measure the efficiency of DMUs under uncertainty. An example is given to illustrate the use of this method.
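One common way to realise the α-cut idea, sketched below under stated assumptions: at a chosen α, each triangular fuzzy input and output collapses to an interval, and the optimistic efficiency of a DMU solves the usual CCR multiplier LP with that DMU at its most favourable bounds and its rivals at their least favourable. This is a generic textbook construction, not the paper's specific "local α-level" model.

```python
# Alpha-cut interval DEA: optimistic CCR efficiency under fuzzy data.
import numpy as np
from scipy.optimize import linprog

def alpha_cut(tri, alpha):
    """Triangular fuzzy number (l, m, u) -> interval at level alpha."""
    l, m, u = tri
    return l + alpha * (m - l), u - alpha * (u - m)

def optimistic_ccr(X_tri, Y_tri, o, alpha):
    """X_tri: (n, m, 3) fuzzy inputs; Y_tri: (n, s, 3) fuzzy outputs."""
    lo = lambda a: np.array([[alpha_cut(t, alpha)[0] for t in row] for row in a])
    hi = lambda a: np.array([[alpha_cut(t, alpha)[1] for t in row] for row in a])
    X, Y = hi(X_tri), lo(Y_tri)              # least favourable for rivals
    X[o], Y[o] = lo(X_tri)[o], hi(Y_tri)[o]  # most favourable for DMU o
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[-Y[o], np.zeros(m)]            # maximise u . y_o
    A_ub = np.c_[Y, -X]                      # u.y_j - v.x_j <= 0 for all j
    A_eq = np.r_[np.zeros(s), X[o]][None]    # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

X_tri = np.array([[[1.8, 2.0, 2.2]], [[3.5, 4.0, 4.5]]])  # one fuzzy input
Y_tri = np.array([[[0.9, 1.0, 1.1]], [[2.7, 3.0, 3.3]]])  # one fuzzy output
print(optimistic_ccr(X_tri, Y_tri, o=0, alpha=0.5))       # ~0.82
```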
Abstract:
In many real applications of Data Envelopment Analysis (DEA), decision makers have to accept a deterioration in some inputs and some outputs, often because of a limitation in the funds available. This paper proposes a new DEA-based approach to determine the highest possible reduction in the input variables of concern and the lowest possible deterioration in the output variables of concern without reducing the efficiency of any DMU. A numerical example is used to illustrate the problem, and an application in the banking sector with a limitation on IT investment shows the usefulness of the proposed method.
Abstract:
When a query is passed to multiple search engines, each search engine returns a ranked list of documents. Researchers have demonstrated that combining results, in the form of a "metasearch engine", produces a significant improvement in coverage and search effectiveness. This paper proposes a linear programming mathematical model for optimizing the ranked-list result of a given group of Web search engines for an issued query. An application with a numerical illustration shows the advantages of the proposed method.
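The paper formulates the combination as a linear program; as a lighter-weight illustration of the same data-fusion idea, the sketch below merges ranked lists with a simple Borda count, a common metasearch baseline rather than the authors' model.

```python
# Borda-count merging of ranked lists from several search engines.
from collections import defaultdict

def borda_merge(ranked_lists):
    """Each list ranks documents best-first; a document scores
    (list length - position) per list, and missing documents score 0."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            scores[doc] += n - pos
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["doc3", "doc1", "doc7"]
engine_b = ["doc1", "doc3", "doc5"]
engine_c = ["doc1", "doc9"]
print(borda_merge([engine_a, engine_b, engine_c]))
# doc1 leads with 2 + 3 + 2 = 7 points
```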
Abstract:
Data Envelopment Analysis (DEA) is recognized as a modern approach to assessing the performance of a set of homogeneous Decision Making Units (DMUs) that use similar sources to produce similar outputs. While DEA is commonly used with precise data, several approaches have recently been introduced for evaluating DMUs with uncertain data. In the existing approaches much of the information about uncertainty is lost: the defuzzification, α-level and fuzzy ranking approaches do not retain it, and in the tolerance approach the inequality or equality signs are fuzzified but the fuzzy coefficients (inputs and outputs) are not treated directly. The purpose of this paper is to develop a new model to evaluate DMUs under uncertainty using fuzzy DEA, incorporating the α-level into the model in a fuzzy environment. An example is given to illustrate this method in detail.