53 results for 080105 Expert Systems


Relevance: 90.00%

Abstract:

This paper discusses demand and supply chain management and examines how artificial intelligence techniques and Radio Frequency Identification (RFID) technology can enhance the responsiveness of the logistics workflow. The proposed system is expected to have a significant impact on the performance of logistics networks by virtue of its capability to adapt to unexpected supply and demand changes in a volatile marketplace, with RFID providing the responsiveness this requires. Recent studies have found that RFID and artificial intelligence techniques are driving the development of total solutions in the logistics industry. Apart from tracking the movement of goods, RFID can play an important role in reflecting the inventory levels of various distribution areas. In today's globalized industrial environment, the physical logistics operations and the associated flow of information are essential for companies to realize an efficient logistics workflow. A flexible logistics workflow, characterized by fast responsiveness in dealing with customer requirements through the integration of various value chain activities, is fundamental to leveraging the business performance of enterprises. The significance of this research is its demonstration of the synergy of combining these advanced technologies into an integrated system that helps achieve a lean and agile logistics workflow.
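
As a minimal illustration of the inventory-visibility role the abstract attributes to RFID, the Python sketch below replays hypothetical tag-read events to maintain per-location stock counts. The `RfidRead` fields, locations and event names are invented for this example and are not taken from the paper.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RfidRead:
    """A single tag read reported by a reader at a distribution point."""
    tag_id: str        # unique identifier of the tagged item
    location: str      # reader location, e.g. a warehouse or retail zone
    event: str         # "in" when the item arrives, "out" when it leaves

def update_inventory(reads, inventory=None):
    """Replay RFID read events to maintain per-location stock counts."""
    inventory = defaultdict(int) if inventory is None else inventory
    for r in reads:
        inventory[r.location] += 1 if r.event == "in" else -1
    return inventory

reads = [
    RfidRead("TAG-001", "warehouse-A", "in"),
    RfidRead("TAG-002", "warehouse-A", "in"),
    RfidRead("TAG-001", "warehouse-A", "out"),   # shipped onwards to a store
    RfidRead("TAG-001", "store-3", "in"),
]
print(dict(update_inventory(reads)))             # {'warehouse-A': 1, 'store-3': 1}
```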

Relevance: 80.00%

Abstract:

This paper describes how the statistical technique of cluster analysis and the machine learning technique of rule induction can be combined to explore a database. The ways in which such an approach alleviates the problems associated with other techniques for data analysis are discussed. We report the results of experiments carried out on a database from the medical diagnosis domain. Finally we describe the future developments which we plan to carry out to build on our current work.
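
A minimal sketch of combining clustering with rule induction to explore a dataset, using scikit-learn stand-ins: KMeans for the cluster analysis and a shallow decision tree, exported as readable rules, for the induction step. The synthetic two-attribute data is illustrative only and does not reflect the paper's medical database.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Stand-in for a database to explore: two numeric attributes per record.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Step 1: cluster analysis groups similar records without supervision.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: rule induction (here a decision tree) explains the clusters
# as human-readable conditions on the original attributes.
tree = DecisionTreeClassifier(max_depth=2).fit(X, clusters)
print(export_text(tree, feature_names=["attr_1", "attr_2"]))
```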

Relevance: 80.00%

Abstract:

The major aim of this research is to benchmark top Arab banks using the Data Envelopment Analysis (DEA) technique and to compare the results with those published recently in Mostafa (2007a,b) [Mostafa, M. M. (2007a). Modeling the efficiency of top Arab banks: A DEA–neural network approach. Expert Systems with Applications, doi:10.1016/j.eswa.2007.09.001; Mostafa, M. M. (2007b). Benchmarking top Arab banks' efficiency through efficient frontier analysis. Industrial Management & Data Systems, 107(6), 802–823]. Data for 85 Arab banks were used to conduct the analysis of relative efficiency. Our findings indicate that (1) the efficiency of Arab banks reported in Mostafa (2007a,b) is incorrect, and readers should therefore take extra caution in using those results, and (2) the corrected efficiency scores suggest that there is potential for significant improvements in Arab banks. In summary, this study addresses some data and methodology issues in measuring the efficiency of Arab banks and, using the new results, highlights the importance of encouraging increased efficiency throughout the banking industry in the Arab world.
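
For readers unfamiliar with DEA, the sketch below solves the standard input-oriented CCR envelopment LP for each unit with scipy; it is a generic illustration of relative-efficiency scoring, not the study's model, and the four-bank data is invented and unrelated to the 85-bank sample.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """Input-oriented CCR efficiency of one unit via the envelopment LP:
    minimise theta s.t. sum_j lam_j * x_j <= theta * x_unit,
                        sum_j lam_j * y_j >= y_unit, lam >= 0."""
    X, Y = np.asarray(inputs, float), np.asarray(outputs, float)
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    c = np.zeros(n + 1); c[0] = 1.0                    # variables: [theta, lam_1..lam_n]
    A, b = [], []
    for i in range(m):                                 # input constraints
        A.append(np.r_[-X[unit, i], X[:, i]]); b.append(0.0)
    for r in range(s):                                 # output constraints
        A.append(np.r_[0.0, -Y[:, r]]); b.append(-Y[unit, r])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Illustrative data: 4 "banks", 2 inputs (staff, assets), 1 output (loans).
X = [[20, 300], [30, 290], [40, 500], [20, 250]]
Y = [[100], [90], [160], [80]]
for j in range(4):
    print(f"bank {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```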

Relevance: 80.00%

Abstract:

Group decision making is the study of identifying and selecting alternatives based on the values and preferences of the decision maker. Making a decision implies that there are several alternative choices to be considered. This paper uses the concept of Data Envelopment Analysis to introduce a new mathematical method for selecting the best alternative in a group decision making environment. The proposed model is a multi-objective function which is converted into a multi-objective linear programming model, from which the optimal solution is obtained. A numerical example shows how the new model can be applied to rank the alternatives or to choose a subset of the most promising alternatives.
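
The paper's multi-objective model is not reproduced here; the sketch below only illustrates the DEA-style idea of ranking alternatives by letting each one pick the criterion weights most favourable to itself (a "benefit of the doubt" score), solved with scipy's LP routine. The decision matrix is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def best_score(scores, alt):
    """Best achievable weighted score for alternative `alt`, subject to no
    alternative exceeding a score of 1 under the same non-negative weights."""
    S = np.asarray(scores, float)            # rows: alternatives, columns: criteria
    c = -S[alt]                              # linprog minimises, so negate to maximise
    res = linprog(c, A_ub=S, b_ub=np.ones(S.shape[0]),
                  bounds=[(0, None)] * S.shape[1])
    return -res.fun

# Illustrative group-decision matrix: 4 alternatives scored on 3 criteria.
scores = [[7, 9, 6], [8, 5, 7], [6, 8, 8], [9, 6, 5]]
ranking = sorted(range(4), key=lambda a: -best_score(scores, a))
print("ranking (best first):", ranking)
```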

Relevance: 80.00%

Abstract:

The purpose of this research is to propose a procurement system that works across disciplines and shares the retrieved information with the relevant parties, so as to achieve better co-ordination between the supply and demand sides. This paper demonstrates how to analyze the data with an agent-based procurement system (APS) to re-engineer and improve the existing procurement process. The intelligent agents take responsibility for searching for potential suppliers, negotiating with the short-listed suppliers and evaluating the performance of suppliers against the selection criteria using a mathematical model. Manufacturing firms and trading companies spend more than half of their sales dollar on the purchase of raw materials and components. Efficient data collection with high accuracy is one of the key success factors in generating quality procurement, that is, purchasing the right material at the right quality from the right suppliers. In general, enterprises spend a significant amount of resources on data collection and storage, but too little on facilitating data analysis and sharing. To validate the feasibility of the approach, a case study on a manufacturing small and medium-sized enterprise (SME) has been conducted. The APS supports data and information analysis techniques to facilitate decision making, such that the agents can make negotiation and supplier evaluation more efficient, saving time and cost.
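
The abstract does not give the supplier-evaluation model itself; the sketch below shows one simple form such a model could take, a weighted sum over normalised selection criteria. The criteria, weights and supplier scores are all hypothetical.

```python
# Hypothetical criteria weights agreed by the purchasing team (sum to 1).
weights = {"price": 0.4, "quality": 0.35, "lead_time": 0.25}

# Normalised criterion scores in [0, 1] for three short-listed suppliers
# (higher is better; price and lead time are assumed already inverted).
suppliers = {
    "supplier_A": {"price": 0.8, "quality": 0.7, "lead_time": 0.6},
    "supplier_B": {"price": 0.6, "quality": 0.9, "lead_time": 0.7},
    "supplier_C": {"price": 0.9, "quality": 0.6, "lead_time": 0.5},
}

def weighted_score(criteria):
    """Aggregate a supplier's criterion scores into a single figure of merit."""
    return sum(weights[c] * v for c, v in criteria.items())

ranked = sorted(suppliers, key=lambda s: weighted_score(suppliers[s]), reverse=True)
for s in ranked:
    print(f"{s}: {weighted_score(suppliers[s]):.3f}")
```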

Relevance: 80.00%

Abstract:

This paper formulates several mathematical models for simultaneously determining the optimal sequence of component placements and the assignment of component types to feeders, i.e. the integrated scheduling problem, for a type of surface mount technology placement machine called the sequential pick-and-place (PAP) machine. A PAP machine has multiple stationary feeders storing components, a stationary working table holding a printed circuit board (PCB), and a movable placement head that picks up components from the feeders and places them on the board. The objective of the integrated problem is to minimize the total distance traveled by the placement head. Two integer nonlinear programming models are formulated first. Then, each of them is converted into an equivalent integer linear form. The models for the integrated problem are verified with two commercial packages. In addition, a hybrid genetic algorithm previously developed by the authors is adopted to solve the models. The algorithm not only generates the optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total traveling distance.
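
The objective described in the abstract, total distance travelled by the placement head, can be evaluated for a candidate solution as in the sketch below. The coordinates, the feeder assignment and the assumption that each placement is a feeder-pick followed by a board-place move are illustrative simplifications, not the paper's exact machine model.

```python
import math

def travel_distance(sequence, board_xy, feeder_xy, feeder_of):
    """Total head travel for one cycle: for each placement the head moves from
    its current position to the component's feeder, then to the board location."""
    pos = feeder_xy[feeder_of[sequence[0]]]          # head starts at the first feeder
    total = 0.0
    for comp in sequence:
        pick = feeder_xy[feeder_of[comp]]
        place = board_xy[comp]
        total += math.dist(pos, pick) + math.dist(pick, place)
        pos = place
    return total

# Hypothetical 3-component job: board coordinates, feeder slots, assignment.
board_xy = {"C1": (10, 5), "C2": (12, 8), "C3": (4, 6)}
feeder_xy = {0: (0, 0), 1: (0, 10)}
feeder_of = {"C1": 0, "C2": 0, "C3": 1}
print(travel_distance(["C1", "C2", "C3"], board_xy, feeder_xy, feeder_of))
print(travel_distance(["C3", "C1", "C2"], board_xy, feeder_xy, feeder_of))
```

An optimisation model or genetic algorithm would search over placement sequences and feeder assignments to minimise this quantity.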

Relevance: 80.00%

Abstract:

In order to survive in the increasingly customer-oriented marketplace, continuous quality improvement marks the success of the fastest growing quality organizations. In recent years, attention has been focused on intelligent systems, which have shown great promise in supporting quality control. However, only a small number of the currently used systems are reported to be operating effectively, because they are designed to maintain a quality level within the specified process rather than to focus on cooperation within the production workflow. This paper proposes an intelligent system with a newly designed algorithm and the universal process data exchange standard to overcome the challenges of demanding customers who seek high-quality and low-cost products. The intelligent quality management system is equipped with a "distributed process mining" feature to provide all levels of employees with the ability to understand the relationships between processes, especially when any aspect of the process is about to degrade or fail. An example of generalized fuzzy association rules is applied in the manufacturing sector to demonstrate how the proposed iterative process mining algorithm finds the relationships between distributed process parameters and the presence of quality problems.
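
The abstract does not spell out the mining algorithm; the sketch below only illustrates the basic association-rule idea behind it, computing support and confidence for rules that link discretised process-parameter states to quality problems. The run data and parameter names are hypothetical, and the fuzzy generalisation and distributed aspects are omitted.

```python
# Hypothetical discretised process records: each set holds the parameter
# states observed for one production run, plus whether a defect occurred.
runs = [
    {"temp_high", "pressure_ok", "defect"},
    {"temp_high", "pressure_low", "defect"},
    {"temp_ok", "pressure_ok"},
    {"temp_high", "pressure_ok", "defect"},
    {"temp_ok", "pressure_low"},
]

def support(itemset):
    """Fraction of runs containing every item in the itemset."""
    return sum(itemset <= run for run in runs) / len(runs)

# Mine simple rules "parameter state -> defect" and report their confidence.
items = {i for run in runs for i in run} - {"defect"}
for item in sorted(items):
    s_item, s_both = support({item}), support({item, "defect"})
    if s_item > 0 and s_both > 0:
        print(f"{item} -> defect  support={s_both:.2f}  confidence={s_both / s_item:.2f}")
```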

Relevance: 80.00%

Abstract:

Financial prediction has attracted a lot of interest due to the financial implications that the accurate prediction of financial markets can have. A variety of data-driven modelling approaches have been applied, but their performance has produced mixed results. In this study we apply both parametric (neural networks with active neurons) and nonparametric (analog complexing) self-organising modelling methods for the daily prediction of the exchange rate market. We also propose a combined approach in which the parametric and nonparametric self-organising methods are combined sequentially, exploiting the advantages of the individual methods with the aim of improving their performance. The combined method is found to produce promising results and to outperform the individual methods when tested with two exchange rates: the American Dollar and the Deutsche Mark against the British Pound.
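
The study's self-organising methods are not reproduced here; the sketch below only illustrates the nonparametric "analog" idea of forecasting the next value from the historical windows most similar to the current one, applied to a synthetic series.

```python
import numpy as np

def analog_forecast(series, window=5, k=3):
    """Nonparametric one-step forecast: find the k historical windows most
    similar to the latest window and average the values that followed them."""
    s = np.asarray(series, float)
    query = s[-window:]
    dists, nexts = [], []
    for t in range(len(s) - window):                 # candidate historical windows
        cand = s[t:t + window]
        dists.append(np.linalg.norm(cand - query))
        nexts.append(s[t + window])                  # value that followed the window
    order = np.argsort(dists)[:k]
    return float(np.mean(np.asarray(nexts)[order]))

rng = np.random.default_rng(1)
rate = np.cumsum(rng.normal(0, 0.01, 300)) + 1.5     # synthetic exchange-rate path
print("next-day forecast:", analog_forecast(rate))
```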

Relevance: 80.00%

Abstract:

Since much knowledge is tacit, eliciting knowledge is a common bottleneck during the development of knowledge-based systems. Visual interactive simulation (VIS) has been proposed as a means of eliciting experts' decision-making by getting them to interact with a visual simulation of the real system in which they work. In order to explore the effectiveness and efficiency of VIS-based knowledge elicitation, an experiment was carried out with decision-makers in a Ford Motor Company engine assembly plant. The model properties under investigation were the level of visual representation (2-dimensional, 2½-dimensional and 3-dimensional) and the model parameter settings (unadjusted and adjusted to represent more uncommon and extreme situations). The conclusion from the experiment is that using a 2-dimensional representation with adjusted parameter settings provides the better simulation-based means of eliciting knowledge, at least for the case modelled.

Relevance: 80.00%

Abstract:

The global market has become increasingly dynamic, unpredictable and customer-driven. This has led to rising rates of new product introduction and turbulent demand patterns across product mixes. As a result, manufacturing enterprises are facing mounting challenges to be agile and responsive enough to cope with market changes and to achieve the competitiveness of producing and delivering products to the market in a timely and cost-effective manner. This paper introduces a currency-based iterative agent bidding mechanism to effectively and cost-efficiently integrate the activities associated with production planning and control, so as to achieve an optimised process plan and schedule. The aim is to enhance the agility of manufacturing systems to accommodate dynamic changes in the market and in production. The iterative bidding mechanism is executed based on currency-like metrics: each operation to be performed is assigned a virtual currency value, and agents bid for the operation if they make a virtual profit based on this value. These currency values are optimised iteratively, and the bidding process is repeated with each new set of values, with the aim of obtaining better and better production plans, leading to near-optimality. A genetic algorithm is proposed to optimise the currency values at each iteration. The implementation of the mechanism and the test case simulation results are also discussed. © 2012 Elsevier Ltd. All rights reserved.
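
A minimal sketch of one round of the currency-based bidding idea: each resource agent bids for an operation only when the operation's currency value exceeds its own processing cost, and the operation goes to the most profitable bidder. The agent names, costs and currency values are invented, and the genetic algorithm that re-tunes the currency values between rounds is only indicated in a comment.

```python
# Hypothetical currency values assigned to operations for one bidding round.
currency = {"op1": 12.0, "op2": 8.0, "op3": 15.0}

# Each resource agent has its own processing cost for the operations it can do.
agent_costs = {
    "machine_A": {"op1": 9.0, "op3": 16.0},
    "machine_B": {"op1": 11.0, "op2": 6.5, "op3": 13.0},
}

def run_bidding(currency, agent_costs):
    """Award each operation to the bidding agent with the largest virtual
    profit (currency value minus cost); agents only bid when the profit > 0."""
    allocation = {}
    for op, value in currency.items():
        bids = {a: value - costs[op]
                for a, costs in agent_costs.items()
                if op in costs and value - costs[op] > 0}
        if bids:
            allocation[op] = max(bids, key=bids.get)
    return allocation

print(run_bidding(currency, agent_costs))
# A genetic algorithm would then perturb the currency values and repeat the
# round, keeping the values that lead to the best overall plan.
```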

Relevance: 80.00%

Abstract:

The topic of this thesis is the development of knowledge-based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software that is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge-based software are presented, and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert systems approach. The thesis then proposes an approach based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique was semantically sound, i.e. whether the results obtained would be meaningful. Current systems, in contrast, can only perform what can be considered syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. The areas of statistics covered in the prototype are measures of association and tests of location.
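
A toy illustration of the semantic check the thesis argues for: the system consults metadata about each variable's measurement level before agreeing to apply a technique, rather than only checking syntax. The metadata, technique names and requirements below are hypothetical, not the thesis's actual data model.

```python
# Hypothetical semantic metadata: the measurement level of each variable.
metadata = {
    "blood_group": "nominal",
    "pain_score": "ordinal",
    "age": "ratio",
    "weight": "ratio",
}

# Minimal requirements for a few techniques in the prototype's two domains
# (measures of association and tests of location).
requirements = {
    "pearson_correlation": {"levels": {"interval", "ratio"}, "arity": 2},
    "t_test": {"levels": {"interval", "ratio"}, "arity": 2},
    "chi_square_association": {"levels": {"nominal", "ordinal"}, "arity": 2},
}

def semantically_valid(technique, variables):
    """Return True only if every variable meets the technique's measurement-level
    requirement; a purely syntactic check would accept any numeric column."""
    req = requirements[technique]
    return (len(variables) == req["arity"]
            and all(metadata[v] in req["levels"] for v in variables))

print(semantically_valid("pearson_correlation", ["age", "weight"]))        # True
print(semantically_valid("pearson_correlation", ["blood_group", "age"]))   # False
```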

Relevance: 80.00%

Abstract:

Computer integrated manufacture has brought about great advances in manufacturing technology and its recognition is worldwide. Cold roll forming of thin-walled sections, and in particular the design and manufacture of form-rolls, the special tooling used in the cold roll forming process, is but one area where computer integrated manufacture can make a positive contribution. The work reported in this thesis, concerned with the development of an integrated manufacturing system for assisting the design and manufacture of form-rolls, was undertaken in collaboration with a leading manufacturer of thin-walled sections. A suite of computer programs, written in FORTRAN 77, has been developed to provide computer aids for every aspect of work in form-roll design and manufacture, including cost estimation and stock control aids. The first phase of the development programme dealt with the establishment of CAD facilities for form-roll design, comprising the design of the finished section, the flower pattern, the roll design and the interactive roll editor program. Concerning the CAM facilities, dealt with in the second phase, an expert system roll machining processor and a general post-processor have been developed for considering the roll geometry and automatically generating NC tape programs for any required CNC lathe system. These programs have been successfully implemented, as an integrated manufacturing software system, on the VAX 11/750 super-minicomputer with graphics facilities for displaying drawings interactively on the terminal screen. The development of the integrated system has been found beneficial in all aspects of form-roll design and manufacture. Design and manufacturing lead times have been reduced by several weeks, quality has improved considerably and productivity has increased. The work has also demonstrated the promising nature of the expert systems approach to computer integrated manufacture.

Relevance: 80.00%

Abstract:

This paper explores the use of the optimization procedures in SAS/OR software with application to the ordered weighted averaging (OWA) operators of decision-making units (DMUs). OWA, originally introduced by Yager (IEEE Trans Syst Man Cybern 18(1):183–190, 1988), has gained much interest among researchers; hence many applications in areas such as decision making, expert systems, data mining, approximate reasoning, fuzzy systems and control have been proposed. On the other hand, SAS is powerful software capable of running various optimization tools, such as linear and non-linear programming with all types of constraints. To facilitate the use of the OWA operator by SAS users, a macro was implemented. The SAS macro developed in this paper selects the criteria and alternatives from a SAS dataset and calculates a set of OWA weights. An example is given to illustrate the features of the SAS/OWA software. © Springer-Verlag 2009.
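
The SAS macro itself is not shown in the abstract; the sketch below (in Python rather than SAS, for brevity) computes Yager's OWA aggregation directly, to make clear what the macro's weights are applied to. The score vectors are illustrative.

```python
import numpy as np

def owa(values, weights):
    """Yager's ordered weighted averaging: the weights are applied to the
    criterion scores after sorting them in descending order, so each weight
    attaches to a rank position rather than to a particular criterion."""
    v = np.sort(np.asarray(values, float))[::-1]
    w = np.asarray(weights, float)
    assert np.isclose(w.sum(), 1.0) and len(v) == len(w)
    return float(v @ w)

scores = [0.7, 0.4, 0.9]                    # one alternative on three criteria
print(owa(scores, [1.0, 0.0, 0.0]))         # "max" (optimistic) attitude -> 0.9
print(owa(scores, [1/3, 1/3, 1/3]))         # plain average               -> 0.667
print(owa(scores, [0.0, 0.0, 1.0]))         # "min" (pessimistic) attitude -> 0.4
```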

Relevance: 80.00%

Abstract:

The aims of the project were twofold: 1) to investigate classification procedures for remotely sensed digital data, in order to develop modifications to existing algorithms and propose novel classification procedures; and 2) to investigate and develop algorithms for contextual enhancement of classified imagery in order to increase classification accuracy. The following classifiers were examined: box, decision tree, minimum distance and maximum likelihood. In addition, the following algorithms were developed during the course of the research: deviant distance, look-up table and an automated decision tree classifier using expert systems technology. Clustering techniques for unsupervised classification were also investigated. The contextual enhancements investigated were: mode filters, small area replacement and Wharton's CONAN algorithm. Additionally, methods for noise- and edge-based declassification and contextual reclassification, non-probabilistic relaxation and relaxation based on Markov chain theory were developed. The advantages of per-field classifiers and Geographical Information Systems were investigated. The conclusions presented suggest suitable combinations of classifier and contextual enhancement, given user accuracy requirements and time constraints. These were then tested for validity using a different data set. A brief examination of the utility of the recommended contextual algorithms for reducing the effects of data noise was also carried out.
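
Two of the simpler techniques the project examined, a minimum-distance classifier and a 3x3 mode filter for contextual smoothing, can be sketched as below on a small synthetic two-band image. This is illustrative only and omits the study's other classifiers and enhancements.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign every pixel to the class whose mean spectrum is nearest."""
    d = np.linalg.norm(pixels[:, :, None, :] - class_means[None, None, :, :], axis=-1)
    return d.argmin(axis=-1)

def mode_filter(labels):
    """3x3 mode filter: replace each interior label by the most common label
    in its neighbourhood, smoothing isolated misclassified pixels."""
    out = labels.copy()
    for i in range(1, labels.shape[0] - 1):
        for j in range(1, labels.shape[1] - 1):
            window = labels[i - 1:i + 2, j - 1:j + 2].ravel()
            out[i, j] = np.bincount(window).argmax()
    return out

rng = np.random.default_rng(2)
means = np.array([[30.0, 60.0], [80.0, 20.0]])            # two classes, two bands
image = means[rng.integers(0, 2, (6, 6))] + rng.normal(0, 10, (6, 6, 2))
labels = minimum_distance_classify(image, means)
print(mode_filter(labels))
```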

Relevance: 80.00%

Abstract:

The primary objective of this research was to understand what kinds of knowledge and skills people use in 'extracting' relevant information from text and to assess the extent to which expert systems techniques could be applied to automate the process of abstracting. The approach adopted in this thesis is based on research in cognitive science, information science, psycholinguistics and text linguistics. The study addressed the significance of domain knowledge and heuristic rules by developing an information extraction system, called INFORMEX. This system, which was implemented partly in SPITBOL and partly in PROLOG, used a set of heuristic rules to analyse five scientific papers of expository type, to interpret the content in relation to the key abstract elements and to extract a set of sentences recognised as relevant for abstracting purposes. The analysis of these extracts revealed that an adequate abstract could be generated. Furthermore, INFORMEX showed that a rule-based system is a suitable computational model for representing experts' knowledge and strategies. This computational technique provided the basis for a new approach to the modelling of cognition, showing how experts tackle the task of abstracting by integrating formal knowledge as well as experiential learning. This thesis demonstrated that empirical and theoretical knowledge can be effectively combined in expert systems technology to provide a valuable starting approach to automatic abstracting.
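
INFORMEX itself was written in SPITBOL and PROLOG and is not reproduced here; the Python sketch below only illustrates the general flavour of heuristic, rule-based sentence selection (cue phrases plus a positional rule). The cue list, weights and sample text are invented for the example.

```python
import re

# Hypothetical cue phrases: sentences containing them are treated as more
# likely to be abstract-worthy, in the spirit of heuristic abstracting rules.
CUE_WEIGHTS = {
    "this paper": 3, "we propose": 3, "results show": 3,
    "in conclusion": 2, "significant": 1, "method": 1,
}

def extract(text, n_sentences=2):
    """Score each sentence with simple cue-phrase and position rules and
    return the top-scoring sentences, in their original order, as an extract."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    scored = []
    for i, s in enumerate(sentences):
        score = sum(w for cue, w in CUE_WEIGHTS.items() if cue in s.lower())
        if i == 0 or i == len(sentences) - 1:        # positional rule: first/last sentence
            score += 1
        scored.append((score, i, s))
    top = sorted(scored, reverse=True)[:n_sentences]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]

paper = ("This paper proposes a rule-based approach to abstracting. "
         "Background material is reviewed first. "
         "Results show that the extracts read as adequate abstracts.")
print(extract(paper))
```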