24 results for Engineering, Industrial|Artificial Intelligence
at University of Queensland eSpace - Australia
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
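The kd-tree acceleration described above targets the E-step of a standard mixture-model EM loop. As a point of reference, the following is a minimal sketch (assuming NumPy) of plain EM for a one-dimensional Gaussian mixture over voxel intensities; the paper's multiresolution kd-tree would replace the per-voxel responsibility computation with computations over groups of nearby voxels.

```python
# Minimal sketch of standard EM for a 1-D Gaussian mixture over voxel
# intensities (plain per-voxel E-step, no kd-tree acceleration).
import numpy as np

def em_gmm_1d(x, k=3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # Initialise mixing weights, means and variances (arbitrary choices).
    pi = np.full(k, 1.0 / k)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x) + 1e-6)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k]; this is the step the kd-tree
        # would approximate for groups of nearby voxels instead of each voxel.
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update parameters from the responsibilities.
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

# Example usage on synthetic two-component data.
data = np.concatenate([np.random.normal(0, 1, 500), np.random.normal(5, 1, 500)])
print(em_gmm_1d(data, k=2))
```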
Abstract:
The expectation-maximization (EM) algorithm has been of considerable interest in recent years as the basis for various algorithms in application areas of neural networks such as pattern recognition. However, there exist some misconceptions concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be adopted to train multilayer perceptron (MLP) and mixture of experts (ME) networks in applications to multiclass classification. We identify some situations where the application of the EM algorithm to train MLP networks may be of limited value and discuss some ways of handling the difficulties. For ME networks, it is reported in the literature that networks trained by the EM algorithm using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step often performed poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood is monotonically increasing when a learning rate smaller than one is adopted. We also propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks. Its performance is demonstrated to be superior to that of the IRLS algorithm on some simulated and real data sets.
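For illustration, the sketch below shows a damped IRLS (Newton) update for binary logistic regression with a step size eta < 1. It is not the paper's exact ME inner-loop formulation, but it shows how a learning rate smaller than one damps the reweighted least-squares updates.

```python
# A minimal sketch of a damped IRLS / Newton update for logistic regression,
# illustrating the stabilising role of a learning rate eta < 1.
import numpy as np

def irls_logistic(X, y, eta=0.5, n_iter=25):
    """X: (n, d) design matrix, y: (n,) labels in {0, 1}, eta: step size."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))               # predicted probabilities
        g = X.T @ (p - y)                               # gradient of the negative log likelihood
        W = p * (1.0 - p)                               # IRLS weights
        H = (X * W[:, None]).T @ X + 1e-8 * np.eye(d)   # Hessian (lightly regularised)
        w -= eta * np.linalg.solve(H, g)                # damped Newton step
    return w
```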
Abstract:
Fault diagnosis has become an important component in intelligent systems, such as intelligent control systems and intelligent eLearning systems. Reiter's diagnosis theory, described by first-order sentences, has attracted much attention in this field. However, descriptions and observations of most real-world situations involve fuzziness because of the incompleteness and uncertainty of knowledge, e.g., the fault diagnosis of student behaviors in eLearning processes. In this paper, an extension of Reiter's consistency-based diagnosis methodology, Fuzzy Diagnosis, is proposed, which is able to deal with incomplete or fuzzy knowledge. A number of important properties of the fuzzy diagnosis schemes have also been established. The computation of fuzzy diagnoses is mapped to the solution of a system of inequalities. Some special cases, abstracted from real-world situations, are discussed. In particular, the fuzzy diagnosis problem in which fuzzy observations are represented by clause-style fuzzy theories is presented, and a method for solving it is given. A student fault diagnostic problem abstracted from a simplified real-world eLearning case is described to demonstrate the application of our diagnostic framework.
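As background, the toy example below sketches crisp, Reiter-style consistency-based diagnosis by brute-force enumeration of minimal abnormality sets for an invented two-gate circuit; the paper's contribution is the fuzzy generalisation of this idea, where candidate diagnoses are checked against a system of inequalities rather than crisp consistency.

```python
# Toy crisp consistency-based diagnosis: find minimal sets of components
# whose assumed abnormality is consistent with the observed output.
# Component names and the circuit are invented for illustration.
from itertools import combinations

def predict(abnormal, inputs):
    """Predict the circuit output; abnormal components are unconstrained,
    so values that cannot be predicted are returned as None."""
    a, b, c = inputs
    x = None if "A1" in abnormal else (a and b)                     # AND gate
    y = None if "O1" in abnormal else ((x or c) if x is not None else None)  # OR gate
    return y

def diagnoses(inputs, observed):
    comps = ["A1", "O1"]
    minimal = []
    for r in range(len(comps) + 1):
        for subset in combinations(comps, r):
            pred = predict(set(subset), inputs)
            consistent = pred is None or pred == observed
            # Keep only minimal consistent abnormality sets.
            if consistent and not any(set(d) <= set(subset) for d in minimal):
                minimal.append(subset)
    return minimal

print(diagnoses((True, True, False), observed=False))  # -> [('A1',), ('O1',)]
```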
Abstract:
This paper highlights the importance of design expertise, including subjective judgments and professional experience, for the design of liquid retaining structures. The design of liquid retaining structures has special features that distinguish it from other structural design tasks. Being more vulnerable to corrosion problems, these structures have stringent requirements with respect to the serviceability limit state of cracking. The premise of the study is to transfer expert knowledge into a computerized blackboard system. Hybrid knowledge representation schemes, including production rules, object-oriented programming, and procedural methods, are employed to express engineering heuristics and standard design knowledge during the development of the knowledge-based system (KBS) for the design of liquid retaining structures. This approach makes it possible to take advantage of the characteristics of each method. The system can provide the user with advice on preliminary design, loading specification, optimized configuration selection and detailed design analysis of liquid retaining structures. The system benefits the field of retaining structure design by focusing on the acquisition and organization of expert knowledge through the application of recent artificial intelligence technology. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
The design of liquid-retaining structures involves many decisions to be made by the designer based on rules of thumb, heuristics, judgement, codes of practice and previous experience. Structural design problems are often ill-structured, and there is a need to develop programming environments that can incorporate engineering judgement along with algorithmic tools. Recent developments in artificial intelligence have made it possible to develop an expert system that can provide expert advice to the user in the selection of design criteria and design parameters. This paper introduces the development of an expert system for the design of liquid-retaining structures using a blackboard architecture. An expert system shell, Visual Rule Studio, is employed to facilitate the development of this prototype system. It is a coupled system combining symbolic processing with traditional numerical processing. The expert system developed is based on the British Standards Code of Practice BS8007. Explanations are provided to help inexperienced designers and civil engineering students learn how to design liquid-retaining structures effectively and sustainably in their design practice. The use of this expert system in disseminating heuristic knowledge and experience to practitioners and engineering students is discussed.
Abstract:
Enterprise systems interoperability (ESI) is currently an important topic for business. This situation is evidenced, at least in part, by the number and extent of potential candidate protocols for such process interoperation, viz., ebXML, BPML, BPEL, and WSCI. Wide-ranging support for each of these candidate standards already exists. However, despite broad acceptance, a sound theoretical evaluation of these approaches has not yet been provided. We use the Bunge-Wand-Weber (BWW) models, in particular the representation model, to provide the basis for such a theoretical evaluation. We, and other researchers, have shown the usefulness of the representation model for analyzing, evaluating, and engineering techniques in the areas of traditional and structured systems analysis, object-oriented modeling, and process modeling. In this work, we address the question: what are the potential semantic weaknesses of using ebXML alone for process interoperation between enterprise systems? We find that users will lack important implementation information because of representational deficiencies; that, due to ontological redundancy, the complexity of the specification is unnecessarily increased; and that users of the specification will have to bring in extra-model knowledge to understand constructs in the specification due to instances of ontological excess.
Abstract:
The Virtual Learning Environment (VLE) is one of the fastest growing areas in educational technology research and development. In order to achieve learning effectiveness, ideal VLEs should be able to identify learning needs and customize solutions, with or without an instructor to supplement instruction. Such systems are called Personalized VLEs (PVLEs). For PVLEs to succeed, comprehensive conceptual models corresponding to PVLEs are essential. Such conceptual modeling development is important because it facilitates early detection and correction of system development errors. Therefore, in order to capture PVLE knowledge explicitly, this paper focuses on the development of conceptual models for PVLEs, including models of knowledge primitives in terms of learner, curriculum, and situational models; models of VLEs in general pedagogical bases; and, in particular, the definition of the ontology of PVLEs based on the constructivist pedagogical principle. Based on these comprehensive conceptual models, a prototype multiagent-based PVLE has been implemented. A field experiment was conducted to investigate learning achievements by comparing personalized and non-personalized systems. The results indicate that the PVLE developed under our comprehensive ontology provides significant learning achievements. These comprehensive models also provide a solid knowledge representation framework for PVLE development practice, guiding the analysis, design, and development of PVLEs. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Single shortest path extraction algorithms have been used in a number of areas such as network flow and image analysis. In image analysis, shortest path techniques can be used for object boundary detection, crack detection, or stereo disparity estimation. Sometimes one needs to find multiple paths as opposed to a single path in a network or an image where the paths must satisfy certain constraints. In this paper, we propose a new algorithm to extract multiple paths simultaneously within an image using a constrained expanded trellis (CET) for feature extraction and object segmentation. We also give a number of application examples for our multiple paths extraction algorithm.
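For context, a minimal single-path version of the problem can be sketched as Dijkstra's algorithm over a 4-connected pixel grid with per-pixel costs (hypothetical setup below); the constrained expanded trellis extracts several such paths simultaneously under constraints, which this sketch does not attempt.

```python
# Single shortest path over a 4-connected pixel grid with non-negative
# per-pixel costs, using Dijkstra's algorithm with a binary heap.
import heapq

def grid_shortest_path(cost, start, goal):
    """cost: 2-D list of non-negative pixel costs; start, goal: (row, col)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

print(grid_shortest_path([[1, 9, 1], [1, 1, 1], [9, 9, 1]], (0, 0), (2, 2)))
```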
Abstract:
A biologically realizable, unsupervised learning rule is described for the online extraction of object features, suitable for solving a range of object recognition tasks. Alterations to the basic learning rule are proposed which allow the rule to better suit the parameters of a given input space. One negative consequence of such modifications is the potential for learning instability. The criteria for such instability are modeled using digital filtering techniques, and the predicted regions of stability and instability are tested. The result is a family of learning rules which can be tailored to the specific environment, improving both convergence times and accuracy over the standard learning rule, while simultaneously ensuring learning stability.
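As a generic stand-in for the learning rule discussed above (the paper's specific rule is not reproduced here), Oja's rule gives a simple online, Hebbian-style unsupervised update whose stability depends on the learning rate.

```python
# Oja's rule: a local, online, Hebbian-style unsupervised update whose
# behaviour (and stability) is governed by the learning rate eta.
import numpy as np

def oja_update(w, x, eta=0.01):
    """One online update; w and x are 1-D arrays of the same length."""
    y = float(w @ x)                  # unit activation
    return w + eta * y * (x - y * w)  # Hebbian growth with implicit normalisation

rng = np.random.default_rng(0)
w = rng.normal(size=4)
for _ in range(2000):
    x = rng.normal(size=4) * np.array([3.0, 1.0, 0.5, 0.2])  # anisotropic input
    w = oja_update(w, x)
print(w)  # approximately aligned with the principal component of the input
```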
Abstract:
Beyond the inherent technical challenges, current research into the three-dimensional surface correspondence problem is hampered by a lack of uniform terminology, an abundance of application-specific algorithms, and the absence of a consistent model for comparing existing approaches and developing new ones. This paper addresses these challenges by presenting a framework for analysing, comparing, developing, and implementing surface correspondence algorithms. The framework uses five distinct stages to establish correspondence between surfaces. It is general, encompassing a wide variety of existing techniques, and flexible, facilitating the synthesis of new correspondence algorithms. This paper presents a review of existing surface correspondence algorithms and shows how they fit into the correspondence framework. It also shows how the framework can be used to analyse and compare existing algorithms and to develop new algorithms using the framework's modular structure. Six algorithms, four existing and two new, are implemented using the framework. Each implemented algorithm is used to match a number of surface pairs. Results demonstrate that the correspondence framework implementations are faithful implementations of existing algorithms, and that powerful new surface correspondence algorithms can be created. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
Power systems are large-scale nonlinear systems with high complexity. Various optimization techniques and expert systems have been used in power system planning. However, there are always some factors that cannot be quantified, modeled, or even expressed by expert systems. Moreover, such planning problems are often large-scale optimization problems. Although computational algorithms that are capable of handling high-dimensional problems can be used, the computational costs remain very high. To address these problems, this paper investigates the efficiency and effectiveness of combining mathematical algorithms with human intelligence. It has been found that humans can join the decision-making process through cognitive feedback. Based on cognitive feedback and the genetic algorithm, a new algorithm called the cognitive genetic algorithm is presented. This algorithm can clarify and extract human cognition. As an important application of this cognitive genetic algorithm, a practical decision method for power distribution system planning is proposed. With this decision method, optimal results that satisfy human expertise can be obtained while the limitations of human experts are minimized.
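The sketch below is a toy genetic algorithm with a hook through which an expert-preference term can be added to the fitness function. It only loosely illustrates the idea of injecting human feedback into the optimization loop and is not the paper's cognitive genetic algorithm; the objective and preference functions are invented for illustration.

```python
# Toy genetic algorithm with an optional expert-preference term in the fitness.
import random

def fitness(x, expert_bias=None):
    objective = -sum((xi - 0.5) ** 2 for xi in x)         # toy objective to maximise
    preference = expert_bias(x) if expert_bias else 0.0   # hook for human feedback
    return objective + preference

def evolve(pop_size=30, length=5, gens=50, expert_bias=None):
    pop = [[random.random() for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda x: fitness(x, expert_bias), reverse=True)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                     # one-point crossover
            i = random.randrange(length)
            child[i] += random.gauss(0, 0.05)             # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda x: fitness(x, expert_bias))

best = evolve(expert_bias=lambda x: -abs(x[0] - 0.3))     # toy "expert preference"
print(best)
```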
Abstract:
Information about the world is often represented in the brain in the form of topographic maps. A paradigm example is the topographic representation of the visual world in the optic tectum/superior colliculus. This map initially forms during neural development using activity-independent molecular cues, most notably some type of chemospecific matching between molecular gradients in the retina and corresponding gradients in the tectum/superior colliculus. Exactly how this process might work has been studied both experimentally and theoretically for several decades. This review discusses the experimental data briefly, and then in more detail the theoretical models proposed. The principal conclusions are that (1) theoretical models have helped clarify several important ideas in the field, (2) earlier models were often more sophisticated than more recent models, and (3) substantial revisions to current modelling approaches are probably required to account for more than isolated subsets of the experimental data.
Abstract:
Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted through a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to account for possible variations due to handwriting styles and moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models by considering a rule for each input feature in the first formulation (multiple rules) and a single rule for all input features in the second formulation. In this work, we found that the TS model with multiple rules is better than the TS model with a single rule for detecting three types of forgeries (random, skilled, and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches, using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
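As a generic illustration of Takagi-Sugeno evaluation (not the paper's specific formulation with angle features and structural parameters), the sketch below computes a TS model output as a membership-weighted combination of linear rule consequents, with exponential memberships acting as the rule weights; all parameter values are invented.

```python
# Generic Takagi-Sugeno model evaluation: exponential memberships weight
# the linear consequent of each rule, and the output is their normalised sum.
import numpy as np

def ts_output(x, centres, widths, coeffs, intercepts):
    """x: (d,) feature vector; k rules with centres, widths, coeffs of shape
    (k, d) and intercepts of shape (k,)."""
    # Exponential (Gaussian-like) membership value of x under each rule.
    w = np.exp(-np.sum(((x - centres) / widths) ** 2, axis=1))
    y = coeffs @ x + intercepts               # linear consequent of each rule
    return float(w @ y / (w.sum() + 1e-12))   # membership-weighted average

# Example with two rules over a two-dimensional feature vector.
x = np.array([0.2, 0.7])
centres = np.array([[0.0, 0.5], [1.0, 1.0]])
widths = np.ones((2, 2))
coeffs = np.array([[1.0, 0.0], [0.0, 1.0]])
intercepts = np.array([0.0, 0.5])
print(ts_output(x, centres, widths, coeffs, intercepts))
```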
Abstract:
Multiresolution Triangular Mesh (MTM) models are widely used to improve the performance of large terrain visualization by replacing the original model with a simplified one. MTM models, which consist of both original and simplified data, are commonly stored in spatial database systems due to their size. The relatively slow access speed of disks makes data retrieval the bottleneck of such terrain visualization systems. Existing spatial access methods proposed to address this problem rely on main-memory MTM models, which leads to significant overhead during query processing. In this paper, we approach the problem from a new perspective and propose a novel MTM, called the direct mesh, that is designed specifically for secondary storage. It supports existing indexing methods natively and requires no modification to the MTM structure. Experimental results, based on two real-world data sets, show an average performance improvement of 5-10 times over existing methods.