70 results for Intelligent CAD
Abstract:
The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing the above-mentioned challenges. The proposed model consists of a feed-forward neural network whose training targets are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are presented to the neural network as the “training target” and “training input”, respectively. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input.
Although the concept proposed may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdowns. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model with that of four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model, but which neglect suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecast. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds the promise of increased asset availability, maintenance cost effectiveness, operational safety and – ultimately – organisational competitiveness.
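As background to the adapted estimator described above, the classical Kaplan-Meier product-limit estimator already distinguishes failures from suspensions (censored histories). The sketch below is a minimal illustration of that classical starting point, not the paper's adapted version: suspended units shrink the risk set without contributing a survival-probability step.

```python
def kaplan_meier(events):
    """Estimate a survival curve from (time, failed) pairs.

    `failed` is True for a failure, False for a suspension (censoring).
    Returns a list of (time, survival_probability) at each failure time.
    """
    # At tied times, process failures before suspensions (usual KM convention).
    events = sorted(events, key=lambda e: (e[0], not e[1]))
    at_risk = len(events)          # units still operating
    survival = 1.0
    curve = []
    for time, failed in events:
        if failed:
            survival *= (at_risk - 1) / at_risk   # product-limit step
            curve.append((time, survival))
        at_risk -= 1               # unit leaves the risk set either way
    return curve
```

For example, with two failures and two suspensions, `kaplan_meier([(2, True), (3, False), (5, True), (7, False)])` steps down only at the failure times 2 and 5, while the suspension at time 3 still reduces the risk set for the later failure.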
Abstract:
The wavelet packet transform decomposes a signal into a set of bases for time–frequency analysis. This decomposition creates an opportunity for implementing distributed data mining, where features extracted from different wavelet packet bases serve as feature vectors for applications. This paper presents a novel approach to integrated machine fault diagnosis based on localised wavelet packet bases of vibration signals. The best basis is first determined according to its classification capability. Data mining is then applied to extract features, and local decisions are drawn using Bayesian inference. A final conclusion is reached using a weighted average method in data fusion. A case study on rolling element bearing diagnosis shows that this approach can greatly improve the accuracy of diagnosis.
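To make the decompose-then-fuse pipeline concrete, here is a minimal sketch using the Haar wavelet (chosen for simplicity; the paper does not specify the mother wavelet in the abstract) and a weighted-average fusion of local class posteriors. The best-basis selection and Bayesian inference steps are omitted.

```python
def haar_step(signal):
    """One Haar analysis step: averaged (approximation) and differenced (detail) halves."""
    a = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return a, d

def wavelet_packet(signal, depth):
    """Full wavelet packet tree: split every node at every level.

    Returns the 2**depth leaf bases, each a time-localised sub-band.
    """
    level = [signal]
    for _ in range(depth):
        nxt = []
        for node in level:
            a, d = haar_step(node)
            nxt.extend([a, d])
        level = nxt
    return level

def fuse(local_posteriors, weights):
    """Weighted-average fusion of per-basis class posteriors into one decision."""
    total = sum(weights)
    classes = local_posteriors[0].keys()
    return {c: sum(w * p[c] for w, p in zip(weights, local_posteriors)) / total
            for c in classes}
```

In use, each leaf basis would yield a feature vector and a local posterior over fault classes; `fuse` then combines the local decisions, with weights reflecting each basis's classification capability.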
Abstract:
We propose to design a Custom Learning System that responds to the unique needs and potentials of individual students, regardless of their location, abilities, attitudes, and circumstances. This project is intentionally provocative and future-looking, but it is not unrealistic or unfeasible. We propose that by combining complex learning databases with a learner’s personal data, we could provide all students with a personal, customizable, and flexible education. This paper presents the initial research undertaken for this project, whose main challenges were to broadly map the complex web of data available, to identify what logic models are required to make the data meaningful for learning, and to translate this knowledge into simple and easy-to-use interfaces. The ultimate outcome of this research will be a series of candidate user interfaces and a broad system logic model for a new smart system for personalized learning. This project is student-centered, not techno-centric, aiming to deliver innovative solutions for learners and schools. It is deliberately future-looking, allowing us to ask questions that take us beyond the limitations of today to motivate new demands on technology.
Abstract:
One of the main aims of artificial intelligence research is to develop robust and efficient optimisation methods for Multi-Objective (MO) and Multidisciplinary Design Optimisation (MDO) problems. This paper investigates two optimisation techniques for multi-objective design optimisation problems. The first is the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). The second combines the concepts of Nash equilibrium and Pareto optimality with Multi-Objective Evolutionary Algorithms (MOEAs), and is denoted Hybrid-Game. Numerical results from the two approaches are compared in terms of model quality and computational expense. The benefit of using the distributed Hybrid-Game methodology for multi-objective design problems is demonstrated.
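The core of NSGA-II is the non-dominated sort that ranks candidate designs into successive Pareto fronts. A minimal quadratic-time sketch for minimisation problems (not the paper's implementation, which would use the fast book-keeping variant) is:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts; front 0 is the non-dominated set."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # A point belongs to the current front if nothing still remaining dominates it.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

NSGA-II then fills the next population front by front, breaking ties within a front by crowding distance to preserve spread along the Pareto front.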
Abstract:
Design teams are confronted with the quandary of choosing apposite building control systems to suit the needs of particular intelligent building projects, owing to the availability of innumerable ‘intelligent’ building products and a dearth of inclusive evaluation tools. This paper develops a model for facilitating the selection evaluation of intelligent HVAC control systems for commercial intelligent buildings. To achieve these objectives, systematic research activities were conducted: first, to develop, test and refine the general conceptual model using consecutive surveys; then, to convert the developed conceptual framework into a practical model; and, finally, to evaluate the effectiveness of the model by means of expert validation. The surveys indicate that ‘total energy use’ is perceived as the top selection criterion, followed by ‘system reliability and stability’, ‘operating and maintenance costs’, and ‘control of indoor humidity and temperature’. This research not only presents a systematic and structured approach to evaluating candidate intelligent HVAC control systems against the critical selection criteria (CSC), but also suggests a benchmark for the selection of one control system candidate against another.
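A selection model of this kind typically reduces to scoring each candidate system against weighted criteria. The sketch below is a generic weighted-sum illustration only; the weights and ratings are invented, and the paper's validated model may weight and aggregate the CSC differently.

```python
def score_candidate(ratings, weights):
    """Normalised weighted-sum score of one HVAC control system candidate.

    `ratings` maps each criterion to a 0..1 assessment of the candidate;
    `weights` maps each criterion to its relative importance.
    """
    total_w = sum(weights.values())
    return sum(weights[c] * ratings[c] for c in weights) / total_w
```

Ranking candidates then amounts to computing this score for each and comparing, which is the kind of benchmark comparison the abstract describes.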
Abstract:
In an open railway access market, the provision of railway infrastructure and the operation of train services are separated and independent. Negotiations between the track owner and train service providers are thus required for the allocation of track capacity and the formulation of service timetables, in which each party, i.e. a stakeholder, applies intelligence gained from previous negotiation experience to obtain favourable terms and conditions for track access. In order to analyse the realistic interacting behaviour among stakeholders in open railway access market schedule negotiations, intelligent learning capability should be included in the behaviour modelling. This paper presents a reinforcement learning approach to modelling intelligent negotiation behaviour. The effectiveness of incorporating learning capability in stakeholder negotiation behaviour is then demonstrated through simulation.
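As a minimal illustration of the reinforcement-learning idea, the sketch below trains a single stakeholder with tabular Q-learning in a toy single-round negotiation. The offer levels and payoffs are invented for illustration; the paper's actual state, action and reward design is not given in the abstract.

```python
import random

def q_learning_negotiation(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular Q-learning for a toy one-shot track-access negotiation.

    Hypothetical actions are offer levels; the assumed payoffs favour the
    middle offer, standing in for terms acceptable to both track owner
    and service provider. No discounting: each episode is a single round.
    """
    rng = random.Random(seed)
    actions = ["low_offer", "mid_offer", "high_offer"]
    reward = {"low_offer": 0.0, "mid_offer": 1.0, "high_offer": 0.3}  # assumed
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known offer, sometimes explore.
        a = rng.choice(actions) if rng.random() < epsilon else max(q, key=q.get)
        q[a] += alpha * (reward[a] - q[a])  # one-step Q update
    return q
```

Over repeated negotiations the learned Q-values converge towards the payoffs, so the stakeholder's greedy policy settles on the mutually acceptable offer.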
Abstract:
Advances in data mining have provided techniques for automatically discovering underlying knowledge and extracting useful information from large volumes of data. Data mining offers tools for quick discovery of relationships, patterns and knowledge in large, complex databases. The application of data mining to manufacturing is relatively limited, mainly because of the complexity of manufacturing data. The growing self-organizing map (GSOM) algorithm has proven to be efficient for analysing unsupervised DNA data; however, it has produced unsatisfactory clustering when used on some large manufacturing data. This paper proposes a data mining methodology using a GSOM tool developed with a modified GSOM algorithm. The proposed method is used to generate clusters of good and faulty products from a manufacturing dataset. The clustering quality (CQ) measure proposed in the paper is used to evaluate the performance of the cluster maps. The paper also proposes automatic identification of variables to find the most probable causative factor(s) discriminating between good and faulty products by quickly examining the historical manufacturing data. The proposed method enables manufacturers to smooth the production flow and improve the quality of their products. Simulation results on small and large manufacturing data show the effectiveness of the proposed method.
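The paper's CQ measure is not defined in the abstract; as a generic stand-in, cluster purity scores how cleanly a cluster map separates good from faulty products, which is the kind of evaluation the abstract describes.

```python
from collections import Counter

def cluster_purity(assignments, labels):
    """Fraction of items whose label matches their cluster's majority label.

    A generic clustering-quality stand-in for good/faulty product maps;
    not the paper's own CQ measure, which the abstract does not specify.
    """
    by_cluster = {}
    for cluster, label in zip(assignments, labels):
        by_cluster.setdefault(cluster, []).append(label)
    # Count, per cluster, how many items carry the cluster's majority label.
    matched = sum(Counter(labs).most_common(1)[0][1] for labs in by_cluster.values())
    return matched / len(labels)
```

A purity of 1.0 means every cluster contains only good or only faulty products; values near 0.5 on two balanced classes indicate the map fails to discriminate.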
Abstract:
“What did you think you were doing?” was the question posed to me by the conference organizers, as the inventor and constructor of the first working Tangible Interfaces over 40 years ago. I think the question was intended to encourage me to talk about the underlying ideas and intentionality rather than describe an endless sequence of electronic bricks, and that is what I shall do in this presentation. In the sixties the prevalent idea for a graphics interface was an analogue of sketching, which was somehow to be understood by the computer as three-dimensional form. I rebelled against this notion for reasons which I will explain in the presentation, and instead came up with tangible, physical, three-dimensional intelligent objects. I called these first prototypes “Intelligent Physical Modelling Systems”, which is a really dumb name for an obvious concept. I am eternally grateful to Hiroshi Ishii for coining the term “Tangible User Interfaces” – the same idea but with a much smarter name. Another motivator was user involvement in the design process, and that led to the Generator project (1979) with Cedric Price for the world’s first intelligent building, capable of organizing itself in response to the appetites of its users. The working model of that project is in MoMA. The same motivation led to a self-builders’ design kit (1980) for Walter Segal, which enabled self-builders to design their own houses. And indeed, as the organizers’ question implied, the motivation and intentionality of these projects developed over the years in step with advancing technology. The speaker will attempt to articulate these changes with medical, psychological and educational examples, much of this later work stemming from the Media Lab where we are talking. Related topics such as “tangible thinking” and “intelligent teacups” will be introduced, and the presentation will end with some speculations about the future.
The presentation will be given against a background of images of early prototypes many of which have never been previously published.
Abstract:
This paper argues for a model of open-systems evolution, based on evolutionary thermodynamics and complex-systems science, as a design paradigm for sustainable architecture. The mechanism of open-system evolution is specified through mathematical simulations and theoretical discourse. According to this mechanism, the authors propose an intelligent building model of sustainable design built on a holistic information system linking the end-users, the building and nature. This information system is used to control the consumption of energy and material resources in the building system at the microscopic scale, and to adapt the environmental performance of the building system to the natural environment at the macroscopic scale, for an evolutionary emergence of sustainable performance in buildings.