928 results for control engineering computing
Abstract:
The purpose of the work reported here was to investigate the application of neural control to a common industrial process. The chosen problem was the control of a batch distillation. In the first phase towards deployment, a complex software simulation of the process was controlled. Initially, the plant was modelled with a neural emulator. The neural emulator was used to train a neural controller using the backpropagation through time algorithm. A high accuracy was achieved with the emulator after a large number of training epochs. The controller converged more rapidly, but its performance varied more widely over its operating range. However, the controlled system was relatively robust to changes in ambient conditions.
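The two-phase scheme described (fit an emulator first, then train the controller by backpropagating through it) can be sketched for a toy scalar plant. The linear least-squares "emulator" and proportional controller below are illustrative stand-ins for the paper's neural models, not its actual networks:

```python
import numpy as np

# Toy scalar plant x[t+1] = a*x[t] + b*u[t]; a, b are unknown to the designer.
a_true, b_true = 0.9, 0.5
rng = np.random.default_rng(0)

# Phase 1: fit an emulator of the plant from input/output data
# (a linear least-squares stand-in for the paper's neural emulator).
X, U, Y = [], [], []
x = 1.0
for _ in range(200):
    u = rng.normal()
    X.append(x); U.append(u)
    x = a_true * x + b_true * u
    Y.append(x)
a_hat, b_hat = np.linalg.lstsq(np.column_stack([X, U]), np.array(Y), rcond=None)[0]

# Phase 2: train a proportional controller u = -k*x by backpropagating
# the cost J = sum_t x[t]^2 through the *emulator* rollout (BPTT).
k, lr, T = 0.0, 0.05, 20
for _ in range(200):
    c = a_hat - b_hat * k                  # closed-loop coefficient under the emulator
    xs = [c ** (t + 1) for t in range(T)]  # rollout from x0 = 1
    # Chain rule through the unrolled rollout: d x[t] / dk = -(t+1) * b_hat * c**t
    grad = sum(2 * xs[t] * (-(t + 1) * b_hat * c ** t) for t in range(T))
    k -= lr * grad
```

Gradient descent drives the closed-loop coefficient a_hat - b_hat*k towards zero, i.e. the controller learned through the emulator regulates the state to the origin.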
Abstract:
We introduce a novel inversion-based neuro-controller for solving control problems involving uncertain nonlinear systems that could also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty for the outputs of neural networks can be obtained using the statistical properties of networks. More generally, multicomponent distributions can be modelled by the mixture density network. In this work a novel robust inverse control approach is obtained based on importance sampling from these distributions. This importance sampling provides a structured and principled approach to constrain the complexity of the search space for the ideal control law. The performance of the new algorithm is illustrated through simulations with example systems.
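The importance-sampling idea can be illustrated with a toy multi-valued plant (y = u², whose inverse has two branches). The hand-set two-component Gaussian mixture below stands in for a trained mixture density network; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(u):
    # Toy multi-valued plant: y = u**2, so the inverse has branches u = +/- sqrt(y)
    return u ** 2

def mdn_inverse(y_target):
    # Stand-in for a trained mixture density network: returns mixing
    # coefficients, means and std-devs over candidate controls for y_target.
    root = np.sqrt(y_target)
    return (np.array([0.5, 0.5]),       # mixing coefficients
            np.array([root, -root]),    # component means (one per branch)
            np.array([0.2, 0.2]))       # component std-devs

def inverse_control(y_target, n_samples=500):
    pi, mu, sigma = mdn_inverse(y_target)
    comp = rng.choice(len(pi), size=n_samples, p=pi)
    u = rng.normal(mu[comp], sigma[comp])        # sample candidate controls
    w = np.exp(-(forward(u) - y_target) ** 2)    # importance weights via forward model
    return u[np.argmax(w)]                       # best-weighted candidate

u_star = inverse_control(4.0)   # close to +2 or -2
```

Sampling only where the inverse-model mixture places probability mass is what constrains the search space; the forward model then re-weights the candidates.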
Abstract:
In an exploding and fluctuating construction market, managers face the challenge of managing business on a wider scale and of using modern developments in information technology to improve productivity. The extraordinary development of telecommunications and computer technology makes it possible to plan, lead, control, organize and manage projects from a distance, without the need to be on site daily. A modern management approach known as distance management (DM), or remote management, is emerging. Physical distance no longer determines the boundary of management, since managers can now operate projects through virtual teams that organize manpower, material and production without face-to-face communication. What organizational prototype could overcome psychological and physical barriers and re-engineer a successful project through information technology? What criteria distinguish the appropriate mode of communication for individual activities within a team, and assist the integration of efficient and effective communication across face-to-face and physically distant settings? The methodology is explained through a case application to refuse incineration plant projects in Taiwan.
Abstract:
In this paper we explore the practical use of neural networks for controlling complex non-linear systems. The system used to demonstrate this approach is a simulation of a gas turbine engine typical of those used to power commercial aircraft. The novelty of the work lies in the requirement for multiple controllers which are used to maintain system variables in safe operating regions as well as governing the engine thrust.
Abstract:
The work presented in this thesis describes an investigation into the production and properties of thin amorphous C films, with and without Cr doping, as a low wear / friction coating applicable to MEMS and other micro- and nano-engineering applications. Firstly, an assessment was made of the available testing techniques. Secondly, the optimised test methods were applied to a series of sputtered films of thickness 10 - 2000 nm in order to: (i) investigate the effect of thickness on the properties of the coatings/coating process, (ii) investigate fundamental tribology at the nano-scale, and (iii) provide a starting point for nanotribological coating optimisation at ultra-low thickness. The use of XPS was investigated for the determination of sp3/sp2 carbon bonding. Under C 1s peak analysis, significant errors were identified, attributed to insufficient instrument resolution to resolve the component peak structure (even with a high-resolution instrument). A simple peak width analysis and correlation work with the C KLL D value confirmed the errors. The use of XPS for sp3/sp2 determination was therefore limited to initial tentative estimations. Nanoindentation was shown to provide consistent hardness and reduced modulus results with depth (to < 7 nm) when replicate data were suitably statistically processed. No significant pile-up or cracking of the films was identified under nanoindentation. Nanowear experimentation by multiple nanoscratching provided some useful information; however, the test conditions were very different from those expected for MEMS and micro- / nano-engineering systems. A novel 'sample oscillated nanoindentation' system was developed for testing nanowear under more relevant conditions. The films were produced in an industrial production coating line.
In order to maximise the available information and to take account of uncontrolled process variation, a statistical design-of-experiments procedure was used to investigate the effect of four key process control parameters. Cr doping was the most significant control parameter at all thicknesses tested; it produced a softening effect and thus increased nanowear. Substrate bias voltage was also a significant parameter, producing hardening and a wear-reducing effect at all thicknesses tested. The use of a Cr adhesion layer produced beneficial results at 150 nm thickness, but was ineffective at 50 nm. Argon flow to the coating chamber produced a complex effect. All effects reduced significantly with reducing film thickness. Classic fretting wear was produced at low amplitude under nanowear testing. Reciprocating sliding at higher amplitude generated three-body abrasive wear, generally consistent with the Archard model. Specific wear rates were very low (typically 10^-16 - 10^-18 m^3 N^-1 m^-1). Wear rates reduced exponentially with reduced film thickness, and below approximately 20 nm, thickness was identified as the most important determinant of wear.
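For scale, specific wear rates in the quoted range can be turned into removed volumes via the Archard-type relation V = k·F·s. This is a back-of-the-envelope sketch; the load and sliding distance below are assumed for illustration, not the thesis's test values:

```python
# Archard-type relation: wear volume V = k * F * s, with k the specific wear
# rate (m^3 N^-1 m^-1), F the normal load (N) and s the sliding distance (m).
def wear_volume(k, load_n, distance_m):
    return k * load_n * distance_m

# Illustrative values only (load and distance are assumed, not measured):
k = 1e-17      # specific wear rate, inside the reported 10^-16 - 10^-18 range
F = 1e-3       # 1 mN normal load, plausible for a nano-scale contact
s = 0.1        # 100 mm total sliding distance
V = wear_volume(k, F, s)   # roughly 1e-21 m^3 of material removed
```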
Abstract:
The success of mainstream computing is largely due to the widespread availability of general-purpose architectures and of generic approaches that can be used to solve real-world problems cost-effectively and across a broad range of application domains. In this chapter, we propose that a similar generic framework be used to make the development of autonomic solutions cost-effective, and to establish autonomic computing as a major approach to managing the complexity of today's large-scale systems and systems of systems. To demonstrate the feasibility of general-purpose autonomic computing, we introduce a generic autonomic computing framework comprising a policy-based autonomic architecture and a novel four-step method for the effective development of self-managing systems. A prototype implementation of the reconfigurable policy engine at the core of our architecture is then used to develop autonomic solutions for case studies from several application domains. Looking into the future, we describe a methodology for the engineering of self-managing systems that extends and generalises our autonomic computing framework further.
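A policy-driven autonomic loop of the kind described can be sketched minimally as (condition, action) pairs evaluated against monitored state. The policy format and action names below are illustrative, not the chapter's actual policy engine:

```python
# Minimal sketch of a policy-driven autonomic control loop: policies are
# (condition, action) pairs evaluated against monitored system state.
def run_policies(state, policies):
    actions = []
    for condition, action in policies:
        if condition(state):          # analyse: does this policy fire?
            actions.append(action)    # plan: queue the corrective action
    return actions                    # an execute step would apply these

# Example: hypothetical resource-management policies for a server pool
policies = [
    (lambda s: s["cpu"] > 0.8, "scale_out"),
    (lambda s: s["cpu"] < 0.2, "scale_in"),
]
```

Reconfigurability in such an engine amounts to swapping the policy list at run-time while the monitor/analyse/plan/execute loop stays fixed.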
Abstract:
This paper investigates how existing software engineering techniques can be employed, adapted and integrated for the development of systems of systems. Starting from existing system-of-systems (SoS) studies, we identify computing paradigms and techniques that have the potential to help address the challenges associated with SoS development, and propose an SoS development framework that combines these techniques in a novel way. This framework addresses the development of a class of IT systems of systems characterised by high variability in the types of interactions between their component systems, and by relatively small numbers of such interactions. We describe how the framework supports the dynamic, automated generation of the system interfaces required to achieve these interactions, and present a case study illustrating the development of a data-centre SoS using the new framework.
Abstract:
In construction projects, the aim of project control is to ensure that projects finish on time, within budget, and achieve other project objectives. During the last few decades, numerous project control methods have been developed and adopted by project managers in practice. However, many existing methods focus on describing what the processes and tasks of project control are, not on how these tasks should be conducted. There is also a potential gap between the principles that underlie these methods and project control practice. As a result, time and cost overruns are still common in construction projects, partly attributable to deficiencies of existing project control methods and difficulties in implementing them. This paper describes a new project cost and time control model, the project control and inhibiting factors management (PCIM) model, developed through a study involving extensive interaction with construction practitioners in the UK, which better reflects the real needs of project managers. A good-practice checklist is also developed to facilitate implementation of the model. © 2013 American Society of Civil Engineers.
Abstract:
This research thesis is concerned with the human factors aspects of industrial alarm systems within human supervisory control tasks. Typically such systems are located in central control rooms, and the information may be presented via visual display units. The thesis develops a human-centred, rather than engineering-centred, approach to the assessment, measurement and analysis of the situation. A human factors methodology was employed to investigate the human requirements through interviews, questionnaires, observation and controlled experiments. Based on the analysis of current industrial alarm systems in a variety of domains (power generation, manufacturing and coronary care), it is suggested that designers often do not pay due consideration to the human requirements, and that most alarm systems have severe shortcomings in human factors terms. The interviews, questionnaires and observations led to the proposal of 'alarm initiated activities' as a framework for the research to proceed. The framework comprises six main stages: observe, accept, analyse, investigate, correct and monitor. This framework served as a basis for laboratory research into alarm media. Under consideration were speech-based and visual alarm displays; non-speech auditory displays were the subject of a literature review. The findings suggest that care needs to be taken when selecting the alarm medium: ideally it should be chosen to support the task requirements of the operator, rather than being arbitrarily assigned. It was also indicated that there may be some interference between the alarm initiated activities and the alarm media, i.e. information that supports one particular stage of alarm handling may interfere with another.
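The six-stage 'alarm initiated activities' framework can be sketched as a simple stage sequence; this is an illustrative encoding, not the thesis's formal model:

```python
# The six alarm-initiated activity stages as a simple ordered sequence.
STAGES = ["observe", "accept", "analyse", "investigate", "correct", "monitor"]

def next_stage(current):
    """Return the stage that follows `current`, or None once handling ends."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```

An alarm display that supports, say, the 'analyse' stage can then be assessed for whether it interferes with the adjacent 'accept' or 'investigate' stages.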
Abstract:
The main theme of this project is the study of neural networks for the control of uncertain and non-linear control systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints, while ensuring good performance. A great part of this project is devoted to opening frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. to design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. to design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. to develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models for many non-linear systems for use in adaptive control. Properties of static neural networks, which enabled successful design of stable adaptive control in the state feedback case, are also identified. A survey of the existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
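The self-tuning skeleton (identify the plant on-line, then tune the controller from the current estimates) can be sketched with recursive least squares on a toy linear plant. The neural-network mapping described in the abstract would replace this linear model for non-linear plants; the plant values here are illustrative:

```python
import numpy as np

# On-line identification step of a self-tuning controller: recursive least
# squares estimating a, b in y[t] = a*y[t-1] + b*u[t-1].
a_true, b_true = 0.7, 1.2          # toy plant, unknown to the estimator
rng = np.random.default_rng(2)

theta = np.zeros(2)                # parameter estimates [a_hat, b_hat]
P = np.eye(2) * 100.0              # estimate covariance (large = uninformed prior)
y = 0.0
for _ in range(200):
    u = rng.normal()                          # excitation input
    phi = np.array([y, u])                    # regressor [y[t-1], u[t-1]]
    y = a_true * y + b_true * u               # plant response
    K = P @ phi / (1.0 + phi @ P @ phi)       # RLS gain
    theta = theta + K * (y - phi @ theta)     # correct estimates by prediction error
    P = P - np.outer(K, phi @ P)              # shrink covariance

a_hat, b_hat = theta
```

In a full self-tuning loop, the control law (e.g. a generalised minimum variance controller) is recomputed from (a_hat, b_hat) at each step.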
Abstract:
The present study describes a pragmatic approach to the implementation of production planning and scheduling techniques in foundries of all types and looks at the use of 'state-of-the-art' management control and information systems. Following a review of systems for the classification of manufacturing companies, a definitive statement is made which highlights the important differences between foundries (i.e. 'component makers') and other manufacturing companies (i.e. 'component buyers'). An investigation of the manual procedures which are used to plan and control the manufacture of components reveals the inherent problems facing foundry production management staff, which suggests the unsuitability of many manufacturing techniques which have been applied to general engineering companies. From the literature it was discovered that computer-assisted systems are required which are primarily 'information-based' rather than 'decision-based', whilst the availability of low-cost computers and 'packaged software' has enabled foundries to 'get their feet wet' without the financial penalties which characterized many of the early attempts at computer assistance (i.e. pre-1980). Moreover, no evidence of a single methodology for foundry scheduling emerged from the review. A philosophy for the development of a CAPM system is presented, which details the essential information requirements and puts forward proposals for the subsequent interactions between types of information and the sub-systems of CAPM which they support. The work developed was oriented specifically at the functions of production planning and scheduling and introduces the concept of 'manual interaction' for effective scheduling. The techniques developed were designed to use the information which is readily available in foundries and were found to be practically successful following their implementation in a wide variety of foundries.
The limitations of the techniques developed are subsequently discussed within the wider issues which form a CAPM system, prior to a presentation of the conclusions which can be drawn from the study.
Abstract:
An investigation was carried out of the different approaches used by Expert Systems researchers to solve problems in the domain of Mechanical Design and Expert Systems. The techniques used for conventional formal logic programming were compared with those used when applying Expert Systems concepts. A literature survey of design processes was also conducted with a view to adopting a suitable model of the design process. A model, comprising a variation on two established ones, was developed and applied to a problem within what are described as class 3 design tasks. The research explored the application of these concepts to Mechanical Engineering Design problems and their implementation on a microcomputer using an Expert System building tool. It was necessary to explore the use of Expert Systems in this manner so as to bridge the gap between their use as a control structure and their use for detailed analytical design. The former application is well researched, and this thesis discusses the latter. Some Expert System building tools available to the author at the beginning of his work were evaluated specifically for their suitability for Mechanical Engineering design problems. Microsynics was found to be the most suitable platform on which to implement a design problem because of its simple but powerful semantic net knowledge representation structure and its ability to use other types of representation scheme. Two major implementations were carried out: the first a design program for a helical compression spring, and the second a gear-pair system design. Two concepts were proposed in the thesis for the modelling and implementation of design systems involving many equations. The method proposed enables equation manipulation and analysis using a combination of frames, semantic nets and production rules.
The use of semantic nets for purposes other than psychology and natural language interpretation is quite new and represents one of the author's major contributions to knowledge. The development of a purpose-built shell program for this type of design problem was recommended as an extension of the research; Microsynics may usefully be used as a platform for this development.
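A semantic net of the kind used here can be sketched as labelled edges between concept nodes. The spring-design relations below are invented for illustration, not taken from the actual Microsynics implementation:

```python
# A semantic net as labelled edges between concept nodes: the key is a
# (node, relation) pair, the value the list of related nodes.
net = {
    ("spring", "has_parameter"): ["wire_diameter", "coil_diameter", "n_coils"],
    ("spring", "has_property"): ["stiffness"],
    ("stiffness", "computed_from"): ["wire_diameter", "coil_diameter", "n_coils"],
}

def related(node, relation):
    """Follow one labelled relation from a node; empty list if none."""
    return net.get((node, relation), [])
```

Production rules can then traverse such relations (e.g. from a property to the parameters it is computed from) to decide which equation to manipulate next.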
Abstract:
This research primarily focused on identifying the formulation parameters which control the efficacy of liposomes as delivery systems to enhance the delivery of poorly soluble drugs. Preliminary studies focused on the drug loading of ibuprofen within vesicle systems. Initially both liposomal and niosomal formulations were screened for their drug-loading capacity: liposomal systems were shown to offer significantly higher ibuprofen loading, and thereafter lipid-based systems were further investigated. Given the key role cholesterol is known to play in the stability of bilayer vesicles, the optimum cholesterol content in terms of drug loading and release of poorly soluble drugs was then investigated. From these studies a concentration of 11 total molar % cholesterol was used as a benchmark for all further formulations. Investigating the effect of liposome composition on several low-solubility drugs, drug loading was shown to be enhanced by adopting longer chain length lipids, cationic lipids, and decreasing drug molecular weight. Drug release was increased by using cationic lipids and lower drug molecular weight; conversely, a reduction was noted when employing longer chain lipids, supporting the rationale that longer chain lipids produce more stable liposomes, a theory also supported by results obtained via Langmuir studies, although it was revealed that stability is also dependent on geometric features associated with the lipid chain moiety. Interestingly, a reduction in drug loading appeared to be induced when symmetrical phospholipids were substituted for lipids constituting asymmetrical alkyl chain groups, further highlighting the importance of lipid geometry. Combining a symmetrical lipid with an asymmetrical derivative enhanced encapsulation of one hydrophobic drug while reducing that of another, suggesting the importance of drug characteristics.
Phosphatidylcholine liposomes could successfully be prepared (and visualised using transmission electron microscopy) from fatty alcohols, therefore offering an alternative liposomal stabiliser to cholesterol. Results obtained revealed that liposomes containing tetradecanol within their formulation share similar vesicle size, drug encapsulation, surface charge and toxicity profiles with liposomes formulated with cholesterol; however, the tetradecanol preparation appeared to release considerably more drug during stability studies. Langmuir monolayer studies revealed that the condensing influence of tetradecanol is less than that of cholesterol, suggesting that this reduced intercalation could explain why the tetradecanol formulation released more drug than the cholesterol formulations. Environmental scanning electron microscopy (ESEM) was used to analyse the morphology and stability of liposomes. These investigations indicated that the presence of drugs within the liposomal bilayer was able to enhance the stability of the bilayers against collapse under reduced hydration conditions. In addition, the effect of charged lipids within the formulation under reduced hydration conditions was compared with that of a neutral counterpart. However, the applicability of ESEM as a new method to investigate liposome stability appears less valid than first hoped, since the results are often open to varied interpretation and in some cases do not provide a robust set of data to support conclusions.
Abstract:
The principal theme of this thesis is the advancement and expansion of ophthalmic research via collaboration between professional Engineers and professional Optometrists. The aim has been to develop new and novel approaches and solutions to contemporary problems in the field. The work is subdivided into three areas of investigation: 1) High technology systems, 2) Modification of current systems to increase functionality, and 3) Development of smaller, more portable and cost-effective systems. High Technology Systems: A novel high-speed Optical Coherence Tomography (OCT) system with integrated simultaneous high-speed photography was developed, achieving better operational speed than is currently available commercially. The mechanical design of the system featured a novel 8-axis alignment system. A full set of capture, analysis and post-processing software was developed, providing custom analysis systems for ophthalmic OCT imaging and expanding the current capabilities of the technology. A large clinical trial was undertaken to test the dynamics of contact lens edge interaction with the cornea in vivo. The interaction between lens edge design, lens base curvature, post-insertion times and edge positions was investigated. A novel method for correcting optical distortion when assessing lens indentation was also demonstrated. Modification of Current Systems: A commercial autorefractor, the WAM-5500, was modified with the addition of extra hardware and a custom software and firmware solution to produce a system capable of measuring dynamic accommodative response to various stimuli in real time. A novel software package to control the data capture process was developed, allowing real-time monitoring of data by the practitioner and adding considerable functionality to the instrument beyond the standard system.
The device was used to assess the accommodative response differences between subjects who had worn UV-blocking contact lenses for 5 years, versus a control group that had not worn UV-blocking lenses. While the standard static measurement of accommodation showed no differences between the two groups, the UV-blocking group did show faster accommodative rise and fall times, thus demonstrating the benefits of the modification of this commercially available instrumentation. Portable and Cost Effective Systems: A new instrument was developed to expand the capability of the now defunct Keeler Tearscope. The device provided a similar capability in allowing observation of the reflected mires from the tear film surface, but with the added advantage of being able to record the observations. The device was tested comparatively with the Tearscope and other tear film break-up techniques, demonstrating its potential. In Conclusion: This work has successfully demonstrated the advantages of interdisciplinary research between engineering and ophthalmology, providing new and novel instrumented solutions as well as adding to the sum of scientific understanding in the ophthalmic field.
Abstract:
Self-adaptive systems have the capability to autonomously modify their behaviour at run-time in response to changes in their environment. Such systems are now commonly built in domains as diverse as enterprise computing, automotive control systems, and environmental monitoring systems. To date, however, limited attention has been paid to how to engineer requirements for such systems; as a result, self-adaptivity is often constructed in an ad-hoc manner. In this paper, we argue that a more rigorous treatment of requirements relating to self-adaptivity is needed and that, in particular, requirements languages for self-adaptive systems should include explicit constructs for specifying and dealing with the uncertainty inherent in self-adaptive systems. We present some initial thoughts on a new requirements language for self-adaptive systems and illustrate it using examples from the services domain. © 2008 IEEE.