33 results for Dynamic Emission Models
Abstract:
The main theme of this project is the study of neural networks for the control of uncertain and non-linear systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A large part of this project is devoted to bridging several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. To design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. To design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning control strategies); 3. To develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network Toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that appropriately structured recurrent neural networks can provide conveniently parameterised dynamic models of many non-linear systems for use in adaptive control. Properties of static neural networks that enabled the successful design of stable adaptive control in the state-feedback case are also identified. A survey of the existing results is presented which places them in a systematic framework, showing their relation to classical self-tuning adaptive control and to the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
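As an illustration of the kind of neural-network-based plant modelling used in such self-tuning schemes, the following is a minimal sketch assuming a SISO plant and a NARX-style one-hidden-layer network trained online by gradient descent; the plant, network size and learning rate are illustrative assumptions, not the structures used in the thesis.

```python
import numpy as np

# Minimal NARX-style neural plant model, trained online (illustrative sketch).
# Regressor: [y(k-1), y(k-2), u(k-1), u(k-2)] -> prediction of y(k).
rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
w2 = rng.normal(scale=0.1, size=n_hidden)
lr = 0.01  # learning rate (assumed)

def predict(x):
    h = np.tanh(W1 @ x)
    return w2 @ h, h

def online_update(x, y_true):
    """One gradient-descent step on the squared prediction error."""
    global W1, w2
    y_hat, h = predict(x)
    err = y_hat - y_true
    grad_w2 = err * h
    grad_W1 = err * np.outer(w2 * (1.0 - h**2), x)
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

# Example: identify an "unknown" (here, simulated) non-linear plant online.
y = np.zeros(3); u_prev = np.zeros(3)
for k in range(2, 500):
    u = np.sin(0.05 * k)                                     # excitation input
    y_k = 0.6*y[-1] - 0.1*y[-2] + 0.5*np.tanh(u_prev[-1])    # simulated plant
    x = np.array([y[-1], y[-2], u_prev[-1], u_prev[-2]])
    online_update(x, y_k)
    y = np.append(y[1:], y_k); u_prev = np.append(u_prev[1:], u)
```

The resulting one-step-ahead model is the kind of object a generalised minimum variance self-tuner would use in place of an explicitly identified linear model.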
Abstract:
The work presented in this thesis is concerned with the dynamic behaviour of structural joints which are both loaded, and excited, normal to the joint interface. Since the forces on joints are transmitted through their interface, the surface texture of joints was carefully examined. A computerised surface measuring system was developed and computer programs were written. Surface flatness was functionally defined, measured and quantised into a form suitable for the theoretical calculation of the joint stiffness. Dynamic stiffness and damping were measured at various preloads for a range of joints with different surface textures. Dry clean and lubricated joints were tested and the results indicated an increase in damping for the lubricated joints of between 30 and 100 times. A theoretical model for the computation of the stiffness of dry clean joints was built. The model is based on the theory that the elastic recovery of joints is due to the recovery of the material behind the loaded asperities. It takes into account, in a quantitative manner, the flatness deviations present on the surfaces of the joint. The theoretical results were found to be in good agreement with those measured experimentally. It was also found that theoretical assessment of the joint stiffness could be carried out using a different model based on the recovery of loaded asperities into a spherical form. Stepwise procedures are given in order to design a joint having a particular stiffness. A theoretical model for the loss factor of dry clean joints was built. The theoretical results are in reasonable agreement with those experimentally measured. The theoretical models for the stiffness and loss factor were employed to evaluate the second natural frequency of the test rig. The results are in good agreement with the experimentally measured natural frequencies.
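For context, the measured dynamic stiffness k and loss factor η of a joint are commonly combined into a complex stiffness, from which a natural frequency of the supporting structure can be estimated; the relations below are the standard single-degree-of-freedom forms, given as an assumed illustration rather than the exact model of the thesis.

```latex
k^{*} = k\,(1 + i\eta), \qquad
\omega_n \approx \sqrt{\frac{k}{m}}, \qquad
\eta = \frac{\Delta W_{\text{cycle}}}{2\pi\, W_{\text{max}}}
```

Here m is the effective mass carried by the joint, ΔW_cycle the energy dissipated per cycle and W_max the maximum stored elastic energy.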
Abstract:
This is an Inter-Disciplinary Higher Degree (IHD) thesis about Water Pollution Control in the Iron and Steel Industry. After examining the compositions, and various treatment methods, for the major effluent streams from a typical Integrated Iron and Steel works, it was decided to concentrate investigative work on the activated-sludge treatment of coke-oven effluents. A mathematical model of this process was developed in an attempt to provide a tool for plant management that would enable improved performance, and enhanced control of Works Units. The model differs from conventional models in that allowance is made for the presence of two genera of micro-organisms, each of which utilises a particular type of substrate as its energy source. Allowance is also made for the inhibitive effect of phenol on thiocyanate biodegradation, and for the self-toxicity of the bacteria when present in a high substrate concentration environment. The determination of the kinetic characteristics of the two groups of micro-organisms was shown to be of major importance. Laboratory experiments were instigated in an attempt to determine accurate values of these coefficients. The use of the Suspended Solids concentration was found to be too insensitive a measure of viable active mass. Other measures were investigated, and Adenosine Triphosphate concentration was chosen as the most effective measure of bacterial populations. Using this measure, a model was developed for phenol biodegradation from experimental results which indicated the possibility of storage of substrate prior to metabolism. A model for thiocyanate biodegradation was also developed, although the experimental results indicate that much work is still required in this area.
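A minimal sketch of the kind of two-population kinetics described above is given below. The kinetic forms (Haldane self-inhibition for phenol, non-competitive inhibition of thiocyanate removal by phenol) and all parameter values are illustrative assumptions, not the coefficients fitted in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-population activated-sludge kinetics (illustrative):
# X1 = phenol degraders, S1 = phenol; X2 = thiocyanate degraders, S2 = SCN-.
mu1_max, Ks1, Ki1, Y1 = 0.4, 10.0, 200.0, 0.6   # assumed phenol-degrader parameters
mu2_max, Ks2, Kp, Y2 = 0.2, 5.0, 50.0, 0.5      # assumed SCN- degrader parameters

def rates(t, y):
    S1, S2, X1, X2 = y
    mu1 = mu1_max * S1 / (Ks1 + S1 + S1**2 / Ki1)      # Haldane: self-inhibition at high phenol
    mu2 = mu2_max * S2 / (Ks2 + S2) * Kp / (Kp + S1)   # phenol inhibits thiocyanate removal
    return [-mu1 * X1 / Y1,    # phenol consumption
            -mu2 * X2 / Y2,    # thiocyanate consumption
            mu1 * X1,          # growth of phenol degraders
            mu2 * X2]          # growth of thiocyanate degraders

sol = solve_ivp(rates, (0.0, 48.0), [300.0, 100.0, 50.0, 20.0], max_step=0.1)
```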
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
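For reference, the standard global-best PSO update that such variants build on is sketched below; the dispersive variant mentioned in the abstract adds a mechanism to counteract premature convergence whose exact form is not reproduced here, and the objective, swarm size and coefficients are illustrative assumptions.

```python
import numpy as np

def sphere(x):                      # example objective to minimise
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(1)
n, d, w, c1, c2 = 30, 10, 0.72, 1.49, 1.49   # swarm size, dims, common coefficient choices
x = rng.uniform(-5, 5, (n, d)); v = np.zeros((n, d))
pbest, pbest_f = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)   # inertia + cognitive + social terms
    x = x + v
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
```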
Abstract:
Models at runtime can be defined as abstract representations of a system, including its structure and behaviour, which exist in tandem with the given system during the actual execution time of that system. Furthermore, these models should be causally connected to the system being modelled, offering a reflective capability. Significant advances have been made in recent years in applying this concept, most notably in adaptive systems. In this paper we argue that a similar approach can also be used to support the dynamic generation of software artefacts at execution time. An important area where this is relevant is the generation of software mediators to tackle the crucial problem of interoperability in distributed systems. We refer to this approach as emergent middleware, representing a fundamentally new approach to resolving interoperability problems in the complex distributed systems of today. In this context, the runtime models are used to capture meta-information about the underlying networked systems that need to interoperate, including their interfaces and additional knowledge about their associated behaviour. This is supplemented by ontological information to enable semantic reasoning. This paper focuses on this novel use of models at runtime, examining in detail the nature of such runtime models coupled with consideration of the supportive algorithms and tools that extract this knowledge and use it to synthesise the appropriate emergent middleware.
Abstract:
This paper discusses preliminary work on modeling and validating dynamic adaptation. The proposed approach is based on the use of aspect-oriented modeling (AOM) and models at runtime, and covers both the design and runtime phases. At design time, a base model and different variant architecture models are designed and the adaptation model is built. Crucially, the adaptation model includes invariant properties and constraints that allow the adaptation rules to be validated before execution. At runtime, the adaptation model is processed to produce a correct system configuration that can be executed.
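A hypothetical sketch of what validating adaptation rules against invariants before execution might look like is given below; the names and the flat dictionary representation of a configuration are illustrative assumptions, not the paper's actual metamodel.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Configuration = Dict[str, object]

@dataclass
class AdaptationRule:
    name: str
    guard: Callable[[Configuration], bool]            # when the rule applies
    apply: Callable[[Configuration], Configuration]   # how it transforms the configuration

def validate_and_adapt(config: Configuration,
                       rules: List[AdaptationRule],
                       invariants: List[Callable[[Configuration], bool]]) -> Configuration:
    """Apply the first enabled rule whose result satisfies every invariant."""
    for rule in rules:
        if not rule.guard(config):
            continue
        candidate = rule.apply(dict(config))
        if all(inv(candidate) for inv in invariants):
            return candidate        # validated target configuration
    return config                   # no safe adaptation found: keep current configuration
```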
Abstract:
Constructing and executing distributed systems that can adapt to their operating context in order to sustain provided services and the service qualities are complex tasks. Managing adaptation of multiple, interacting services is particularly difficult since these services tend to be distributed across the system, interdependent and sometimes tangled with other services. Furthermore, the exponential growth of the number of potential system configurations derived from the variabilities of each service needs to be handled. Current practices of writing low-level reconfiguration scripts as part of the system code to handle runtime adaptation are both error-prone and time-consuming and make adaptive systems difficult to validate and evolve. In this paper, we propose to combine model-driven and aspect-oriented techniques to better cope with the complexities of adaptive systems construction and execution, and to handle the problem of exponential growth of the number of possible configurations. Combining these techniques allows us to use high-level domain abstractions, simplify the representation of variants and limit the combinatorial explosion of possible configurations. In our approach we also use models at runtime to generate the adaptation logic by comparing the current configuration of the system to a composed model representing the configuration we want to reach. © 2008 Springer-Verlag Berlin Heidelberg.
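An illustrative sketch of generating adaptation logic by comparing a current configuration model with a composed target model follows; representing a configuration as a component-to-variant mapping is an assumption made for illustration, not the paper's actual model representation.

```python
from typing import Dict, List, Tuple

# A configuration model as a mapping: component name -> active variant.
Model = Dict[str, str]

def derive_adaptation(current: Model, target: Model) -> List[Tuple[str, ...]]:
    """Compute the reconfiguration actions that take `current` to `target`."""
    actions: List[Tuple[str, ...]] = []
    for comp in current.keys() - target.keys():
        actions.append(("remove", comp))
    for comp in target.keys() - current.keys():
        actions.append(("add", comp, target[comp]))
    for comp in current.keys() & target.keys():
        if current[comp] != target[comp]:
            actions.append(("replace", comp, current[comp], target[comp]))
    return actions

# Example: switch the persistence variant and add a cache service.
print(derive_adaptation({"persistence": "sql", "ui": "web"},
                        {"persistence": "nosql", "ui": "web", "cache": "memcached"}))
```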
Abstract:
Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been related to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes according to variations of the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts focus away from implementation details to the whole view of the system and its runtime changes, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
Abstract:
Increasingly, software systems are required to survive variations in their execution environment with little or no human intervention. Such systems are called "eternal software systems". In contrast to the traditional view of development and execution as separate cycles, these modern software systems should not present such a separation. Research in MDE has been primarily concerned with the use of models during the first cycle, development (i.e. during design, implementation, and deployment), and has shown excellent results. In this paper the author argues that an eternal software system must have a first-class representation of itself available to enable change. These runtime representations (or runtime models) will depend on the kind of dynamic changes that we want to make available during execution or on the kind of analysis we want the system to support. Hence, different models can be conceived. Self-representation inevitably implies the use of reflection. In this paper the author briefly summarizes research that supports the use of runtime models, and points out different issues and research questions. © 2009 IEEE.
Abstract:
Uncertainty can be defined as the difference between information that is represented in an executing system and the information that is both measurable and available about the system at a certain point in its life-time. A software system can be exposed to multiple sources of uncertainty produced by, for example, ambiguous requirements and unpredictable execution environments. A runtime model is a dynamic knowledge base that abstracts useful information about the system, its operational context and the extent to which the system meets its stakeholders' needs. A software system can successfully operate in multiple dynamic contexts by using runtime models that augment information available at design time with information monitored at runtime. This chapter explores the role of runtime models as a means to cope with uncertainty. To this end, we introduce a well-suited terminology about models, runtime models and uncertainty and present a state-of-the-art summary on model-based techniques for addressing uncertainty at both development time and runtime. Using a case study about robot systems we discuss how current techniques and the MAPE-K loop can be used together to tackle uncertainty. Furthermore, we propose possible extensions of the MAPE-K loop architecture with runtime models to further handle uncertainty at runtime. The chapter concludes by identifying key challenges and enabling technologies for using runtime models to address uncertainty, and by pointing to closely related research communities that can foster ideas for resolving the challenges raised. © 2014 Springer International Publishing.
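A minimal sketch of a MAPE-K loop in which a runtime model acts as the shared knowledge base (K) is shown below; the component names, state variables and adaptation policy are illustrative assumptions, not the chapter's robot case study.

```python
from typing import Dict

class RuntimeModel:
    """Knowledge base: abstracts the running system, its context and its goals."""
    def __init__(self) -> None:
        self.state: Dict[str, float] = {"battery": 1.0, "speed": 1.0}
        self.goals: Dict[str, float] = {"min_battery": 0.2}

def monitor(model: RuntimeModel, sensed: Dict[str, float]) -> None:
    model.state.update(sensed)                                   # M: refresh the runtime model

def analyse(model: RuntimeModel) -> bool:
    return model.state["battery"] < model.goals["min_battery"]   # A: detect a goal violation

def plan(model: RuntimeModel) -> Dict[str, float]:
    return {"speed": model.state["speed"] * 0.5}                 # P: choose an adaptation

def execute(model: RuntimeModel, actions: Dict[str, float]) -> None:
    model.state.update(actions)                                  # E: enact it and reflect it in the model

model = RuntimeModel()
for reading in [{"battery": 0.5}, {"battery": 0.15}]:
    monitor(model, reading)
    if analyse(model):
        execute(model, plan(model))
print(model.state)
```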
Abstract:
Starting from a continuum description, we study the nonequilibrium roughening of a thermal re-emission model for etching in one and two spatial dimensions. Using standard analytical techniques, we map our problem to a generalized version of an earlier nonlocal KPZ (Kardar-Parisi-Zhang) model. In 2 + 1 dimensions, the values of the roughness and the dynamic exponents calculated from our theory go like α ≈ z ≈ 1 and in 1 + 1 dimensions, the exponents resemble the KPZ values for low vapor pressure, supporting experimental results. Interestingly, Galilean invariance is maintained throughout.
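For reference, the local KPZ equation to which the nonlocal generalisation is related, together with the Family-Vicsek scaling form defining the roughness exponent α and dynamic exponent z, reads as follows (the nonlocal re-emission kernel itself is not reproduced here):

```latex
\frac{\partial h(\mathbf{x},t)}{\partial t}
  = \nu \nabla^{2} h + \frac{\lambda}{2}\,(\nabla h)^{2} + \eta(\mathbf{x},t),
\qquad
W(L,t) \sim L^{\alpha} f\!\left(t/L^{z}\right).
```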
Abstract:
Supply chain formation (SCF) is the process of determining the set of participants and exchange relationships within a network with the goal of setting up a supply chain that meets some predefined social objective. Many proposed solutions for the SCF problem rely on centralized computation, which presents a single point of failure and can also lead to problems with scalability. Decentralized techniques that aid supply chain emergence offer a more robust and scalable approach by allowing participants to deliberate between themselves about the structure of the optimal supply chain. Current decentralized supply chain emergence mechanisms are only able to deal with simplistic scenarios in which goods are produced and traded in single units only and without taking into account production capacities or input-output ratios other than 1:1. In this paper, we demonstrate the performance of a graphical inference technique, max-sum loopy belief propagation (LBP), in a complex multiunit supply chain emergence scenario which models additional constraints such as production capacities and input-to-output ratios. We also provide results demonstrating the performance of LBP in dynamic environments, where the properties and composition of participants are altered as the algorithm is running. Our results suggest that max-sum LBP produces consistently strong solutions on a variety of network structures in a multiunit problem scenario, and that performance tends not to be affected by on-the-fly changes to the properties or composition of participants.
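For context, the generic max-sum message-update rules on a factor graph are shown below; the specific factors encoding production capacities and input-to-output ratios used in the paper are not reproduced. Here N(·) denotes the neighbours of a node, U_f is the utility attached to factor f, and x_f is the set of variables in its scope.

```latex
q_{x \to f}(x) \;=\; \sum_{g \in N(x)\setminus\{f\}} r_{g \to x}(x),
\qquad
r_{f \to x}(x) \;=\; \max_{\mathbf{x}_f \setminus x}
  \Big[\, U_f(\mathbf{x}_f) \;+\; \sum_{y \in N(f)\setminus\{x\}} q_{y \to f}(y) \,\Big].
```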
Abstract:
Dynamic asset rating (DAR) is one of a number of techniques that could be used to facilitate low carbon electricity network operation. Previous work has looked at this technique from an asset perspective. This paper instead adopts a network perspective, proposing a dynamic network rating (DNR) approach. The models available for use with DAR are discussed and compared using measured load and weather data from a trial network area within Milton Keynes in the central area of the U.K. This paper then uses the most appropriate model to investigate, through a network case study, the potential gains in dynamic rating compared to static rating for the different network assets - transformers, overhead lines, and cables. This will inform the network operator of the potential DNR gains on an 11-kV network with all assets present and highlight the limiting assets within each season.
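As a much-simplified illustration of why dynamic ratings exceed static ratings in cool weather, the sketch below computes a steady-state loading limit from a first-order thermal assumption in which the hot-spot temperature rise scales with the square of per-unit load; the model form, limits and parameter values are illustrative assumptions, not the (IEC-style) models compared in the paper.

```python
import numpy as np

theta_amb = np.array([2.0, 5.0, 15.0, 25.0])   # example seasonal ambient temperatures (deg C)
theta_hs_limit = 98.0                           # assumed hot-spot temperature limit (deg C)
delta_theta_rated = 63.0                        # assumed hot-spot rise at rated load (K)

# Largest per-unit load K such that theta_amb + delta_theta_rated * K**2 <= theta_hs_limit.
K_dynamic = np.sqrt((theta_hs_limit - theta_amb) / delta_theta_rated)
print(K_dynamic)   # values above 1.0 indicate headroom beyond the static (rated) loading
```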
Abstract:
Dynamic asset rating is one of a number of techniques that could be used to facilitate low carbon electricity network operation. This paper focuses on distribution-level transformer dynamic rating in this context. The models available for use with dynamic asset rating are discussed and compared using measured load and weather conditions from a trial network area within Milton Keynes. The paper then uses the most appropriate model to investigate, through simulation, the potential gains in dynamic rating compared to static rating under two transformer cooling methods, to understand the potential gain to the network operator.
Abstract:
When machining a large-scale aerospace part, the part is normally located and clamped firmly until a set of features has been machined. When the part is released, its size and shape may deform beyond the tolerance limits due to stress release. This paper presents the design of a new fixing method and flexible fixtures that automatically respond to workpiece deformation during machining. Deformation is inspected and monitored on-line, and part location and orientation can be adjusted in a timely manner to ensure that follow-up operations are carried out under low stress and with respect to the relevant datum defined in the design models.