872 results for Dynamic Emission Models


Relevance:

30.00%

Publisher:

Abstract:

A CSSL-type modular FORTRAN package, called ACES, has been developed to assist in the simulation of the dynamic behaviour of chemical plant. ACES can be harnessed, for instance, to simulate the transients in startups or after a throughput change. ACES has benefited from two existing simulators. The structure was adapted from ICL SLAM and most plant models originate in DYFLO. The latter employs sequential modularisation, which is not always applicable to chemical engineering problems. A novel device of twice-round execution enables ACES to achieve general simultaneous modularisation. During the FIRST ROUND, STATE-VARIABLES are retrieved from the integrator and local calculations performed. During the SECOND ROUND, fresh derivatives are estimated and stored for simultaneous integration. ACES further includes a version of DIFSUB, a variable-step integrator capable of handling stiff differential systems. ACES is highly formalised. It does not use pseudo steady-state approximations and excludes inconsistent and arbitrary features of DYFLO. Built-in debug traps make ACES robust. ACES shows generality, flexibility, versatility and portability, and is very convenient to use. It undertakes substantial housekeeping behind the scenes and thus minimises the detailed involvement of the user. ACES provides a working set of defaults for simulation to proceed as far as possible. Built-in interfaces allow reactions and user-supplied algorithms to be incorporated. New plant models can be easily appended. Boundary-value problems and optimisation may be tackled using the RERUN feature. ACES is file-oriented; a STATE can be saved in a readable form and reactivated later. Thus piecewise simulation is possible. ACES has been illustrated and verified to a large extent using some literature-based examples. Actual plant tests are desirable, however, to complete the verification of the library. Interaction and graphics are recommended for future work.
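
ACES itself is a FORTRAN package; the minimal Python sketch below only illustrates the twice-round idea described in the abstract (every module first reads the current state and performs local calculations, and only then are derivatives collected for simultaneous integration). The module names are invented, and SciPy's stiff-capable BDF integrator stands in for DIFSUB.

```python
# Hypothetical sketch of ACES-style "twice-round" execution; module names are invented.
from scipy.integrate import solve_ivp

class Module:
    def __init__(self, state_slice):
        self.state_slice = state_slice     # which entries of the global state belong to this unit
    def local_calculations(self, y):       # FIRST ROUND: retrieve state, compute intermediates
        self.local_state = y[self.state_slice]
    def derivatives(self, t):              # SECOND ROUND: return derivatives of the local state
        raise NotImplementedError

class Tank(Module):
    def __init__(self, state_slice, inflow, k):
        super().__init__(state_slice)
        self.inflow, self.k = inflow, k
    def derivatives(self, t):
        level = self.local_state[0]
        return [self.inflow - self.k * level]   # simple holdup balance

def plant_rhs(t, y, modules):
    for m in modules:                      # first round: all local calculations
        m.local_calculations(y)
    dydt = []
    for m in modules:                      # second round: collect derivatives for simultaneous integration
        dydt.extend(m.derivatives(t))
    return dydt

modules = [Tank(slice(0, 1), inflow=1.0, k=0.5)]
sol = solve_ivp(plant_rhs, (0.0, 10.0), [0.0], args=(modules,), method="BDF")  # stiff-capable integrator
print(sol.y[0, -1])
```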

Relevance:

30.00%

Publisher:

Abstract:

Keyword identification in one of two simultaneous sentences is improved when the sentences differ in F0, particularly when they are almost continuously voiced. Sentences of this kind were recorded, monotonised using PSOLA, and re-synthesised to give a range of harmonic ΔF0s (0, 1, 3, and 10 semitones). They were additionally re-synthesised by LPC with the LPC residual frequency shifted by 25% of F0, to give excitation with inharmonic but regularly spaced components. Perceptual identification of frequency-shifted sentences showed a similar large improvement with nominal ΔF0 as seen for harmonic sentences, although overall performance was about 10% poorer. We compared performance with that of two autocorrelation-based computational models comprising four stages: (i) peripheral frequency selectivity and half-wave rectification; (ii) within-channel periodicity extraction; (iii) identification of the two major peaks in the summary autocorrelation function (SACF); (iv) a template-based approach to speech recognition using dynamic time warping. One model sampled the correlogram at the target-F0 period and performed spectral matching; the other deselected channels dominated by the interferer and performed matching on the short-lag portion of the residual SACF. Both models reproduced the monotonic increase observed in human performance with increasing ΔF0 for the harmonic stimuli, but not for the frequency-shifted stimuli. A revised version of the spectral-matching model, which groups patterns of periodicity that lie on a curve in the frequency-delay plane, showed a closer match to the perceptual data for frequency-shifted sentences. The results extend the range of phenomena originally attributed to harmonic processing to grouping by common spectral pattern.
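
As a loose illustration of stages (ii) and (iii) of the models compared above (within-channel periodicity extraction and picking the two major peaks of the summary autocorrelation function), here is a minimal NumPy sketch; it is not the authors' implementation and uses a toy single-channel "filterbank".

```python
# Minimal SACF sketch: autocorrelate each (filtered, half-wave rectified) channel,
# sum across channels, and pick the two largest peaks as candidate F0 periods.
import numpy as np

def sacf(channels, max_lag):
    """channels: array (n_channels, n_samples) of rectified filterbank outputs."""
    acfs = []
    for ch in channels:
        full = np.correlate(ch, ch, mode="full")        # within-channel periodicity (stage ii)
        acfs.append(full[len(ch) - 1 : len(ch) - 1 + max_lag])
    return np.sum(acfs, axis=0)                         # summary ACF across channels

def two_major_peaks(summary):
    """Return the lags of the two largest local maxima (stage iii)."""
    peaks = [i for i in range(1, len(summary) - 1)
             if summary[i] > summary[i - 1] and summary[i] >= summary[i + 1]]
    return sorted(peaks, key=lambda i: summary[i], reverse=True)[:2]

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
mix = np.sin(2 * np.pi * 120 * t) + np.sin(2 * np.pi * 180 * t)   # two "voices"
channels = np.maximum(mix, 0.0)[None, :]                          # toy single channel, rectified
print(two_major_peaks(sacf(channels, max_lag=400)))               # lags of the strongest periodicities
```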

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the design and implementation of an interactive dynamic simulator called DASPII. The starting point of this research has been an existing dynamic simulation package, DASP. DASPII is written in standard FORTRAN 77 and is implemented on universally available IBM-PC or compatible machines. It provides a means for the analysis and design of chemical processes. Industrial interest in dynamic simulation has increased due to the recent increase in concern over plant operability, resiliency and safety. DASPII is an equation-oriented simulation package which allows solution of dynamic and steady-state equations. The steady state can be used to initialise the dynamic simulation. A robust non-linear algebraic equation solver has been implemented for steady-state solution. This has increased the general robustness of DASPII, compared to DASP. A graphical front end is used to generate the process flowsheet topology from a user-constructed diagram of the process. A conversational interface is used to interrogate the user, with the aid of a database, to complete the topological information. An original modelling strategy implemented in DASPII provides a simple mechanism for parameter switching which creates a more flexible simulation environment. The problem description is generated by a further conversational procedure using a database. The model format used allows the same model equations to be used for dynamic and steady-state solution. All the useful features of DASP are retained in DASPII. The program has been demonstrated and verified using a number of example problems. Significant improvements using the new NLAE solver have been shown. Topics requiring further research are described. The benefits of variable switching in models have been demonstrated with a literature problem.
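
DASPII itself is FORTRAN 77; the hedged Python sketch below merely illustrates the workflow the abstract describes, in which a non-linear algebraic equation solver finds a steady state from the same model equations that are then integrated dynamically. The toy flowsheet equations are invented.

```python
# Illustrative sketch (not DASPII code): solve the steady-state NLAEs, then reuse the
# same model equations to initialise and run the dynamic simulation.
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def residuals(x):
    """Steady-state equations f(x) = 0 for a toy two-unit flowsheet."""
    c1, c2 = x
    return [1.0 - 0.5 * c1,            # unit 1 balance
            0.5 * c1 - 0.8 * c2]       # unit 2 balance

x_steady = fsolve(residuals, x0=[1.0, 1.0])   # robust NLAE solve for the steady state

def dynamics(t, x):
    return residuals(x)                # same model equations reused for the dynamic problem

sol = solve_ivp(dynamics, (0.0, 5.0), x_steady)   # dynamic run started from the steady state
print(x_steady, sol.y[:, -1])
```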

Relevance:

30.00%

Publisher:

Abstract:

Amongst all the objectives in the study of time series, uncovering the dynamic law of its generation is probably the most important. When the underlying dynamics are not available, time series modelling consists of developing a model which best explains a sequence of observations. In this thesis, we consider hidden state models for analysing and describing time series. We first provide an introduction to the principal concepts of hidden state models and draw an analogy between hidden Markov models and state space models. Central ideas such as hidden state inference or parameter estimation are reviewed in detail. A key part of multivariate time series analysis is identifying the delay between different variables. We present a novel approach to time delay estimation in a non-stationary environment. The technique makes use of hidden Markov models and we demonstrate its application to estimating a crucial parameter in the oil industry. We then focus on hybrid models that we call dynamical local models. These models combine and generalise hidden Markov models and state space models. Exact probabilistic inference in these models is unfortunately computationally intractable, and we show how to make use of variational techniques for approximating the posterior distribution over the hidden state variables. Experimental simulations on synthetic and real-world data demonstrate the application of dynamical local models for segmenting a time series into regimes and providing predictive distributions.
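
As a small illustration of the hidden state inference reviewed in such theses, here is a minimal sketch of the HMM forward (filtering) recursion with toy parameters; it is not the thesis code and says nothing about the dynamical local models themselves.

```python
# Minimal HMM forward-algorithm sketch: filtering distribution over hidden states,
# for a two-state model with discrete observations; all parameter values are toy.
import numpy as np

def forward(obs, pi, A, B):
    """pi: initial state probs (K,), A: transition matrix (K, K), B: emission probs (K, M)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    filtered = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict with the transition model, weight by the emission likelihood
        alpha /= alpha.sum()            # normalise to obtain P(state_t | obs_1..t)
        filtered.append(alpha)
    return np.array(filtered)

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])
print(forward([0, 0, 1, 1], pi, A, B))
```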

Relevance:

30.00%

Publisher:

Abstract:

Software development methodologies are becoming increasingly abstract, progressing from low level assembly and implementation languages such as C and Ada, to component based approaches that can be used to assemble applications using technologies such as JavaBeans and the .NET framework. Meanwhile, model driven approaches emphasise the role of higher level models and notations, and embody a process of automatically deriving lower level representations and concrete software implementations. The relationship between data and software is also evolving. Modern data formats are becoming increasingly standardised, open and empowered in order to support a growing need to share data in both academia and industry. Many contemporary data formats, most notably those based on XML, are self-describing, able to specify valid data structure and content, and can also describe data manipulations and transformations. Furthermore, while applications of the past have made extensive use of data, the runtime behaviour of future applications may be driven by data, as demonstrated by the field of dynamic data driven application systems. The combination of empowered data formats and high level software development methodologies forms the basis of modern game development technologies, which drive software capabilities and runtime behaviour using empowered data formats describing game content. While low level libraries provide optimised runtime execution, content data is used to drive a wide variety of interactive and immersive experiences. This thesis describes the Fluid project, which combines component based software development and game development technologies in order to define novel component technologies for the description of data driven component based applications. The thesis makes explicit contributions to the fields of component based software development and visualisation of spatiotemporal scenes, and also describes potential implications for game development technologies. The thesis also proposes a number of developments in dynamic data driven application systems in order to further empower the role of data in this field.
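
As a hypothetical sketch of content data driving runtime behaviour, the example below instantiates and updates components purely from an XML description; the element names, attributes and the Rotator component are invented for illustration and are not part of the Fluid project.

```python
# Hypothetical data-driven sketch: an XML document (inline here) declares components and
# their parameters, and the application instantiates and runs whatever the data describes.
import xml.etree.ElementTree as ET

CONTENT = """
<scene>
  <component type="rotator" speed="2.0"/>
  <component type="rotator" speed="-0.5"/>
</scene>
"""

class Rotator:
    def __init__(self, speed):
        self.speed = speed
    def update(self, dt):
        return self.speed * dt        # degrees rotated in this frame

REGISTRY = {"rotator": Rotator}       # low-level library code; behaviour is selected by data

components = [REGISTRY[el.get("type")](float(el.get("speed")))
              for el in ET.fromstring(CONTENT).findall("component")]
print([c.update(0.016) for c in components])   # one 16 ms frame driven entirely by the data
```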

Relevance:

30.00%

Publisher:

Abstract:

The main theme of research of this project concerns the study of neural networks to control uncertain and non-linear control systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints while ensuring good performance. A great part of this project is devoted to opening frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. to design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. to design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. to develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models for many non-linear systems for use in adaptive control. Properties of static neural networks, which enabled successful design of stable adaptive control in the state feedback case, are also identified. A survey of the existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO control. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
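
The toy sketch below is not the project's algorithm; it only illustrates the basic ingredient mentioned above, a recurrent network adapted online as a model of an unknown non-linear plant. For simplicity only the readout weights are adapted (reservoir-style), and every parameter value is invented.

```python
# Toy online identification of an "unknown" non-linear plant with a recurrent model.
import numpy as np

rng = np.random.default_rng(0)
n_h = 8
W_in  = rng.normal(scale=0.3, size=(n_h, 2))    # input weights for (y, u), fixed
W_rec = rng.normal(scale=0.3, size=(n_h, n_h))  # recurrent weights, fixed
w_out = np.zeros(n_h)                           # readout, adapted online

def plant(y, u):                                # the plant the model must learn
    return 0.6 * y + 0.4 * np.tanh(u + 0.2 * y)

y, h, lr = 0.0, np.zeros(n_h), 0.05
errors = []
for t in range(500):
    u = np.sin(0.1 * t)                         # persistently exciting input
    h = np.tanh(W_in @ np.array([y, u]) + W_rec @ h)
    y_pred = w_out @ h                          # model's one-step-ahead prediction
    y = plant(y, u)                             # true plant response
    w_out -= lr * (y_pred - y) * h              # online gradient step on the readout
    errors.append(abs(y_pred - y))
print(np.mean(errors[:50]), np.mean(errors[-50:]))   # the prediction error typically shrinks as the model adapts
```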

Relevance:

30.00%

Publisher:

Abstract:

The work presented in this thesis is concerned with the dynamic behaviour of structural joints which are both loaded, and excited, normal to the joint interface. Since the forces on joints are transmitted through their interface, the surface texture of joints was carefully examined. A computerised surface measuring system was developed and computer programs were written. Surface flatness was functionally defined, measured and quantised into a form suitable for the theoretical calculation of the joint stiffness. Dynamic stiffness and damping were measured at various preloads for a range of joints with different surface textures. Dry clean and lubricated joints were tested and the results indicated an increase in damping for the lubricated joints of between 30 and 100 times. A theoretical model for the computation of the stiffness of dry clean joints was built. The model is based on the theory that the elastic recovery of joints is due to the recovery of the material behind the loaded asperities. It takes into account, in a quantitative manner, the flatness deviations present on the surfaces of the joint. The theoretical results were found to be in good agreement with those measured experimentally. It was also found that theoretical assessment of the joint stiffness could be carried out using a different model based on the recovery of loaded asperities into a spherical form. Stepwise procedures are given in order to design a joint having a particular stiffness. A theoretical model for the loss factor of dry clean joints was built. The theoretical results are in reasonable agreement with those experimentally measured. The theoretical models for the stiffness and loss factor were employed to evaluate the second natural frequency of the test rig. The results are in good agreement with the experimentally measured natural frequencies.

Relevance:

30.00%

Publisher:

Abstract:

This is an Inter-Disciplinary Higher Degree (IHD) thesis about Water Pollution Control in the Iron and Steel Industry. After examining the compositions, and various treatment methods, for the major effluent streams from a typical Integrated Iron and Steel works, it was decided to concentrate investigative work on the activated-sludge treatment of coke-oven effluents. A mathematical model of this process was developed in an attempt to provide a tool for plant management that would enable improved performance, and enhanced control of Works Units. The model differs from conventional models in that allowance is made for the presence of two genera of micro-organisms, each of which utilises a particular type of substrate as its energy source. Allowance is also made for the inhibitive effect of phenol on thiocyanate biodegradation, and for the self-toxicity of the bacteria when present in a high substrate concentration environment. The enumeration of the kinetic characteristics of the two groups of micro-organisms was shown to be of major importance. Laboratory experiments were instigated in an attempt to determine accurate values of these coefficients. The Suspended Solids concentration was found to be too insensitive a measure of viable active mass. Other measures were investigated, and Adenosine Triphosphate concentration was chosen as the most effective measure of bacterial populations. Using this measure, a model was developed for phenol biodegradation from experimental results which indicated the possibility of storage of substrate prior to metabolism. A model for thiocyanate biodegradation was also developed, although the experimental results indicate that much work is still required in this area.
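
The abstract does not state the model equations; the sketch below only shows the general kind of kinetics described (two organism groups, self-inhibition of the phenol degraders at high substrate concentration via a Haldane term, and phenol inhibition of thiocyanate biodegradation). Every parameter value and unit is invented.

```python
# Illustrative two-population activated-sludge kinetics (all values invented):
# phenol degraders X1 follow Haldane kinetics (self-inhibition at high phenol),
# thiocyanate degraders X2 follow Monod kinetics inhibited by phenol.
from scipy.integrate import solve_ivp

mu1, Ks1, Ki1, Y1 = 0.3, 10.0, 150.0, 0.5     # phenol degraders (Haldane), h^-1 and mg/L
mu2, Ks2, Ki_ph, Y2 = 0.2, 5.0, 20.0, 0.4     # thiocyanate degraders, inhibited by phenol

def rates(t, z):
    S_ph, S_scn, X1, X2 = z
    r1 = mu1 * S_ph / (Ks1 + S_ph + S_ph**2 / Ki1) * X1             # Haldane self-inhibition
    r2 = mu2 * S_scn / (Ks2 + S_scn) * Ki_ph / (Ki_ph + S_ph) * X2  # phenol inhibits SCN- removal
    return [-r1 / Y1, -r2 / Y2, r1, r2]

sol = solve_ivp(rates, (0, 48), [200.0, 50.0, 20.0, 10.0])
print(sol.y[:, -1])   # concentrations after 48 h of batch treatment
```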

Relevance:

30.00%

Publisher:

Abstract:

Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm-based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature-inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market-based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
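
For reference, a baseline global-best PSO update (not the dispersive variant proposed in the thesis) looks roughly like the sketch below; the coefficients are common textbook values and the objective is an invented toy function.

```python
# Baseline global-best PSO minimising a toy objective; premature convergence on harder
# problems is the issue that variants such as dispersive PSO aim to address.
import numpy as np

def sphere(x):                        # toy objective to minimise
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(1)
n, dim, w, c1, c2 = 30, 5, 0.72, 1.49, 1.49
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # inertia + cognitive + social terms
    x = x + v
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(np.min(pbest_f))   # close to 0 for this easy objective
```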

Relevance:

30.00%

Publisher:

Abstract:

Models at runtime can be defined as abstract representations of a system, including its structure and behaviour, which exist in tandem with the given system during the actual execution time of that system. Furthermore, these models should be causally connected to the system being modelled, offering a reflective capability. Significant advances have been made in recent years in applying this concept, most notably in adaptive systems. In this paper we argue that a similar approach can also be used to support the dynamic generation of software artefacts at execution time. An important area where this is relevant is the generation of software mediators to tackle the crucial problem of interoperability in distributed systems. We refer to this approach as emergent middleware, representing a fundamentally new approach to resolving interoperability problems in the complex distributed systems of today. In this context, the runtime models are used to capture meta-information about the underlying networked systems that need to interoperate, including their interfaces and additional knowledge about their associated behaviour. This is supplemented by ontological information to enable semantic reasoning. This paper focuses on this novel use of models at runtime, examining in detail the nature of such runtime models coupled with consideration of the supportive algorithms and tools that extract this knowledge and use it to synthesise the appropriate emergent middleware.

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses preliminary work on modelling and validating dynamic adaptation. The proposed approach is based on the use of aspect-oriented modelling (AOM) and models at runtime. Our approach covers the design and runtime phases. At design time, a base model and different variant architecture models are designed and the adaptation model is built. Crucially, the adaptation model includes invariant properties and constraints that allow the validation of the adaptation rules before execution. At runtime, the adaptation model is processed to produce a correct system configuration that can be executed.
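
A minimal, hypothetical sketch of the validation step described above, checking a candidate configuration against invariant properties before the adaptation is executed, might look like this (all component names, properties and thresholds are invented):

```python
# Hypothetical invariant checking of an adaptation before it is executed.
from dataclasses import dataclass

@dataclass
class Configuration:
    active_components: set
    cpu_budget: int          # arbitrary units

INVARIANTS = [
    ("a logger must always be running", lambda c: "logger" in c.active_components),
    ("the CPU budget may not be exceeded", lambda c: c.cpu_budget <= 100),
]

def validate(target: Configuration):
    """Return the invariants violated by a candidate target configuration."""
    return [name for name, holds in INVARIANTS if not holds(target)]

def adapt(current: Configuration, target: Configuration):
    violations = validate(target)
    if violations:                       # reject the adaptation rule before execution
        raise ValueError(f"adaptation rejected: {violations}")
    return target                        # otherwise the new configuration can be executed

current = Configuration({"logger", "video_lowres"}, 40)
target = Configuration({"logger", "video_hires"}, 85)
print(adapt(current, target).active_components)
```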

Relevance:

30.00%

Publisher:

Abstract:

Constructing and executing distributed systems that can adapt to their operating context in order to sustain provided services and the service qualities are complex tasks. Managing adaptation of multiple, interacting services is particularly difficult since these services tend to be distributed across the system, interdependent and sometimes tangled with other services. Furthermore, the exponential growth of the number of potential system configurations, derived from the variabilities of each service, needs to be handled. Current practices of writing low-level reconfiguration scripts as part of the system code to handle run-time adaptation are both error-prone and time-consuming, and make adaptive systems difficult to validate and evolve. In this paper, we propose to combine model-driven and aspect-oriented techniques to better cope with the complexities of adaptive system construction and execution, and to handle the problem of exponential growth of the number of possible configurations. Combining these techniques allows us to use high-level domain abstractions, simplify the representation of variants and limit the combinatorial explosion of possible configurations. In our approach we also use models at runtime to generate the adaptation logic by comparing the current configuration of the system to a composed model representing the configuration we want to reach. © 2008 Springer-Verlag Berlin Heidelberg.
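
A minimal sketch of the final step described above, deriving adaptation logic by comparing the model of the running configuration with a composed target model; the component names are invented and the "models" are reduced to sets of active components.

```python
# Hypothetical model comparison: which components to stop and which to start to move
# from the current configuration to the composed target configuration.
def diff(current: set, target: set):
    """Return a simple reconfiguration script derived from the two models."""
    return {"stop": sorted(current - target), "start": sorted(target - current)}

current_model = {"video_lowres", "logger", "gps"}
target_model = {"video_hires", "logger", "gps", "encryption"}   # composed from selected variants
print(diff(current_model, target_model))
# {'stop': ['video_lowres'], 'start': ['encryption', 'video_hires']}
```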

Relevance:

30.00%

Publisher:

Abstract:

Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been related to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes according to variations of the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts focus away from implementation details to the whole view of the system and its runtime changes, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.

Relevance:

30.00%

Publisher:

Abstract:

Increasingly, software systems are required to survive variations in their execution environment with little or no human intervention. Such systems are called "eternal software systems". In contrast to the traditional view of development and execution as separate cycles, these modern software systems should not exhibit such a separation. Research in MDE has been primarily concerned with the use of models during the first cycle, development (i.e. design, implementation, and deployment), and has shown excellent results. In this paper the author argues that an eternal software system must have a first-class representation of itself available to enable change. These runtime representations (or runtime models) will depend on the kind of dynamic changes that we want to make available during execution, or on the kind of analysis we want the system to support. Hence, different models can be conceived. Self-representation inevitably implies the use of reflection. In this paper the author briefly summarizes research that supports the use of runtime models, and points out different issues and research questions. © 2009 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

Uncertainty can be defined as the difference between the information that is represented in an executing system and the information that is both measurable and available about the system at a certain point in its lifetime. A software system can be exposed to multiple sources of uncertainty produced by, for example, ambiguous requirements and unpredictable execution environments. A runtime model is a dynamic knowledge base that abstracts useful information about the system, its operational context and the extent to which the system meets its stakeholders' needs. A software system can successfully operate in multiple dynamic contexts by using runtime models that augment information available at design time with information monitored at runtime. This chapter explores the role of runtime models as a means to cope with uncertainty. To this end, we introduce suitable terminology about models, runtime models and uncertainty, and present a state-of-the-art summary of model-based techniques for addressing uncertainty at both development time and runtime. Using a case study about robot systems, we discuss how current techniques and the MAPE-K loop can be used together to tackle uncertainty. Furthermore, we propose possible extensions of the MAPE-K loop architecture with runtime models to further handle uncertainty at runtime. The chapter concludes by identifying key challenges and enabling technologies for using runtime models to address uncertainty, and also identifies closely related research communities that can foster ideas for resolving the challenges raised. © 2014 Springer International Publishing.
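
As a rough sketch of the MAPE-K usage discussed above, the toy loop below keeps a runtime model that augments a design-time assumption with monitored values and adapts when the two diverge; all names, values and thresholds are invented and this is not the chapter's case study.

```python
# Hypothetical MAPE-K loop whose knowledge base is a simple runtime model mixing a
# design-time assumption with monitored observations.
runtime_model = {"expected_latency_ms": 100,   # design-time assumption
                 "observed_latency_ms": None,  # filled in by monitoring
                 "replicas": 1}

def monitor(measurement):
    runtime_model["observed_latency_ms"] = measurement           # augment the model at runtime

def analyse():
    obs, exp = runtime_model["observed_latency_ms"], runtime_model["expected_latency_ms"]
    return obs is not None and obs > 1.5 * exp                   # uncertainty: observation vs. assumption

def plan():
    return {"replicas": runtime_model["replicas"] + 1}           # simple scale-out plan

def execute(plan_):
    runtime_model.update(plan_)                                  # the model stays causally connected

for latency in [90, 120, 180, 95]:                               # one MAPE iteration per sample
    monitor(latency)
    if analyse():
        execute(plan())
print(runtime_model["replicas"])   # 2: one adaptation was triggered by the 180 ms sample
```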