43 results for Discrete time pricing model
Abstract:
A simple method for training the dynamical behavior of a neural network is derived. It is applicable to any training problem in discrete-time networks with arbitrary feedback. The method resembles back-propagation in that it is a least-squares, gradient-based optimization method, but the optimization is carried out in the hidden part of state space instead of weight space. A straightforward adaptation of this method to feedforward networks offers an alternative to training by conventional back-propagation. Computational results are presented for simple dynamical training problems, with varied success. The failures appear to arise when the method converges to a chaotic attractor. A patch-up for this problem is proposed. The patch-up involves a technique for implementing inequality constraints which may be of interest in its own right.
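The idea of optimizing in state space rather than weight space can be sketched as a toy example. The setup below is illustrative, not the paper's exact formulation: a scalar discrete-time network with a fixed weight w, where the hidden-state trajectory itself is the set of free variables and least-squares gradient descent penalizes both dynamics violations and the endpoint error.

```python
import math

# Toy sketch (hypothetical setup): treat the hidden-state trajectory x[1..T]
# as free variables and run least-squares gradient descent on the states,
# holding the weight w fixed. Residuals measure how far each transition is
# from the network dynamics x[t+1] = tanh(w * x[t]).
w = 1.5
T = 10
x = [0.5] * (T + 1)
x[0] = 0.9            # clamped initial state
target = 0.8          # desired final state

def residuals(x):
    # dynamics violation at each step: x[t+1] - tanh(w * x[t])
    return [x[t + 1] - math.tanh(w * x[t]) for t in range(T)]

def loss(x):
    e = residuals(x)
    return 0.5 * sum(v * v for v in e) + 0.5 * (x[T] - target) ** 2

lr = 0.2
for _ in range(5000):
    e = residuals(x)
    g = [0.0] * (T + 1)
    for t in range(T):
        s = math.tanh(w * x[t])
        g[t + 1] += e[t]                      # d/dx[t+1] of the residual term
        g[t]     += -e[t] * (1 - s * s) * w   # d/dx[t]  of the residual term
    g[T] += x[T] - target                     # endpoint error
    g[0] = 0.0                                # initial state stays clamped
    for t in range(T + 1):
        x[t] -= lr * g[t]
```

After convergence the trajectory nearly satisfies the dynamics while ending close to the target; in the method described above, the weights would then be fitted to the optimized states.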
Abstract:
A discrete event simulation model was developed and used to estimate the storage area required for a proposed overseas textile manufacturing facility. The simulation was able to achieve this because of its ability both to store attribute values and to show queuing levels at an individual product level. It was also found that the process of undertaking the simulation project initiated useful discussions regarding the operation of the facility. Discrete event simulation is shown to be much more than an exercise in quantitative analysis of results, and an important task of the simulation project manager is to initiate a debate among decision makers regarding the assumptions of how the system operates.
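The core of such a storage-sizing study is an event loop that tracks stock level over time. A minimal sketch with purely hypothetical parameters (the study's actual arrival and dwell distributions are not given in the abstract):

```python
import heapq
import random

# Minimal discrete-event sketch (illustrative parameters): items arrive at a
# store, dwell for a random time, then leave. The peak storage level is the
# kind of quantity used to size the storage area.
random.seed(1)

events = []                                # (time, kind) min-heap
t = 0.0
for _ in range(200):                       # schedule 200 arrivals
    t += random.expovariate(1.0)           # mean inter-arrival time of 1 unit
    heapq.heappush(events, (t, "in"))

level, peak = 0, 0
while events:
    now, kind = heapq.heappop(events)
    if kind == "in":
        level += 1
        peak = max(peak, level)
        dwell = random.expovariate(1.0 / 5.0)   # mean dwell of 5 time units
        heapq.heappush(events, (now + dwell, "out"))
    else:
        level -= 1

print("peak storage level:", peak)
```

Storing per-item attributes (product type, size) on the events is what lets such a model report queuing levels at the individual product level, as the abstract notes.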
Abstract:
The present study investigates the feasibility of using two types of carbomer (971 and 974) to prepare inhalable dry powders that exhibit modified drug release properties. Powders were prepared by spray-drying formulations containing salbutamol sulphate, 20-50% w/w carbomer as a drug release modifier and leucine as an aerosolization enhancer. Following physical characterization of the powders, the aerosolization and dissolution properties of the powders were investigated using a Multi-Stage Liquid Impinger and a modified USP II dissolution apparatus, respectively. All carbomer 974-modified powders and the 20% carbomer 971 powder demonstrated high dispersibility, with emitted doses of at least 80% and fine particle fractions of approximately 40%. The release data indicated that all carbomer-modified powders displayed a sustained release profile, with carbomer 971-modified powders obeying first order kinetics, whereas carbomer 974-modified powders obeyed the Higuchi root time kinetic model; increasing the amount of carbomer 971 in the formulation did not extend the duration of drug release, whereas this was observed for the carbomer 974-modified powders. These powders would be anticipated to deposit predominantly in the lower regions of the lung following inhalation and then undergo delayed rather than instantaneous drug release, offering the potential to reduce dosing frequency and improve patient compliance.
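The two empirical release models named above have simple closed forms. A sketch with illustrative rate constants (not fitted to the study's data):

```python
import math

# First-order kinetics (reported for the carbomer 971 powders): release rate
# proportional to the drug remaining.
def first_order(t, k=0.05, q_inf=100.0):
    # cumulative % released at time t
    return q_inf * (1.0 - math.exp(-k * t))

# Higuchi root-time model (reported for the carbomer 974 powders): cumulative
# release proportional to the square root of time, capped at 100%.
def higuchi(t, kh=8.0):
    return min(100.0, kh * math.sqrt(t))

for t in (0, 15, 60, 120):
    print(t, round(first_order(t), 1), round(higuchi(t), 1))
```

The qualitative difference is visible immediately: the first-order profile decelerates towards a plateau set by k, while the Higuchi profile keeps growing with the square root of time until the dose is exhausted.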
Abstract:
This work reports the development of a mathematical model and distributed, multivariable computer-control for a pilot plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained tracked the experimental results of the plant well. A distributed-computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application has revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. However, one disadvantage that offsets these advantages to some degree is the relatively complicated algebra that must be employed in working out all but the simplest problems.
Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and of other similar algebraic design methods can be solved easily.
Abstract:
This thesis presents a theoretical investigation on applications of Raman effect in optical fibre communication as well as the design and optimisation of various Raman based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on Phosphosilicate core fibre are analysed and optimised for efficiency by using a discrete power balance model. The designs include a two stage RFL based on Phosphosilicate core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple output wavelength RFL intended to be used as a compact pump source for flat gain Raman amplifiers. The use of Phosphosilicate core fibre is proven to effectively reduce the design complexity and hence leads to a better efficiency, stability and potentially lower cost. Second, the generalised Raman amplified gain model approach based on the power balance analysis and direct numerical simulation is developed. The approach can be used to effectively simulate optical transmission systems with distributed Raman amplification. Last, the potential employment of a hybrid amplification scheme, which is a combination between a distributed Raman amplifier and Erbium doped amplifier, is investigated by using the generalised Raman amplified gain model. The analysis focuses on the use of the scheme to upgrade a standard fibre network to 40 Gb/s system.
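A power balance model of the kind mentioned integrates coupled power equations along the fibre. The following is a heavily simplified two-wave sketch (pump depletes into a single Stokes wave, with linear loss on both); the coefficients are illustrative, and photon-energy scaling and the thesis's Phosphosilicate parameters are deliberately omitted:

```python
# Simplified Raman power-balance sketch (illustrative coefficients, not the
# thesis's model): forward Euler integration of pump power pp and Stokes
# power ps along the fibre coordinate z.
g = 0.5            # Raman gain coefficient (per length per watt, illustrative)
ap, astk = 0.05, 0.04   # pump and Stokes loss coefficients (per length)
pp, ps = 1.0, 1e-3      # launch powers: 1 W pump, 1 mW Stokes seed

dz = 1e-3
for _ in range(10_000):        # integrate over 10 length units
    dpp = -g * pp * ps - ap * pp   # pump: depleted by gain, attenuated by loss
    dps =  g * pp * ps - astk * ps # Stokes: amplified by pump, attenuated
    pp += dz * dpp
    ps += dz * dps
```

Cascaded RFL designs repeat this kind of balance for each Stokes order, with boundary conditions at the cavity mirrors; the discrete power balance model in the thesis plays that role for the full multi-stage laser.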
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of necessary communications for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault tolerant concurrent controllers.
In particular, the structural partition can be used to identify the boundary of the conversation that protects a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
Abstract:
The thesis examines and explains the development of occupational exposure limits (OELs) as a means of preventing work-related disease and ill health. The research focuses on the USA and UK and sets the work within a certain historical and social context. A subsidiary aim of the thesis is to identify any shortcomings in OELs and the methods by which they are set, and to suggest alternatives. The research framework uses Thomas Kuhn's idea of science progressing by means of paradigms, which he describes at one point as "universally recognised scientific achievements that for a time provide model problems and solutions to a community of practitioners" (Kuhn, 1970). Once learned, individuals in the community "are committed to the same rules and standards for scientific practice" (ibid.). Kuhn's ideas are adapted by combining them with a view of industrial hygiene as an applied science-based profession having many of the qualities of non-scientific professions. The great advantage of this approach to OELs is that it keeps the analysis grounded in the behaviour and priorities of the groups which have forged, propounded, used, benefited from, and defended them. The development and use of OELs on a larger scale is shown to be connected to the growth of a new profession in the USA, industrial hygiene, with the assistance of another new profession, industrial toxicology. The origins of these professions, particularly industrial hygiene, are traced. By examining the growth of the professions and the writings of key individuals it is possible to show how technical, economic and social factors became embedded in the OEL paradigm which industrial hygienists and toxicologists forged. The origin, mission and needs of these professions and their clients made such influences almost inevitable.
The use of the OEL paradigm in practice is examined through an analysis of the process of the American Conference of Governmental Industrial Hygienists' Threshold Limit Value (ACGIH TLV) Committee via its minutes from 1962-1984. A similar approach is taken with the development of OELs in the UK. Although the form and definition of TLVs have encouraged the belief that they are health-based OELs, the conclusion is that they, and most other OELs, are, and always have been, reasonably practicable limits: the degree of risk posed by a substance is weighed against the feasibility and cost of controlling exposure to that substance. The confusion over the status of TLVs and other OELs is seen to be a confusion at the heart of the OEL paradigm, and the historical perspective explains why this should be. The paradigm has prevented the creation of truly health-based and, conversely, truly reasonably practicable OELs. In the final part of the thesis the analysis of the development of OELs is set in a contemporary context and a proposal for a two-stage, two-committee procedure for producing sets of OELs is put forward. This approach is set within an alternative OEL paradigm. The advantages, benefits and likely obstacles to these proposals are discussed.
Abstract:
The research developed in this thesis explores the sensing and inference of human movement in a dynamic way, as opposed to conventional measurement systems, which are concerned only with discrete evaluations of stimuli in sequential time. Typically, conventional approaches such as vision and motion tracking devices are used to infer the dynamic movement of the body, with either a human diagnosis or a complex image processing algorithm to classify the movement. This research is therefore the first of its kind to attempt to provide a movement-classifying algorithm through the use of minimal sensing points, the application of this novel system being the classification of human movement during a golf swing. There are two main categories of force sensing. Firstly, array-type systems, consisting of many sensing elements, which are the most commonly researched and commercially available. Secondly, reduced force sensing element systems (RFSES), also known as distributive systems, which have only recently been exploited in the academic world. The fundamental difference between these systems is that array systems handle the data captured from each sensor as unique outputs and suffer the effects of resolution. The effect of resolution is the error in the load position measurement between sensing elements, as the output is quantized in terms of position. This contrasts with a reduced sensor element system, which maximises the data received through the coupling of data from a distribution of sensing points to describe the output in discrete time. This can also be extended to a coupling of transients in the time domain to describe an activity or dynamic movement. It is the RFSES that is examined and exploited here for the commercial sector, owing to its advantages over array-based approaches, such as reduced design and computational complexity and cost.
Abstract:
This paper describes a study of the combustion process in an industrial radiant tube burner (RTB), used in heat treating furnaces, as part of an attempt to improve burner performance. A detailed three-dimensional Computational Fluid Dynamics model has been used, validated with experimental test furnace temperature and flue gas composition measurements. Simulations using the Eddy Dissipation combustion model with peak temperature limitation and the Discrete Transfer radiation model showed good agreement with temperature measurements in the inner and outer walls of the burner, as well as with flue gas composition measured at the exhaust (including NO). Other combustion and radiation models were also tested but gave inferior results in various aspects. The effects of certain RTB design features are analysed, and an analysis of the heat transfer processes within the burner is presented.
Abstract:
Control design for stochastic uncertain nonlinear systems is traditionally based on minimizing the expected value of a suitably chosen loss function. Moreover, most control methods usually assume the certainty equivalence principle to simplify the problem and make it computationally tractable. We offer an improved probabilistic framework which is not constrained by these assumptions and which provides a more natural setting for incorporating and dealing with uncertainty. The focus of this paper is on developing this framework to obtain an optimal control law using a fully probabilistic approach for information extraction from process data, which does not require detailed knowledge of system dynamics. Moreover, the proposed framework allows handling the problem of input-dependent noise. A basic paradigm is proposed and the resulting algorithm is discussed. The proposed probabilistic control method applies to the general nonlinear class of discrete-time systems. It is demonstrated theoretically on the affine class. A nonlinear simulation example is also provided to validate the theoretical development.
Abstract:
Requirements-aware systems address the need to reason about uncertainty at runtime to support adaptation decisions, by representing quality-of-service (QoS) requirements for service-based systems (SBS) with precise values in a run-time queryable model specification. However, current approaches do not support updating the specification to reflect changes in the service market, such as newly available services or improved QoS of existing ones. Thus, even if the specification models reflect design-time acceptable requirements, they may become obsolete and miss opportunities for system improvement by self-adaptation. This article proposes to distinguish "abstract" and "concrete" specification models: the former consists of linguistic variables (e.g. "fast") agreed upon at design time, and the latter consists of precise numeric values (e.g. "2ms") that are dynamically calculated at run-time, thus incorporating up-to-date QoS information. If and when freshly calculated concrete specifications are no longer satisfied by the current service configuration, an adaptation is triggered. The approach was validated using four simulated SBS that use services from a previously published, real-world dataset; in all cases, the system was able to detect unsatisfied requirements at run-time and trigger suitable adaptations. Ongoing work focuses on policies to determine recalculation of specifications. This approach will allow engineers to build SBS that can be protected against market-caused obsolescence of their requirements specifications. © 2012 IEEE.
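The abstract-versus-concrete distinction can be sketched in a few lines. The function name, the quartile/median mapping, and the data are all illustrative assumptions, not the article's actual algorithm:

```python
# Sketch of the abstract-vs-concrete specification idea: a linguistic variable
# such as "fast" is fixed at design time, while its concrete numeric threshold
# is recalculated from response times currently observed in the service market.
# The mapping rules below (best quartile, median) are hypothetical.
def concretise(linguistic, market_response_times_ms):
    """Map a linguistic QoS term to a numeric threshold from market data."""
    ordered = sorted(market_response_times_ms)
    if linguistic == "fast":       # e.g. best quartile of the current market
        return ordered[len(ordered) // 4]
    if linguistic == "average":    # e.g. the market median
        return ordered[len(ordered) // 2]
    raise ValueError(linguistic)

market = [2, 3, 5, 8, 13, 21, 34, 55]      # observed response times (ms)
threshold = concretise("fast", market)     # recalculated as the market shifts
satisfied = all(t <= threshold for t in [2, 3])   # current configuration's services
```

As the market improves, the same abstract term "fast" concretises to a tighter threshold, so a configuration that once satisfied the requirement can trigger an adaptation without any change to the design-time specification.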
Abstract:
This work attempts to shed light on the fundamental concepts behind the stability of Multi-Agent Systems. We view the system as a discrete-time Markov chain with a potentially unknown transition probability distribution. The system will be considered to be stable when its state has converged to an equilibrium distribution. Faced with the non-trivial task of establishing the convergence to such a distribution, we propose a hypothesis testing approach according to which we test whether the convergence of a particular system metric has occurred. We describe some artificial multi-agent ecosystems that were developed and we present results based on these systems which confirm that this approach qualitatively agrees with our intuition.
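One concrete way to test such convergence, sketched here with synthetic data and illustrative thresholds (the paper's actual test statistic is not specified in the abstract), is to compare two successive windows of a system metric: if their empirical distributions are close, the hypothesis that the chain has reached its equilibrium distribution is not rejected.

```python
import random

# Two-sample Kolmogorov-Smirnov statistic: the largest gap between the
# empirical CDFs of two samples. Small values suggest the two windows of the
# metric come from the same (equilibrium) distribution.
def ks_statistic(a, b):
    a, b = sorted(a), sorted(b)
    def cdf(xs, v):
        return sum(1 for x in xs if x <= v) / len(xs)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in a + b)

random.seed(0)
burn_in  = [random.gauss(3.0, 1.0) for _ in range(100)]  # metric still in transient
window_1 = [random.gauss(0.0, 1.0) for _ in range(100)]  # metric after settling
window_2 = [random.gauss(0.0, 1.0) for _ in range(100)]

print(ks_statistic(burn_in, window_1))   # large gap: still converging
print(ks_statistic(window_1, window_2))  # small gap: plausibly at equilibrium
```

In practice the statistic would be compared against a critical value for the chosen significance level rather than eyeballed.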
Abstract:
A probabilistic indirect adaptive controller is proposed for the general nonlinear multivariate class of discrete-time systems. The proposed probabilistic framework incorporates input-dependent noise prediction parameters in the derivation of the optimal control law. Moreover, because noise can be nonstationary in practice, the proposed adaptive control algorithm provides an elegant method for estimating and tracking the noise. For illustration purposes, the developed method is applied to the affine class of nonlinear multivariate discrete-time systems and the desired result is obtained: the optimal control law is determined by solving a cubic equation and the distribution of the tracking error is shown to be Gaussian with zero mean. The efficiency of the proposed scheme is demonstrated numerically through the simulation of an affine nonlinear system.
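Since the abstract reduces the optimal control input to the root of a cubic equation, a root-finding step like the following would sit at the core of the controller. The coefficients here are illustrative placeholders, not values derived from the paper's model:

```python
# Newton's method for a real root of the monic cubic u^3 + a*u^2 + b*u + c = 0,
# the kind of equation the optimal control input reduces to in the abstract.
def solve_cubic(a, b, c, u0=0.0, tol=1e-12, max_iter=100):
    u = u0
    for _ in range(max_iter):
        f  = u**3 + a * u**2 + b * u + c
        df = 3 * u**2 + 2 * a * u + b
        if df == 0:
            u += 1e-6              # nudge off a stationary point of the cubic
            continue
        step = f / df
        u -= step
        if abs(step) < tol:
            break
    return u

u = solve_cubic(0.0, 1.0, -2.0)    # u^3 + u - 2 = 0 has the real root u = 1
```

In a real controller the starting guess u0 would typically be the previous control input, which keeps Newton's method on the physically relevant root.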
Abstract:
A Eulerian-Eulerian CFD model was used to investigate the fast pyrolysis of biomass in a downer reactor equipped with a novel gas-solid separation mechanism. The highly endothermic pyrolysis reaction was assumed to be entirely driven by an inert solid heat carrier (sand). A one-step global pyrolysis reaction, along with the equations describing the biomass drying and heat transfer, was implemented in the hydrodynamic model presented in part I of this study (Fuel Processing Technology, V126, 366-382). The predictions of the gas-solid separation efficiency, temperature distribution, residence time and the pyrolysis product yield are presented and discussed. For the operating conditions considered, the devolatilisation efficiency was found to be above 60% and the yield composition in mass fraction was 56.85% bio-oil, 37.87% bio-char and 5.28% non-condensable gas (NCG). This has been found to agree reasonably well with recent relevant published experimental data. The novel gas-solid separation mechanism achieved greater than 99.9% separation efficiency and a pyrolysis gas residence time of less than 2 s. The model has been found to be robust and fast in terms of computational time, and thus has great potential to aid in the future design and optimisation of the biomass fast pyrolysis process.
Abstract:
This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems which can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. A restriction of the moment equations to piecewise polynomial functions, over time, dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000-dimensional state vectors, and the results are compared empirically with various well-known inference methodologies.
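The marginal-moment idea is easiest to see on a linear SDE, where the moment equations are exact. A sketch for the Ornstein-Uhlenbeck process dX = -θX dt + σ dW, with illustrative constants (not the paper's nonlinear examples):

```python
# Propagating the marginal Gaussian moments of the Ornstein-Uhlenbeck process
# dX = -theta*X dt + sigma dW by forward Euler. For this linear SDE the moment
# ODEs are exact:  dm/dt = -theta*m,  ds/dt = -2*theta*s + sigma^2.
theta, sigma = 2.0, 1.0
m, s = 3.0, 0.0            # initial mean and variance
dt = 1e-4
for _ in range(100_000):   # integrate the moment ODEs to t = 10
    m += dt * (-theta * m)
    s += dt * (-2 * theta * s + sigma**2)
# long-run limits: m -> 0 and s -> sigma^2 / (2*theta) = 0.25
```

For nonlinear drift the exact moment equations no longer close, which is where the variational Gaussian approximation and the piecewise-polynomial restriction of the moments come in.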