875 results for Continuous Variable Systems
Abstract:
An innovative method for modelling biological processes under anaerobic conditions is presented and discussed. The method is based on titrimetric and off-gas measurements. Titrimetric data is recorded as the addition rate of hydroxyl ions or protons that is required to maintain pH in a bioreactor at a constant level. An off-gas analysis arrangement measures, among other things, the transfer rate of carbon dioxide. The integration of these signals results in a continuous signal which is solely related to the biological reactions. When coupled with a mathematical model of the biological reactions, the signal allows a detailed characterisation of these reactions, which would otherwise be difficult to achieve. Two applications of the method to the enhanced biological phosphorus removal processes are presented and discussed to demonstrate the principle and effectiveness of the method.
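To make the signal-combination idea concrete, here is a minimal numerical sketch in Python, with synthetic rates and an assumed 1:1 proton equivalence for the transferred carbon dioxide (both assumptions are mine, not the paper's):

    import numpy as np

    # Synthetic example: combine the titrimetric signal (proton addition rate
    # needed to hold pH) with the off-gas CO2 transfer rate, then integrate
    # to obtain a cumulative signal attributable to the biological reactions.
    t = np.linspace(0.0, 4.0, 241)            # time (h)
    hp_rate = 0.8 * np.exp(-t)                # proton addition rate (meq/h), synthetic
    ctr = 0.5 * np.exp(-t)                    # CO2 transfer rate (mmol/h), synthetic

    # CO2 stripped from the broth removes carbonic acid, so its transfer rate
    # is added back as a proton-equivalent correction (assumed 1:1 here).
    bio_rate = hp_rate + ctr                  # net biological proton production
    dt = t[1] - t[0]
    bio_signal = np.cumsum(bio_rate) * dt     # cumulative biological signal (meq)
    print(f"cumulative signal after 4 h: {bio_signal[-1]:.2f} meq")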
Abstract:
We demonstrate that the process of generating smooth transitions can be viewed as a natural result of the filtering operations implied in the generation of discrete-time series observations from the sampling of data from an underlying continuous time process that has undergone a process of structural change. In order to focus discussion, we utilize the problem of estimating the location of abrupt shifts in some simple time series models. This approach will permit us to address salient issues relating to distortions induced by the inherent aggregation associated with discrete-time sampling of continuous time processes experiencing structural change. We also address the issue of how time irreversible structures may be generated within the smooth transition processes.
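The aggregation effect is easy to reproduce numerically. The sketch below (a simplified illustration, not a model from the paper) discretises a continuous-time path with an abrupt level shift and shows that unit-interval averaging turns the jump into a smooth transition:

    import numpy as np

    dt = 0.001
    fine = np.arange(0.0, 10.0, dt)           # fine grid standing in for continuous time
    tau = 4.3                                 # true location of the abrupt shift
    x = np.where(fine < tau, 0.0, 1.0)        # underlying abrupt structural change

    # Each discrete-time observation aggregates (averages) one unit interval,
    # so the sampled series passes smoothly through the break instead of jumping.
    obs = x.reshape(10, -1).mean(axis=1)
    print(np.round(obs, 2))                   # -> [0. 0. 0. 0. 0.7 1. 1. 1. 1. 1.]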
Abstract:
The relative merits of different systems of property rights to allocate water among different extractive uses are evaluated for the case where variability of supply is important. Three systems of property rights are considered. In the first, variable supply is dealt with through the use of water entitlements defined as shares of the total quantity available. In the second, there are two types of water entitlements: one for water with a high security of supply, and a lower-security right for the residual supply. The third is a system of entitlements specified as state-contingent claims. With zero transaction costs, all three systems are efficient. In the realistic situation where transaction costs matter, the system based on state-contingent claims is globally optimal, and the system with high-security and lower-security entitlements is preferable to the system with share entitlements.
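A toy numeric illustration of how the first two entitlement systems allocate a given season's supply (numbers invented for exposition; the state-contingent system would instead specify allocations directly for each supply state):

    # Share entitlements: every user scales with the season's total supply.
    supply = 70.0                                       # units available this season
    shares = {"A": 0.6, "B": 0.4}
    print({u: s * supply for u, s in shares.items()})   # A: 42, B: 28

    # High-security plus residual entitlements: A is served first in full,
    # B receives whatever remains of the season's supply.
    high = {"A": 50.0}
    served = {u: min(q, supply) for u, q in high.items()}
    residual = supply - sum(served.values())
    print(served, {"B": min(50.0, residual)})           # A: 50, B: 20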
Abstract:
Traditional real-time control systems are tightly integrated into the industrial processes they govern. Now, however, there is increasing interest in networked control systems. These provide greater flexibility and cost savings by allowing real-time controllers to interact with industrial processes over existing communications networks. New data packet queuing protocols are currently being developed to enable precise real-time control over a network with variable propagation delays. We show how one such protocol was formally modelled using timed automata, and how model checking was used to reveal subtle aspects of the control system's dynamic behaviour.
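As a flavour of the underlying problem (this is a toy event list, not the protocol or the timed-automaton model from the paper), variable propagation delays can deliver control packets out of sequence:

    import heapq, random

    random.seed(1)
    events = []
    for seq in range(5):
        sent = seq * 10.0                     # packets sent every 10 ms
        delay = random.uniform(1.0, 25.0)     # variable propagation delay
        heapq.heappush(events, (sent + delay, seq, sent))

    # Popping in arrival order shows sequence numbers arriving out of order,
    # which is exactly what a real-time queuing protocol must compensate for.
    while events:
        arrived, seq, sent = heapq.heappop(events)
        print(f"packet {seq}: sent {sent:5.1f} ms, arrived {arrived:5.1f} ms")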
Abstract:
Although managers consider accurate, timely, and relevant information as critical to the quality of their decisions, evidence of large variations in data quality abounds. Over a period of twelve months, the action research project reported herein attempted to investigate and track data quality initiatives undertaken by the participating organisation. The investigation focused on two types of errors: transaction input errors and processing errors. Whenever the action research initiative identified non-trivial errors, the participating organisation introduced actions to correct the errors and prevent similar errors in the future. Data quality metrics were taken quarterly to measure improvements resulting from the activities undertaken during the action research project. The action research project results indicated that for a mission-critical database to ensure and maintain data quality, commitment to continuous data quality improvement is necessary. Also, communication among all stakeholders is required to ensure common understanding of data quality improvement goals. The action research project found that to further substantially improve data quality, structural changes within the organisation and to the information systems are sometimes necessary. The major goal of the action research study is to increase the level of data quality awareness within all organisations and to motivate them to examine the importance of achieving and maintaining high-quality data.
Abstract:
Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments such as phase noise remain a key bottleneck in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches operate on carefully designed sum-product algorithms, which involve iteratively passing probabilistic messages between the channel statistical information and the decoding information. Channel statistical information generally entails high computational complexity because its probabilistic model may involve continuous random variables. The detailed knowledge of channel statistics that these algorithms require makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform separate A Posteriori Probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the considered modulation/coding scheme, as it only exploits the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of a slight performance degradation. With the aim of improving the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the received information from the channel by exchanging information in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI), followed by phase estimation on the polarization of interest. This receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operational principle of an M/S-PE block is to improve the phase tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation, BW and PLP, are considered.
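The core of a decision-directed, APP-based phase estimator can be sketched in a few lines. The following is my own simplified single-block illustration (QPSK, a constant phase offset, and a Gaussian-metric stand-in for the decoder APPs), not the algorithm from the thesis:

    import numpy as np

    rng = np.random.default_rng(0)
    const = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # QPSK alphabet
    sym = rng.choice(const, size=256)
    theta = 0.4                                                  # unknown phase offset
    noise = 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
    r = sym * np.exp(1j * theta) + noise

    def soft_symbols(r, phase, noise_var=0.02):
        # Stand-in for decoder output: symbol APPs under a Gaussian metric,
        # collapsed to the soft symbol expectation E[x_k | APPs].
        d = np.abs(r[:, None] * np.exp(-1j * phase) - const[None, :]) ** 2
        app = np.exp(-d / noise_var)
        app /= app.sum(axis=1, keepdims=True)
        return app @ const

    phase = 0.0
    for _ in range(5):                         # iterate estimation <-> APPs
        xhat = soft_symbols(r, phase)
        phase = np.angle(np.sum(r * np.conj(xhat)))   # APP-weighted estimate
    print(f"estimated phase: {phase:.3f} rad (true {theta})")

Note that the estimator never uses a statistical model of the phase noise process itself; it only reuses the symbol APPs, which is what keeps the separate approach cheap.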
Abstract:
The amplification of demand variation up a supply chain, widely termed 'the Bullwhip Effect', is disruptive, costly and something that supply chain management generally seeks to minimise. It was originally attributed to poor system design: deficiencies in policies, organisation structure and delays in material and information flow all lead to sub-optimal reorder point calculation. It has since been attributed to exogenous random factors such as uncertainties in demand, supply and distribution lead time, but these causes are not exclusive, as academic and operational studies have since shown that orders and/or inventories can exhibit significant variability even when customer demand and lead time are deterministic. This increase in the range of possible causes of dynamic behaviour indicates that our understanding of the phenomenon is far from complete. One possible, yet previously unexplored, factor that may influence dynamic behaviour in supply chains is the application and operation of supply chain performance measures. Organisations monitoring and responding to their adopted key performance metrics will make operational changes, and this action may influence the level of dynamics within the supply chain, possibly degrading the performance of the very system they were intended to measure. To explore this, a plausible abstraction of the operational responses to the Supply Chain Council's SCOR® (Supply Chain Operations Reference) model was incorporated into a classic Beer Game distribution representation, using the dynamic discrete event simulation software Simul8. During the simulation the five SCOR Supply Chain Performance Attributes (Reliability, Responsiveness, Flexibility, Cost and Utilisation) were continuously monitored and compared to established targets. Operational adjustments to the reorder point, transportation modes and production capacity (where appropriate) were made for three independent supply chain roles, and the degree of dynamic behaviour in the supply chain was measured using the ratio of the standard deviation of upstream demand to the standard deviation of downstream demand. Factors employed to build the detailed model include variable retail demand, order transmission, transportation delays, production delays, capacity constraints, demand multipliers and demand averaging periods. Five dimensions of supply chain performance were monitored independently in three autonomous supply chain roles and operational settings adjusted accordingly. The uniqueness of this research stems from the application of the five SCOR performance attributes with modelled operational responses in a dynamic discrete event simulation model. The project makes its primary contribution to knowledge by measuring the impact, on supply chain dynamics, of applying a representative performance measurement system.
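The bullwhip measurement itself is simple to reproduce. Below is a toy single-echelon order-up-to simulation (illustrative parameters, not the SCOR/Simul8 model described above) that computes the ratio of order variability to demand variability:

    import numpy as np

    rng = np.random.default_rng(42)
    demand = np.maximum(0.0, rng.normal(100.0, 10.0, 520))
    lead, alpha = 2, 0.3                      # lead time, forecast smoothing
    fcst, inv = demand[0], lead * demand[0]
    orders = []
    for d in demand:
        fcst = alpha * d + (1 - alpha) * fcst # exponential smoothing forecast
        target = lead * fcst                  # order-up-to level
        q = max(0.0, target - inv + d)        # replenishment order
        orders.append(q)
        inv = inv - d + q                     # immediate delivery, for simplicity
    orders = np.array(orders)
    print(f"bullwhip ratio: {orders.std() / demand.std():.2f}")  # > 1: amplification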
Abstract:
The performance of seven minimization algorithms is compared on five neural network problems. These include a variable-step-size algorithm, conjugate gradient, and several methods with explicit analytic or numerical approximations to the Hessian.
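A minimal analogue of such a comparison, using a standard test function in place of the neural network problems (so purely illustrative of the method categories, not the paper's results):

    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

    x0 = np.zeros(5)
    for method in ["CG", "BFGS", "Newton-CG"]:   # CG vs. Hessian-based methods
        kwargs = {"jac": rosen_der, "method": method}
        if method == "Newton-CG":
            kwargs["hess"] = rosen_hess           # explicit analytic Hessian
        res = minimize(rosen, x0, **kwargs)
        print(f"{method:>9}: f = {res.fun:.2e}, function evals = {res.nfev}")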
Abstract:
There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, training such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
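The E-step/M-step structure such training uses can be seen in the simplest latent variable density model, a two-component Gaussian mixture (generic textbook EM, far less expressive than the model in the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
    pi, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(50):
        # E-step: responsibilities of each component for each data point
        p = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        w = p * np.array([pi, 1 - pi])
        r = w / w.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weight, means and variances
        nk = r.sum(axis=0)
        pi = nk[0] / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    print(np.round(mu, 2), np.round(var, 2), round(pi, 2))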
Abstract:
The Thouless-Anderson-Palmer (TAP) approach was originally developed for analysing the Sherrington-Kirkpatrick model in the study of spin glasses and has since been employed mainly in the context of extensively connected systems, in which each dynamical variable interacts weakly with the others. Recently, we extended this method to handle general intensively connected systems, where each variable has only O(1) connections characterised by strong couplings. However, the new formulation looks quite different from existing analyses, and it is only natural to question whether it actually reproduces known results for systems of extensive connectivity. In this chapter, we apply our formulation of the TAP approach to an extensively connected system, the Hopfield associative memory model, showing that it produces results identical to those obtained by the conventional formulation.
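For readers unfamiliar with the test case, the Hopfield model itself is compactly stated in code (this shows only the model and its recall dynamics; the TAP analysis in the chapter is analytic):

    import numpy as np

    rng = np.random.default_rng(3)
    N, P = 200, 10
    xi = rng.choice([-1, 1], size=(P, N))     # stored random patterns
    J = (xi.T @ xi) / N                       # Hebbian couplings
    np.fill_diagonal(J, 0.0)

    s = xi[0].copy()
    s[: N // 5] *= -1                         # corrupt 20% of pattern 0
    for _ in range(5):                        # a few synchronous update sweeps
        s = np.sign(J @ s + 1e-12)            # epsilon breaks exact ties
    print("overlap with pattern 0:", (s @ xi[0]) / N)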
Abstract:
Purpose – To investigate the role of simulation in the introduction of technology in a continuous operations process. Design/methodology/approach – A case-based research method was chosen with the aim of providing an exemplar of practice and testing the proposition that the use of simulation can improve the implementation and running of conveyor systems in continuous process facilities. Findings – The research determines the optimum rate of re-introduction of inventory to a conveyor system generated during a breakdown event. Research limitations/implications – More case studies are required demonstrating the operational and strategic benefits that can be gained by using simulation to assess technology in organisations. Practical implications – A practical outcome of the study was the implementation of a policy for the manual re-introduction of inventory on a conveyor line after a breakdown event had occurred. Originality/value – The paper presents a novel example of the use of simulation to estimate the re-introduction rate of inventory after a breakdown event on a conveyor line. It highlights how addressing this operational issue ahead of implementation can improve the likelihood of success of the strategic decision to acquire the technology.
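The shape of the re-introduction question can be illustrated with invented numbers (this back-of-envelope check is mine, not the case study's simulation): the manual rate trades clearance time against the conveyor's spare capacity.

    capacity = 60.0        # units/min the conveyor can absorb, assumed
    arrival = 45.0         # normal upstream arrival rate, units/min, assumed
    backlog = 300.0        # units accumulated during the breakdown, assumed

    for reintro in (5, 10, 15, 20):               # candidate manual rates
        load = arrival + reintro                  # total feed while clearing
        minutes = backlog / reintro
        status = "ok" if load <= capacity else "OVERLOAD"
        print(f"re-intro {reintro:2d}/min: clears in {minutes:5.1f} min, "
              f"load {load:.0f}/{capacity:.0f} -> {status}")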
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems by use of digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
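In the same DSP spirit (my illustration, not a method from the thesis), a queue-length z-transform can be inverted numerically by sampling it on the unit circle and applying an inverse FFT; here for the classical M/D/1 Pollaczek-Khinchine transform:

    import numpy as np

    lam, D = 0.7, 1.0                      # arrival rate, deterministic service time
    rho = lam * D
    N = 1024
    z = np.exp(2j * np.pi * np.arange(N) / N)
    Bstar = np.exp(-lam * (1 - z) * D)     # service-time LST at s = lam(1 - z)
    num = (1 - rho) * (1 - z) * Bstar
    den = Bstar - z
    den[0] = 1.0                           # avoid 0/0 at z = 1 (removable)
    Pz = num / den
    Pz[0] = 1.0                            # Pi(1) = 1 by normalisation
    p = np.fft.ifft(Pz).real               # p[n] approximates P(queue length = n)
    print(np.round(p[:5], 4), "sum:", round(float(p.sum()), 6))  # p[0] = 1 - rho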
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of necessary communications for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
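A highly simplified flavour of a decentralised estimate (illustrative matrices and gain, not the thesis' derivation): each partition observes its own outputs and receives the coupling term from the neighbouring partition over the communications network.

    import numpy as np

    # Two coupled subsystems; partition 1 runs a local discrete-time observer.
    A11 = np.array([[0.9, 0.1], [0.0, 0.8]])
    A12 = np.array([[0.05], [0.0]])
    A21 = np.array([[0.0, 0.1]])
    A22 = np.array([[0.7]])
    C1 = np.array([[1.0, 0.0]])
    L1 = np.array([[0.5], [0.2]])          # local observer gain, chosen ad hoc

    x1, x2 = np.array([1.0, -1.0]), np.array([0.5])
    xh1 = np.zeros(2)                      # partition 1's state estimate
    for _ in range(30):
        y1 = C1 @ x1                                       # local measurement
        # communicated coupling: partition 2 sends x2 (its estimate, in practice)
        xh1 = A11 @ xh1 + A12 @ x2 + L1 @ (y1 - C1 @ xh1)
        x1, x2 = A11 @ x1 + A12 @ x2, A21 @ x1 + A22 @ x2  # true plant step
    print("partition 1 estimation error:", np.round(x1 - xh1, 4))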
Abstract:
Traditional high speed machinery actuators are powered and coordinated by mechanical linkages driven from a central drive, but these linkages may be replaced by independently synchronised electric drives. Problems associated with utilising such electric drives for this form of machinery were investigated. The research concentrated on a high speed rod-making machine, which required control of high inertias (0.01-0.5 kgm²) at continuous high speed (2500 r/min), with low relative phase errors between two drives (0.0025 radians). Traditional minimum energy drive selection techniques for incremental motions were not applicable to continuous applications, which require negligible energy dissipation, so new selection techniques were developed. A brushless configuration constant enabled the comparison of seven different servo systems; the rare earth brushless drives had the best power rates, a key performance measure. Simulation was used to review control strategies, leading to the design of a microprocessor controller with a proportional velocity loop within a proportional position loop with velocity feedforward. Local control schemes were investigated as means of reducing relative errors between drives: the slave of a master/slave scheme compensates for the master's errors; the matched scheme has drives with similar absolute errors so the relative error is minimised; and the feedforward scheme minimises error by adding compensation from previous knowledge. Simulation gave the approximate velocity loop bandwidth and position loop gain required to meet the specification. Theoretical limits for these parameters were defined in terms of digital sampling delays, quantisation, and system phase shifts, and performance degradation due to mechanical backlash was evaluated. Thus any drive could be checked to ensure that the performance specification could be realised. A two-drive demonstrator was commissioned with 0.01 kgm² loads. By use of simulation the performance of one drive was improved by increasing the velocity loop bandwidth fourfold. With the master/slave scheme, relative errors were within 0.0024 radians at a constant 2500 r/min for two 0.01 kgm² loads.
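The controller structure that emerged from the simulation work is easy to sketch (gains and step size below are illustrative, not the thesis' tuned values): a proportional velocity loop nested inside a proportional position loop, with the reference velocity fed forward.

    import numpy as np

    J = 0.01                              # load inertia, kg m^2
    dt = 1e-4                             # control period, s
    Kp_pos, Kp_vel = 80.0, 0.8            # illustrative loop gains
    w_ref = 2500.0 * 2.0 * np.pi / 60.0   # 2500 r/min in rad/s

    theta_ref = theta = w = 0.0
    for _ in range(20000):                # 2 s of constant-speed tracking
        theta_ref += w_ref * dt
        w_cmd = Kp_pos * (theta_ref - theta) + w_ref   # position loop + feedforward
        torque = Kp_vel * (w_cmd - w)                  # proportional velocity loop
        w += torque / J * dt
        theta += w * dt
    print(f"position error after 2 s: {abs(theta_ref - theta):.2e} rad")

With the feedforward term the steady-state following error at constant speed collapses towards zero, which is why such a scheme can hold tight relative phase between drives.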