30 results for Distributed parameter control systems
Abstract:
The integrated control of nitrate recirculation and external carbon addition in a predenitrification biological wastewater treatment system is studied. The proposed control structure consists of four feedback control loops, which manipulate the nitrate recirculation and the carbon dosage flows in a highly coordinated manner such that the consumption of external carbon is minimised while the nitrate discharge limits (based on both grab and composite samples) are met. The control system requires the measurement of the nitrate concentrations at the end of both the anoxic and the aerobic zones. Distinct from ordinary control systems, which typically minimise the variation in the controlled variables, the proposed control system essentially maximises the diurnal variation of the effluent nitrate concentration and through this maximises the use of influent COD for denitrification, thus minimising the requirement for external carbon source. Simulation studies using a commonly accepted simulation benchmark show that the controlled system consistently achieves the designated effluent quality with minimum costs.
Abstract:
The development of a strong, active granular sludge bed is necessary for optimal operation of upflow anaerobic sludge blanket reactors. The microbial and mechanical structure of the granules may have a strong influence on desirable properties such as growth rate, settling velocity and shear strength. Theories have been proposed for granule microbial structure based on the relative kinetics of substrate degradation, but contradict some observations from both modelling and microscopic studies. In this paper, the structures of four granule types were examined from full-scale UASB reactors, treating wastewater from a cannery, a slaughterhouse, and two breweries. Microbial structure was determined using fluorescence in situ hybridisation probing with 16S rRNA-directed oligonucleotide probes, and superficial structure and microbial density (volume occupied by cells and microbial debris) assessed using scanning electron microscopy (SEM), and transmission electron microscopy (TEM). The granules were also modelled using a distributed parameter biofilm model, with a previously published biochemical model structure, biofilm modelling approach, and model parameters. The model results reflected the trophic structures observed, indicating that the structures were possibly determined by kinetics. Of particular interest were results from simulations of the protein grown granules, which were predicted to have slow growth rates, low microbial density, and no trophic layers, the last two of which were reflected by microscopic observations. The primary cause of this structure, as assessed by modelling, was the particulate nature of the wastewater, and the slow rate of particulate hydrolysis, rather than the presence of proteins in the wastewater. Because solids hydrolysis was rate limiting, soluble substrate concentrations were very low (below Monod half saturation concentration), which caused low growth rates. (C) 2003 Elsevier Ltd. All rights reserved.
Abstract:
Deregulations and market practices in the power industry have brought great challenges to the system planning area. In particular, they introduce a variety of uncertainties to system planning. New techniques are required to cope with such uncertainties. As a promising approach, probabilistic methods are attracting increasing attention from system planners. In small signal stability analysis, generation control parameters play an important role in determining the stability margin. The objective of this paper is to investigate power system state matrix sensitivity characteristics with respect to system parameter uncertainties using analytical and numerical approaches, and to identify those parameters that have a great impact on the system eigenvalues and, therefore, on the system stability properties. The identified parameter variations need to be investigated with priority. The results can be used to help Regional Transmission Organizations (RTOs) and Independent System Operators (ISOs) perform planning studies under the open access environment.
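The eigenvalue sensitivities studied in this abstract come from the standard first-order perturbation formula dλ/dp = yᴴ(∂A/∂p)x / (yᴴx), with x and y the right and left eigenvectors of the state matrix. A minimal numeric sketch, using a hypothetical 2x2 state matrix and an assumed parameter dependence dA/dp (neither taken from the paper), cross-checked against a finite-difference perturbation:

```python
import numpy as np

# Hypothetical state matrix and an assumed dependence dA/dp on a single
# parameter p (e.g. a damping gain); both are illustrative, not the paper's.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
dA_dp = np.array([[0.0, 0.0],
                  [0.0, -1.0]])

lam, V = np.linalg.eig(A)            # right eigenvectors (columns of V)
lamL, W = np.linalg.eig(A.conj().T)  # left eigenvectors (eigenvalues conjugated)
order = [int(np.argmin(np.abs(lamL - l.conj()))) for l in lam]
W = W[:, order]                      # align left eigenvectors with lam

# First-order sensitivity: d(lambda_i)/dp = y_i^H (dA/dp) x_i / (y_i^H x_i)
sens = np.array([(W[:, i].conj() @ dA_dp @ V[:, i]) /
                 (W[:, i].conj() @ V[:, i]) for i in range(len(lam))])

# Finite-difference check against the perturbed matrix A + eps * dA/dp
eps = 1e-6
lam_pert = np.linalg.eig(A + eps * dA_dp)[0]
fd = np.array([lam_pert[np.argmin(np.abs(lam_pert - l))] - l
               for l in lam]) / eps
print(np.max(np.abs(sens - fd)))     # agreement to first order
```

Ranking parameters by |dλ/dp| for the critical (least-damped) eigenvalues is one way to prioritise the parameter variations the abstract mentions.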
Abstract:
A new approach to identify multivariable Hammerstein systems is proposed in this paper. By using cardinal cubic spline functions to model the static nonlinearities, the proposed method is effective in modelling processes with hard and/or coupled nonlinearities. With an appropriate transformation, the nonlinear models are parameterized such that the nonlinear identification problem is converted into a linear one. The persistently exciting condition for the transformed input is derived to ensure the estimates are consistent with the true system. A simulation study is performed to demonstrate the effectiveness of the proposed method compared with the existing approaches based on polynomials. (C) 2006 Elsevier Ltd. All rights reserved.
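The linearising transformation described here, expanding the static nonlinearity in a fixed basis so the Hammerstein model becomes linear in its parameters, can be sketched as follows. This is a simplified single-input illustration using monomials as a stand-in for the paper's cardinal cubic splines; the nonlinearity f, the FIR linear block h, and all sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Hammerstein system: static nonlinearity f(u) followed by an
# FIR linear block h; both are unknown to the identification step below.
def f(u):
    return np.tanh(2.0 * u)

h = np.array([1.0, 0.5, 0.25])

u = rng.uniform(-1, 1, 500)          # persistently exciting random input
y = np.convolve(f(u), h)[: len(u)]   # noiseless output

# Linearise: expand u in a fixed basis (monomials here stand in for the
# cardinal cubic splines of the paper); the output is then linear in the
# unknown coefficient vector theta, so ordinary least squares applies.
degree, lag = 5, 3                   # 3 lags cover the 3-tap FIR block
Phi_nl = np.vstack([u ** k for k in range(1, degree + 1)]).T

rows = []
for t in range(lag, len(u)):
    # regressor: basis-expanded inputs at lags 0..2, most recent first
    rows.append(Phi_nl[t - lag + 1 : t + 1][::-1].ravel())
Phi = np.array(rows)
theta, *_ = np.linalg.lstsq(Phi, y[lag:], rcond=None)

y_hat = Phi @ theta
print(np.max(np.abs(y_hat - y[lag:])))  # residual is small vs. signal scale
```

Swapping a cubic-spline basis for the monomials leaves the least-squares step unchanged; only the regressor construction differs.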
Abstract:
We prove upper and lower bounds relating the quantum gate complexity of a unitary operation, U, to the optimal control cost associated to the synthesis of U. These bounds apply for any optimal control problem, and can be used to show that the quantum gate complexity is essentially equivalent to the optimal control cost for a wide range of problems, including time-optimal control and finding minimal distances on certain Riemannian, sub-Riemannian, and Finslerian manifolds. These results generalize the results of [Nielsen, Dowling, Gu, and Doherty, Science 311, 1133 (2006)], which showed that the gate complexity can be related to distances on a Riemannian manifold.
Abstract:
We consider a problem of robust performance analysis of linear discrete time varying systems on a bounded time interval. The system is represented in the state-space form. It is driven by a random input disturbance with imprecisely known probability distribution; this distributional uncertainty is described in terms of entropy. The worst-case performance of the system is quantified by its a-anisotropic norm. Computing the anisotropic norm is reduced to solving a set of difference Riccati and Lyapunov equations and a special form equation.
Abstract:
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past emulators have been used in trivial applications such as maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system’s functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation, coupled with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, thus allowing their behaviours to be compared directly.
Abstract:
Evolution strategies are a class of general optimisation algorithms which are applicable to functions that are multimodal, nondifferentiable, or even discontinuous. Although recombination operators have been introduced into evolution strategies, the primary search operator is still mutation. Classical evolution strategies rely on Gaussian mutations. A new mutation operator based on the Cauchy distribution is proposed in this paper. It is shown empirically that the new evolution strategy based on Cauchy mutation outperforms the classical evolution strategy on most of the 23 benchmark problems tested in this paper. The paper also shows empirically that changing the order of mutating the objective variables and mutating the strategy parameters does not alter the previous conclusion significantly, and that Cauchy mutations with different scaling parameters still outperform the Gaussian mutation with self-adaptation. However, the advantage of Cauchy mutations disappears when recombination is used in evolution strategies. It is argued that the search step size plays an important role in determining evolution strategies' performance. The large step size of recombination plays a similar role as Cauchy mutation.
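A minimal (1+1)-ES sketch of the comparison the abstract describes: the same elitist loop, with the mutation drawn from either a Cauchy or a Gaussian distribution. The sphere objective, dimension, and step size are illustrative and are not the paper's 23 benchmark problems:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy benchmark objective: sphere function, minimum at the origin."""
    return float(np.sum(x ** 2))

def es_step(parent, sigma, mutation="cauchy"):
    """One (1+1)-ES step: mutate the parent, keep the better point."""
    if mutation == "cauchy":
        step = sigma * rng.standard_cauchy(parent.shape)  # heavy-tailed jumps
    else:
        step = sigma * rng.standard_normal(parent.shape)  # classical Gaussian
    child = parent + step
    return child if sphere(child) < sphere(parent) else parent

x = np.full(10, 5.0)        # start far from the optimum
for _ in range(2000):
    x = es_step(x, sigma=0.1)
print(sphere(x))            # much improved over the initial value of 250
```

The heavy tails of the Cauchy distribution occasionally produce large steps, which is exactly the larger search step size the abstract credits for the performance difference.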
Abstract:
Coset enumeration is a most important procedure for investigating finitely presented groups. We present a practical parallel procedure for coset enumeration on shared-memory processors. The shared-memory architecture is particularly interesting because such parallel computation is both faster and cheaper: the lower cost comes when the program requires large amounts of memory, and additional CPUs allow us to reduce the time that the expensive memory is in use. Rather than report on a suite of test cases, we take a single, typical case and analyze the performance factors in depth. The parallelization is achieved through a master-slave architecture. This results in an interesting phenomenon, whereby the CPU time is divided into a sequential and a parallel portion, and the parallel part demonstrates a speedup that is linear in the number of processors. We describe an early version for which only 40% of the program was parallelized, and we describe how this was modified to achieve 90% parallelization while using 15 slave processors and a master. In the latter case, a sequential time of 158 seconds was reduced to 29 seconds using 15 slaves.
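The reported figures line up with Amdahl's law: with a fraction p of the work parallelised over n processors, the ideal speedup is 1/((1-p) + p/n). A quick check of the numbers in the abstract:

```python
# Amdahl's-law check of the speedups implied by the abstract (illustrative).
def amdahl(p, n):
    """Ideal speedup with fraction p parallelised over n processors."""
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl(0.40, 15), 2))  # early version, 40% parallelised -> 1.6
print(round(amdahl(0.90, 15), 2))  # later version, 90% parallelised -> 6.25
print(round(158 / 29, 2))          # measured speedup reported -> 5.45
```

The measured speedup of about 5.45 sits sensibly below the Amdahl bound of 6.25 for 90% parallelisation on 15 slaves.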
Abstract:
In this paper, the minimum-order stable recursive filter design problem is proposed and investigated. This problem plays an important role in pipeline implementations in signal processing. Here, the existence of a high-order stable recursive filter is proved theoretically, and an upper bound for the highest order of stable filters is given. Then the minimum-order stable linear predictor is obtained by solving an optimization problem. The popular genetic algorithm approach is adopted, since it is a heuristic probabilistic optimization technique that has been widely used in engineering design. Finally, an illustrative example is used to show the effectiveness of the proposed algorithm.
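The stability constraint that any such design must satisfy reduces to a pole-location check: a recursive (IIR) filter is stable exactly when all roots of its denominator polynomial lie strictly inside the unit circle. A minimal check, with illustrative coefficient vectors (not from the paper):

```python
import numpy as np

def is_stable(a):
    """Stability test for 1/A(z): a = [1, a1, ..., aN] in powers of z^-1.
    Stable iff all denominator roots lie strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(a)) < 1.0))

print(is_stable([1.0, -0.5]))        # single pole at 0.5 -> True
print(is_stable([1.0, -1.5, 0.7]))   # complex pair, modulus sqrt(0.7) -> True
print(is_stable([1.0, -2.0, 0.5]))   # a pole outside the unit circle -> False
```

A genetic algorithm can use such a check as a feasibility filter (or penalty) while searching over candidate filter orders and coefficients.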
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
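The PCA projection of a learning trajectory can be sketched in a few lines: record the flattened weight vector at each epoch, centre the resulting matrix, and project onto the leading principal components. The random-walk trajectory below is a stand-in for an actual back-propagation run:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical learning trajectory: a T x D matrix whose rows are the
# network's flattened weight vectors recorded at each training epoch.
T, D = 200, 50
trajectory = np.cumsum(rng.normal(size=(T, D)), axis=0)  # stand-in for SGD path

# PCA via SVD of the mean-centred trajectory.
centred = trajectory - trajectory.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

# Project the high-dimensional path onto its first two principal components;
# the resulting T x 2 curve can be plotted to visualise the learning path.
projected = centred @ Vt[:2].T
print(projected.shape)  # (200, 2)
```

Plotting `projected` as a curve ordered by epoch gives the kind of low-dimensional learning-trajectory view the paper describes.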
Abstract:
The fabrication of heavy-duty printer heads involves a great deal of grinding work. Previously in the printer manufacturing industry, four grinding procedures were manually conducted in four grinding machines, respectively. The productivity of the whole grinding process was low due to the long loading time. Also, the machine floor space occupation was large because of the four separate grinding machines. The manual operation also caused inconsistent quality. This paper reports the system and process development of a highly integrated and automated high-speed grinding system for printer heads. The developed system, which is believed to be the first of its kind, not only produces printer heads of consistently good quality, but also significantly reduces the cycle time and machine floor space occupation.
Abstract:
Any given n × n matrix A is shown to be a restriction, to the A-invariant subspace, of a nonnegative N × N matrix B of spectral radius ρ(B) arbitrarily close to ρ(A). A difference inclusion x_{k+1} ∈ A x_k, where A is a compact set of matrices, is asymptotically stable if and only if A can be extended to a set B of nonnegative matrices B with ‖B‖₁ < 1 or ‖B‖∞ < 1. Similar results are derived for differential inclusions.
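Norm conditions of this kind are easy to check numerically, because the spectral radius is bounded by any induced norm. A small illustration with an arbitrary stable matrix (not from the paper): the entrywise absolute matrix |A| is itself a nonnegative matrix dominating A, so ‖|A|‖∞ < 1 certifies decay of the iterates.

```python
import numpy as np

# Illustrative stable matrix; values are arbitrary, not from the paper.
A = np.array([[0.4, -0.3],
              [0.2,  0.5]])

norm_inf = np.abs(A).sum(axis=1).max()   # induced infinity-norm of |A|
rho = max(abs(np.linalg.eigvals(A)))     # spectral radius of A

# Iterate x_{k+1} = A x_k: since rho <= norm_inf < 1, the state decays.
x = np.array([1.0, 1.0])
for _ in range(100):
    x = A @ x

print(norm_inf < 1, rho <= norm_inf, np.linalg.norm(x) < 1e-6)
```

For a genuine difference inclusion one would apply the norm test to the extended nonnegative set B described in the abstract, not to a single matrix.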
Abstract:
A soft linguistic evaluation method is proposed for the environmental assessment of physical infrastructure projects based on fuzzy relations. Infrastructure projects are characterized in terms of linguistic expressions of 'performance' with respect to factors or impacts and the 'importance' of those factors/impacts. A simple example is developed to illustrate the method in the context of three road infrastructure projects assessed against five factors/impacts. In addition, a means to include hard or crisp factors is presented and illustrated with respect to a sixth factor.
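The fuzzy-relation machinery described can be illustrated with a max-min composition of a performance relation and importance weights. The two-project, two-factor setup and all membership values below are hypothetical:

```python
import numpy as np

# Hypothetical memberships: rows are projects, columns are factors/impacts.
R = np.array([[0.8, 0.3],    # project A: performance on factors 1 and 2
              [0.4, 0.9]])   # project B
w = np.array([0.6, 0.7])     # linguistic importance of each factor

# Max-min composition: score_i = max_j min(R[i, j], w[j]) caps each factor's
# contribution at its importance, then takes the strongest supported claim.
scores = np.max(np.minimum(R, w), axis=1)
print(scores)  # [0.6 0.7]
```

Hard (crisp) factors, like the sixth factor in the abstract's example, can be folded in by mapping crisp values to degenerate 0/1 memberships before composing.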