991 results for Optimal regulation
Abstract:
The optimal bounded control of quasi-integrable Hamiltonian systems with wide-band random excitation for minimizing their first-passage failure is investigated. First, a stochastic averaging method for multi-degree-of-freedom (MDOF) strongly nonlinear quasi-integrable Hamiltonian systems with wide-band stationary random excitations using generalized harmonic functions is proposed. Then, the dynamical programming equations and their associated boundary and final-time conditions for the control problems of maximizing reliability and maximizing mean first-passage time are formulated from the averaged Itô equations by applying the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and the control constraints. The relationship between the dynamical programming equations and the backward Kolmogorov equation for the conditional reliability function, and the Pontryagin equation for the conditional mean first-passage time, of the optimally controlled system is discussed. Finally, the conditional reliability function and the conditional probability density and mean of the first-passage time of the optimally controlled system are obtained by solving the backward Kolmogorov equation and the Pontryagin equation. The application of the proposed procedure and the effectiveness of the control strategy are illustrated with an example.
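The Pontryagin equation referred to above is, for a one-dimensional averaged diffusion, just a linear two-point boundary-value problem for the mean first-passage time. The sketch below is not the paper's MDOF procedure; it is a minimal finite-difference illustration, assuming a scalar diffusion dX = m(X) dt + σ dW on (0, b) with a reflecting boundary at 0 and an absorbing boundary at b, so that T(x) solves (σ²/2) T″ + m(x) T′ = −1 with T′(0) = 0 and T(b) = 0. The function name and parameters are illustrative.

```python
import numpy as np

def mean_first_passage(drift, sigma, b=1.0, n=400):
    """Solve the Pontryagin equation (sigma^2/2) T'' + m(x) T' = -1 on (0, b),
    reflecting at x = 0 (T'(0) = 0) and absorbing at x = b (T(b) = 0)."""
    h = b / n
    x = np.linspace(0.0, b, n + 1)
    A = np.zeros((n + 1, n + 1))
    rhs = np.full(n + 1, -1.0)
    for i in range(1, n):
        d = 0.5 * sigma ** 2 / h ** 2          # diffusion coefficient of the stencil
        m = drift(x[i]) / (2 * h)              # centred first-derivative coefficient
        A[i, i - 1] = d - m
        A[i, i] = -2 * d
        A[i, i + 1] = d + m
    A[0, 0], A[0, 1], rhs[0] = -1.0, 1.0, 0.0  # reflecting boundary: T'(0) = 0
    A[n, n], rhs[n] = 1.0, 0.0                 # absorbing boundary: T(b) = 0
    return x, np.linalg.solve(A, rhs)
```

For zero drift and σ = b = 1 the exact answer is T(x) = 1 − x², which gives a quick sanity check on the discretisation.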
Abstract:
We present a statistical model-based approach to signal enhancement in the case of additive broadband noise. Because broadband noise is localised in neither time nor frequency, its removal is one of the most pervasive and difficult signal enhancement tasks. In order to improve perceived signal quality, we take advantage of human perception and define a best estimate of the original signal in terms of a cost function incorporating perceptual optimality criteria. We derive the resultant signal estimator and implement it in a short-time spectral attenuation framework. Audio examples, references, and further information may be found at http://www-sigproc.eng.cam.ac.uk/~pjw47.
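The short-time spectral attenuation framework mentioned above can be sketched with a generic Wiener-style gain rule; the paper's perceptual cost function and resulting estimator are not reproduced here. In this hedged illustration, `spectral_attenuation`, the `noise_psd` argument (an estimate of the noise power per frequency bin), and the gain `floor` are all assumptions of the sketch, not the authors' design.

```python
import numpy as np

def spectral_attenuation(noisy, noise_psd, frame_len=256, floor=0.05):
    """Suppress additive broadband noise by attenuating short-time spectra.

    Windowed overlap-add analysis/synthesis; each bin is scaled by a
    Wiener-style gain, clipped below at `floor` to limit musical noise.
    """
    hop = frame_len // 2
    window = np.hanning(frame_len)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len + 1, hop):
        frame = noisy[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        power = np.abs(spec) ** 2
        gain = np.maximum(1.0 - noise_psd / np.maximum(power, 1e-12), floor)
        out[start:start + frame_len] += np.fft.irfft(gain * spec) * window
        norm[start:start + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-12)
```

In practice `noise_psd` would be estimated from noise-only segments; the perceptual criteria of the paper would replace the simple gain rule above.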
Abstract:
The sensor scheduling problem can be formulated as a controlled hidden Markov model and this paper solves the problem when the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. The aim is to minimise the variance of the estimation error of the hidden state w.r.t. the action sequence. We present a novel simulation-based method that uses a stochastic gradient algorithm to find optimal actions. © 2007 Elsevier Ltd. All rights reserved.
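The paper's simulation-based gradient algorithm for continuous action spaces is not spelled out in the abstract; as a hedged stand-in for the general idea (descend a cost that can only be evaluated by noisy simulation), here is a simultaneous-perturbation (SPSA-style) gradient descent on a hypothetical one-dimensional simulated cost. The cost function, step sizes, and decay exponent are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def simulated_cost(action, rng):
    # Hypothetical noisy objective standing in for the simulated
    # estimation-error variance as a function of the action.
    return (action - 2.0) ** 2 + 0.1 * rng.standard_normal()

def spsa_minimise(cost, a0=0.0, iters=2000, step=0.05, perturb=0.1, seed=0):
    """Stochastic gradient descent with a two-sided simultaneous-perturbation
    gradient estimate, using only noisy evaluations of the simulated cost."""
    rng = np.random.default_rng(seed)
    a = a0
    for k in range(1, iters + 1):
        delta = rng.choice([-1.0, 1.0])
        g = (cost(a + perturb * delta, rng)
             - cost(a - perturb * delta, rng)) / (2 * perturb * delta)
        a -= (step / k ** 0.602) * g          # decaying step size
    return a
```

With the quadratic toy cost above, the iterate settles near the minimiser at 2.0 despite never seeing an exact gradient.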
Abstract:
Salmonella enterica serovar Typhi, the agent of typhoid fever in humans, expresses the surface Vi polysaccharide antigen that contributes to virulence. However, Vi expression can also be detrimental to some key steps of S. Typhi infectivity, such as invasion, and Vi is the target of protective immune responses. We used a strain of S. Typhimurium carrying the whole Salmonella pathogenicity island 7 (SPI-7) to monitor in vivo Vi expression within phagocytic cells of mice at different times after systemic infection. We also tested whether it is possible to modulate Vi expression via the use of in vivo-inducible promoters and whether this would trigger anti-Vi antibodies through the use of Vi-expressing live bacteria. Our results show that Vi expression in the liver and spleen is downregulated with the progression of infection and that the Vi-negative population of bacteria becomes prevalent by day 4 postinfection. Furthermore, we showed that replacing the natural tviA promoter with the promoter of the SPI-2 gene ssaG resulted in sustained Vi expression in the tissues. Intravenous or oral infection of mice with a strain of S. Typhimurium expressing Vi under the control of the ssaG promoter triggered detectable levels of all IgG subclasses specific for Vi. Our work highlights that Vi is downregulated in vivo and provides proof of principle that it is possible to generate a live attenuated vaccine that induces Vi-specific antibodies after single oral administration.
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011], an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by the theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
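For readers unfamiliar with the particle machinery the theory above analyses, here is a minimal bootstrap particle filter, the baseline on which filter-derivative recursions are built (the Poyiadjis et al. derivative algorithm itself is not reproduced). The model is a linear-Gaussian AR(1) state-space model, chosen as an assumption of this sketch because the exact Kalman filter answer is available for comparison.

```python
import numpy as np

def bootstrap_filter(obs, n_particles=2000, phi=0.9, sigma=0.5, tau=1.0, seed=0):
    """Bootstrap particle filter for x_t = phi*x_{t-1} + sigma*v_t,
    y_t = x_t + tau*w_t, with v_t, w_t standard Gaussian.

    Returns the particle estimate of the filter mean E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    # initialise from the stationary distribution of the AR(1) state
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi ** 2), n_particles)
    means = []
    for y in obs:
        x = phi * x + sigma * rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((y - x) / tau) ** 2                        # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.dot(w, x))                                # weighted mean
        x = x[rng.choice(n_particles, n_particles, p=w)]          # multinomial resample
    return np.array(means)
```

Because the model is linear-Gaussian, the particle means should track the exact Kalman filter means up to Monte Carlo error, which is the kind of time-uniform behaviour the Lp bounds above formalise.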
Abstract:
A procedure for designing the optimal bounded control of strongly non-linear oscillators under combined harmonic and white-noise excitations for minimizing their first-passage failure is proposed. First, a stochastic averaging method for strongly non-linear oscillators under combined harmonic and white-noise excitations using generalized harmonic functions is introduced. Then, the dynamical programming equations and their boundary and final-time conditions for the control problems of maximizing reliability and of maximizing mean first-passage time are formulated from the averaged Itô equations by using the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and the control constraint. Finally, the conditional reliability function and the conditional probability density and mean of the first-passage time of the optimally controlled system are obtained by solving the backward Kolmogorov equation and the Pontryagin equation. An example is given to illustrate the proposed procedure, and the results obtained are verified against those from digital simulation. © 2003 Elsevier Ltd. All rights reserved.
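The closing verification step above, checking analytic first-passage statistics against digital simulation, can be sketched for the simplest possible case: Brownian motion with a reflecting barrier at 0 and an absorbing barrier at b, whose exact mean first-passage time from the origin is b²/σ². This is a hedged toy example (Euler-Maruyama time stepping, illustrative parameters), not the paper's controlled oscillator.

```python
import numpy as np

def mc_first_passage(x0=0.0, sigma=1.0, b=1.0, dt=1e-3, n_paths=2000, seed=0):
    """Euler-Maruyama estimate of the mean first-passage time to level b
    for Brownian motion reflected at 0, started at x0."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    t = np.zeros(n_paths)
    alive = np.ones(n_paths, dtype=bool)
    while alive.any():
        x[alive] += sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        x[alive] = np.abs(x[alive])        # reflecting barrier at 0
        t[alive] += dt
        alive &= x < b                     # absorb paths that reach b
    return t.mean()
```

With σ = b = 1 the analytic mean first-passage time is 1, so the simulated estimate plays the role of the "digital simulation" check (up to discretisation bias and Monte Carlo error).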
Abstract:
This paper explores the use of Monte Carlo techniques in deterministic nonlinear optimal control. Inter-dimensional population Markov Chain Monte Carlo (MCMC) techniques are proposed to solve the nonlinear optimal control problem. The linear quadratic and Acrobot problems are studied to demonstrate the successful application of the relevant techniques.
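The abstract does not spell out the MCMC construction, so the following is a hedged sketch of the general idea only: treat the open-loop control sequence as the sampled variable, target a Boltzmann distribution proportional to exp(−J(u)/temperature) over the control cost, and anneal the temperature so the chain concentrates on low-cost sequences. The scalar linear-quadratic problem, proposal scale, and annealing schedule below are all illustrative assumptions, not the paper's inter-dimensional population sampler.

```python
import numpy as np

def lq_cost(u, x0=1.0):
    """Quadratic cost of the scalar system x_{k+1} = x_k + u_k."""
    x, cost = x0, 0.0
    for uk in u:
        cost += x ** 2 + uk ** 2
        x += uk
    return cost + x ** 2                       # terminal state penalty

def mcmc_control(horizon=5, iters=20000, seed=0):
    """Annealed random-walk Metropolis over the control sequence u."""
    rng = np.random.default_rng(seed)
    u = np.zeros(horizon)
    c = lq_cost(u)
    best_u, best_c = u.copy(), c
    for k in range(iters):
        temp = max(1e-3, 0.9995 ** k)          # geometric annealing schedule
        prop = u + 0.1 * rng.standard_normal(horizon)
        cp = lq_cost(prop)
        if np.log(rng.random()) < (c - cp) / temp:   # Metropolis accept
            u, c = prop, cp
            if c < best_c:
                best_u, best_c = u.copy(), c
    return best_u, best_c
```

For this horizon-5 problem the Riccati recursion gives an optimal cost of about 1.618; the annealed chain's best sample should land close to that, which is the sense in which MCMC "solves" a deterministic optimal control problem.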