93 results for Flowering time control
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy in which the system outputs are controlled within specified zones, instead of at fixed set points. One strategy to implement the zone control is to select different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement stable zone control is through an infinite horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is designed to maintain the outputs within their corresponding feasible zones, while reaching the desired optimal input target. Simulation of a process of the oil refining industry illustrates the performance of the proposed strategy.
Abstract:
Several MPC applications implement a control strategy in which some of the system outputs are controlled within specified ranges or zones, rather than at fixed set points [J.M. Maciejowski, Predictive Control with Constraints, Prentice Hall, New Jersey, 2002]. This means that these outputs will be treated as controlled variables only when the predicted future values lie outside the boundary of their corresponding zones. The zone control is usually implemented by selecting an appropriate weighting matrix for the output error in the control cost function. When an output prediction is inside its zone, the corresponding weight is zeroed, so that the controller ignores this output. When the output prediction lies outside the zone, the error weight is made equal to a specified value and the distance between the output prediction and the boundary of the zone is minimized. The main problem of this approach, as far as stability of the closed loop is concerned, is that each time an output is switched from the status of non-controlled to the status of controlled, or vice versa, a different linear controller is activated. Thus, throughout the continuous operation of the process, the control system keeps switching from one controller to another. Even if a stabilizing control law is developed for each of the control configurations, switching among stable controllers does not necessarily produce a stable closed-loop system. Here, a stable MPC is developed for the zone control of open-loop stable systems. Focusing on the practical application of the proposed controller, it is assumed that in the control structure of the process system there is an upper optimization layer that defines optimal targets for the system inputs. The performance of the proposed strategy is illustrated by simulation of a subsystem of an industrial FCC system. (C) 2008 Elsevier Ltd. All rights reserved.
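The weight-switching scheme described in this abstract can be sketched in a few lines. This is a minimal illustration only; the function names, zone bounds and weight value are invented for the example, not taken from the paper:

```python
def zone_error(y_pred, lo, hi):
    """Distance from a predicted output to its zone; zero inside the zone."""
    if y_pred < lo:
        return lo - y_pred
    if y_pred > hi:
        return y_pred - hi
    return 0.0

def zone_cost(predictions, zones, weight):
    """Quadratic cost on zone violations: the error weight is effectively
    zeroed whenever a prediction lies inside its zone, so that output is
    ignored by the controller."""
    return sum(weight * zone_error(y, lo, hi) ** 2
               for y, (lo, hi) in zip(predictions, zones))
```

With `predictions = [0.5, 2.3]` and zones `[(0, 1), (0, 2)]`, only the second output is penalised, since the first prediction already lies inside its zone.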
Abstract:
In this paper, we devise a separation principle for the finite horizon quadratic optimal control problem of continuous-time Markovian jump linear systems driven by a Wiener process and with partial observations. We assume that the output variable and the jump parameters are available to the controller. It is desired to design a dynamic Markovian jump controller such that the closed loop system minimizes the quadratic functional cost of the system over a finite horizon period of time. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati differential equations, one associated with the optimal control problem when the state variable is available, and the other associated with the optimal filtering problem. This is a separation principle for the finite horizon quadratic optimal control problem for continuous-time Markovian jump linear systems. For the case in which the matrices are all time-invariant, we analyze the asymptotic convergence of the solution of the derived interconnected Riccati differential equations to the solution of the associated set of coupled algebraic Riccati equations, as well as the mean square stabilizing property of this limiting solution. When there is only one mode of operation, our results coincide with the traditional ones for the LQG control of continuous-time linear systems.
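In the scalar, single-mode case (where, as the abstract notes, the problem reduces to classical LQG), the control-side Riccati differential equation can be integrated backwards in time with explicit Euler steps. The sketch below is a toy illustration with made-up coefficients, not the coupled mode-dependent equations of the paper:

```python
def riccati_backward(A, B, Q, R, P_T, T, steps=10000):
    """Integrate the scalar LQR Riccati ODE backwards in time:
        dP/ds = 2*A*P - (B**2 / R) * P**2 + Q,   s = T - t,  P(s=0) = P_T,
    with explicit Euler steps. Returns the approximation of P at t = 0."""
    ds = T / steps
    P = P_T
    for _ in range(steps):
        P += ds * (2 * A * P - (B ** 2 / R) * P ** 2 + Q)
    return P
```

For A = -1, B = 1, Q = R = 1 and a long horizon, P(0) approaches the positive root of the algebraic Riccati equation, sqrt(2) - 1.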
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do so, we first derive some important properties of a pseudo-Poisson equation associated with the problem. It is then shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
Abstract:
In this technical note we consider the mean-variance hedging problem of a jump diffusion continuous state space financial model with the re-balancing strategies for the hedging portfolio taken at discrete times, a situation that more closely reflects real market conditions. A direct expression based on some change of measures, not depending on any recursions, is derived for the optimal hedging strategy as well as for the "fair hedging price" considering any given payoff. For the case of a European call option these expressions can be evaluated in closed form.
Abstract:
This work is concerned with the existence of an optimal control strategy for the long-run average continuous control problem of piecewise-deterministic Markov processes (PDMPs). In Costa and Dufour (2008), sufficient conditions were derived to ensure the existence of an optimal control by using the vanishing discount approach. These conditions were mainly expressed in terms of the relative difference of the alpha-discount value functions. The main goal of this paper is to derive tractable conditions directly related to the primitive data of the PDMP to ensure the existence of an optimal control. The present work can be seen as a continuation of the results derived in Costa and Dufour (2008). Our main assumptions are written in terms of some integro-differential inequalities related to the so-called expected growth condition, and geometric convergence of the post-jump location kernel associated to the PDMP. An example based on the capacity expansion problem is presented, illustrating the possible applications of the results developed in the paper.
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequalities (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters is considered in the literature. Finally, we illustrate the results with an example.
Abstract:
This work summarizes some results about static state feedback linearization for time-varying systems. Three different necessary and sufficient conditions are stated in this paper. The first condition is the one by [Sluis, W. M. (1993). A necessary condition for dynamic feedback linearization. Systems & Control Letters, 21, 277-283]. The second and the third are generalizations of known results due respectively to [Aranda-Bricaire, E., Moog, C. H., Pomet, J. B. (1995). A linear algebraic framework for dynamic feedback linearization. IEEE Transactions on Automatic Control, 40, 127-132] and to [Jakubczyk, B., Respondek, W. (1980). On linearization of control systems. Bulletin de l'Académie Polonaise des Sciences, Série des Sciences Mathématiques, 28, 517-522]. The proofs of the second and third conditions are established by showing the equivalence between these three conditions. The results are re-stated in the infinite dimensional geometric approach of [Fliess, M., Levine, J., Martin, P., Rouchon, P. (1999). A Lie-Bäcklund approach to equivalence and flatness of nonlinear systems. IEEE Transactions on Automatic Control, 44(5), 922-937]. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Postural control was studied when the subject was kneeling with erect trunk in a quiet posture and compared to that obtained during quiet standing. The analysis was based on the center of pressure motion in the sagittal plane (CPx), both in the time and in the frequency domains. One could assume that postural control during kneeling would be poorer than in standing because it is a less natural posture. This could cause a higher CPx variability. The power spectral density (PSD) of the CPx obtained from the experimental data in the kneeling position (KN) showed a significant decrease at frequencies below 0.3 Hz compared to upright (UP) (P < 0.01), which indicates less sway in KN. Conversely, there was an increase in fast postural oscillations (above 0.7 Hz) during KN compared to UP (P < 0.05). The root mean square (RMS) of the CPx was higher for UP (P < 0.01) while the mean velocity (MV) was higher during KN (P < 0.05). Lack of vision had a significant effect on the PSD and the parameters estimated from the CPx in both positions. We also sought to verify whether the changes in the PSD of the CPx found between the UP and KN positions were exclusively due to biomechanical factors (e.g., lowered center of gravity), or also reflected changes in the neural processes involved in the control of balance. To reach this goal, besides the experimental approach, a simple feedback model (a PID neural system, with added neural noise and controlling an inverted pendulum) was used to simulate postural sway in both conditions (in KN the pendulum was shortened, the mass and the moment of inertia were decreased). A parameter optimization method was used to fit the CPx power spectrum given by the model to that obtained experimentally. The results indicated that the changed anthropometric parameters in KN would indeed cause a large decrease in the power spectrum at low frequencies. 
However, the model fitting also showed that there were considerable changes in the neural subsystem as well when the subject went from standing to kneeling. There was a lowering of the proportional and derivative gains and an increase in the neural noise power. Additional increases in the neural noise power were also found when the subject closed his eyes.
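The feedback model used in the study (an inverted pendulum stabilised by a noisy PID controller) can be sketched roughly as below. All numerical values here (gains, pendulum height, mass, noise level) are illustrative assumptions, not the fitted parameters reported by the authors:

```python
import math
import random

def simulate_sway(height, mass, Kp, Kd, Ki, noise_std, T=60.0, dt=0.001, seed=0):
    """Toy inverted-pendulum model of quiet stance: a 'neural' PID torque
    with additive Gaussian noise stabilises a point-mass pendulum.
    Returns the RMS of the sway angle (rad) over the simulation."""
    rng = random.Random(seed)
    g = 9.81
    I = mass * height ** 2                 # point-mass moment of inertia
    theta, omega, integ = 0.01, 0.0, 0.0   # small initial lean
    samples = []
    for _ in range(int(T / dt)):
        noise = rng.gauss(0.0, noise_std)
        torque = Kp * theta + Kd * omega + Ki * integ + noise
        alpha = (mass * g * height * theta - torque) / I  # gravity vs. control
        omega += alpha * dt                # semi-implicit Euler step
        theta += omega * dt
        integ += theta * dt
        samples.append(theta)
    return math.sqrt(sum(s * s for s in samples) / len(samples))
```

Re-running with a shorter pendulum and reduced mass mimics the kneeling condition, while raising `noise_std` mimics the increased neural noise the model fitting suggested.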
Abstract:
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMPs) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space R^n. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter epsilon > 0) and a slow behavior. Using an approach similar to that developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as epsilon goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as epsilon goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
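The aggregation step (replacing the regimes of a class by a single averaged one, weighted by the quasi-stationary distribution of the fast chain) can be illustrated for a two-regime class. The transition rates and per-regime values below are made up for the example:

```python
def stationary_two_state(q12, q21):
    """Stationary distribution of a two-state continuous-time Markov chain
    with transition rates q12 (regime 1 -> 2) and q21 (regime 2 -> 1)."""
    total = q12 + q21
    return (q21 / total, q12 / total)

def aggregate(values, dist):
    """Replace the per-regime values of a class by their average under the
    (quasi-)stationary distribution, yielding a single averaged regime."""
    return sum(v * p for v, p in zip(values, dist))
```

With rates q12 = 3 and q21 = 1 the fast chain spends a quarter of the time in regime 1, so per-regime values 2.0 and 6.0 aggregate to 5.0.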
Abstract:
In this paper we consider the existence of the maximal and mean square stabilizing solutions for a set of generalized coupled algebraic Riccati equations (GCARE for short) associated with the infinite-horizon stochastic optimal control problem of discrete-time Markov jump linear systems with multiplicative noise. The weighting matrices of the state and control for the quadratic part are allowed to be indefinite. We present a sufficient condition, based only on some positive semi-definite and kernel restrictions on some matrices, under which the maximal solution exists, and a necessary and sufficient condition under which the mean square stabilizing solution exists for the GCARE. We also present a solution for the discounted and long run average cost problems when the performance criterion is assumed to be composed of a linear combination of an indefinite quadratic part and a linear part in the state and control variables. The paper is concluded with a numerical example for a pension fund with regime switching.
Abstract:
In this paper we obtain the linear minimum mean square estimator (LMMSE) for discrete-time linear systems subject to state and measurement multiplicative noises and Markov jumps on the parameters. It is assumed that the Markov chain is not available. By using geometric arguments we obtain a Kalman type filter conveniently implementable in a recurrence form. The stationary case is also studied and a proof for the convergence of the error covariance matrix of the LMMSE to a stationary value under the assumption of mean square stability of the system and ergodicity of the associated Markov chain is obtained. It is shown that there exists a unique positive semi-definite solution for the stationary Riccati-like filter equation and, moreover, this solution is the limit of the error covariance matrix of the LMMSE. The advantage of this scheme is that it is very easy to implement and all calculations can be performed offline. (c) 2011 Elsevier Ltd. All rights reserved.
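Because the error covariance recursion of a Kalman-type filter does not involve the measurement data, it can indeed be run offline. Below is a scalar, jump-free sketch of such a recursion; the paper's Riccati-like equation additionally involves the multiplicative noises and the Markov chain distribution, and the coefficients here are invented:

```python
def riccati_iterate(a, c, q, r, p0, n):
    """Offline recursion for the scalar filtering error covariance:
        p_{k+1} = a^2 * p_k - (a * c * p_k)^2 / (c^2 * p_k + r) + q.
    Under standard stability assumptions it converges to a unique
    stationary value, independent of any measurements."""
    p = p0
    for _ in range(n):
        p = a * p * a - (a * p * c) ** 2 / (c * c * p + r) + q
    return p
```

For a = 0.9, c = 1 and q = r = 1 the iteration converges to the stationary value p* = (0.81 + sqrt(0.81^2 + 4)) / 2, the positive root of p^2 - 0.81 p - 1 = 0.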
Abstract:
This work considers a nonlinear time-varying system described by a state representation, with input u and state x. A given set of functions v, which is not necessarily the original input u of the system, is the (new) input candidate. The main result provides necessary and sufficient conditions for the existence of a local classical state space representation with input v. These conditions rely on integrability tests that are based on a derived flag. As a byproduct, one obtains a sufficient condition of differential flatness of nonlinear systems. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The growth of Eucalyptus stands varies severalfold across sites, under the influence of resource availability, stand age and stand structure. We describe a series of related studies that aim to understand the mechanisms that drive this great range in stand growth rates. In a seven-year study in Hawaii of Eucalyptus saligna at a site that was not water limited, we showed that nutrient availability differences led to a two-fold difference in stand wood production. Increasing nutrient supply in mid-rotation raised productivity to the level attained in continuously fertilised plots. Fertility affected the age-related decline in wood and foliage production; production in the intensive fertility treatments declined more slowly than in the minimal fertility treatments. The decline in stem production was driven largely by a decline in canopy photosynthesis. Over time, the fraction of canopy photosynthesis partitioned to below-ground allocation increased, as did foliar respiration, further reducing wood production. The reason for the decline in photosynthesis was uncertain, but it was not caused by nutrient limitation, a decline in leaf area or in photosynthetic capacity, or by hydraulic limitation. Most of the increase in carbon stored from conversion of the sugarcane plantation to Eucalyptus plantation was in the above-ground woody biomass. Soil carbon showed no net change. This study and other studies on carbon allocation showed that resource availability changes the fraction of annual photosynthesis used below-ground and for wood production. High resources (nutrition or water) decrease the partitioning below-ground and increase partitioning to wood production. Annual foliage and wood respiration and foliage production, as fractions of annual photosynthesis, were remarkably constant across a wide range of fertility treatments and forest ages.
In the Brazil Eucalyptus Productivity Project, stand structure was manipulated by planting clonal Eucalyptus all at once or in three groups at three-monthly intervals, producing one stand in which trees did not segregate into dominance classes and one with strong dominance. The uneven stand structure reduced production 10-15% throughout the rotation.
Abstract:
Background: The aim of this study was to identify novel candidate biomarker proteins differentially expressed in the plasma of patients with early stage acute myocardial infarction (AMI), using SELDI-TOF-MS as a high throughput screening technology. Methods: Ten individuals with recent acute ischemic-type chest pain (< 12 h duration) and ST-segment elevation AMI were selected, comprising first-infarction (1STEMI) and second-infarction (2STEMI) cases. Blood samples were drawn at six times after STEMI diagnosis. The first stage (T(0)) was in the Emergency Unit before any medication was received, the second was just after primary angioplasty (T(2)), and the next four stages occurred at 12 h intervals after T(0). Individuals (n = 7) with similar risk factors for cardiovascular disease and a normal ergometric test were selected as a control group (CG). Plasma proteomic profiling analysis was performed using top-down (i.e. intact proteins) SELDI-TOF-MS, after processing in a Multiple Affinity Removal Spin Cartridge System (Agilent). Results: Compared with the CG, the 1STEMI group exhibited 510 differentially expressed protein peaks in the first 48 h after the AMI (p < 0.05). The 2STEMI group had approximately 85% fewer differentially expressed protein peaks than those without a previous history of AMI (76, p < 0.05). Among the 16 differentially-regulated protein peaks common to both STEMI cohorts (compared with the CG at T(0)), 6 peaks were persistently down-regulated at more than one time-stage and were also inversely correlated with serum protein markers (cTnI, CK and CKMB) during the 48 h period after AMI. Conclusions: Proteomic analysis by SELDI-TOF-MS technology, combined with bioinformatics tools, demonstrated differential expression during a 48 h time course, suggesting a potential role for some of these proteins as biomarkers for the very early stages of AMI, as well as for monitoring early cardiac ischemic recovery. (C) 2011 Elsevier B.V. All rights reserved.