891 results for Optimal control problem


Relevance:

100.00%

Publisher:

Abstract:

This paper deals with the interpretation of the discrete-time optimal control problem as a scattering process in a discrete medium. We treat the discrete optimal linear regulator, constrained end-point and servo and tracking problems, providing a unified approach to these problems. This approach results in an easy derivation of the desired results as well as several new ones.
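
For reference, the baseline problem reinterpreted here, the finite-horizon discrete-time linear quadratic regulator, can be solved by a backward Riccati recursion; the sketch below is a minimal, generic implementation with illustrative matrices, not the scattering formulation of the paper.

```python
import numpy as np

# Illustrative system and weights (placeholders, not from the paper).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state cost
R = np.array([[1.0]])  # control cost
QN = np.eye(2)         # terminal cost
N = 50                 # horizon

# Backward Riccati recursion for the finite-horizon discrete LQR.
P = QN
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u_k = -K_k x_k
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K)
gains.reverse()  # gains[k] is the feedback gain at stage k
```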

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we study the asymptotic behavior of an optimal control problem for the time-dependent Kirchhoff-Love plate whose middle surface has a very rough boundary. We identify the limit problem which is an optimal control problem for the limit equation with a different cost functional.
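
A generic model problem of this type, with a quadratic tracking cost and a distributed control v acting on the time-dependent Kirchhoff-Love plate equation, can be written schematically as follows (notation illustrative; the paper's exact functional and boundary conditions are not reproduced here):

```latex
\min_{v \in \mathcal{U}_{\mathrm{ad}}} \; J_\varepsilon(v)
  = \tfrac12 \int_0^T\!\!\int_{\Omega_\varepsilon} |u_\varepsilon - u_d|^2 \,dx\,dt
  + \tfrac{\beta}{2} \int_0^T\!\!\int_{\omega} |v|^2 \,dx\,dt,
\qquad
\partial_{tt} u_\varepsilon + \Delta^2 u_\varepsilon = f + v\,\chi_\omega
  \ \text{ in } \Omega_\varepsilon \times (0,T),
```

where Omega_epsilon is the domain whose middle surface has the very rough boundary, and the limit problem is obtained as epsilon tends to zero.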

Relevance:

100.00%

Publisher:

Abstract:

Homogenization and error analysis of an optimal interior control problem for the Stokes system, on a domain with a rapidly oscillating boundary, are the subject matter of this article. We consider a three-dimensional domain consisting of a parallelepiped with a large number of rectangular cylinders on top of it. An interior control is applied in a proper subdomain of the parallelepiped, away from the oscillating volume. We consider two types of functionals, namely one involving the L^2-norm of the state variable and another involving its H^1-norm. The asymptotic analysis of the optimality systems for both cases, as the cross-sectional area of the rectangular cylinders tends to zero, is carried out here. Our major contribution is to derive error estimates for the state, the co-state and the associated pressures in appropriate functional spaces.
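
Schematically, the two functionals are of the following form (constants and notation illustrative), with u_epsilon the velocity solving the Stokes system in the oscillating domain and theta the control supported in the subdomain omega:

```latex
J_1(\theta) = \tfrac12 \,\| u_\varepsilon - u_d \|_{L^2(\Omega_\varepsilon)}^2
            + \tfrac{\beta}{2}\, \| \theta \|_{L^2(\omega)}^2,
\qquad
J_2(\theta) = \tfrac12 \,\| u_\varepsilon - u_d \|_{H^1(\Omega_\varepsilon)}^2
            + \tfrac{\beta}{2}\, \| \theta \|_{L^2(\omega)}^2 .
```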

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

A study is presented aimed at developing techniques for the effective planning and efficient operation of aircraft fleets typical of the air force of a developing country. An important aspect of fleet management, the problem of resource allocation for achieving a prescribed operational effectiveness of the fleet, is considered. For analysis purposes, it is assumed that the planes operate in a single flying-base, repair-depot environment. The perennial problem of resource allocation for fleet and facility buildup that faces planners is modeled and solved as an optimal control problem. The models contain two "policy" variables representing investments in aircraft and repair facilities. The feasibility of decentralized control is explored by assuming the two policy variables are under the control of two independent decision makers guided by different and often poorly coordinated objectives.
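
One way such a two-investment buildup model can be written, purely for illustration (the paper's exact dynamics, effectiveness measure and constraints are not reproduced here), is:

```latex
\dot{x}_1(t) = u_1(t) - \mu\, x_1(t), \qquad
\dot{x}_2(t) = u_2(t), \qquad
u_1(t) + u_2(t) \le b(t),
```

with x_1 the fleet size, x_2 the repair-facility capacity, u_1 and u_2 the two investment ("policy") variables, mu an attrition rate, and b(t) the available budget rate; a measure of operational effectiveness at the planning horizon supplies the objective.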

Relevance:

100.00%

Publisher:

Abstract:

We address the optimal control problem of a very general stochastic hybrid system with both autonomous and impulsive jumps. The planning horizon is infinite and we use the discounted-cost criterion for performance evaluation. Under certain assumptions, we show the existence of an optimal control. We then derive the quasivariational inequalities satisfied by the value function and establish well-posedness. Finally, we prove the usual verification theorem of dynamic programming.
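
Schematically, the discounted-cost criterion used for performance evaluation has the form (notation illustrative; the paper's precise cost structure for the autonomous and impulsive jumps is not reproduced here):

```latex
J\big(x, u(\cdot)\big)
  = \mathbb{E}_x\!\left[\int_0^{\infty} e^{-\alpha t}\, c\big(X_t, u_t\big)\,dt\right],
\qquad \alpha > 0, \qquad
V(x) = \inf_{u(\cdot)} J\big(x, u(\cdot)\big),
```

and it is this value function V that satisfies the quasivariational inequalities.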

Relevance:

100.00%

Publisher:

Abstract:

The existence of an optimal feedback law is established for the risk-sensitive optimal control problem with denumerable state space. The main assumptions imposed are irreducibility and a near-monotonicity condition on the one-step cost function. A solution can be found constructively using either value iteration or policy iteration under suitable conditions on the initial feedback law.
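
A minimal sketch of what risk-sensitive value iteration looks like on a finite toy example is given below: the Bellman operator is multiplicative (exponential of the one-step cost times the transition expectation), and relative value iteration normalizes each sweep, the normalizing factor giving an estimate of the optimal risk-sensitive cost. The model data here are random placeholders, and the construction only illustrates the flavor of the paper's setting (a denumerable state space under irreducibility and near-monotonicity assumptions).

```python
import numpy as np

# Toy finite-state, finite-action example (numbers illustrative, not from the paper).
# P[u] is the transition matrix under action u, C[u] the one-step cost vector.
rng = np.random.default_rng(1)
nS, nA, theta = 5, 3, 0.5                      # theta: risk-sensitivity parameter
P = rng.random((nA, nS, nS)); P /= P.sum(axis=2, keepdims=True)
C = rng.random((nA, nS))

# Multiplicative ("risk-sensitive") relative value iteration:
#   (T W)(x) = min_u exp(theta * c(x,u)) * sum_y p(y|x,u) W(y)
# The normalising factor converges to exp(theta * optimal average cost)
# under irreducibility-type assumptions.
W = np.ones(nS)
for _ in range(2000):
    TW = np.min(np.exp(theta * C) * (P @ W), axis=0)
    rho, W = TW[0], TW / TW[0]

avg_cost = np.log(rho) / theta                 # approximate optimal risk-sensitive cost
policy = np.argmin(np.exp(theta * C) * (P @ W), axis=0)   # greedy feedback law
```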

Relevance:

100.00%

Publisher:

Abstract:

Information spreading in a population can be modeled as an epidemic. Campaigners (e.g., election campaign managers, or companies marketing products or movies) are interested in spreading a message by a given deadline, using limited resources. In this paper, we formulate this situation as an optimal control problem, and the solution, obtained via Pontryagin's Maximum Principle, prescribes an optimal resource allocation over the duration of the campaign. We consider two different scenarios: in the first, the campaigner can adjust a direct control (over time) which allows her to recruit individuals from the population (at some cost) to act as spreaders in the Susceptible-Infected-Susceptible (SIS) epidemic model. In the second, we allow the campaigner to adjust the effective spreading rate by incentivizing the infected in the Susceptible-Infected-Recovered (SIR) model, in addition to the direct recruitment. We consider a time-varying information spreading rate in our formulation to model the changing interest level of individuals in the campaign as the deadline approaches. In both cases, we show the existence of a solution and its uniqueness for sufficiently small campaign deadlines. For a fixed spreading rate, we show the effectiveness of the optimal control strategy against a constant control strategy, a heuristic control strategy and no control. We show the sensitivity of the optimal control to the spreading rate profile when it is time-varying.
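
As a rough illustration of the first scenario, the sketch below runs a forward-backward sweep driven by Pontryagin's Maximum Principle on a simple SIS-type information epidemic with a direct recruitment control; the model, cost (terminal reward i(T) minus a quadratic control cost) and parameters are placeholders, not the paper's exact formulation.

```python
import numpy as np

# Illustrative SIS-type information epidemic with direct recruitment u(t).
# Objective: maximize i(T) - (c/2) * integral(u^2); all parameters are placeholders.
T, n = 5.0, 1000
t, dt = np.linspace(0.0, T, n + 1), T / n
beta, gamma, c, u_max = 0.8, 0.2, 1.0, 2.0
i0 = 0.01

u = np.zeros(n + 1)
for _ in range(200):                        # forward-backward sweep iterations
    # forward pass: informed fraction i(t) under the current control
    i = np.empty(n + 1); i[0] = i0
    for k in range(n):
        di = beta * i[k] * (1 - i[k]) - gamma * i[k] + u[k] * (1 - i[k])
        i[k + 1] = i[k] + dt * di
    # backward pass: costate lam(t), transversality lam(T) = 1 (terminal payoff i(T))
    lam = np.empty(n + 1); lam[-1] = 1.0
    for k in range(n, 0, -1):
        dlam = -lam[k] * (beta * (1 - 2 * i[k]) - gamma - u[k])
        lam[k - 1] = lam[k] - dt * dlam
    # control update from the maximum principle: u* = clip(lam*(1-i)/c, 0, u_max)
    u_new = np.clip(lam * (1 - i) / c, 0.0, u_max)
    u = 0.5 * u + 0.5 * u_new               # relaxation for numerical stability
```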

Relevance:

100.00%

Publisher:

Abstract:

We model the spread of information in a homogeneously mixed population using the Maki-Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We allow the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We also study the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy, which respects the same budget constraint but keeps a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
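
The isoperimetric budget constraint referred to above is typically handled by augmenting the state, schematically (with g the instantaneous, non-linear campaigning cost and B the fixed budget):

```latex
\int_0^T g\big(u(t)\big)\,dt = B
\quad\Longleftrightarrow\quad
\dot z(t) = g\big(u(t)\big),\quad z(0)=0,\quad z(T)=B,
```

so the budget becomes a terminal condition on the auxiliary state z, whose costate is constant in time; a modified forward-backward sweep then searches for the constant value that makes z(T) hit B.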

Relevance:

100.00%

Publisher:

Abstract:

Understanding the growth behavior of microorganisms using modeling and optimization techniques is an active area of research in biochemical engineering and systems biology. In this paper, we propose a general modeling framework, based on the Monod model, for the growth of microorganisms. Using this framework, we formulate an optimal control problem with the objective of maximizing a long-term cellular goal and solve it analytically under various constraints for growth in a two-substrate batch environment. We investigate the relation between long-term and short-term cellular goals and show that the objective of maximizing cellular concentration at a fixed final time is equivalent to maximizing the instantaneous growth rate. We then establish the mathematical connection between the generalized framework and the optimal and cybernetic modeling frameworks, and derive generalized governing dynamic equations for optimal and cybernetic models. Finally, we illustrate the influence of various constraints in the cybernetic modeling framework on the optimal growth behavior of microorganisms by solving several dynamic optimization problems using genetic algorithms.
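
For context, the Monod-type batch growth equations that underlie such frameworks can be written schematically as (notation illustrative; the paper's generalized framework and constraints are not reproduced here):

```latex
\frac{dX}{dt} = \Big(\sum_{i=1}^{2} u_i\, \mu_i\Big) X,
\qquad
\frac{dS_i}{dt} = -\frac{u_i\, \mu_i\, X}{Y_i},
\qquad
\mu_i = \frac{\mu_{\max,i}\, S_i}{K_i + S_i},
```

with X the biomass concentration, S_i the two substrate concentrations, Y_i yield coefficients, and u_i the resource-allocation (control) variables whose constraints distinguish the optimal and cybernetic variants.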

Relevance:

100.00%

Publisher:

Abstract:

We study the optimal control problem of maximizing the spread of an information epidemic on a social network. Information propagation is modeled as a susceptible-infected (SI) process, and the campaign budget is fixed. Direct recruitment and word-of-mouth incentives are the two strategies (controls) used to accelerate information spreading. We allow for multiple controls depending on the degree of the nodes/individuals. The solution optimally allocates the scarce resource over the campaign duration and the degree-class groups. We study the impact of the degree distribution of the network on the controls and present results for Erdős-Rényi and scale-free networks. Results show that more resources are allocated to high-degree nodes in the case of scale-free networks, but to medium-degree nodes in the case of Erdős-Rényi networks. We study the effects of various model parameters on the optimal strategy and quantify the improvement offered by the optimal strategy over static and bang-bang control strategies. The effect of a time-varying spreading rate on the controls is explored, since the interest level of the population in the subject of the campaign may change over time. We show the existence of a solution to the formulated optimal control problem, which has nonlinear isoperimetric constraints, using novel techniques that are general and can be used in other similar optimal control problems. This work may be of interest to political, social awareness, or crowdfunding campaigners and product marketing managers, and with some modifications may be used for mitigating biological epidemics.
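
Degree-based formulations of this kind typically use mean-field SI dynamics per degree class; a schematic version with a direct recruitment control u_k(t) for degree class k is (not necessarily the paper's exact equations):

```latex
\frac{d i_k(t)}{dt}
  = \beta(t)\, k\, \big(1 - i_k(t)\big)\, \Theta(t) + u_k(t)\,\big(1 - i_k(t)\big),
\qquad
\Theta(t) = \sum_k \frac{k\, P(k)}{\langle k \rangle}\, i_k(t),
```

where i_k is the informed fraction in degree class k, P(k) the degree distribution, and beta(t) the (possibly time-varying) spreading rate; word-of-mouth incentives act by scaling beta(t).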

Relevance:

100.00%

Publisher:

Abstract:

This paper explores the use of Monte Carlo techniques in deterministic nonlinear optimal control. Inter-dimensional population Markov Chain Monte Carlo (MCMC) techniques are proposed to solve the nonlinear optimal control problem. The linear quadratic and Acrobot problems are studied to demonstrate the successful application of the relevant techniques.
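
To make the idea concrete, the sketch below applies a bare-bones random-walk Metropolis scheme to a scalar discrete-time linear-quadratic problem, sampling control sequences with exp(-J/temperature) as the unnormalized target; the inter-dimensional population MCMC used in the paper is considerably more elaborate, and all parameters here are illustrative.

```python
import numpy as np

# Bare-bones Metropolis sampling over a discretized control sequence for a
# scalar discrete-time LQ problem; exp(-J/temp) is the unnormalised target.
rng = np.random.default_rng(0)
N, a, b, x0, q, r = 50, 0.9, 0.5, 1.0, 1.0, 0.1

def cost(u):
    """Quadratic cost of rolling out x_{k+1} = a*x_k + b*u_k from x0."""
    x, J = x0, 0.0
    for uk in u:
        J += q * x * x + r * uk * uk
        x = a * x + b * uk
    return J + q * x * x                       # terminal penalty

temp, step = 0.05, 0.2
u, J = np.zeros(N), cost(np.zeros(N))
best_u, best_J = u.copy(), J
for _ in range(20000):
    prop = u + step * rng.standard_normal(N)   # random-walk proposal
    Jp = cost(prop)
    if Jp < J or rng.random() < np.exp((J - Jp) / temp):   # Metropolis accept/reject
        u, J = prop, Jp
        if J < best_J:
            best_u, best_J = u.copy(), J
```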

Relevance:

100.00%

Publisher:

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Under the assumption of a quadratic cost on the control input, it is well known that the HJB equation reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in the classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.
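
In one common form of this setting (first-exit or undiscounted, control-affine dynamics dx = (f(x) + G(x)u) dt + B(x) dω, state cost q and quadratic control cost), the HJB equation and the resulting optimal control read schematically:

```latex
0 = \min_{u}\Big[\, q(x) + \tfrac12 u^{\top} R\, u
      + \big(f(x) + G(x)u\big)^{\!\top}\nabla V(x)
      + \tfrac12 \operatorname{tr}\!\big(B(x)\Sigma B(x)^{\top} \nabla^{2} V(x)\big)\Big],
\qquad
u^{*} = -R^{-1} G(x)^{\top}\nabla V(x),
```

and substituting u* back in yields the second-order, nonlinear PDE referred to above.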

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
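
The transformation in question is usually written via the exponentiated ("desirability") change of variables; under the structural assumption tying the control authority to the noise, the nonlinear PDE above becomes linear in the desirability:

```latex
V(x) = -\lambda \log \Psi(x),
\qquad
\lambda\, G(x) R^{-1} G(x)^{\top} = B(x)\Sigma B(x)^{\top}
\;\;\Longrightarrow\;\;
0 = -\frac{q(x)}{\lambda}\,\Psi(x) + f(x)^{\top}\nabla\Psi(x)
    + \tfrac12 \operatorname{tr}\!\big(B(x)\Sigma B(x)^{\top}\nabla^{2}\Psi(x)\big),
```

whose discrete-state analogue is the linearly solvable MDP mentioned above.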

This is done by bringing together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving suboptimality gaps. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.

The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Relevance:

100.00%

Publisher:

Abstract:

The centralized paradigm of a single controller and a single plant, upon which modern control theory is built, is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
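
In its standard formulation, quadratic invariance is a condition relating the controller constraint set S to the plant map from control inputs to measurements (conventionally written G_22):

```latex
S \ \text{is quadratically invariant with respect to}\ G_{22}
\quad\Longleftrightarrow\quad
K\, G_{22}\, K \in S \ \ \text{for all}\ K \in S,
```

and under this condition the constraint K in S translates into a convex constraint on a Youla-type closed-loop parameter, so the distributed synthesis problem can be posed as convex model matching.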

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop. Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that, in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
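
As a generic illustration of the final separation step (not the thesis's exact formulation, data, or constraints), nuclear norm minimization of this kind can be prototyped in a few lines with cvxpy, splitting a data matrix into a low-rank component and a residual:

```python
import numpy as np
import cvxpy as cp

# Generic sketch: split a data matrix H into a low-rank "global" component L
# and a residual "local" component E via nuclear norm minimization.
# Data and trade-off weight are placeholders.
rng = np.random.default_rng(0)
H = rng.standard_normal((30, 10)) @ rng.standard_normal((10, 30))  # synthetic low-rank data
H += 0.1 * rng.standard_normal((30, 30))                           # plus a small perturbation

L = cp.Variable((30, 30))   # low-rank part
E = cp.Variable((30, 30))   # residual part
lam = 5.0                   # trade-off weight, chosen arbitrarily here
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum_squares(E)),
                  [L + E == H])
prob.solve()
```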