932 results for Optimal control design


Relevance: 80.00%

Abstract:

An optimal control problem in a two-dimensional domain with a rapidly oscillating boundary is considered. The main features of this article are twofold: we consider periodic controls in the thin periodic slabs of period epsilon > 0, a small parameter, and height O(1) in the oscillatory part, and we characterize the controls using unfolding operators. We then carry out a homogenization analysis of the optimal control problems as epsilon -> 0 with L^2 as well as Dirichlet (gradient-type) cost functionals.
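
To fix notation, a generic sketch of the two cost types (placeholder symbols, not the paper's exact definitions: y_ε the state on the oscillating domain Ω_ε, y_d a target, θ the periodic control acting on a control zone O_ε, β > 0 a regularization weight) might read:

```latex
\begin{aligned}
J^{L^2}_{\varepsilon}(\theta) &= \tfrac{1}{2}\int_{\Omega_\varepsilon} \lvert y_\varepsilon - y_d \rvert^2 \,dx
  + \tfrac{\beta}{2}\int_{O_\varepsilon} \lvert \theta \rvert^2 \,dx, \\
J^{\mathrm{Dir}}_{\varepsilon}(\theta) &= \tfrac{1}{2}\int_{\Omega_\varepsilon} \lvert \nabla y_\varepsilon - \nabla y_d \rvert^2 \,dx
  + \tfrac{\beta}{2}\int_{O_\varepsilon} \lvert \theta \rvert^2 \,dx,
\end{aligned}
```

and the homogenization question is the behaviour of the minimizers and optimal states as epsilon -> 0.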

Relevance: 80.00%

Abstract:

The aim of this paper is to explain the circumstances under which using TACs as an instrument to manage a fishery, along with fishing periods, may be interesting from a regulatory point of view. To this end, the deterministic analysis of Homans and Wilen (1997) and Anderson (2000) is extended to a stochastic scenario in which the resource cannot be measured accurately. The resulting endogenous stochastic model is solved numerically to find the optimal control rules for the Iberian sardine stock. Three relevant conclusions can be highlighted from the simulations. First, the higher the uncertainty about the state of the stock, the lower the probability of closing the fishery. Second, the use of TACs as a management instrument in fisheries already regulated with fishing periods leads to: i) an increase in the optimal season length and harvests, especially for medium and high numbers of licences; ii) an improvement in the biological and economic variables when the size of the fleet is large; and iii) the elimination of extinction risk for the resource. Third, the regulator would rather select the number of licences and leave the season length unrestricted.
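
As a concrete illustration of how such optimal control rules can be computed, the following is a minimal value-iteration sketch for a stochastic harvested stock. All parameters (growth rate, prices, noise) and the stock-dependent cost term are hypothetical placeholders, not values from the Iberian sardine study, and season length and licences are abstracted away:

```python
import numpy as np

# Minimal value-iteration sketch for a stochastic harvested stock,
# illustrating how optimal TAC rules can be computed numerically.
# All parameters are hypothetical, not from the Iberian sardine study.

r, K = 0.4, 1.0          # logistic growth rate and carrying capacity
price, cost = 1.0, 0.2   # unit price of harvest, harvesting cost scale
beta = 0.95              # discount factor
sigma = 0.1              # multiplicative recruitment noise

x_grid = np.linspace(0.01, K, 60)        # stock levels
h_grid = np.linspace(0.0, 0.5, 40)       # candidate TACs (harvest caps)
shocks = np.exp(sigma * np.array([-1.0, 0.0, 1.0]))  # 3-point noise
probs = np.array([0.25, 0.5, 0.25])

V = np.zeros_like(x_grid)
for _ in range(500):                      # value iteration to convergence
    V_new = np.empty_like(V)
    policy = np.empty_like(V)
    for i, x in enumerate(x_grid):
        best = -np.inf
        for h in h_grid:
            harvest = min(h, x)           # cannot harvest more than stock
            escapement = x - harvest
            growth = escapement + r * escapement * (1 - escapement / K)
            x_next = np.clip(growth * shocks, x_grid[0], x_grid[-1])
            cont = probs @ np.interp(x_next, x_grid, V)
            # Schaefer-style assumption: unit cost rises as stock falls
            val = price * harvest - cost * harvest / max(x, 1e-6) + beta * cont
            if val > best:
                best, policy[i] = val, harvest
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

# policy[i] is the optimal TAC when the measured stock is x_grid[i]
```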

Relevance: 80.00%

Abstract:

The main objective of this master's thesis is the study of the thermal behaviour of the TriboLAB instrument during its stay on the International Space Station, together with a comparison of that behaviour with the one predicted by the mathematical thermal models used in the design of its thermal control system. The work carried out has substantially deepened the understanding of this behaviour. This will make real information about the thermal behaviour of an instrument with the characteristics of TriboLAB under such conditions available to other experimenters interested in placing their instruments on the external balconies of the International Space Station. This information is of great interest for the thermal control design of their instruments, especially now that the service life of the International Space Station has been extended to 2020. Thermal control of space equipment is a key aspect of ensuring its survival and correct operation under the extreme conditions of space. Its mission is to keep the various components within their admissible temperature ranges, since otherwise they could not operate, or might not even survive, beyond those temperatures. Additionally, it has been possible to verify the applicability of various functional data analysis techniques to the study of the kind of data considered here. Likewise, the results of the thermal test campaign have been compared with the mathematical thermal models that guided the thermal control design, and which are a fundamental piece in the thermal control design of any space instrument. This has made it possible to verify both the validity of the thermal control system designed for TriboLAB and the adequate agreement between the results of the mathematical thermal models and the temperatures recorded on the equipment. All of this has been carried out from the perspective of functional data analysis.

Relevance: 80.00%

Abstract:

Unremitting waves and occasional storms bring dynamic forces to bear on the coast. Sediment flux results in various patterns of erosion and accretion, with an overwhelming majority (80 to 90 percent) of the coastline in the eastern U.S. exhibiting net erosion in recent decades. Climate change threatens to increase the intensity of storms and raise sea level 18 to 59 centimeters over the next century. Following a lengthy tradition of economic models for natural resource management, this paper provides a dynamic optimization model for managing coastal erosion and explores the types of data necessary to employ the model for normative policy analysis. The model conceptualizes the benefits of beach and dune sediments as service flows accruing to nearby residential property owners, local businesses, recreational beach users, and perhaps others. Benefits can also include improvements in habitat for beach- and dune-dependent plant and animal species. The costs of maintaining beach sediment in the presence of coastal erosion include expenditures on dredging, pumping, and placing sand on the beach to maintain width and height. Other costs can include negative impacts on the nearshore environment. Employing these constructs, an optimal control model is specified that provides a framework for identifying the conditions under which beach replenishment enhances economic welfare; an optimal schedule for replenishment can be derived under a constant sea level and erosion rate (short term) as well as an increasing sea level and erosion rate (long term). Under some simplifying assumptions, the conceptual framework can examine the time horizon of management responses under sea level rise, identifying the timing of the shift to passive management (shoreline retreat) and exploring the factors that influence this potential shift.
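
A minimal sketch of such a dynamic optimization, under assumed notation (w beach width, R replenishment rate, γ erosion rate, B(·) service-flow benefits, c unit replenishment cost, ρ discount rate; a generic form, not the paper's exact specification):

```latex
\max_{R(t)\,\ge\,0}\ \int_0^{\infty} e^{-\rho t}\bigl[\,B\bigl(w(t)\bigr) - c\,R(t)\,\bigr]\,dt
\qquad \text{s.t.} \qquad \dot{w}(t) = R(t) - \gamma(t), \quad w(0) = w_0,
```

with γ constant in the short-term case and increasing under sea level rise in the long-term case; passive management (shoreline retreat) corresponds to the corner solution R ≡ 0.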

Relevance: 80.00%

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, yet there are daunting technical challenges in managing them and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization methods that achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focuses on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium, as sketched below. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
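
As a toy illustration of this two-step pattern, the sketch below runs log-linear learning, one standard distributed learning rule for potential games, on a simple two-resource congestion game; the game and all parameters are illustrative and are not taken from the thesis:

```python
import numpy as np

# Log-linear learning on a two-resource congestion game (a potential
# game): each agent's utility depends only on its own resource's load,
# and any potential-game learning rule completes the control design.

rng = np.random.default_rng(0)
n_agents, temperature = 10, 0.1

def agent_utility(action, actions, i):
    # negative congestion on the chosen resource (others' choices fixed);
    # load counts the other users of that resource plus agent i itself
    load = sum(1 for j, a in enumerate(actions) if a == action and j != i) + 1
    return -float(load)              # linear congestion cost

actions = rng.integers(0, 2, size=n_agents)   # each agent picks resource 0 or 1
for step in range(2000):
    i = rng.integers(n_agents)                 # one agent revises at a time
    utils = np.array([agent_utility(a, actions, i) for a in (0, 1)])
    probs = np.exp((utils - utils.max()) / temperature)
    probs /= probs.sum()                       # softmax over own actions
    actions[i] = rng.choice(2, p=probs)

# At low temperature the joint action concentrates on potential
# maximizers: here, typically a balanced 5/5 split across resources.
print(np.bincount(actions, minlength=2))
```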

Relevance: 80.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and use them to evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
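
The following is a stripped-down sketch of the adaptive loop (Bayesian belief update plus greedy test selection). For brevity it scores tests by Information Gain, one of the baseline criteria mentioned above, rather than the full EC2 objective, and the hypotheses and likelihoods are synthetic:

```python
import numpy as np

# Greedy Bayesian adaptive test selection, in the spirit of BROAD.
# Everything here (hypotheses, likelihoods, subject) is synthetic,
# and a test may be re-asked for simplicity.

rng = np.random.default_rng(1)
n_hyp, n_tests = 4, 30

# likelihood[h, t] = P(subject answers "yes" to test t under theory h)
likelihood = rng.uniform(0.1, 0.9, size=(n_hyp, n_tests))
true_h = 2
prior = np.full(n_hyp, 1.0 / n_hyp)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

for _ in range(10):
    # pick the test with the largest expected entropy reduction
    best_t, best_gain = None, -np.inf
    for t in range(n_tests):
        p_yes = prior @ likelihood[:, t]
        post_yes = prior * likelihood[:, t] / p_yes
        post_no = prior * (1 - likelihood[:, t]) / (1 - p_yes)
        gain = entropy(prior) - (p_yes * entropy(post_yes)
                                 + (1 - p_yes) * entropy(post_no))
        if gain > best_gain:
            best_t, best_gain = t, gain
    # run the chosen test on the (simulated) subject, update beliefs
    answer = rng.random() < likelihood[true_h, best_t]
    like = likelihood[:, best_t] if answer else 1 - likelihood[:, best_t]
    prior = prior * like
    prior /= prior.sum()

print(prior)   # posterior should concentrate on hypothesis true_h
```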

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
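
For reference, the standard discount functions behind these model classes, in textbook form (the thesis's exact parameterizations may differ):

```latex
\begin{aligned}
\text{exponential:} \quad & D(t) = \delta^{t} \\
\text{hyperbolic:} \quad & D(t) = \frac{1}{1 + k t} \\
\text{quasi-hyperbolic:} \quad & D(0) = 1, \qquad D(t) = \beta\,\delta^{t} \ \ (t > 0) \\
\text{generalized hyperbolic:} \quad & D(t) = (1 + \alpha t)^{-\beta/\alpha}
\end{aligned}
```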

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
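
A hedged sketch of the kind of specification involved: a multinomial logit whose price term is reference-dependent and loss-averse. The functional form and parameters (including the classic λ ≈ 2.25 loss-aversion coefficient) are illustrative, not the retailer model itself:

```python
import numpy as np

# Discrete choice (logit) with a reference-dependent, loss-averse
# price term. Specification and parameters are illustrative.

def loss_averse_price_utility(price, ref_price, alpha=1.0, lam=2.25):
    # gains (price below reference) enter linearly; losses (price above
    # reference) are scaled by the loss-aversion coefficient lam > 1
    gap = ref_price - price
    return alpha * gap if gap >= 0 else alpha * lam * gap

def choice_probs(prices, ref_prices, base_utils):
    u = np.array([b + loss_averse_price_utility(p, r)
                  for p, r, b in zip(prices, ref_prices, base_utils)])
    e = np.exp(u - u.max())          # softmax over the two alternatives
    return e / e.sum()

# a discounted item (price below reference) draws excess demand...
print(choice_probs([8.0, 10.0], [10.0, 10.0], [0.0, 0.0]))
# ...and once the reference has adapted to the discounted price, the
# return to full price is coded as a loss and the substitute gains share
print(choice_probs([10.0, 10.0], [8.0, 10.0], [0.0, 0.0]))
```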

In future work, BROAD could be applied widely to test different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 80.00%

Abstract:

In this paper we propose a simple method of characterizing countervailing incentives in adverse selection problems. The key element in our characterization consists of analyzing properties of the full-information problem. This allows the principal's problem to be solved without using optimal control theory. Our methodology can also be applied to different economic settings: health economics, monopoly regulation, labour contracts, limited liability, and environmental regulation.

Relevance: 80.00%

Abstract:

In this work we present the steps for using the Dynamic Programming method, or Bellman's Principle of Optimality, in optimal control applications. We investigate the notion of control Lyapunov functions (CLF) and their relation to the stability of autonomous controlled systems. A control Lyapunov function must satisfy the Hamilton-Jacobi-Bellman (HJB) equation. Using this fact, if a control Lyapunov function is known, it is possible to determine the optimal feedback law, that is, the control law that renders the system globally asymptotically controllable to an equilibrium state. As an application, we present a mathematical model suited to an optimal control problem for certain biological systems. This work also includes a brief history of the development of Control Theory, illustrating the importance, progress, and application of control techniques in different areas over time.
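
In standard textbook form, for an autonomous system ẋ = f(x, u) with running cost ℓ (notation assumed here, not taken from the dissertation), the HJB equation and the optimal feedback it yields are:

```latex
0 \;=\; \min_{u \in U}\ \bigl\{\, \ell(x,u) \;+\; \nabla V(x) \cdot f(x,u) \,\bigr\},
\qquad
u^{*}(x) \;=\; \arg\min_{u \in U}\ \bigl\{\, \ell(x,u) + \nabla V(x) \cdot f(x,u) \,\bigr\},
```

so a control Lyapunov function V that satisfies the equation yields the optimal feedback law directly.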

Relevance: 80.00%

Abstract:

This dissertation proposes an alternative approach to the mathematical simulation of a worrying scenario in ecology: the control of pests harmful to a given soybean crop in a specific geographic region. The theoretical toolkit employed is game theory, used to couple tools from discrete mathematics to the analysis and solution of initial value problems in differential equations, more specifically, the so-called Lotka-Volterra population dynamics equations with competition. These equations, which model predator-prey behaviour, have, with the parameters initially used, an equilibrium point higher than desired in the agricultural context under examination, leading to the need for optimal control theory. The scheme developed in this work leads to tools simple enough to make their use viable in real situations. The data used to treat the problem behind this interdisciplinary research were collected from published material of the Brazilian Agricultural Research Corporation (EMBRAPA).
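
A minimal sketch of the underlying dynamics: predator-prey Lotka-Volterra equations with intraspecific competition and a constant control effort applied to the pest. All coefficients are illustrative placeholders, not the EMBRAPA soybean-pest values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey dynamics with intraspecific competition
# and a constant pest-control effort u. Parameters are placeholders.

a, b, c, d = 1.0, 0.5, 0.75, 0.25   # growth/interaction coefficients
k1, k2 = 0.1, 0.05                  # competition (self-limitation) terms

def lotka_volterra(t, z, u=0.0):
    x, y = z                        # x: pest (prey), y: predator
    dx = a * x - b * x * y - k1 * x**2 - u * x   # u: control (e.g. spraying)
    dy = -c * y + d * x * y - k2 * y**2
    return [dx, dy]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [2.0, 1.0],
                args=(0.2,), dense_output=True)
print(sol.y[:, -1])  # long-run pest/predator levels under constant u
```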

Relevance: 80.00%

Abstract:

This work addresses the problem of identifying damage in a structure from its impulse response. In the adopted model, structural integrity is continuously described by a cohesion parameter. The Finite Element Method (FEM) is thus used to discretize both the displacement field and the cohesion field. The damage identification problem is then defined as an optimization problem whose objective is to minimize, with respect to a vector of nodal cohesion parameters, a functional defined from the difference between the experimental impulse response and the corresponding response predicted by an FE model of the structure. Time-domain structural damage identification has the advantages of applicability to linear systems and/or systems with high damping levels, as well as high sensitivity to the presence of small damage. Numerical studies were carried out considering a simply supported Euler-Bernoulli beam model. Optimal Experiment Design was used to determine the optimal placement of the displacement sensor and the number of impulse-response points to be used in the damage identification process; the sensor position and the number of points were determined according to the D-optimality criterion, and other complementary criteria were also analysed. A sensitivity analysis was performed to identify the regions of the structure where the response is most sensitive to the presence of damage at an early stage. To solve the inverse damage identification problem, the Differential Evolution and Levenberg-Marquardt optimization methods were considered. Numerical simulations, considering data corrupted with additive noise, were carried out to assess the potential of the damage identification methodology, as well as the influence of the sensor position and the number of data points considered in the identification process. The results show that Optimal Experiment Design is of fundamental importance for damage identification.
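
To illustrate the time-domain fitting step, the sketch below uses Levenberg-Marquardt (via scipy.optimize.least_squares) to recover a stiffness-reduction parameter from a noisy impulse response; a single-degree-of-freedom oscillator stands in for the beam finite element model, so everything here is a simplified stand-in:

```python
import numpy as np
from scipy.optimize import least_squares

# Fit model parameters so the predicted impulse response matches the
# "measured" one, using Levenberg-Marquardt. A 1-DOF oscillator stands
# in for the beam FE model; "damage" scales the stiffness (a cohesion
# analogue), shrinking the natural frequency.

t = np.linspace(0.0, 5.0, 500)

def impulse_response(theta):
    # theta[0]: cohesion/stiffness factor in (0, 1]; theta[1]: damping ratio
    k, zeta = theta
    wn = 2 * np.pi * np.sqrt(k)
    wd = wn * np.sqrt(1 - zeta**2)
    return np.exp(-zeta * wn * t) * np.sin(wd * t) / wd

rng = np.random.default_rng(2)
measured = impulse_response([0.8, 0.02]) + 0.001 * rng.standard_normal(t.size)

res = least_squares(lambda th: impulse_response(th) - measured,
                    x0=[1.0, 0.05], method='lm')
print(res.x)   # recovered (damage factor, damping) near [0.8, 0.02]
```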

Relevance: 80.00%

Abstract:

In this work, we consider a quadratic optimal control problem for the heat equation in rectangular domains with Dirichlet boundary conditions, in which the control function (depending only on time) constitutes a source term. A characterization of the optimal solution is obtained in the form of a linear equation in a space of real functions defined on the time interval considered. A sequence of projections onto finite-dimensional subspaces is then used to obtain approximations of the optimal control, each of which can be generated by a finite-dimensional linear system. The sequence of approximate solutions thus obtained converges to the optimal solution of the original problem. Finally, numerical results are presented for one-dimensional spatial domains.
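
A generic form of such a problem, consistent with the abstract (a time-only control u entering as a source term through a fixed spatial profile g; the notation is assumed, not taken from the dissertation):

```latex
\begin{aligned}
&\min_{u \in L^{2}(0,T)}\ \tfrac{1}{2}\int_{\Omega} \lvert y(x,T) - y_T(x)\rvert^{2}\,dx
  \;+\; \tfrac{\alpha}{2}\int_{0}^{T} \lvert u(t)\rvert^{2}\,dt \\
&\text{s.t.}\quad y_t - \Delta y = u(t)\,g(x) \ \text{in } \Omega\times(0,T), \qquad
  y = 0 \ \text{on } \partial\Omega\times(0,T), \qquad y(\cdot,0) = y_0,
\end{aligned}
```

and projecting u onto finite-dimensional subspaces reduces each approximation to a finite linear system.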

Relevance: 80.00%

Abstract:

In this paper we examine triggering in a simple linearly-stable thermoacoustic system using techniques from flow instability and optimal control. Firstly, for a noiseless system, we find the initial states that have highest energy growth over given times and from given energies. Secondly, by varying the initial energy, we find the lowest energy that just triggers to a stable periodic solution. We show that the corresponding initial state grows first towards an unstable periodic solution and, from there, to the stable periodic solution. This exploits linear transient growth, which arises due to nonnormality in the governing equations and is directly analogous to bypass transition to turbulence. Thirdly, we introduce noise that has similar spectral characteristics to this initial state. We show that, when triggering from low noise levels, the system grows to high amplitude self-sustained oscillations by first growing towards the unstable periodic solution of the noiseless system. This helps to explain the experimental observation that linearly-stable systems can trigger to self-sustained oscillations even with low background noise. © 2010 by University of Cambridge. Published by the American Institute of Aeronautics and Astronautics, Inc.
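
The linear transient-growth computation behind the first step can be sketched as follows: for a stable but nonnormal linear operator A, the maximum energy growth at horizon t is the square of the largest singular value of exp(At), and the optimal initial state is the corresponding right singular vector. The 2×2 matrix here is illustrative, not the thermoacoustic model:

```python
import numpy as np
from scipy.linalg import expm, svdvals

# Maximum transient energy growth G(t) = ||exp(A t)||_2^2 for a stable
# but nonnormal A: both eigenvalues are negative, yet the off-diagonal
# coupling produces large transient amplification before decay.

A = np.array([[-0.05, 1.0],
              [ 0.0, -0.10]])

times = np.linspace(0.1, 60.0, 200)
G = [svdvals(expm(A * t))[0] ** 2 for t in times]
t_star = times[int(np.argmax(G))]
print(f"max energy growth {max(G):.1f} at t = {t_star:.1f}")
```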

Relevance: 80.00%

Abstract:

Mitochondrial disease has recently received increasing attention. However, the case-control design commonly adopted in this field is vulnerable to genetic background, population stratification, and poor data quality. Although the phylogenetic analysis could