923 results for Existence of optimal controls
Abstract:
A three-dimensional numerical study of natural convection in a vertical channel with flush-mounted discrete heaters on opposite conductive substrate walls is carried out in the present work. Detailed flow and heat transfer characteristics are presented for various Grashof numbers. The effect on heat transfer at one wall of the heaters mounted on the opposite wall is examined: heat transfer rates on one wall are found to increase in the presence of heaters on the opposite wall, because the thermal boundary layers on the opposite walls complement each other for enhanced heat transfer. The effects of the spacing between the heated walls, the spacings between heaters, and the substrate conductivity on flow and heat transfer are examined. Optimum spacings between the heated walls are observed for maximum heat transfer and maximum mass flow, but heat transfer and fluid flow do not share the same optimum spacing: the mass flow rate reaches its maximum at a wall spacing greater than the spacing for maximum heat transfer, because the interaction of the thermal boundary layers on the individual walls ceases at a smaller spacing than that at which the velocity boundary layers separate from each other. Increased spacings between heaters reduce individual heater temperatures, provided the heaters closest to the exit on both substrates have sufficient substrate portions on the exit side. Insufficient substrate portions between the exit heaters and the exit cause an abnormal local temperature rise in the exit heaters, which are the hottest among all the heaters. Optimal heater spacings exist for minimum hottest-heater temperature rise. Correlations are presented for the dimensionless mass flow rate, the maximum temperature, and the average Nusselt number.
Abstract:
We consider a power optimization problem with an average delay constraint on the downlink of a Green Base-station. A Green Base-station is powered both by renewable energy sources, such as solar or wind, and by conventional sources such as diesel generators or the power grid. We seek to minimize the energy drawn from conventional energy sources and to utilize the harvested energy to the maximum extent. Each user also has an average delay constraint on its data. The optimal action consists of scheduling the users and allocating the optimal transmission rate to the chosen user. In this paper, we formulate the problem as a Markov decision problem and show the existence of a stationary average-cost optimal policy. We also derive some structural results for the optimal policy.
Abstract:
The ubiquity of the power-law relationship between dQ/dt and Q during recession periods (-dQ/dt = kQ^alpha, Q being the discharge at the basin outlet at time t) clearly hints at the existence of a dominant recession flow process that is common to all real basins. It is commonly assumed that, during recession events, a basin functions as a single phreatic aquifer resting on an impermeable horizontal bed, i.e., the Dupuit-Boussinesq (DB) aquifer, and that different aquifer geometric conditions arising over time give different values of alpha and k. The recently proposed alternative, the geomorphological recession flow model, however, suggests that recession flows are controlled primarily by the dynamics of the active drainage network (ADN). In this study we use data for several basins and compare these two contrasting recession flow models in order to understand which of the two factors dominates during recession periods in steep basins. In particular, we make the comparison using three key recession flow properties: (1) the power-law exponent alpha, (2) the dynamic dQ/dt-Q relationship (characterized by k), and (3) the recession timescale (the time period for which a recession event lasts). Our observations suggest that neither drainage from phreatic aquifers nor evapotranspiration significantly controls recession flows. Results show that the value of alpha and the recession timescale are not modeled well by the DB aquifer model. However, the three recession curve properties above can be captured satisfactorily by considering the dynamics of the ADN as described by the geomorphological recession flow model, possibly indicating that the ADN represents not just phreatic aquifers but the organization of various sub-surface storage systems within the basin. (C) 2014 Elsevier Ltd. All rights reserved.
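As a concrete illustration of the -dQ/dt = kQ^alpha relationship discussed above, the following sketch (not from the paper; the synthetic recession limb and all parameter values are invented) estimates k and alpha by linear regression in log-log space:

```python
import numpy as np

def fit_recession(Q, dt=1.0):
    """Fit log(-dQ/dt) = log(k) + alpha*log(Q) over a recession limb."""
    dQdt = np.diff(Q) / dt                  # finite-difference estimate of dQ/dt
    Qmid = 0.5 * (Q[:-1] + Q[1:])           # discharge at interval midpoints
    mask = dQdt < 0                         # keep only receding (falling) steps
    x = np.log(Qmid[mask])
    y = np.log(-dQdt[mask])
    alpha, logk = np.polyfit(x, y, 1)       # slope = alpha, intercept = log(k)
    return alpha, np.exp(logk)

# Synthetic recession limb generated with alpha = 2 (DB-like late-time behavior)
t = np.arange(0.0, 60.0, 1.0)
Q = 1.0 / (1.0 + 0.05 * t)                  # solution of -dQ/dt = 0.05 * Q**2
print(fit_recession(Q))                     # should recover roughly (2.0, 0.05)
```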
Abstract:
Using a realistic nonlinear mathematical model of melanoma dynamics and the technique of optimal dynamic inversion (exact feedback linearization with static optimization), a multimodal automatic drug dosage strategy is proposed in this paper for complete regression of melanoma cancer in humans. The proposed strategy computes the different drug dosages and gives a nonlinear state feedback solution for driving the number of cancer cells to zero. It is observed, however, that once the tumor has regressed to a certain value, no external drug dosages are needed, as the immune system and the other therapeutic states are able to regress the tumor at a rate faster than exponential. As the model has three different drug dosages, after applying the dynamic inversion philosophy the drug dosages can be selected in an optimized manner without crossing their toxicity limits. The combination of drug dosages is decided by appropriately selecting the control design parameter values based on physical constraints. The process is automated for all possible combinations of the chemotherapy and immunotherapy drug dosages, with preferential emphasis on having the maximum possible variety of drug inputs at any given point of time. A simulation study with a standard patient model shows that the tumor cells are regressed from 2 x 10^7 to the order of 10^5 cells by the external drug dosages in 36.93 days. After this, no external drug dosages are required, as the immune system and the other therapeutic states regress the tumor at a faster-than-exponential rate; hence the tumor goes to zero (less than 0.01) in 48.77 days and the healthy immune system of the patient is restored. A study with different chemotherapy drug resistance values is also carried out. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
We study risk-sensitive control of continuous-time Markov chains taking values in a discrete state space, considering both finite and infinite horizon problems. In the finite horizon problem we characterize the value function via the Hamilton-Jacobi-Bellman equation and obtain an optimal Markov control. We do the same for the infinite-horizon discounted-cost case. In the infinite-horizon average-cost case we establish the existence of an optimal stationary control under a certain Lyapunov condition. We also develop a policy iteration algorithm for finding an optimal control.
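For readers unfamiliar with policy iteration, the following minimal sketch shows the alternation of policy evaluation and greedy improvement for a finite, discounted-cost, discrete-time MDP. This is the textbook version, not the risk-sensitive continuous-time algorithm of the abstract, and all transition and cost data are illustrative:

```python
import numpy as np

def policy_iteration(P, c, beta=0.9, n_states=2, actions=(0, 1)):
    """Textbook policy iteration: P[a] is the transition matrix, c[a] the cost vector."""
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - beta * P_pi) V = c_pi for the current policy
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        c_pi = np.array([c[policy[s]][s] for s in range(n_states)])
        V = np.linalg.solve(np.eye(n_states) - beta * P_pi, c_pi)
        # Policy improvement: greedy one-step lookahead
        new_policy = np.array([
            min(actions, key=lambda a: c[a][s] + beta * P[a][s] @ V)
            for s in range(n_states)
        ])
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

# Two states, two actions (illustrative numbers only)
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.5, 0.5], [0.6, 0.4]])}
c = {0: np.array([1.0, 2.0]), 1: np.array([1.5, 0.5])}
print(policy_iteration(P, c))
```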
Abstract:
In this article, we study a risk-sensitive control problem with controlled continuous-time Markov chain state dynamics. Using the multiplicative dynamic programming principle along with the atomic structure of the state dynamics, we prove the existence and a characterization of an optimal risk-sensitive control under geometric ergodicity of the state dynamics together with a smallness condition on the running cost.
Abstract:
This paper is focused on the study of the important property of asymptotic hyperstability for a class of continuous-time dynamic systems. The parallel connection of a strictly stable subsystem to an asymptotically hyperstable one in the feed-forward loop is allowed, and the generation of a finite or infinite number of impulsive control actions, which can be combined with a general form of non-impulsive controls, is also admitted. The asymptotic hyperstability property is guaranteed under a set of sufficiency-type conditions for the impulsive controls.
Abstract:
There is a growing amount of experimental evidence suggesting that people often deviate from the predictions of game theory. Some scholars attempt to explain the observations by introducing errors into behavioral models. However, most of these modifications are situation dependent and do not generalize. A new theory, called the rational novice model, is introduced as an attempt to provide a general theory that takes account of erroneous behavior. The rational novice model is based on two central principles. The first is that people systematically make inaccurate guesses when evaluating their options in a game-like situation. The second is that people treat their decisions as a portfolio problem. As a result, actions that are non-optimal in a game-theoretic sense may be included in the rational novice strategy profile with positive weights.
The rational novice model can be divided into two parts: the behavioral model and the equilibrium concept. In a theoretical chapter, the mathematics of the behavioral model and the equilibrium concept are introduced. The existence of the equilibrium is established. In addition, the Nash equilibrium is shown to be a special case of the rational novice equilibrium. In another chapter, the rational novice model is applied to a voluntary contribution game. Numerical methods were used to obtain the solution. The model is estimated with data obtained from the Palfrey and Prisbrey experimental study of the voluntary contribution game. It is found that the rational novice model explains the data better than the Nash model. Although a formal statistical test was not used, pseudo R^2 analysis indicates that the rational novice model is better than a Probit model similar to the one used in the Palfrey and Prisbrey study.
The rational novice model is also applied to a first price sealed bid auction. Again, computing techniques were used to obtain a numerical solution. The data obtained from the Chen and Plott study were used to estimate the model. The rational novice model outperforms the CRRAM, the primary Nash model studied in the Chen and Plott study. However, the rational novice model is not the best amongst all models. A sophisticated rule-of-thumb, called the SOPAM, offers the best explanation of the data.
Abstract:
The effect of mixing pulsed two-color fields on the generation of an isolated attosecond pulse has been systematically investigated. The main color is 800 nm and the other (secondary) color is varied from 1.2 to 2.4 mu m. This work shows that the continuum length behaves similarly to the difference in the square of the amplitudes of the strongest and next-strongest cycles. As the mixing ratio is increased, the optimal wavelength for the extended continuum shifts toward the shorter-wavelength side. There is a certain mixing ratio of intensities at which the continuum length bifurcates, i.e., two optimal wavelengths exist. As the mixing ratio is further increased, each branch bifurcates again into two sub-branches. This 2D map analysis over the mixing ratio and the wavelength of the secondary field allows one to easily select a proper wavelength and mixing ratio for a given pulse duration of the primary field. The study shows that an isolated sub-100-attosecond pulse can be generated by mixing an 11 fs full-width-at-half-maximum (FWHM), 800 nm laser pulse with an 1840 nm FWHM pulse. Furthermore, the results reveal that a 33 fs FWHM, 800 nm pulse can produce an isolated pulse below 200 as when properly mixed. (c) 2008 Optical Society of America.
Abstract:
The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of these space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.
The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. Instrument sensitivity improved from run to run thanks to the effort of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.
In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.
This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.
The first part of this thesis is devoted to the description of methods for bringing the interferometer to the linear regime, where the collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.
Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was performed to understand and eliminate noise sources of the instrument.
The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
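As a rough illustration of adaptive feedforward cancellation, the sketch below (not the thesis code; the filter length, step size, and synthetic witness-to-target coupling are invented) runs a standard LMS filter that predicts the noise coupled from a witness channel and subtracts the prediction from the target channel:

```python
import numpy as np

def lms_cancel(x, d, n_taps=16, mu=0.005):
    """Subtract an LMS-filtered witness channel x from the target channel d."""
    w = np.zeros(n_taps)
    e = np.copy(d)
    for n in range(n_taps, len(d)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent witness samples, newest first
        y = w @ u                           # feedforward prediction of the coupled noise
        e[n] = d[n] - y                     # residual after subtraction
        w += 2 * mu * e[n] * u              # LMS weight update
    return e, w

rng = np.random.default_rng(0)
x = rng.standard_normal(20000)                    # witness channel (e.g. a ground-motion proxy)
coupling = np.array([0.5, 0.3, -0.2])             # assumed witness-to-target coupling filter
d = np.convolve(x, coupling)[:len(x)] + 0.01 * rng.standard_normal(len(x))
e, w = lms_cancel(x, d)
print(np.std(d[10000:]), np.std(e[10000:]))       # residual should be much smaller than the raw channel
```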
Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades installed on a time scale of a few months. The second science run will start in spring 2016 and last for about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 better than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of achieving the first direct detection of gravitational waves.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories: controller synthesis, architecture design, and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem (a toy illustration of nuclear-norm minimization is sketched below). Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
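Returning to the nuclear-norm formulation mentioned above, the following toy sketch (not the thesis formulation; the problem size, rank, and sampling ratio are invented) illustrates nuclear-norm minimization as a convex surrogate for rank, here used with CVXPY to recover a low-rank matrix from a subset of its entries:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))     # rank-2 ground truth
mask = (rng.random((n, n)) < 0.6).astype(float)                    # observed-entry indicator

X = cp.Variable((n, n))
# Minimize the nuclear norm (sum of singular values) subject to matching the observed entries
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                  [cp.multiply(mask, X) == mask * M])
prob.solve()
print("relative recovery error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```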
Abstract:
On a daily basis, humans interact with a vast range of objects and tools. One class of tasks that can pose a serious challenge to our motor skills is the manipulation of objects with internal degrees of freedom, such as when folding laundry or using a lasso. Here, we use the framework of optimal feedback control to make predictions of how humans should interact with such objects. We confirm the predictions experimentally in a two-dimensional object manipulation task in which subjects learned to control six different objects with complex dynamics. We show that the non-intuitive behavior observed when controlling objects with internal degrees of freedom can be accounted for by a simple cost function representing a trade-off between effort and accuracy. In addition to a simple linear, point-mass optimal control model, we also used an optimal control model that considers the non-linear dynamics of the human arm. We find that the more realistic optimal control model captures aspects of the data that cannot be accounted for by the linear model or by other previous theories of motor control. The results suggest that our everyday interactions with objects can be understood in terms of optimality principles, and they advocate the use of more realistic optimal control models for the study of human motor neuroscience.
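As a minimal illustration of the effort-accuracy trade-off described above, the following sketch (not the authors' model; the mass, cost weights, and horizon are invented) sets up an LQR controller for a point mass, where the Q matrix penalizes inaccuracy and the R matrix penalizes control effort:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m = 1.0                                   # point mass (kg), illustrative
A = np.array([[0.0, 1.0],                 # state = [position, velocity]
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / m]])
Q = np.diag([100.0, 1.0])                 # accuracy: penalize position/velocity error
R = np.array([[0.1]])                     # effort: penalize control force

P = solve_continuous_are(A, B, Q, R)      # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
print("LQR gain:", K)

# Simulate a reach from x0 toward the origin under the optimal feedback law
x, dt = np.array([0.2, 0.0]), 1e-3        # start 20 cm from the target, at rest
for _ in range(2000):
    u = -K @ x
    x = x + dt * (A @ x + B @ u).ravel()
print("final state:", x)                  # should end close to the target
```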
Abstract:
We point out that a wrong definition of conversion efficiency has been used in the literature and analyze the effects of the waveguide length and pump power on the conversion efficiency according to the correct definition. The existence of locally optimal values of the waveguide length and pump power is demonstrated theoretically and experimentally. Further analysis shows that the extremum of the conversion efficiency can be achieved by global optimization of the waveguide length and pump power simultaneously, limited only by the linear propagation loss and the effective carrier lifetime. (C) 2009 Optical Society of America
Abstract:
This paper discusses the Klein–Gordon–Zakharov system with different-degree nonlinearities in two and three space dimensions. First, we prove the existence of a standing wave with ground state by applying an intricate variational argument. Next, by introducing an auxiliary functional and an equivalent minimization problem, we obtain two invariant manifolds under the solution flow generated by the Cauchy problem for the aforementioned Klein–Gordon–Zakharov system. Furthermore, by constructing a type of constrained variational problem and utilizing the above two invariant manifolds together with a potential well argument and the concavity method, we derive a sharp threshold for global existence and blowup. Then, combining the above results, we obtain two conclusions on how small the initial data must be for the solution to exist globally, by using a dilation transformation. Finally, we prove a modified instability of the standing wave for the system under study.
Abstract:
Laminaria japonica, Undaria pinnatifida, Ulva lactuca, Grateloupia turuturu and Palmaria palmata are suitable species that fit the requirements of a seaweed-animal integrated aquaculture system in terms of their viable biomass, rapid growth and promising nutrient uptake rates. In this investigation, the responses of the optimal chlorophyll fluorescence yield of the five algal species in tumble culture were assessed over a temperature range of 10-30 degrees C. The results revealed that Ulva lactuca was the species most resistant to high temperature, withstanding 30 degrees C for 4 h without apparent decline in the optimal chlorophyll fluorescence yield, while the arctic alga Palmaria palmata was the most vulnerable, showing a significant decline in the optimal chlorophyll fluorescence yield at 25 degrees C for 2 h. The cold-water species Laminaria japonica, however, demonstrated a strong ability to cope with higher temperatures (24-26 degrees C) for short periods (within 24 h) without significant decline in the optimal chlorophyll fluorescence yield. Grateloupia turuturu showed a general decrease in the optimal chlorophyll fluorescence yield as the temperature rose from 23 to 30 degrees C, similar to the temperate kelp Undaria pinnatifida. The changes in the chlorophyll fluorescence yields of these algae were characterized differently, indicating the existence of species-specific strategies to cope with high light. Measurements of the optimal chlorophyll fluorescence yield after short exposures to direct solar irradiance revealed how long such exposures could last without significant photoinhibition or with promising recovery of photosynthetic activity. A seasonal pattern of alternation of algal species in tank culture in the Northern Hemisphere at a latitude of 36 degrees N is proposed on the basis of these measurements.