840 results for Control and Optimization
Abstract:
This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, different from the corresponding manual calibration strategy and resulting in lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
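As a concrete illustration of a second-order linear dynamic constraint model of the kind proposed, the sketch below maps a commanded parameter trace to an "achieved" trace through a discrete-time second-order lag. The gain, natural frequency and damping values are illustrative placeholders, not identified engine dynamics; Python is used here purely for illustration.

```python
import numpy as np

# Minimal sketch of a second-order linear dynamic constraint model: translate a
# commanded engine parameter into the value actually achieved during a transient.
# Gain, natural frequency and damping are illustrative, not identified values.
def second_order_response(u_cmd, dt=0.1, wn=2.0, zeta=0.8, gain=1.0):
    y = np.zeros_like(u_cmd)
    ydot = 0.0
    for k in range(1, len(u_cmd)):
        # y'' = wn^2 * (gain*u - y) - 2*zeta*wn*y'
        yddot = wn**2 * (gain * u_cmd[k - 1] - y[k - 1]) - 2 * zeta * wn * ydot
        ydot += yddot * dt
        y[k] = y[k - 1] + ydot * dt
    return y

# Commanded step in, e.g., a boost-pressure target; the achieved trace lags the command.
t = np.arange(0, 10, 0.1)
achieved = second_order_response(np.where(t > 1, 1.0, 0.0))
```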
Abstract:
This thesis presents strategies for the use of plug-in electric vehicles on smart grids and microgrids. MATLAB is used as the design tool for all models and simulations. First, a scenario is explored in which the dispatchable loads of electric vehicles are used to stabilize a microgrid with a high penetration of renewable power generation. Grid components for a microgrid with 50% photovoltaic solar production are sized through an optimization routine to maintain storage system, load, and vehicle states over a 24-hour period. The finding of this portion is that dispatchable loads can be used to guard against unpredictable losses in renewable generation output. Second, distributed control strategies for the charging of electric vehicles, using an agent-based approach on a smart grid, are studied. The vehicles are regarded as additional loads on a primary forecasted load and exchange information with the grid to make their charging decisions. Three lightweight control strategies and their effects on the power grid are presented. The findings are that, through distributed control strategies, charging behavior can be coordinated and peak loads on the grid reduced.
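To give a flavour of what one such "lightweight" agent-based charging rule might look like, here is a minimal sketch in which each vehicle compares the grid's broadcast forecast load with a threshold and defers charging during peaks. The rule, names and numbers are invented for illustration, not the thesis's actual strategies.

```python
# Toy agent-based charging rule: charge only when the forecast load is below a
# threshold and the battery is not yet at its target state of charge.
def charging_decision(forecast_load_kw, threshold_kw, soc, soc_target=0.9):
    """Return True if the vehicle should draw power this time step."""
    if soc >= soc_target:
        return False                          # already charged
    return forecast_load_kw < threshold_kw    # charge only off-peak

hourly_forecast = [300, 280, 260, 250, 270, 320, 420, 520, 560, 540,
                   500, 480, 470, 480, 500, 540, 600, 650, 620, 560,
                   480, 420, 370, 330]        # kW, hypothetical feeder load
schedule = [charging_decision(p, threshold_kw=500, soc=0.4) for p in hourly_forecast]
```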
Abstract:
Traffic safety on rural highways is a constant source of concern in many countries. Transportation professionals now widely use Intelligent Transportation Systems (ITS) to address safety issues; however, compared with metropolitan applications, rural (non-urban) highway ITS applications are still not well defined. This paper provides a comprehensive review of existing ITS safety solutions for rural highways. The research focuses on infrastructure-based control and surveillance ITS technologies, such as Crash Prevention and Safety, Road Weather Management and other applications, that are directly related to reducing the frequency and severity of accidents. The main outcome of this research is the development of an ‘ITS control and surveillance device locating model’ to achieve the maximum safety benefit for rural highways. Using ITS cost and benefit databases, an integer linear programming method is used as the optimization technique to choose the most suitable set of ITS devices. Finally, a computational analysis is performed on an existing highway in Iran to validate the effectiveness of the proposed locating model.
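To illustrate the flavour of such an integer linear programming formulation, the sketch below poses a toy 0-1 device-locating problem with PuLP (an assumed modelling tool). The segments, devices, benefits, costs and budget are invented placeholders rather than values from the paper's ITS cost and benefit databases.

```python
import pulp

# Toy 0-1 ILP: maximize total safety benefit of devices placed on candidate
# highway segments, subject to a budget and at most one device per segment.
segments = ["S1", "S2", "S3", "S4"]
devices = ["speed_warning", "weather_station", "animal_detection"]
benefit = {(s, d): b for s, d, b in [
    ("S1", "speed_warning", 8), ("S1", "weather_station", 5), ("S1", "animal_detection", 3),
    ("S2", "speed_warning", 6), ("S2", "weather_station", 9), ("S2", "animal_detection", 2),
    ("S3", "speed_warning", 4), ("S3", "weather_station", 3), ("S3", "animal_detection", 7),
    ("S4", "speed_warning", 7), ("S4", "weather_station", 2), ("S4", "animal_detection", 6)]}
cost = {"speed_warning": 30, "weather_station": 50, "animal_detection": 40}
budget = 150

x = pulp.LpVariable.dicts("install", benefit.keys(), cat="Binary")
model = pulp.LpProblem("its_device_location", pulp.LpMaximize)
model += pulp.lpSum(benefit[k] * x[k] for k in benefit)                 # safety benefit
model += pulp.lpSum(cost[d] * x[(s, d)] for (s, d) in benefit) <= budget
for s in segments:                                                      # at most one device per segment
    model += pulp.lpSum(x[(s, d)] for d in devices) <= 1
model.solve()
chosen = [k for k in benefit if x[k].value() > 0.5]
```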
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series.
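For intuition about the avoidance geometry, the sketch below parameterizes a simple conical spiral about an obstacle direction. The apex, half-angle, angular rate and speed are illustrative, and the controller's image-based objective and constraints are not reproduced here.

```python
import numpy as np

# Toy conical spiral path: the vehicle circles the obstacle's bearing while the
# radius grows with progress along the cone axis. All parameters are illustrative.
def conical_spiral(t, apex=np.zeros(3), half_angle=np.deg2rad(20),
                   angular_rate=0.5, axial_speed=5.0):
    z = axial_speed * t                      # progress along the cone axis
    r = np.tan(half_angle) * z               # radius grows with axial distance
    theta = angular_rate * t
    return apex + np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=-1)

waypoints = conical_spiral(np.linspace(0.0, 20.0, 200))
```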
Abstract:
In this paper, a model-predictive control (MPC) method is detailed for the control of nonlinear systems with stability considerations. It will be assumed that the plant is described by a local input/output ARX-type model, with the control potentially included in the premise variables, which enables the control of systems that are nonlinear in both the state and control input. Additionally, for the case of set point regulation, a suboptimal controller is derived which has the dual purpose of ensuring stability and enabling finite-iteration termination of the iterative procedure used to solve the nonlinear optimization problem that is used to determine the control signal.
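A minimal receding-horizon sketch in the spirit of the described MPC scheme is given below, using a toy nonlinear ARX-type model and a generic nonlinear optimizer. The model, horizon, weights and bounds are illustrative, and the paper's stabilizing suboptimal controller is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical nonlinear ARX-type plant model: y[k] = 0.8*y[k-1] + 0.3*u[k-1] - 0.1*y[k-1]**3
def arx_step(y_prev, u_prev):
    return 0.8 * y_prev + 0.3 * u_prev - 0.1 * y_prev**3

def mpc_control(y0, y_ref, horizon=10, u_max=2.0):
    def cost(u_seq):
        y, J = y0, 0.0
        for u in u_seq:
            y = arx_step(y, u)
            J += (y - y_ref) ** 2 + 0.05 * u ** 2      # tracking + control effort
        return J
    bounds = [(-u_max, u_max)] * horizon               # input constraints
    res = minimize(cost, np.zeros(horizon), bounds=bounds)
    return res.x[0]                                    # apply only the first move

u_now = mpc_control(y0=1.5, y_ref=0.0)
```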
Abstract:
One of the problems to be solved in attaining the full potential of hematopoietic stem cell (HSC) applications is the limited availability of the cells. Growing HSCs in a bioreactor offers an alternative solution to this problem. It also eliminates the labour-intensive process, and the associated contamination risk, of the periodic nutrient replenishments used in traditional T-flask stem cell cultivation. In spite of this, the optimization of HSC cultivation in a bioreactor has barely been explored. This manuscript discusses the development of a mathematical model describing the dynamics of nutrient distribution and cell concentration in an ex vivo HSC cultivation in a microchannel perfusion bioreactor. The model was then used to optimize the cultivation by proposing three alternative feeding strategies intended to prevent nutrient limitation in the bioreactor. Evaluation of these strategies, a periodic step increase in the inlet oxygen concentration, a periodic step increase in the media inflow, and feedback control of the media inflow, shows that they can successfully improve the cell yield of the bioreactor. In general, the developed model is useful for the design and optimization of bioreactor operation.
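The sketch below gives a minimal lumped-parameter version of the kind of model described, with Monod growth kinetics and a simple feedback rule that raises media inflow when the nutrient concentration falls below a setpoint. All kinetic parameters and the control law are illustrative assumptions, not the fitted microchannel model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative lumped nutrient/cell balance with feedback-controlled media inflow.
mu_max, Ks, Y, S_in = 0.05, 0.5, 0.4, 5.0       # 1/h, mM, cells/mM, mM (toy values)

def inflow(S, S_set=1.0, F_base=0.01, F_max=0.05, gain=0.08):
    # Feedback control of media inflow: increase feed when nutrient drops below setpoint.
    return min(F_max, F_base + gain * max(0.0, S_set - S))

def bioreactor(t, state):
    X, S = state                                 # cell and nutrient concentration
    mu = mu_max * S / (Ks + S)                   # Monod growth rate
    F = inflow(S)
    dX = mu * X                                  # cells assumed retained in the channel
    dS = F * (S_in - S) - mu * X / Y             # feed minus consumption
    return [dX, dS]

sol = solve_ivp(bioreactor, (0, 240), [0.1, 5.0], max_step=1.0)
```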
Abstract:
We consider a dense, ad hoc wireless network confined to a small region, such that direct communication is possible between any pair of nodes. The physical communication model is that a receiver decodes the signal from a single transmitter, while treating all other signals as interference. Data packets are sent between source-destination pairs by multihop relaying. We assume that nodes self-organise into a multihop network such that all hops are of length d meters, where d is a design parameter. There is a contention based multiaccess scheme, and it is assumed that every node always has data to send, either originated from it or a transit packet (saturation assumption). In this scenario, we seek to maximize a measure of the transport capacity of the network (measured in bit-meters per second) over power controls (in a fading environment) and over the hop distance d, subject to an average power constraint. We first argue that for a dense collection of nodes confined to a small region, single cell operation is efficient for single user decoding transceivers. Then, operating the dense ad hoc network (described above) as a single cell, we study the optimal hop length and power control that maximizes the transport capacity for a given network power constraint. More specifically, for a fading channel and for a fixed transmission time strategy (akin to the IEEE 802.11 TXOP), we find that there exists an intrinsic aggregate bit rate (Θ_opt bits per second, depending on the contention mechanism and the channel fading characteristics) carried by the network, when operating at the optimal hop length and power control. The optimal transport capacity is of the form d_opt(P̄_t) × Θ_opt, with d_opt scaling as P̄_t^(1/η), where P̄_t is the available time-average transmit power and η is the path loss exponent. Under certain conditions on the fading distribution, we then provide a simple characterisation of the optimal operating point.
Abstract:
We consider a joint power control and transmission scheduling problem in wireless networks with average power constraints. While the capacity region of a wireless network is convex, a characterization of this region is a hard problem. We formulate a network utility optimization problem involving time-sharing across different "transmission modes," where each mode corresponds to the set of power levels used in the network. The structure of the optimal solution is a time-sharing across a small set of such modes. We use this structure to develop an efficient heuristic approach to finding a suboptimal solution through column generation iterations. This heuristic approach converges quite fast in simulations, and provides a tool for wireless network planning.
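To make the time-sharing formulation concrete, the sketch below enumerates all on/off "transmission modes" of a three-link toy network and solves the resulting time-sharing LP directly; with a large mode set this master problem would instead be grown by column-generation iterations, as in the paper. Channel gains, power levels and budgets are invented.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy network: a "mode" is an on/off power vector for the 3 links; each mode has
# a total rate, and we time-share across modes subject to per-link average power.
P, noise = 1.0, 0.1
G = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])               # gain from transmitter j to receiver i
P_avg = np.array([0.4, 0.4, 0.4])             # per-link average power budgets

modes = [np.array(m, dtype=float) * P for m in itertools.product([0, 1], repeat=3)]

def mode_rate(p):
    sinr = np.diag(G) * p / (noise + G @ p - np.diag(G) * p)
    return np.log2(1.0 + sinr).sum()

rates = np.array([mode_rate(p) for p in modes])
powers = np.array(modes)                       # modes x links

# maximize sum_m alpha_m*rate_m  s.t.  sum_m alpha_m = 1, powers.T @ alpha <= P_avg, alpha >= 0
res = linprog(c=-rates,
              A_ub=powers.T, b_ub=P_avg,
              A_eq=np.ones((1, len(modes))), b_eq=[1.0],
              bounds=[(0, None)] * len(modes), method="highs")
active = [(i, a) for i, a in enumerate(res.x) if a > 1e-6]   # only a few modes are used
```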
Abstract:
A hybrid simulation technique for the identification and steady-state optimization of a tubular reactor used in ammonia synthesis is presented. The parameter-identification program finds the catalyst activity factor and certain heat-transfer coefficients that minimize the sum of squared deviations between simulated temperatures and actual temperature measurements obtained from an operating plant. The optimization program finds the values of three flows to the reactor that maximize the ammonia yield using the estimated parameter values. Powell's direct method of optimization is used in both cases. The results obtained are compared with the plant data.
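The parameter-identification step can be sketched as follows with a toy temperature-profile model and Powell's direct-search method (via scipy); the "simulation" and measurement data are invented stand-ins for the ammonia-converter model and plant data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the reactor simulation: fit an activity factor and a
# heat-transfer coefficient to measured bed temperatures by least squares.
z = np.linspace(0.0, 1.0, 20)                            # normalized bed position
T_meas = 650 + 80 * (1 - np.exp(-3 * z)) + np.random.normal(0, 2, z.size)

def simulate_profile(params):
    activity, U = params
    return 650 + 100 * activity * (1 - np.exp(-U * z))   # hypothetical temperature profile

def sse(params):
    return np.sum((simulate_profile(params) - T_meas) ** 2)

fit = minimize(sse, x0=[0.5, 1.0], method="Powell")      # Powell's direct method
activity_hat, U_hat = fit.x
```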
Abstract:
Cobalt ferrite (CoFe2O4) is an engineering material used for applications such as magnetic cores, magnetic switches, hyperthermia-based tumor treatment, and contrast agents for magnetic resonance imaging. The utility of ferrite nanoparticles hinges on their size, dispersibility in solutions, and synthetic control over their coercivity. In this work, we establish correlations between room-temperature co-precipitation conditions and these crucial material parameters. Furthermore, post-synthesis annealing conditions are correlated with morphology, changes in crystal structure and magnetic properties. We disclose synthesis and processing conditions helpful in obtaining easily sinterable CoFe2O4 nanoparticles with coercivity (H_c) in the range 5.5-31.9 kA/m and saturation magnetization (M_s) in the range 47.9-84.9 A·m²·kg⁻¹. At a grain size of ~54 ± 2 nm (corresponding to a 1073 K sintering temperature), multi-domain behavior sets in, indicated by a decrease in H_c. In addition, we observe an increase in lattice constant with grain size, which is the inverse of what is expected in ferrites. Our results suggest that oxygen deficiency plays a crucial role in explaining this inverse trend. We expect the method disclosed here to be a viable and scalable alternative to thermal-decomposition-based CoFe2O4 synthesis. The magnetic trends reported will aid in the optimization of functional CoFe2O4 nanoparticles.
Abstract:
Granular crystals are compact periodic assemblies of elastic particles in Hertzian contact whose dynamic response can be tuned from strongly nonlinear to linear by the addition of a static precompression force. This unique feature allows for a wide range of studies that include the investigation of new fundamental nonlinear phenomena in discrete systems such as solitary waves, shock waves, discrete breathers and other defect modes. In the absence of precompression, a particularly interesting property of these systems is their ability to support the formation and propagation of spatially localized soliton-like waves with highly tunable properties. The wealth of parameters one can modify (particle size, geometry and material properties, periodicity of the crystal, presence of a static force, type of excitation, etc.) makes them ideal candidates for the design of new materials for practical applications. This thesis describes several ways to optimally control and tailor the propagation of stress waves in granular crystals through the use of heterogeneities (interstitial defect particles and material heterogeneities) in otherwise perfectly ordered systems. We focus on uncompressed two-dimensional granular crystals with interstitial spherical intruders and composite hexagonal packings and study their dynamic response using a combination of experimental, numerical and analytical techniques. We first investigate the interaction of defect particles with a solitary wave and utilize this fundamental knowledge in the optimal design of novel composite wave guides, shock or vibration absorbers obtained using gradient-based optimization methods.
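For background intuition, the sketch below integrates a one-dimensional chain of identical beads interacting through the Hertzian contact law (force proportional to overlap^1.5), which supports the solitary waves mentioned above. The thesis's two-dimensional crystals with intruders are far richer; the units and parameters here are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Uncompressed 1-D Hertzian chain: striking the first bead launches a compact
# solitary-like compression pulse. Units and parameters are illustrative.
N, m, k = 30, 1.0, 1.0e3            # beads, mass, Hertzian contact stiffness

def rhs(t, y):
    x, v = y[:N], y[N:]
    overlap = np.maximum(0.0, x[:-1] - x[1:])   # contact only when overlap > 0
    f = k * overlap ** 1.5                       # Hertzian contact force
    a = np.zeros(N)
    a[:-1] -= f / m                              # reaction on the left bead
    a[1:] += f / m                               # push on the right bead
    return np.concatenate([v, a])

x0 = np.zeros(N)
v0 = np.zeros(N); v0[0] = 1.0                    # strike the first bead
sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([x0, v0]), max_step=1e-3)
```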
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially the distribution system, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
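One widely used template for this kind of distributed energy management is price-based coordination via dual decomposition. The sketch below is a toy illustration of that idea, not the dissertation's actual market or algorithm design; the cost coefficients, limits, demand and step size are invented.

```python
import numpy as np

# Price-based coordination: each DER maximizes its own profit given a price,
# and a coordinator raises or lowers the price until supply meets demand.
a = np.array([1.0, 1.5, 2.0])          # quadratic cost coefficients of 3 DERs (toy)
p_max = np.array([2.0, 1.5, 1.0])      # output limits (kW)
demand, price, step = 3.0, 0.0, 0.2

for _ in range(200):
    # each DER locally solves: min a_i*p_i^2 - price*p_i  s.t.  0 <= p_i <= p_max_i
    p = np.clip(price / (2 * a), 0.0, p_max)
    price += step * (demand - p.sum())  # coordinator: raise price if supply is short

total = p.sum()                          # converges toward the 3 kW demand
```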
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent’s control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the specification of the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
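As a minimal illustration of this design pattern, the sketch below assigns each agent a marginal-contribution utility for a toy coverage-style welfare, which makes the game a potential game whose potential is the system objective, and runs asynchronous best-response updates. The welfare function and problem sizes are illustrative.

```python
import random

# Toy coverage game: agents pick resources, the system objective is the total
# value of covered resources, and each agent's utility is its marginal
# contribution to that objective (a potential game with the welfare as potential).
resources = list(range(4))
values = [4.0, 3.0, 2.0, 1.0]                    # value of covering each resource
n_agents = 5

def welfare(choices):
    return sum(values[r] for r in set(choices))  # system-level objective

def marginal_utility(i, choices):
    others = choices[:i] + choices[i + 1:]
    return welfare(others + [choices[i]]) - welfare(others)

choices = [random.choice(resources) for _ in range(n_agents)]
for _ in range(50):                               # asynchronous best-response learning
    i = random.randrange(n_agents)
    choices[i] = max(resources,
                     key=lambda r: marginal_utility(i, choices[:i] + [r] + choices[i + 1:]))
```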
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis form part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories: controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.
Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.
Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
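As a rough illustration of the nuclear-norm idea in the final contribution, the sketch below poses a simplified decomposition of a measured response matrix into a low-rank "global" part and a residual "local" part using cvxpy (an assumed tool choice). The thesis's actual formulation, which exploits the order and rank structure of local and global transfer functions, is not reproduced here.

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: a matrix of measured responses assumed to be the sum of a
# local component L and a low-rank global component G.
np.random.seed(0)
m, n = 30, 30
H = np.random.randn(m, n)           # placeholder for measured data

L = cp.Variable((m, n))
G = cp.Variable((m, n))
lam = 0.1                           # trade-off weight (assumption)

# Nuclear norm promotes low rank in the global component; the squared-Frobenius
# term keeps the local component small in energy (a stand-in for its structure).
problem = cp.Problem(cp.Minimize(cp.normNuc(G) + lam * cp.sum_squares(L)),
                     [L + G == H])
problem.solve()
rank_global = np.linalg.matrix_rank(G.value, tol=1e-3)
```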