6 results for OPTIMAL ESTIMATES OF STABILITY REGION

in CaltechTHESIS
Relevance: 100.00%

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
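For readers unfamiliar with it, the Ornstein-Uhlenbeck disturbance model mentioned above can be simulated with a standard Euler-Maruyama scheme. The following is a generic sketch; the parameter values, names theta and sigma, and the time step are illustrative and not taken from the thesis.

```python
import numpy as np

def simulate_ou(theta, sigma, x0, dt, n_steps, rng=None):
    """Euler-Maruyama discretization of dX = -theta * X dt + sigma dW."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.empty(n_steps + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), n_steps)   # Brownian increments
    for k in range(n_steps):
        x[k + 1] = x[k] - theta * x[k] * dt + sigma * dw[k]
    return x

# Example: a mean-reverting disturbance sampled over 10 time units.
path = simulate_ou(theta=0.5, sigma=0.2, x0=0.0, dt=0.01, n_steps=1000)
```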

Relevance: 100.00%

Abstract:

A general framework for multi-criteria optimal design is presented which is well suited to the automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters, which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure, which makes design selection an optimization problem.
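As a rough illustration of the preference-function idea (the trapezoidal shape, the units, and the geometric-mean combination rule below are our own choices, not the thesis's): each criterion maps a performance parameter to a satisfaction level in [0, 1], and an overall measure combines the individual satisfactions.

```python
import numpy as np

def trapezoidal_preference(x, lo, full_lo, full_hi, hi):
    """Degree of satisfaction in [0, 1]: zero outside [lo, hi], one on
    [full_lo, full_hi], linear in between."""
    if x <= lo or x >= hi:
        return 0.0
    if full_lo <= x <= full_hi:
        return 1.0
    if x < full_lo:
        return (x - lo) / (full_lo - lo)
    return (hi - x) / (hi - full_hi)

def overall_measure(satisfactions):
    """One possible combination rule: the geometric mean, which strongly
    penalizes a design that fails any single criterion."""
    s = np.asarray(satisfactions, dtype=float)
    return float(np.prod(s) ** (1.0 / len(s)))

# Example with three criteria: deflection [mm], cost [$10k], stress ratio.
s = [trapezoidal_preference(12.0, 0.0, 0.0, 10.0, 20.0),
     trapezoidal_preference(4.2, 0.0, 0.0, 4.0, 6.0),
     trapezoidal_preference(0.80, 0.0, 0.0, 0.90, 1.00)]
print(overall_measure(s))   # ~0.90
```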

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power necessary to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
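A minimal real-coded genetic algorithm of the generic kind described above is sketched below; it is not the thesis's hGA or vGA, and all operator choices (binary-tournament selection, blend crossover, Gaussian mutation) are illustrative.

```python
import numpy as np

def genetic_maximize(fitness, bounds, pop_size=50, n_gen=100,
                     mut_sigma=0.05, rng=None):
    """Generic real-coded GA. `fitness` maps a design vector to the
    scalar evaluation measure to be maximized."""
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        fit = np.array([fitness(x) for x in pop])

        def pick():  # binary tournament: better of two random members
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fit[i] >= fit[j] else pop[j]

        children = []
        for _ in range(pop_size):
            a = rng.uniform(size=len(lo))           # blend crossover
            child = a * pick() + (1.0 - a) * pick()
            child += rng.normal(0.0, mut_sigma, len(lo)) * (hi - lo)
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
    fit = np.array([fitness(x) for x in pop])
    return pop[np.argmax(fit)]

# Toy usage: two design parameters, optimum near (0.3, 0.3).
best = genetic_maximize(lambda x: -np.sum((x - 0.3) ** 2),
                        bounds=[(0.0, 1.0), (0.0, 1.0)])
```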

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance: 100.00%

Abstract:

An area of about 25 square miles in the western part of the San Gabriel Mountains was mapped on a scale of 1000 feet to the inch. Special attention was given to the structural geology, particularly the relations between the different systems of faults, of which the San Gabriel fault system and the Sierra Madre fault system are the most important. The present distribution and relations of the rocks suggest that the southern block has tilted northward against a more stable mass of old rocks which was raised up during a Pliocene or post-Pliocene orogeny. It is suggested that this northward tilting of the block resulted in the group of thrust faults which comprise the Sierra Madre fault system. It is shown that this hypothesis fits the present distribution of the rocks and occupies a logical place in the geologic history of the region as well as or better than any other hypothesis previously offered to explain the geology of the region.

Relevance: 100.00%

Abstract:

The Supreme Court’s decision in Shelby County has severely limited the power of the Voting Rights Act. I argue that Congressional attempts to pass a new coverage formula are unlikely to gain the necessary Republican support. Instead, I propose a new strategy that takes a “carrot and stick” approach. As the stick, I suggest amending Section 3 to eliminate the need to prove that discrimination was intentional. For the carrot, I envision a competitive grant program similar to the highly successful Race to the Top education grants. I argue that this plan could pass the currently divided Congress.

Without Congressional action, Section 2 is more important than ever before. A successful Section 2 suit requires evidence that voting in the jurisdiction is racially polarized. Accurately and objectively assessing the level of polarization has been and continues to be a challenge for experts. Existing ecological inference methods require estimating polarization levels in individual elections. This is a problem because the Courts want to see a history of polarization across elections.

I propose a new two-step method to estimate racially polarized voting in a multi-election context. The procedure builds upon the Rosen, Jiang, King, and Tanner (2001) multinomial-Dirichlet model. After obtaining election-specific estimates, I suggest regressing those results on election-specific variables, namely candidate quality, incumbency, and the ethnicity of the minority candidate of choice. This allows researchers to estimate the baseline level of support for candidates of choice and to test whether the ethnicity of the candidates affected how voters cast their ballots.
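The second stage of such a procedure might look like the following sketch, in which the stage-one ecological-inference estimates are taken as given; the covariate values and the plain least-squares fit are illustrative, not the dissertation's actual specification.

```python
import numpy as np

# Illustrative stage-two data: stage-one ecological-inference estimates of
# minority support for the candidate of choice in five elections, plus
# election-level covariates. All values are made up for illustration.
support   = np.array([0.72, 0.65, 0.81, 0.58, 0.77])
quality   = np.array([1, 0, 1, 1, 0])   # high-quality candidate?
incumbent = np.array([0, 0, 1, 0, 1])   # incumbent running?
coethnic  = np.array([1, 0, 1, 0, 1])   # candidate of choice co-ethnic?

X = np.column_stack([np.ones_like(support), quality, incumbent, coethnic])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
# beta[0] estimates the baseline level of support; beta[3] tests whether
# the ethnicity of the candidate of choice shifts how voters cast ballots.
print(beta)
```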

Relevance: 100.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and perhaps not surprisingly makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
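For sparsity-constrained problems, quadratic invariance reduces to a simple test on the binary sparsity patterns of the controller and plant (the support-containment condition of Rotkowitz and Lall). A minimal sketch, with the function name ours:

```python
import numpy as np

def is_quadratically_invariant(K_pattern, G_pattern):
    """Support test for quadratic invariance of a sparsity constraint
    (Rotkowitz and Lall): the constraint set is QI under the plant iff
    support(K G K) is contained in support(K) for the binary patterns."""
    K = np.asarray(K_pattern) != 0
    G = np.asarray(G_pattern) != 0
    KGK = (K.astype(int) @ G.astype(int) @ K.astype(int)) > 0
    return bool(np.all(K | ~KGK))

# Example: a lower-triangular (delayed-information) controller pattern
# is quadratically invariant with respect to a lower-triangular plant.
K = np.tril(np.ones((3, 3)))
G = np.tril(np.ones((3, 3)))
print(is_quadratically_invariant(K, G))   # True
```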

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis are part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control and fall under three broad categories: controller synthesis, architecture design, and system identification.

We begin by providing two novel controller synthesis algorithms. The first solves the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays is an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results concerns controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying and computationally tractable approach, based on the model matching framework and atomic norm regularization, to the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that remains connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem.

Finally, we conclude with a brief discussion of future research directions, with particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
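To make the nuclear-norm idea concrete, here is a heavily simplified stand-in for that separation: split a measured response matrix into a low-rank part plus a residual by trading the nuclear norm of one against the Frobenius norm of the other. The thesis's actual program additionally encodes the low-order structure of the local term; the weight lam below is an illustrative tuning parameter.

```python
import cvxpy as cp
import numpy as np

# Synthetic 20x20 response matrix: a rank-2 "global" component plus a
# small full-rank "local" residual.
rng = np.random.default_rng(0)
H = (rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
     + 0.1 * rng.standard_normal((20, 20)))

L = cp.Variable((20, 20))   # low-rank (global) part
R = cp.Variable((20, 20))   # residual (local) part
lam = 10.0                  # illustrative trade-off weight
prob = cp.Problem(cp.Minimize(cp.norm(L, "nuc") + lam * cp.sum_squares(R)),
                  [L + R == H])
prob.solve()
print(np.linalg.matrix_rank(L.value, tol=1e-2))  # inspect recovered rank
```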

Relevance: 100.00%

Abstract:

The experimental portion of this thesis attempts to estimate the power spectral density of very low frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts: it is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, though α appears to be about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.
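As a sanity check on that factor-of-10 claim (standard variance arithmetic, not specific to this thesis): if the estimates S_1, ..., S_10 are statistically independent with common variance σ², their average has variance Var((1/10)·ΣS_i) = σ²/10, and since the variance of a smoothed spectral estimate is inversely proportional to record length, one tenth the variance corresponds to a single record ten times as long.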

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain these data is included.

The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate, given 10 statistically independent ones of varying quality and magnitude? How can the Blackman-Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are the mathematical implications for its statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman-Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
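A minimal sketch of the Blackman-Tukey estimator discussed above: estimate the autocorrelation up to a maximum lag, taper it with a lag window (a Bartlett window here, which keeps the smoothed spectrum nonnegative), and Fourier transform. The window choice and parameters are illustrative, not those used in the thesis.

```python
import numpy as np

def blackman_tukey_psd(x, max_lag, fs=1.0):
    """Blackman-Tukey estimate: biased autocorrelation up to max_lag,
    tapered by a Bartlett lag window, then Fourier transformed."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation r[k] = (1/n) * sum_t x[t] x[t+k].
    r = np.array([x[:n - k] @ x[k:] for k in range(max_lag + 1)]) / n
    r *= 1.0 - np.arange(max_lag + 1) / max_lag      # Bartlett lag window
    r_full = np.concatenate([r, r[-2:0:-1]])         # symmetric extension
    psd = np.real(np.fft.rfft(r_full)) / fs
    freqs = np.fft.rfftfreq(len(r_full), d=1.0 / fs)
    return freqs, psd

# Example: a record with low-frequency drift plus white noise.
rng = np.random.default_rng(1)
x = 0.02 * np.cumsum(rng.standard_normal(4096)) + rng.standard_normal(4096)
freqs, psd = blackman_tukey_psd(x, max_lag=256)
```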