960 results for optimal fault recovery


Relevance: 20.00%

Publisher:

Abstract:

41 p.

Relevance: 20.00%

Publisher:

Abstract:

36 p.

Relevance: 20.00%

Publisher:

Abstract:

Thrust fault earthquakes are investigated in the laboratory by generating dynamic shear ruptures along pre-existing frictional faults in rectangular plates. A considerable body of evidence suggests that dip-slip earthquakes exhibit enhanced ground motions in the acute hanging wall wedge, an outcome of the broken symmetry between the hanging wall and foot wall plates with respect to the earth surface. To understand the physical behavior of thrust fault earthquakes, particularly ground motions near the earth surface, ruptures are nucleated in analog laboratory experiments and guided up-dip towards the simulated earth surface. The transient slip event and the emitted radiation mimic a natural thrust earthquake. High-speed photography and laser velocimeters capture the rupture evolution, providing a full-field view of photoelastic fringe contours, proportional to the maximum shear stress, as well as continuous ground-motion velocity records at discrete points on the specimen.

Earth surface-normal measurements validate the selective enhancement of hanging wall ground motions for both sub-Rayleigh and super-shear rupture speeds. The earth surface breaks upon arrival of the rupture tip at the fault trace, generating prominent Rayleigh surface waves. A rupture wave is sensed in the hanging wall but is absent from the foot wall plate, a direct consequence of the seismometer's proximity to the fault. Signatures in earth surface-normal records attenuate with distance from the fault trace. Super-shear earthquakes feature ground shaking profiles with greater amplitudes, as expected from the increased tectonic pressures required to induce the super-shear transition.

Paired stations measure fault-parallel and fault-normal ground motions at various depths, which yield slip and opening rates through direct subtraction of like components. Peak fault slip and opening rates associated with the rupture tip increase with proximity to the fault trace, a result of the selective ground motion amplification in the hanging wall. Fault opening rates indicate that the hanging and foot walls detach near the earth surface, a phenomenon promoted by a decrease in the magnitude of the far-field tectonic loads. Subsequent shutting of the fault sends an opening pulse back down-dip. In the case of a sub-Rayleigh earthquake, feedback from the reflected S wave re-ruptures the locked fault at super-shear speeds, providing another mechanism of super-shear transition.
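A minimal sketch of the paired-station reduction described above, using synthetic stand-in velocity records (the array names, sampling step, and waveforms are illustrative, not data from the thesis):

```python
import numpy as np

# Hypothetical velocity records (m/s) from a pair of stations straddling the fault at the
# same depth: one on the hanging wall, one on the foot wall. The decomposition into
# fault-parallel and fault-normal components is assumed to have been done already.
dt = 1e-6                              # sampling interval (s), placeholder value
t = np.arange(0.0, 50e-6, dt)          # 50 microseconds of record
v_par_hw = 1.0 * np.sin(2e5 * t)       # fault-parallel velocity, hanging wall (stand-in data)
v_par_fw = 0.4 * np.sin(2e5 * t)       # fault-parallel velocity, foot wall
v_nor_hw = 0.2 * np.cos(2e5 * t)       # fault-normal velocity, hanging wall
v_nor_fw = 0.1 * np.cos(2e5 * t)       # fault-normal velocity, foot wall

# Subtracting like components across the fault yields the relative motion of the two walls.
slip_rate = v_par_hw - v_par_fw        # sliding velocity across the fault
opening_rate = v_nor_hw - v_nor_fw     # separation velocity (detachment when positive)

print("peak slip rate:", np.abs(slip_rate).max())
print("peak opening rate:", np.abs(opening_rate).max())
```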

Relevance: 20.00%

Publisher:

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a specified cost functional. Under the assumption of a quadratic cost on the control input, it is well known that the HJB equation reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used: the PDE is of second order, it is nonlinear, and examples exist where the problem has no solution in the classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
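For reference, a standard form of this reduction, written in generic notation chosen here for illustration (not necessarily the notation of the thesis):

```latex
% Dynamics dx = f(x)\,dt + G(x)\,u\,dt + B(x)\,d\omega, running cost q(x) + \tfrac12 u^\top R u.
% Minimizing over u gives u^* = -R^{-1} G^\top \nabla_x V, and the value function V then
% satisfies the nonlinear, second-order HJB PDE
-\partial_t V = q + f^\top \nabla_x V
  - \tfrac12 (\nabla_x V)^\top G R^{-1} G^\top \nabla_x V
  + \tfrac12 \operatorname{tr}\!\left( B B^\top \nabla_{xx} V \right).
```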

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
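One common way this transformation is written is the desirability (log) transform from linearly solvable control; the sketch below is illustrative, and the exact assumptions in the thesis may differ:

```latex
% Assume the control and noise act through the same channels, \lambda\, G R^{-1} G^\top = B B^\top,
% and substitute V = -\lambda \log \Psi. The quadratic term in \nabla_x V then cancels against part
% of the diffusion term, and the HJB becomes linear in the desirability \Psi:
-\partial_t \Psi = -\frac{q}{\lambda}\,\Psi + f^\top \nabla_x \Psi
  + \tfrac12 \operatorname{tr}\!\left( B B^\top \nabla_{xx} \Psi \right).
```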

This is done by bringing together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
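Schematically, and only as an illustration of the kind of program involved (the objective, constraint directions, and boundary handling used in the thesis may differ), the SOS relaxation takes a form like:

```latex
% Candidate \Psi_\alpha(x) = \sum_i \alpha_i m_i(x) over a fixed monomial basis m_i(x).
% Because the transformed PDE is linear in \Psi, its residual is polynomial in x, and pointwise
% sign constraints on the residual can be relaxed to sum-of-squares (SOS) membership:
\max_{\alpha}\; \int_{\Omega} \Psi_\alpha(x)\, dx
\quad \text{s.t.} \quad
\mathcal{L}\Psi_\alpha(x) - \frac{q(x)}{\lambda}\,\Psi_\alpha(x) \in \Sigma[x],
% with \mathcal{L} the linear operator from the transformed HJB, \Sigma[x] the cone of SOS
% polynomials, and boundary conditions relaxed analogously. Increasing the polynomial degree
% yields the hierarchy of semidefinite relaxations with an improving sub-optimality gap.
```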

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as the quadcopter and VTOL aircraft. The technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.
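A minimal sketch of what a separated representation buys computationally (the sizes, rank, and factor arrays below are placeholders, not the construction used in the thesis):

```python
import numpy as np

# Separated (canonical low-rank) representation: a function on a d-dimensional grid is stored
# as a sum of R products of one-dimensional factors,
#   F(x_1, ..., x_d) ~= sum_r prod_k f_rk(x_k),
# so storage and evaluation scale like R * d * n instead of n**d.
d, n, R = 6, 64, 5                        # dimensions, grid points per dimension, separation rank
rng = np.random.default_rng(0)
factors = rng.standard_normal((R, d, n))  # f_rk sampled on the 1-D grids (stand-in values)

def evaluate(idx):
    """Evaluate the separated representation at a multi-index idx = (i_1, ..., i_d)."""
    terms = np.prod([factors[:, k, idx[k]] for k in range(d)], axis=0)  # shape (R,)
    return terms.sum()

print(evaluate((1, 2, 3, 4, 5, 6)))
# A full tensor would need n**d = 64**6 ~ 6.9e10 entries; this stores R*d*n = 1,920.
```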

The analysis of the linear HJB is completed by a study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB in these settings are available in simplified domains, providing guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
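A toy illustration of the composition enabled by linearity (the grid, primitive fields, and weights below are hypothetical placeholders, not the thesis's planner):

```python
import numpy as np

# Because the transformed HJB is linear in the desirability Psi, solutions superpose: if Psi_i
# solves the PDE with terminal condition phi_i, then sum_i w_i * Psi_i solves it with terminal
# condition sum_i w_i * phi_i. Primitives can therefore be precomputed once and composed per task.
x = np.linspace(-1.0, 1.0, 201)
psi_goal_left = np.exp(-((x + 0.5) ** 2) / 0.02)   # stand-in precomputed desirability fields
psi_goal_right = np.exp(-((x - 0.5) ** 2) / 0.02)

w = np.array([0.3, 0.7])                           # weights set by the new task's terminal cost
psi_composed = w[0] * psi_goal_left + w[1] * psi_goal_right

lam = 1.0
V = -lam * np.log(psi_composed + 1e-300)           # recover a value function from Psi
control_direction = -np.gradient(V, x)             # optimal control is proportional to -dV/dx here
print("composed value minimum near x =", x[np.argmin(V)])
```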

Relevance: 20.00%

Publisher:

Abstract:

This thesis brings together four papers on optimal resource allocation under uncertainty with capacity constraints. The first is an extension of the Arrow-Debreu contingent claim model to a good subject to supply uncertainty for which delivery capacity has to be chosen before the uncertainty is resolved. The second compares an ex-ante contingent claims market to a dynamic market in which capacity is chosen ex-ante and output and consumption decisions are made ex-post. The third extends the analysis to a storable good subject to random supply. Finally, the fourth examines optimal allocation of water under an appropriative rights system.

Relevance: 20.00%

Publisher:

Abstract:

Government procurement of a new good or service is a process that usually includes basic research, development, and production. Empirical evidence indicates that investments in research and development (R and D) before production are significant in many defense procurements. Thus, optimal procurement policy should not only select the most efficient producer but also induce the contractors to design the best product and to develop the best technology. The current economic theory of optimal procurement and contracting, which has emphasized production but ignored R and D, is difficult to apply to many cases of procurement.

In this thesis, I provide basic models of both R and D and production in the procurement process, where a number of firms invest in private R and D and compete for a government contract. R and D is modeled as a stochastic cost-reduction process. The government is considered both as a profit maximizer and as a procurement cost minimizer. In comparison to the literature, the following results derived from my models are significant. First, R and D matters in procurement contracting: when offering the optimal contract, the government is better off if it correctly takes into account costly private R and D investment. Second, competition matters: the optimal contract and the total equilibrium R and D expenditures vary with the number of firms, and the government usually does not prefer unlimited competition among firms; instead, it prefers free entry of firms. Third, under an R and D technology with constant marginal returns to scale, it is socially optimal to have only one firm conduct all of the R and D and production. Fourth, in an independent private values environment with risk-neutral firms, an informed government should select one of four standard auction procedures with an appropriate announced reserve price, acting as if it does not have any private information.

Relevance: 20.00%

Publisher:

Abstract:

The strength of materials at extreme pressures (>1 Mbar, or 100 GPa) and high strain rates (10^6-10^8 s^-1) is not well characterized. The goal of the research outlined in this thesis is to study the strength of tantalum (Ta) at these conditions. The Omega Laser at the Laboratory for Laser Energetics in Rochester, New York is used to create such extreme conditions. Targets are designed with ripples or waves on the surface, and these samples are subjected to high pressures using Omega's high-energy laser beams. In these experiments, the observational parameter is the Richtmyer-Meshkov (RM) instability, in the form of the growth of single-mode ripples. The experimental platform is the “ride-along” laser compression recovery experiment, which provides a way to recover specimens after they have been subjected to high pressures. Six experiments are performed on the Omega laser using single-mode tantalum targets at different laser energies, where the energy indicates the amount of laser energy that impinges on the target. For each target, values for the growth factor are obtained by comparing the ripple profile before and after the experiment. With increasing energy, the growth factor increases.
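A rough sketch of how a growth factor could be extracted from measured ripple profiles (the profiles, wavelength, and the Fourier-projection definition of amplitude are assumptions for illustration, not the thesis's procedure):

```python
import numpy as np

def ripple_amplitude(profile, wavelength, dx):
    """Amplitude of a single-mode ripple, taken as the magnitude of the Fourier
    component at the ripple wavelength (profile = surface height sampled every dx)."""
    n = len(profile)
    x = np.arange(n) * dx
    k = 2.0 * np.pi / wavelength
    c = np.sum(profile * np.cos(k * x)) * dx
    s = np.sum(profile * np.sin(k * x)) * dx
    return 2.0 * np.hypot(c, s) / (n * dx)

# Hypothetical pre- and post-shot profiles (micrometers); stand-ins, not measured data.
dx, wavelength = 1.0, 50.0
x = np.arange(0.0, 500.0, dx)
profile_before = 1.0 * np.sin(2.0 * np.pi * x / wavelength)
profile_after = 3.2 * np.sin(2.0 * np.pi * x / wavelength)

growth_factor = (ripple_amplitude(profile_after, wavelength, dx)
                 / ripple_amplitude(profile_before, wavelength, dx))
print(f"growth factor ~ {growth_factor:.2f}")   # ~3.2 for these stand-in profiles
```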

Engineering simulations are used to interpret the measurements and to correlate the measured growth factors to a measure of strength. To validate the engineering constitutive model for tantalum, a series of simulations is performed using the code Eureka, based on the Optimal Transportation Meshfree (OTM) method. Two configurations are studied in the simulations: RM instabilities in single-mode and multimode ripples. Six simulations are performed for the single-ripple configuration of the RM instability experiment, with drives corresponding to the laser energies used in the experiments. Each successive simulation is performed at a higher drive energy, and it is observed that the growth factor increases with increasing energy. Overall, there is favorable agreement between the simulations and the experiments: the peak growth factors agree to within 10%. For the multimode simulations, the goal is to assist in the design of the laser-driven experiments on the Omega laser. A series of three-mode and four-mode patterns is simulated at various energies and the resulting growth of the RM instability is computed. Based on the results of these simulations, a configuration is selected for the multimode experiments. The simulations also serve as validation for the constitutive model and the material parameters for tantalum used in the simulations.

By designing samples with initial perturbations in the form of single-mode and multimode ripples and subjecting these samples to high pressures, the Richtmyer-Meshkov instability is investigated in both laser compression experiments and simulations. By correlating the growth of these ripples to measures of strength, a better understanding of the strength of tantalum at high pressures is achieved.

Relevance: 20.00%

Publisher:

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario in which the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution. This optimal training distribution depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the effect of weighting on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm, which determines, in a practical setting, whether the out-of-sample performance will improve for a given set of weights. This is necessary because the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.
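For orientation, a generic covariate-shift reweighting sketch (standard importance weighting toward a target distribution; the dual distribution itself and the Targeted Weighting test are specific to the thesis and are not reproduced here):

```python
import numpy as np

# Standard importance weighting under covariate shift: weight each training point by
#   w(x) = p_target(x) / p_train(x),
# so weighted averages over the training sample estimate expectations under the target
# distribution. The Gaussian densities, clipping threshold, and toy weighted fit below
# are illustrative assumptions.
rng = np.random.default_rng(1)
x_train = rng.normal(loc=0.0, scale=1.0, size=500)           # inputs drawn from p_train
y_train = 2.0 * x_train + rng.normal(scale=0.3, size=500)    # stand-in targets

p_train = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
p_target = lambda x: np.exp(-0.5 * (x - 0.5) ** 2) / np.sqrt(2.0 * np.pi)

w = p_target(x_train) / p_train(x_train)
w = np.minimum(w, 10.0)          # clip extreme weights, which inflate variance
w *= len(w) / w.sum()            # renormalize so the effective sample size stays comparable

# Weighted least-squares fit y ~ a*x + b as a stand-in learning algorithm.
sw = np.sqrt(w)
A = np.vstack([x_train, np.ones_like(x_train)]).T
coef, *_ = np.linalg.lstsq(A * sw[:, None], y_train * sw, rcond=None)
print("weighted fit (slope, intercept):", coef)
```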

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset, the Netflix dataset. Their favorable computational complexity is the main source of their advantage over previous algorithms proposed in the covariate shift literature.
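As a point of contrast only, a naive resampling baseline for matching a sample to a target distribution (not the matching algorithms developed in the thesis, and not their scaling behavior):

```python
import numpy as np

# Naive baseline: resample training points with probability proportional to the density
# ratio p_target / p_train, so the matched subsample approximately follows the target
# distribution. All densities and sizes below are illustrative.
def match_sample(x_train, density_ratio, size, rng):
    p = density_ratio(x_train)
    p = p / p.sum()
    idx = rng.choice(len(x_train), size=size, replace=True, p=p)
    return x_train[idx]

rng = np.random.default_rng(2)
x_train = rng.normal(0.0, 1.0, size=10_000)          # sample from p_train = N(0, 1)
density_ratio = lambda x: np.exp(0.5 * x - 0.125)    # N(0.5, 1) density over N(0, 1) density
matched = match_sample(x_train, density_ratio, size=2_000, rng=rng)
print("matched sample mean:", matched.mean())        # ~0.5 if the matching worked
```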

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a behavior classifier to study fly aggression, and we develop a system that analyzes behavior in videos of animals with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the positions of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing a means to discriminate between groups of animals, for example according to their genetic line.

Relevance: 20.00%

Publisher:

Abstract:

H. J. Kushner has obtained the differential equation satisfied by the optimal feedback control law for a stochastic control system in which the plant dynamics and observations are perturbed by independent additive Gaussian white noise processes. However, this equation involves the first and second functional derivatives and, except for a restricted class of systems, is too complex to solve with present techniques.

This investigation studies the optimal control law for the open loop system and incorporates it in a sub-optimal feedback control law. The performance of this sub-optimal control law is at least as good as that of the optimal control function, and the law satisfies a differential equation involving only the first functional derivative. Solving this equation is equivalent to solving two two-point boundary-value integro-partial differential equations. An approximate solution has advantages over the conventional approximate solution of Kushner's equation.

As a result of this study, well known results of deterministic optimal control are deduced from the analysis of optimal open loop control.