9 results for Robust estimates

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

We aim to characterize fault slip behavior during all stages of the seismic cycle in subduction megathrust environments, with the eventual goal of understanding temporal and spatial variations of fault zone rheology, inferring possible causal relationships between inter-, co- and post-seismic slip, and drawing out implications for earthquake and tsunami hazard. In particular, we focus on analyzing aseismic deformation occurring during the inter-seismic and post-seismic periods of the seismic cycle. We approach the problem using both Bayesian and optimization techniques. The Bayesian approach allows us to characterize the model parameter space completely through a posteriori estimates of the range of allowable models, to incorporate any kind of physically plausible a priori information, and to perform the inversion without regularization other than that imposed by the parameterization of the model. However, the Bayesian approach is computationally expensive and not currently viable for quick-response scenarios. Therefore, we also pursue improvements in the optimization inference scheme. We present a novel, robust and yet simple regularization technique that allows us to infer stable and somewhat more detailed models of slip on faults. We apply these methodologies, using simple quasi-static elastic models, to studies of inter-seismic deformation in the Central Andes subduction zone and of post-seismic deformation induced by the 2011 Mw 9.0 Tohoku-Oki earthquake in Japan. For the Central Andes, we present estimates of the apparent coupling probability of the subduction interface and analyze its relationship to past earthquakes in the region. For Japan, we infer high spatial variability in the material properties of the megathrust offshore Tohoku. We discuss the potential for a large earthquake just south of the Tohoku-Oki rupture, where our inferences suggest dominantly aseismic behavior.
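Conceptually, the optimization branch of this approach reduces to a regularized least-squares slip inversion. The following is a minimal sketch, assuming a linear elastic forward model d = Gm and plain Tikhonov smoothing; it is not the regularization technique developed in the thesis, and all names and data are illustrative.

    import numpy as np

    # Solve min ||G m - d||^2 + lam^2 ||L m||^2 via an augmented
    # least-squares system. G maps slip on fault patches to surface
    # displacements; L is a smoothing operator.
    def invert_slip(G, d, L, lam):
        A = np.vstack([G, lam * L])
        b = np.concatenate([d, np.zeros(L.shape[0])])
        m, *_ = np.linalg.lstsq(A, b, rcond=None)
        return m

    rng = np.random.default_rng(0)
    n_obs, n_patch = 40, 20
    G = rng.normal(size=(n_obs, n_patch))             # synthetic Green's functions
    m_true = np.maximum(rng.normal(size=n_patch), 0)  # non-negative "true" slip
    d = G @ m_true + 0.05 * rng.normal(size=n_obs)    # noisy synthetic data
    m_est = invert_slip(G, d, np.eye(n_patch), lam=1.0)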

Relevance:

20.00%

Publisher:

Abstract:

The two most important digital-system design goals today are to reduce power consumption and to increase reliability. Reductions in power consumption improve battery life in the mobile space and reductions in energy lower operating costs in the datacenter. Increased robustness and reliability shorten down time, improve yield, and are invaluable in the context of safety-critical systems. While optimizing towards these two goals is important at all design levels, optimizations at the circuit level have the furthest-reaching effects; they apply to all digital systems. This dissertation presents a study of robust minimum-energy digital circuit design and analysis. It introduces new device models, metrics, and methods of calculation—all necessary first steps towards building better systems—and demonstrates how to apply these techniques. It analyzes a fabricated chip (a full-custom QDI microcontroller designed at Caltech and taped-out in 40-nm silicon) by calculating the minimum-energy operating point and quantifying the chip’s robustness in the face of both timing and functional failures.
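For intuition, the minimum-energy operating point can be illustrated with a textbook energy model in which dynamic energy falls and leakage energy rises as the supply voltage is lowered. The sketch below uses an alpha-power-law delay model with illustrative parameters; it is not one of the device models developed in the thesis.

    import numpy as np

    C_eff = 1e-12    # effective switched capacitance per operation (F)
    I_leak = 1e-9    # nominal leakage current (A)
    V_th = 0.3       # threshold voltage (V)
    alpha = 1.5      # velocity-saturation exponent

    def energy_per_op(V):
        delay = V / (V - V_th) ** alpha   # alpha-power delay model (arb. units)
        dynamic = C_eff * V ** 2          # switching energy
        leakage = I_leak * V * delay      # leakage energy over one cycle
        return dynamic + leakage

    V = np.linspace(0.35, 1.2, 500)
    E = np.array([energy_per_op(v) for v in V])
    print(f"minimum-energy supply: {V[E.argmin()]:.2f} V")

Lowering the supply voltage shrinks the CV^2 term but stretches the cycle time, so leakage energy eventually dominates; the minimum-energy point sits at the crossover.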

Relevance:

20.00%

Publisher:

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies, which do not use probability information for the model and input uncertainty sets and therefore yield only the guaranteed (i.e., "worst-case") system performance, with no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model and then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data become available from a controlled structure, Bayes's Theorem can be used to update the probability distribution over the set of possible models, and hence the system's probable performance. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
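Schematically, the robust performance is the integral of the conditional failure probability over the model probability distribution, P(F) = ∫ P(F|θ) p(θ) dθ. The sketch below evaluates this by plain Monte Carlo rather than the asymptotic approximation used in the thesis, and the conditional failure probability is a hypothetical stand-in.

    import numpy as np

    rng = np.random.default_rng(1)

    def failure_prob_given_model(theta):
        # hypothetical conditional failure probability for model parameter theta
        return 1.0 / (1.0 + np.exp(-(theta - 1.0)))

    thetas = rng.normal(loc=0.8, scale=0.2, size=100_000)  # prior over models
    print("robust P(F) ~", failure_prob_given_model(thetas).mean())

A Bayesian update with response data would replace the prior samples with posterior ones (or reweight them by the data likelihood) and repeat the same average.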

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers for the same benchmark system based on other approaches. The second application is to the Caltech Flexible Structure, a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance:

20.00%

Publisher:

Abstract:

The Supreme Court’s decision in Shelby County has severely limited the power of the Voting Rights Act. I argue that Congressional attempts to pass a new coverage formula are unlikely to gain the necessary Republican support. Instead, I propose a new strategy that takes a “carrot and stick” approach. As the stick, I suggest amending Section 3 to eliminate the need to prove that discrimination was intentional. For the carrot, I envision a competitive grant program similar to the highly successful Race to the Top education grants. I argue that this plan could pass the currently divided Congress.

Without Congressional action, Section 2 is more important than ever before. A successful Section 2 suit requires evidence that voting in the jurisdiction is racially polarized. Accurately and objectively assessing the level of polarization has been and continues to be a challenge for experts. Existing ecological inference methods require estimating polarization levels in individual elections. This is a problem because the Courts want to see a history of polarization across elections.

I propose a new 2-step method to estimate racially polarized voting in a multi-election context. The procedure builds upon the Rosen, Jiang, King, and Tanner (2001) multinomial-Dirichlet model. After obtaining election-specific estimates, I suggest regressing those results on election-specific variables, namely candidate quality, incumbency, and ethnicity of the minority candidate of choice. This allows researchers to estimate the baseline level of support for candidates of choice and test whether the ethnicity of the candidates affected how voters cast their ballots.
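A minimal sketch of the proposed second stage, assuming the election-specific support estimates from the first-stage multinomial-Dirichlet model are already in hand; all data and variable names here are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    n_elections = 30
    quality = rng.normal(size=n_elections)           # candidate quality
    incumbent = rng.integers(0, 2, size=n_elections)
    coethnic = rng.integers(0, 2, size=n_elections)  # minority candidate of choice

    # stand-in for first-stage ecological estimates of minority support
    support = (0.60 + 0.05 * quality + 0.03 * incumbent + 0.10 * coethnic
               + rng.normal(scale=0.02, size=n_elections))

    X = np.column_stack([np.ones(n_elections), quality, incumbent, coethnic])
    beta, *_ = np.linalg.lstsq(X, support, rcond=None)
    print("baseline, quality, incumbency, coethnicity:", beta.round(3))

The intercept estimates the baseline support for candidates of choice, and the coethnicity coefficient tests whether the candidate's ethnicity shifted how voters cast their ballots.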

Relevance:

20.00%

Publisher:

Abstract:

Credible source models from past large-magnitude earthquakes are scarce, so a stochastic source model generation algorithm becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.9 earthquake with those from a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.
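Purely for illustration, one common ingredient of stochastic source generation is a spatially correlated random slip field obtained by spectral filtering of white noise. The sketch below is not the thesis's algorithm, which additionally constrains sources with laboratory rupture physics and field stress estimates.

    import numpy as np

    rng = np.random.default_rng(3)
    nx, nz = 256, 64                  # along-strike, down-dip grid cells
    kx = np.fft.fftfreq(nx)[:, None]
    kz = np.fft.fftfreq(nz)[None, :]
    corr_len = 20.0                   # correlation length in grid cells
    power = (1.0 + corr_len**2 * (kx**2 + kz**2)) ** -1.0  # von Karman-like
    noise = np.fft.fft2(rng.normal(size=(nx, nz)))
    slip = np.real(np.fft.ifft2(noise * np.sqrt(power)))
    slip = np.maximum(slip - slip.mean(), 0.0)  # keep positive-slip asperities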

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault, with two unilateral rupture propagation directions considered for each. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings, hypothetically located at each of the 636 sites, under three-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next 30 years is evaluated.
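The last step can be read as folding per-event fragility into the 30-year rupture probabilities. A toy version with synthetic numbers follows; the study itself uses 60 scenario events with probabilities from the USGS forecast and fragilities from the nonlinear time-history analyses.

    import numpy as np

    rng = np.random.default_rng(4)
    p_event = rng.dirichlet(np.ones(60)) * 0.4     # mock 30-yr event probabilities
    p_cp_given_event = rng.uniform(0.0, 0.3, 60)   # mock P(response > CP | event)

    # treating the scenario events as mutually exclusive:
    p_cp_30yr = np.sum(p_event * p_cp_given_event)
    print(f"P(exceed CP in 30 yr) ~ {p_cp_30yr:.3f}")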

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites 55-75 km from the source, from the suite of 60 earthquakes (Mw 6.0-8.0) rupturing primarily the mid-section of the San Andreas fault, provide the PGV and PGD data.

Relevance:

20.00%

Publisher:

Abstract:

This work is concerned with estimating the upper envelopes S* of the absolute values of the partial sums of rearranged trigonometric sums. A. M. Garsia [Annals of Math. 79 (1964), 634-9] gave an estimate for the L2 norms of the S*, averaged over all rearrangements of the original (finite) sum. This estimate enabled him to prove that the Fourier series of any function in L2 can be rearranged so that it converges a.e. The main result of this thesis is a similar estimate of the Lq norms of the S*, for all even integers q. This holds for finite linear combinations of functions satisfying a condition that generalizes orthonormality in the L2 case. The estimate for finite sums is extended to Fourier series of Lq functions; it is shown that there are functions to which the Men’shov-Paley Theorem does not apply, but whose Fourier series can nevertheless be rearranged so that the S* of the rearranged series is in Lq.
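For concreteness, the envelope in question can be written, in one standard formulation (the thesis's exact notation may differ), as

    S^*_\sigma(x) \;=\; \sup_{1 \le m \le n} \Bigl|\, \sum_{k=1}^{m} a_{\sigma(k)}\, \varphi_{\sigma(k)}(x) \Bigr|,

where σ is a permutation of {1, ..., n}; the estimates discussed above bound the Lq norms of S*_σ averaged over all permutations σ.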

Relevance:

20.00%

Publisher:

Abstract:

The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that result in variation; genetic information is inherited through asexual reproduction; and strong selective pressures such as therapeutic intervention can lead to the adaptation and expansion of resistant variants. These principles lie at the center of modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework that allows for more effective control.

A model system for studying the evolution of resistance and the control of therapeutic failure is the treatment of chronic HIV-1 infection with broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have shown that antibody treatment can exert selective pressure on the virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred by a single amino acid substitution on the viral envelope of HIV.

The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?

To address the first question, we propose a mathematical framework for reasoning about the evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes, expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright into a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.
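As one illustration of how a computed energy landscape can induce a fitness surface (this particular Boltzmann-type mapping is an assumption, not necessarily the one adopted in the thesis):

    import numpy as np

    kT = 0.593                        # kcal/mol at roughly 298 K
    dG = np.array([-2.1, -1.4, 0.3])  # hypothetical Gibbs energies of 3 variants
    fitness = np.exp(-dG / kT)        # lower energy -> higher weight
    fitness /= fitness.sum()          # relative fitness of the variants
    print(fitness.round(3))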

To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies and allows tradeoffs in treatment design to be explored quantifiably, such as limits on the number of candidate therapies in the combination, dosage constraints, and robustness to error. Our algorithm applies recent results in optimal control to an HIV evolutionary dynamics model and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step towards integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and it offers a predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations despite the presence of resistance mutations.
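A minimal sketch of the combination-design idea: choose antibody concentrations, subject to dosage bounds, that minimize the unneutralized fraction of the worst surviving variant. It assumes Hill-type single-antibody pharmacodynamics and Bliss-independent combination, and all IC50 values are invented; the thesis's full optimal-control formulation over the evolutionary dynamics is not reproduced here.

    import numpy as np
    from scipy.optimize import minimize

    ic50 = np.array([[0.1, 5.0],   # rows: viral variants
                     [8.0, 0.2],   # cols: antibodies (ug/ml)
                     [0.5, 0.6]])
    hill = 1.0

    def fraction_unneutralized(conc):
        f = 1.0 / (1.0 + (conc / ic50) ** hill)  # per-antibody unaffected fraction
        return f.prod(axis=1)                    # Bliss independence, per variant

    def objective(conc):
        return fraction_unneutralized(conc).max()  # worst surviving variant

    res = minimize(objective, x0=[1.0, 1.0], bounds=[(0.0, 10.0)] * 2)
    print("concentrations:", res.x.round(2))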

Relevance:

20.00%

Publisher:

Abstract:

The problem motivating this investigation is that of pure axisymmetric torsion of an elastic shell of revolution. The analysis is carried out within the framework of the three-dimensional linear theory of elastic equilibrium for homogeneous, isotropic solids. The objective is the rigorous estimation of errors involved in the use of approximations based on thin shell theory.

The underlying boundary-value problem is one of Neumann type for a second-order elliptic operator. A systematic procedure for constructing pointwise estimates for the solution and its first derivatives is given for a general class of second-order elliptic boundary-value problems which includes the torsion problem as a special case.

The method used here rests on the construction of “energy inequalities” and on the subsequent deduction of pointwise estimates from the energy inequalities. This method removes certain drawbacks characteristic of pointwise estimates derived in some investigations of related areas.
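Schematically, and in illustrative notation rather than the thesis's, the method first bounds an energy integral by the prescribed boundary data and then converts that bound into pointwise control:

    \int_{\Omega} |\nabla u|^{2}\, dV \;\le\; C\, \|g\|^{2}_{L^{2}(\partial\Omega)},
    \qquad
    |\nabla u(x)| \;\le\; K(x) \left( \int_{\Omega} |\nabla u|^{2}\, dV \right)^{1/2},

where g is the Neumann data and K(x) depends on the local geometry of the shell.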

Special interest is directed towards thin shells of constant thickness. The method enables us to estimate the error involved in a stress analysis in which the exact solution is replaced by an approximate one, and thus provides us with a means of assessing the quality of approximate solutions for axisymmetric torsion of thin shells.

Finally, the results of the present study are applied to the stress analysis of a circular cylindrical shell, and the quality of the stress estimates derived here is compared with that of estimates from a previous related publication.

Relevance:

20.00%

Publisher:

Abstract:

The experimental portion of this thesis estimates the power spectral density of very-low-frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts. It is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, although α appears to be about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to reduce by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. It is demonstrated that, if the sources have similar spectra, this averaging reduces the necessary data-taking time by a factor of 10 for a given accuracy.
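The factor of 10 follows from the variance of an average of independent, identically distributed estimates being 1/10 of the single-source variance, as the toy computation below confirms (the chi-squared form mocks the distribution of raw periodogram bins; it is an assumption for illustration).

    import numpy as np

    rng = np.random.default_rng(5)
    single = rng.chisquare(df=2, size=100_000) / 2            # one source
    multi = rng.chisquare(df=2, size=(100_000, 10)).mean(axis=1) / 2
    print(single.var(), multi.var())  # the second is about 10x smaller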

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power-supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain these data is included.

The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman-Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data-processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are its implications for the noise's statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman-Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
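As a point of reference for the algorithmic discussion above, here is a minimal sketch of a Blackman-Tukey estimate: window the sample autocovariance, then Fourier-transform it. Parameters and window choice are illustrative only.

    import numpy as np

    def blackman_tukey_psd(x, max_lag):
        x = np.asarray(x, dtype=float) - np.mean(x)
        n = len(x)
        # biased sample autocovariance at lags 0..max_lag
        acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
        window = np.hanning(2 * max_lag + 1)[max_lag:]  # one-sided lag window
        r = acov * window
        r_full = np.concatenate([r[::-1], r[1:]])       # symmetric extension
        psd = np.abs(np.fft.rfft(r_full))               # spectral density estimate
        freqs = np.fft.rfftfreq(len(r_full))
        return freqs, psd

    rng = np.random.default_rng(6)
    freqs, psd = blackman_tukey_psd(rng.normal(size=4096), max_lag=256)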