15 results for Quit Attempt Methods in CaltechTHESIS
Abstract:
Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are a very limited number of moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the Sine-Gordon equation are presented.
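For context, the "user-specified adaptation strategy" mentioned above is most commonly expressed as an equidistribution constraint with respect to a monitor function; the following is a generic illustration of that idea, not necessarily the exact constraint used in the thesis:
\[
\int_{x_i(t)}^{x_{i+1}(t)} \rho\bigl(x, u(x,t)\bigr)\,dx \;=\; \frac{1}{N}\int_{a}^{b} \rho\bigl(x, u(x,t)\bigr)\,dx, \qquad i = 0,\dots,N-1,
\]
where $\rho$ is a monitor function (for example the arc-length density $\rho = \sqrt{1+u_x^2}$) chosen so that the mesh points $x_i(t)$ concentrate where the solution $u$ varies rapidly.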
In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.
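As a concrete illustration of the class of degenerate systems described above (generic symbols, written here for exposition only), a Lagrangian linear in the velocities has the form
\[
L(q,\dot q) = \alpha(q)^{T}\dot q - H(q), \qquad
\bigl(D\alpha(q)^{T} - D\alpha(q)\bigr)\,\dot q = \nabla H(q),
\]
where the second expression is the resulting first-order Euler-Lagrange equation. When $\alpha$ is linear in $q$, the skew-symmetric matrix $\Omega = D\alpha^{T} - D\alpha$ is constant and, if invertible, $\dot q = \Omega^{-1}\nabla H(q)$ is a Poisson system with constant structure matrix, matching the case distinguished in the abstract.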
Abstract:
A means of assessing the effectiveness of methods used in the numerical solution of various linear ill-posed problems is outlined. Two methods, Tikhonov's method of regularization and the quasi-reversibility method of Lattès and Lions, are appraised from this point of view.
In the former method, Tikhonov provides a useful means for incorporating a constraint into numerical algorithms. The analysis suggests that the approach can be generalized to embody constraints other than those employed by Tikhonov. This is effected and the general "T-method" is the result.
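For readers unfamiliar with the baseline being generalized, a minimal numerical sketch of standard Tikhonov regularization for a discretized linear ill-posed problem Ax ≈ b is given below. It illustrates only Tikhonov's original quadratic constraint, not the thesis's generalized T-method; the toy operator, the regularization parameter, and all names are placeholders.

```python
import numpy as np

def tikhonov_solve(A, b, lam, L=None):
    """Solve min_x ||A x - b||^2 + lam * ||L x||^2 via the normal equations.

    A   : (m, n) forward operator of the discretized ill-posed problem
    b   : (m,)   noisy data
    lam : regularization parameter (> 0)
    L   : (k, n) regularization operator; identity if None (classical Tikhonov)
    """
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    # Normal equations: (A^T A + lam * L^T L) x = A^T b
    lhs = A.T @ A + lam * (L.T @ L)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy ill-conditioned problem: a smoothing (Gaussian convolution-like) operator.
    n = 50
    t = np.linspace(0.0, 1.0, n)
    A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.001) / n
    x_true = np.sin(2 * np.pi * t)
    b = A @ x_true + 1e-4 * rng.standard_normal(n)
    x_naive, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution without explicit regularization
    x_reg = tikhonov_solve(A, b, lam=1e-6)            # Tikhonov-regularized solution
    print("condition number of A:", np.linalg.cond(A))
    print("unregularized error:  ", np.linalg.norm(x_naive - x_true))
    print("regularized error:    ", np.linalg.norm(x_reg - x_true))
```

Replacing the identity with another operator L (a derivative matrix, say) is the kind of substitution the generalized T-method formalizes.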
A T-method is used on an extended version of the backwards heat equation with spatially variable coefficients. Numerical computations based upon it are performed.
The statistical method developed by Franklin is shown to have an interpretation as a T-method. This interpretation, although somewhat loose, does explain some empirical convergence properties which are difficult to pin down via a purely statistical argument.
Abstract:
There is a growing amount of experimental evidence that suggests people often deviate from the predictions of game theory. Some scholars attempt to explain the observations by introducing errors into behavioral models. However, most of these modifications are situation dependent and do not generalize. A new theory, called the rational novice model, is introduced as an attempt to provide a general theory that takes account of erroneous behavior. The rational novice model is based on two central principles. The first is that people systematically make inaccurate guesses when they are evaluating their options in a game-like situation. The second is that people treat their decisions as a portfolio problem. As a result, non-optimal actions in a game theoretic sense may be included in the rational novice strategy profile with positive weights.
The rational novice model can be divided into two parts: the behavioral model and the equilibrium concept. In a theoretical chapter, the mathematics of the behavioral model and the equilibrium concept are introduced. The existence of the equilibrium is established. In addition, the Nash equilibrium is shown to be a special case of the rational novice equilibrium. In another chapter, the rational novice model is applied to a voluntary contribution game. Numerical methods were used to obtain the solution. The model is estimated with data obtained from the Palfrey and Prisbrey experimental study of the voluntary contribution game. It is found that the rational novice model explains the data better than the Nash model. Although a formal statistical test was not used, pseudo R^2 analysis indicates that the rational novice model is better than a Probit model similar to the one used in the Palfrey and Prisbrey study.
The rational novice model is also applied to a first price sealed bid auction. Again, computing techniques were used to obtain a numerical solution. The data obtained from the Chen and Plott study were used to estimate the model. The rational novice model outperforms the CRRAM, the primary Nash model studied in the Chen and Plott study. However, the rational novice model is not the best amongst all models. A sophisticated rule-of-thumb, called the SOPAM, offers the best explanation of the data.
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be an incidence matrix with edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
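For reference, the Smith normal form and diagonal form referred to here are the standard notions: for an integer matrix $N$ there exist unimodular integer matrices $U$ and $V$ such that
\[
U N V = \operatorname{diag}(d_1, d_2, \dots, d_r, 0, \dots, 0), \qquad d_1 \mid d_2 \mid \cdots \mid d_r,
\]
and the invariant factors $d_i$ determine, for example, the cokernel $\mathbb{Z}^m / \operatorname{col}(N) \cong \mathbb{Z}/d_1 \oplus \cdots \oplus \mathbb{Z}/d_r \oplus \mathbb{Z}^{m-r}$. A diagonal form is any diagonal matrix obtainable from $N$ by unimodular row and column operations, without the divisibility requirement.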
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
This thesis presents two different forms of the Born approximations for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small amplitude heterogeneities superimposed over a slowly varying background. The first method is related to frequency-wavenumber migration methods. It is shown to properly recover two independent acoustic parameters within the bandpass of the source time function of the experiment for contrasts of about 5 percent from data generated using an exact theory for flat interfaces. The independent determination of two parameters is shown to depend on the angle coverage of the medium. For surface data, the impedance profile is well recovered.
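For orientation, the constant-density acoustic version of the first-order Born approximation has the familiar integral form below; this is a simplified one-parameter sketch, whereas the thesis works with two independent parameters (e.g. density and bulk modulus perturbations). Writing the squared slowness as $1/c^2(\mathbf{x}) = \bigl(1 + m(\mathbf{x})\bigr)/c_0^2$ with $|m| \ll 1$ and a constant background $c_0$,
\[
u_{\mathrm{sc}}(\mathbf{r}, \omega) \;\approx\; \frac{\omega^2}{c_0^2} \int G_0(\mathbf{r}, \mathbf{x}, \omega)\, m(\mathbf{x})\, u_0(\mathbf{x}, \omega)\, d\mathbf{x},
\]
where $G_0$ is the background Green's function and $u_0$ the incident field. The scattered field is linear in the perturbation $m$, which is what makes the linearized, migration-like inversion described here possible.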
The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters obtained after applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a before-stack Kirchhoff migration and is therefore readily applicable to seismic data. Several examples of reconstruction for simple point scatterer models are performed. Recovery of the amplitudes of the anomalies is improved with successive iterations. Iterations also improve the sharpness of the images.
The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter in the sum, corresponding to P-P, P-S, S-P and S-S pairs of incident-scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers. Scattered rays propagate as if in the background medium, with no interaction with the scatterers. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.
Abstract:
Homologous recombination is a source of diversity in both natural and directed evolution. Standing genetic variation that has passed the test of natural selection is combined in new ways, generating functional and sometimes unexpected changes. In this work we evaluate the utility of homologous recombination as a protein engineering tool, both in comparison with and combined with other protein engineering techniques, and apply it to an industrially important enzyme: Hypocrea jecorina Cel5a.
Chapter 1 reviews work over the last five years on protein engineering by recombination. Chapter 2 describes the recombination of Hypocrea jecorina Cel5a endoglucanase with homologous enzymes in order to improve its activity at high temperatures. A chimeric Cel5a that is 10.1 °C more stable than wild-type and hydrolyzes 25% more cellulose at elevated temperatures is reported. Chapter 3 describes an investigation into the synergy of thermostable cellulases that have been engineered by recombination and other methods. An engineered endoglucanase and two engineered cellobiohydrolases synergistically hydrolyzed cellulose at high temperatures, releasing over 200% more reducing sugars over 60 h at their optimal mixture relative to the best mixture of wild-type enzymes. These results provide a framework for engineering cellulolytic enzyme mixtures for the industrial conditions of high temperatures and long incubation times.
In addition to this work on recombination, we explored three other problems in protein engineering. Chapter 4 describes an investigation into replacing enzymes with complex cofactors with simple cofactors, using an E. coli enolase as a model system. Chapter 5 describes engineering broad-spectrum aldehyde resistance in Saccharomyces cerevisiae by evolving an alcohol dehydrogenase simultaneously for activity and promiscuity. Chapter 6 describes an attempt to engineer gene-targeted hypermutagenesis into E. coli to facilitate continuous in vivo selection systems.
Abstract:
Preface: The main goal of this work is to give an introductory account of sieve methods that would be understandable with only a slight knowledge of analytic number theory. These notes are based to a large extent on lectures on sieve methods given by Professor Van Lint and the author in a number theory seminar during the 1970-71 academic year, but rather extensive changes have been made in both the content and the presentation...
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
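One standard frequency-domain attenuation of the kind described above ("noise filters" that scale each harmonic by its signal-to-noise ratio) is a Wiener-type weighting; this is offered as an illustrative form, not necessarily the exact filter derived in the thesis:
\[
\hat{X}(\omega) = \frac{\mathrm{SNR}(\omega)}{1 + \mathrm{SNR}(\omega)}\, Y(\omega),
\qquad \mathrm{SNR}(\omega) = \frac{|S(\omega)|^2}{|N(\omega)|^2},
\]
where $Y$ is the spectrum of the digitized record and $S$, $N$ are signal and noise power estimates, so harmonics with low signal-to-noise ratio are attenuated rather than removed.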
Abstract:
Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In the thesis, we propose some efficient numerical methods for both deterministic and stochastic PDEs based on the model reduction technique.
For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part of which the magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth, and could be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation plus a correction term is close to the original multiscale solution.
For the stochastic PDEs, we propose the model reduction based data-driven stochastic method and multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For the tougher problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
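The KL expansion has the standard form
\[
a(x, \omega) = \bar{a}(x) + \sum_{i=1}^{\infty} \sqrt{\lambda_i}\, \phi_i(x)\, \xi_i(\omega),
\]
where $(\lambda_i, \phi_i)$ are the eigenpairs of the covariance operator of the random field $a$ and the $\xi_i$ are uncorrelated random variables with zero mean and unit variance; truncating the sum after the dominant terms is what "extracting the main parts" of a stochastic quantity amounts to.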
For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the reduction in computational cost in the numerical examples.
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic and radiation-resistance. Some foams contain nano-sized features and are used in small-scale devices. This implies that the characteristic dimensions of foams span multiple length scales, rendering modeling their mechanical properties difficult. Continuum mechanics-based models capture some salient experimental features like the linear elastic regime, followed by non-linear plateau stress regime. However, they lack mesostructural physical details. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which is capable of describing the phenomenon at a continuum level, but with some physical insights, requires developing new theoretical approaches.
A fundamental question that motivates the modeling of foams is ‘how to extract the intrinsic material response from simple mechanical test data, such as stress vs. strain response?’ A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as the hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, “Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes,” J. Mech. Phys. Solids, 59, pp. 2227–2237, Erratum 60, 1753–1756 (2012)], the property space exploration was advanced to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, like
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of their efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Out of all deformation paths, flat-punch indentation proved to be superior since it is the most sensitive in capturing the material properties.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders of magnitude speedup over other methods.
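The greedy adaptive loop such a design implies is sketched below. This is a generic Bayesian sequential-design skeleton with a pluggable scoring criterion (plain expected information gain is used as a stand-in; the thesis's EC2 criterion and its accelerated greedy implementation are not reproduced), and all names and the toy data are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (zero entries ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_information_gain(posterior, likelihoods):
    """Score each candidate test; likelihoods[k, t, r] = P(response r | theory k, test t)."""
    n_theories, n_tests, n_resp = likelihoods.shape
    gains = np.zeros(n_tests)
    h_prior = entropy(posterior)
    for t in range(n_tests):
        gain = 0.0
        for r in range(n_resp):
            p_r = np.dot(posterior, likelihoods[:, t, r])    # marginal probability of response r
            if p_r == 0:
                continue
            post_r = posterior * likelihoods[:, t, r] / p_r  # posterior if r were observed
            gain += p_r * (h_prior - entropy(post_r))
        gains[t] = gain
    return gains

def run_adaptive_design(posterior, likelihoods, ask_subject, n_rounds):
    """Greedy sequential design: pick the highest-scoring test, observe, Bayes-update."""
    for _ in range(n_rounds):
        scores = expected_information_gain(posterior, likelihoods)
        t = int(np.argmax(scores))        # most informative test under current beliefs
        r = ask_subject(t)                # subject's observed response index
        posterior = posterior * likelihoods[:, t, r]
        posterior /= posterior.sum()      # update beliefs over theories
    return posterior

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lik = rng.dirichlet(np.ones(2), size=(3, 5))   # 3 toy theories, 5 tests, 2 responses
    true_theory = 0
    subject = lambda t: rng.choice(2, p=lik[true_theory, t])
    prior = np.full(3, 1.0 / 3.0)
    print(run_adaptive_design(prior, lik, subject, n_rounds=20))
```

Swapping `expected_information_gain` for an EC2-style score is the kind of substitution the abstract argues matters under noisy responses.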
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments' models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity with humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows for the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability. Correctly inferring a robot's progress toward a high-level goal can be challenging.
This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to have a finite abstraction as a Partially Observable Markov Decision Process (POMDP), which is a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.
This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how transient and steady-state behavior of the global Markov chains can be related to two different criteria with respect to satisfaction of LTL formulas. First, the maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic computation of gradients is derived, which allows the use of first-order optimization techniques.
The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective by the novel utilization of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
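The fundamental equation referred to is, in its standard unichain form for a finite Markov chain with transition matrix $P$, stationary distribution $\pi$, and per-state reward vector $r$,
\[
(I - P)\, h = r - \eta \mathbf{1}, \qquad \eta = \pi^{T} r,
\]
where $\eta$ is the long-run average reward and $h$ is the bias (relative value) vector; it ties the chain's transient behavior (through $h$) to its steady-state behavior (through $\eta$), which is why it is useful for relating the two criteria discussed above.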
The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems with all but modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
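In the form these structural assumptions usually take (control-affine dynamics, quadratic control cost, noise entering through the controlled channels), the transformation is the standard exponential "desirability" change of variables; the following is that textbook form, written for orientation with generic symbols rather than as the thesis's exact statement. For dynamics $dx = \bigl(f(x) + G(x)u\bigr)\,dt + B(x)\,d\omega$ and cost rate $q(x) + \tfrac{1}{2}u^{T}Ru$, the stationary (first-exit) HJB equation reads
\[
0 = \min_{u}\Bigl[\, q + \tfrac12 u^{T}Ru + (f + Gu)^{T}\nabla V + \tfrac12 \operatorname{tr}\bigl(BB^{T}\nabla^{2}V\bigr) \Bigr],
\qquad u^{*} = -R^{-1}G^{T}\nabla V .
\]
Under the assumption $\lambda\, G R^{-1} G^{T} = B B^{T}$, the substitution $\Psi = e^{-V/\lambda}$ turns this nonlinear PDE into a linear one in $\Psi$:
\[
0 = -\frac{q}{\lambda}\,\Psi + f^{T}\nabla\Psi + \tfrac12\operatorname{tr}\bigl(BB^{T}\nabla^{2}\Psi\bigr).
\]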
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
In this work we chiefly deal with two broad classes of problems in computational materials science, determining the doping mechanism in a semiconductor and developing an extreme condition equation of state. While solving certain aspects of these questions is well-trodden ground, both require extending the reach of existing methods to fully answer them. Here we choose to build upon the framework of density functional theory (DFT) which provides an efficient means to investigate a system from a quantum mechanics description.
Zinc Phosphide (Zn3P2) could be the basis for cheap and highly efficient solar cells. Its use in this regard is limited by the difficulty in n-type doping the material. In an effort to understand the mechanism behind this, the energetics and electronic structure of intrinsic point defects in zinc phosphide are studied using generalized Kohn-Sham theory and utilizing the Heyd, Scuseria, and Ernzerhof (HSE) hybrid functional for exchange and correlation. Novel 'perturbation extrapolation' is utilized to extend the use of the computationally expensive HSE functional to this large-scale defect system. According to calculations, the formation energy of charged phosphorus interstitial defects is very low in n-type Zn3P2, and these defects act as 'electron sinks', nullifying the desired doping and lowering the Fermi level back towards the p-type regime. Going forward, this insight provides clues to fabricating useful zinc phosphide based devices. In addition, the methodology developed for this work can be applied to further doping studies in other systems.
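The Fermi-level dependence that makes such defects act as 'electron sinks' is visible in the standard charged-defect formation-energy expression commonly used in studies of this kind (written here in its generic form):
\[
E^{f}\bigl[X^{q}\bigr] = E_{\mathrm{tot}}\bigl[X^{q}\bigr] - E_{\mathrm{tot}}[\mathrm{bulk}]
- \sum_{i} n_{i}\,\mu_{i} + q\bigl(E_{F} + E_{\mathrm{VBM}}\bigr) + E_{\mathrm{corr}},
\]
where $n_i$ and $\mu_i$ are the number and chemical potential of atoms of species $i$ added ($n_i>0$) or removed ($n_i<0$), $E_F$ is the Fermi level referenced to the valence-band maximum, and $E_{\mathrm{corr}}$ is a finite-size electrostatic correction. For an acceptor-like defect ($q<0$) the formation energy decreases linearly as $E_F$ rises toward the conduction band, so such defects form readily in n-type material and compensate the intended doping.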
Accurate determination of high pressure and temperature equations of state is fundamental in a variety of fields. However, it is often very difficult to cover a wide range of temperatures and pressures in a laboratory setting. Here we develop methods to determine a multi-phase equation of state for Ta through computation. The typical means of investigating thermodynamic properties is via 'classical' molecular dynamics, where the atomic motion is calculated from Newtonian mechanics with the electronic effects abstracted away into an interatomic potential function. For our purposes, a 'first-principles' approach such as DFT is useful, as a classical potential is typically valid for only a portion of the phase diagram (i.e. whatever part it has been fit to). Furthermore, for extremes of temperature and pressure, quantum effects become critical to accurately capture an equation of state and are very hard to capture in even complex model potentials. This requires extending the inherently zero-temperature DFT to predict the finite-temperature response of the system. Statistical modelling and thermodynamic integration are used to extend our results over all phases, as well as phase-coexistence regions which are at the limits of typical DFT validity. We deliver the most comprehensive and accurate equation of state that has been computed for Ta. This work also lends insights that can be applied to further equation of state work in many other materials.