25 results for quadratic polynomial
in CaltechTHESIS
Abstract:
We approach the problem of automatically modeling a mechanical system from data about its dynamics, using a method motivated by variational integrators. We write the discrete Lagrangian as a quadratic polynomial with varying coefficients, and then use the discrete Euler-Lagrange equations to numerically solve for the values of these coefficients near the data points. This method correctly modeled the Lagrangian of a simple harmonic oscillator and a simple pendulum, even with significant measurement noise added to the trajectories.
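For reference, the discrete Euler-Lagrange equations underlying this approach take the standard form below; the quadratic ansatz shown is an illustrative sketch, not necessarily the thesis's exact parameterization.

```latex
% Discrete Euler-Lagrange (DEL) equations for a discrete Lagrangian L_d:
D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1}) = 0 ,
% with an illustrative quadratic ansatz whose coefficients c_i vary near each data point:
L_d(q_k, q_{k+1}) = c_1 q_k^2 + c_2 q_k q_{k+1} + c_3 q_{k+1}^2 + c_4 q_k + c_5 q_{k+1} + c_6 .
```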
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
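For context, one common form of this linearization is the desirability (path-integral) framing; the notation below is illustrative rather than the thesis's own. Under a structural assumption relating the control cost to the noise covariance, the substitution V = −λ log Ψ turns the nonlinear HJB into a linear PDE.

```latex
% Dynamics dx = f(x)\,dt + G(x)(u\,dt + dw), cost rate q(x) + \tfrac{1}{2} u^{\top} R u.
% If \lambda\, G R^{-1} G^{\top} = \Sigma (the diffusion covariance), then V = -\lambda \log \Psi gives
\frac{1}{\lambda}\, q(x)\, \Psi(x)
  = f(x)^{\top} \nabla \Psi(x) + \tfrac{1}{2}\, \operatorname{tr}\!\big( \Sigma\, \nabla^{2} \Psi(x) \big),
% which is linear in the desirability function \Psi.
```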
This is done by weaving together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then applied to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with an improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. This technique may be combined with the SOS approach, yielding not only a numerical technique but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.
The analysis of the linear HJB is completed by a study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems, along which tradeoffs in problem complexity may be made. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance toward optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
This thesis is mainly concerned with the application of groups of transformations to differential equations and in particular with the connection between the group structure of a given equation and the existence of exact solutions and conservation laws. In this respect the Lie-Bäcklund groups of tangent transformations, particular cases of which are the Lie tangent and the Lie point groups, are extensively used.
In Chapter I we first review the classical results of Lie, Bäcklund and Bianchi as well as the more recent ones due mainly to Ovsjannikov. We then concentrate on the Lie-Bäcklund groups (or more precisely on the corresponding Lie-Bäcklund operators), as introduced by Ibragimov and Anderson, and prove some lemmas about them which are useful for the following chapters. Finally we introduce the concept of a conditionally admissible operator (as opposed to an admissible one) and show how this can be used to generate exact solutions.
In Chapter II we establish the group nature of all separable solutions and conserved quantities in classical mechanics by analyzing the group structure of the Hamilton-Jacobi equation. It is shown that consideration of only Lie point groups is insufficient. For this purpose a special type of Lie-Bäcklund groups, those equivalent to Lie tangent groups, is used. It is also shown how these generalized groups induce Lie point groups on Hamilton's equations. The generalization of the above results to any first order equation, where the dependent variable does not appear explicitly, is obvious. In the second part of this chapter we investigate admissible operators (or equivalently constants of motion) of the Hamilton-Jacobi equation with polynomial dependence on the momenta. The form of the most general constant of motion linear, quadratic, and cubic in the momenta is explicitly found. Emphasis is given to the quadratic case, where the particular case of a fixed (say zero) energy state is also considered; it is shown that in the latter case additional symmetries may appear. Finally, some potentials of physical interest admitting higher symmetries are considered. These include potentials due to two centers and limiting cases thereof. The most general two-center potential admitting a quadratic constant of motion is obtained, as well as the corresponding invariant. Also some new cubic invariants are found.
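For concreteness, a constant of motion quadratic in the momenta has the generic form below; this is a standard sketch in conventional notation, not the thesis's exact conventions.

```latex
% Generic invariant quadratic in the momenta p, with configuration variables q:
I(q, p) = \sum_{i,j} a^{ij}(q)\, p_i p_j + \sum_i b^{i}(q)\, p_i + c(q),
% where invariance under the Hamiltonian flow means the Poisson bracket vanishes:
\{ I, H \} = 0 .
```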
In Chapter III we first establish the group nature of all separable solutions of any linear, homogeneous equation. We then concentrate on the Schrödinger equation and look for an algorithm which generates a quantum invariant from a classical one. The problem of an isomorphism between classical observables and quantum observables is studied concretely and constructively. For functions at most quadratic in the momenta an isomorphism is possible which agrees with Weyl's transform and which takes invariants into invariants. It is not possible to extend the isomorphism indefinitely. The requirement that an invariant goes into an invariant may necessitate variants of Weyl's transform. This is illustrated for the case of cubic invariants. Finally, the case of a specific value of energy is considered; in this case Weyl's transform does not yield an isomorphism even for the quadratic case. However, for this case a correspondence mapping a classical invariant to a quantum one is explicitly found.
Chapters IV and V are concerned with the general group structure of evolution equations. In Chapter IV we establish a one-to-one correspondence between admissible Lie-Bäcklund operators of evolution equations (derivable from a variational principle) and conservation laws of these equations. This correspondence takes the form of a simple algorithm.
In Chapter V we first establish the group nature of all Bäcklund transformations (BT) by proving that any solution generated by a BT is invariant under the action of some conditionally admissible operator. We then use an algorithm based on invariance criteria to rederive many known BT and to derive some new ones. Finally, we propose a generalization of BT which, among other advantages, clarifies the connection between the wave-train solution and a BT, in the sense that a BT may be thought of as a variation of parameters of some special case of the wave-train solution (usually the solitary wave one). Some open problems are indicated.
Most of the material of Chapters II and III is contained in [I], [II], [III] and [IV] and the first part of Chapter V in [V].
Abstract:
Numerical approximations of nonunique solutions of the Navier-Stokes equations are obtained for steady viscous incompressible axisymmetric flow between two infinite rotating coaxial disks. For example, nineteen solutions have been found for the case when the disks are rotating with the same speed but in opposite direction. Bifurcation and perturbed bifurcation phenomena are observed. An efficient method is used to compute solution branches. The stability of solutions is analyzed. The rate of convergence of Newton's method at singular points is discussed. In particular, recovery of quadratic convergence at "normal limit points" and bifurcation points is indicated. Analytical construction of some of the computed solutions using singular perturbation techniques is discussed.
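For background, Newton's iteration and the quadratic convergence rate referred to above are stated schematically below; these are standard facts, not results specific to this thesis.

```latex
% Newton's method for F(x) = 0:
x_{k+1} = x_k - \big[ F'(x_k) \big]^{-1} F(x_k),
% which, near a root x^* where F'(x^*) is nonsingular, converges quadratically:
\| x_{k+1} - x^{*} \| \le C\, \| x_k - x^{*} \|^{2} .
% At limit and bifurcation points F'(x^*) is singular, and a suitable
% reparametrization of the solution branch is needed to recover this rate.
```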
Abstract:
The determination of the energy levels and the probabilities of transition between them, by the formal analysis of observed electronic, vibrational, and rotational band structures, forms the direct goal of all investigations of molecular spectra, but the significance of such data lies in the possibility of relating them theoretically to more concrete properties of molecules and the radiation field. From the well developed electronic spectra of diatomic molecules, it has been possible, with the aid of the non-relativistic quantum mechanics, to obtain accurate moments of inertia, molecular potential functions, electronic structures, and detailed information concerning the coupling of spin and orbital angular momenta with the angular momentum of nuclear rotation. The silicon fluoride molecule has been investigated in this laboratory, and is found to emit bands whose vibrational and rotational structures can be analyzed in this detailed fashion.
Like silicon fluoride, however, the great majority of diatomic molecules are formed only under the unusual conditions of electrical discharge, or in high temperature furnaces, so that although their spectra are of great theoretical interest, the chemist is eager to proceed to a study of polyatomic molecules, in the hope that their more practically interesting structures might also be determined with the accuracy and assurance which characterize the spectroscopic determinations of the constants of diatomic molecules. Some progress has been made in the determination of molecular potential functions from the vibrational term values deduced from Raman and infrared spectra, but in no case can the calculations be carried out with great generality, since the number of known term values is always small compared with the total number of potential constants in even so restricted a potential function as the simple quadratic type. For the determination of nuclear configurations and bond distances, however, a knowledge of the rotational terms is required. The spectra of about twelve of the simpler polyatomic molecules have been subjected to rotational analyses, and a number of bond distances are known with considerable accuracy, yet the number of molecules whose rotational fine structure has been resolved even with the most powerful instruments is small. Consequently, it was felt desirable to investigate the spectra of a number of other promising polyatomic molecules, with the purpose of carrying out complete rotational analyses of all resolvable bands, and ascertaining the value of the unresolved band envelopes in determining the structures of such molecules, in the cases in which resolution is no longer possible. Although many of the compounds investigated absorbed too feebly to be photographed under high dispersion with the present infrared sensitizations, the location and relative intensities of their bands, determined by low dispersion measurements, will be reported in the hope that these compounds may be reinvestigated in the future with improved techniques.
Abstract:
In this thesis we study Galois representations corresponding to abelian varieties with certain reduction conditions. We show that these conditions force the image of the representations to be "big," so that the Mumford-Tate conjecture (:= MT) holds. We also prove that the set of abelian varieties satisfying these conditions is dense in a corresponding moduli space.
The main results of the thesis are the following two theorems.
Theorem A: Let A be an absolutely simple abelian variety with End°(A) = k an imaginary quadratic field, and let g = dim(A). Assume either dim(A) ≤ 4, or that A has bad reduction at some prime ϕ, with the dimension of the toric part of the reduction equal to 2r, gcd(r,g) = 1, and (r,g) ≠ (15,56) or (m−1, m(m+1)/2). Then MT holds.
Theorem B: Let M be the moduli space of abelian varieties with fixed polarization, level structure and a k-action. It is defined over a number field F. The subset of M(Q) corresponding to absolutely simple abelian varieties with a prescribed stable reduction at a large enough prime ϕ of F is dense in M(C) in the complex topology. In particular, the set of simple abelian varieties having bad reductions with fixed dimension of the toric parts is dense.
Besides this, we also establish the following results:
(1) MT holds for some other classes of abelian varieties with similar reduction conditions. For example, if A is an abelian variety with End°(A) = Q and the dimension of the toric part of its reduction is prime to dim(A), then MT holds.
(2) MT holds for Ribet-type abelian varieties.
(3) The Hodge and the Tate conjectures are equivalent for abelian 4-folds.
(4) MT holds for abelian 4-folds of type II, III, IV (Theorem 5.0(2)) and some 4-folds of type I.
(5) For some abelian varieties either MT or the Hodge conjecture holds.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based on only measurement. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
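A minimal sketch of the estimation step described above, using scikit-learn's GraphicalLasso; the synthetic data, the regularization weight, and the 0.01 threshold are placeholders, and the thesis's modification for ill-conditioned matrices is not reproduced here.

```python
# Sketch: infer a circuit topology from nodal signals via a sparse inverse
# covariance (precision) matrix, as in the graphical-lasso step above.
import numpy as np
from sklearn.covariance import GraphicalLasso

def estimate_topology(samples: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """samples: (n_samples, n_nodes) array of measured node voltages.
    Returns a boolean adjacency matrix (True = apparent connection)."""
    model = GraphicalLasso(alpha=alpha)   # l1-penalized precision estimation
    model.fit(samples)
    precision = model.precision_          # estimated inverse covariance matrix
    adjacency = np.abs(precision) > 0.01  # zero out numerically small entries
    np.fill_diagonal(adjacency, False)    # self-loops are not edges
    return adjacency

# Placeholder data standing in for real voltage measurements:
rng = np.random.default_rng(0)
samples = rng.standard_normal((500, 10))
print(estimate_topology(samples).astype(int))
```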
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
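Schematically, the relaxation reads as follows; the notation is illustrative rather than the thesis's exact formulation. The power balance equality at each bus is loosened so that more power may be delivered than demanded.

```latex
% Exact balance at bus i (schematic), with p_{ij}(V) the flow arriving at bus i from bus j:
p_i^{\mathrm{gen}} - p_i^{\mathrm{load}} + \sum_{j \sim i} p_{ij}(V) = 0
% is relaxed to an inequality, permitting over-delivery:
p_i^{\mathrm{gen}} - p_i^{\mathrm{load}} + \sum_{j \sim i} p_{ij}(V) \ge 0 .
```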
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
This thesis addresses whether it is possible to build a robust memory device for quantum information. Many schemes for fault-tolerant quantum information processing have been developed so far, one of which, called topological quantum computation, makes use of degrees of freedom that are inherently insensitive to local errors. However, this scheme is not so reliable against thermal errors. Other fault-tolerant schemes achieve better reliability through active error correction, but incur a substantial overhead cost. Thus, it is of practical importance and theoretical interest to design and assess fault-tolerant schemes that work well at finite temperature without active error correction.
In this thesis, a three-dimensional gapped lattice spin model is found which demonstrates for the first time that a reliable quantum memory at finite temperature is possible, at least to some extent. When quantum information is encoded into a highly entangled ground state of this model and subjected to thermal errors, the errors remain easily correctable for a long time without any active intervention, because a macroscopic energy barrier keeps the errors well localized. As a result, stored quantum information can be retrieved faithfully for a memory time which grows exponentially with the square of the inverse temperature. In contrast, for previously known types of topological quantum storage in three or fewer spatial dimensions the memory time scales exponentially with the inverse temperature, rather than its square.
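In symbols, writing β = 1/T for the inverse temperature, the contrast in memory-time scaling is the following (constants c, c' are schematic):

```latex
% Memory time of the model in this thesis vs. previously known 3D topological storage:
\tau_{\mathrm{mem}} \sim e^{c\, \beta^{2}}
\qquad \text{versus} \qquad
\tau_{\mathrm{mem}} \sim e^{c'\, \beta},
% the improvement coming from the macroscopic energy barrier that keeps errors localized.
```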
This spin model exhibits a previously unexpected topological quantum order, in which ground states are locally indistinguishable, pointlike excitations are immobile, and the immobility is not affected by small perturbations of the Hamiltonian. The degeneracy of the ground state, though also insensitive to perturbations, is a complicated number-theoretic function of the system size, and the system bifurcates into multiple noninteracting copies of itself under real-space renormalization group transformations. The degeneracy, the excitations, and the renormalization group flow can be analyzed using a framework that exploits the spin model's symmetry and some associated free resolutions of modules over polynomial algebras.
Abstract:
The field of cavity-optomechanics explores the interaction of light with sound in an ever increasing array of devices. This interaction allows the mechanical system to be both sensed and controlled by the optical system, opening up a wide variety of experiments including the cooling of the mechanical resonator to its quantum mechanical ground state and the squeezing of the optical field upon interaction with the mechanical resonator, to name two.
In this work we explore two very different systems with different types of optomechanical coupling. The first system consists of two microdisk optical resonators stacked on top of each other and separated by a very small slot. The interaction of the disks causes their optical resonance frequencies to be extremely sensitive to the gap between the disks. By careful control of the gap between the disks, the optomechanical coupling can be made quadratic to first order, which is uncommon in optomechanical systems. With this quadratic coupling the light field is now sensitive to the energy of the mechanical resonator and can directly control the potential energy trapping the mechanical motion. This ability to directly control the spring constant without modifying the energy of the mechanical system, unlike in linear optomechanical coupling, is explored.
Next, the bulk of this thesis deals with a high mechanical frequency optomechanical crystal which is used to coherently convert photons between different frequencies. This is accomplished via the engineered linear optomechanical coupling in these devices. Both classical and quantum systems utilize the interaction of light and matter across a wide range of energies. These systems are often not naturally compatible with one another and require a means of converting photons of dissimilar wavelengths to combine and exploit their different strengths. Here we theoretically propose and experimentally demonstrate coherent wavelength conversion of optical photons using photon-phonon translation in a cavity-optomechanical system. For an engineered silicon optomechanical crystal nanocavity supporting a 4 GHz localized phonon mode, optical signals in a 1.5 MHz bandwidth are coherently converted over an 11.2 THz frequency span between one cavity mode at wavelength 1460 nm and a second cavity mode at 1545 nm with a 93% internal (2% external) peak efficiency. The thermal and quantum limiting noise involved in the conversion process is also analyzed and, in terms of an equivalent photon-number signal level, is found to correspond to internal noise levels of only 6 × 10^-3 and 4 × 10^-3 quanta, respectively.
We begin by developing the requisite theoretical background to describe the system. A significant amount of time is then spent describing the fabrication of these silicon nanobeams, with an emphasis on understanding the specifics and motivation. The experimental demonstration of wavelength conversion is then described and analyzed. It is determined that the method of getting photons into and collecting them from the cavity is a fundamental limiting factor in the overall efficiency. Finally, a new coupling scheme is designed, fabricated, and tested that provides a means of coupling greater than 90% of photons into and out of the cavity, addressing one of the largest obstacles in the initial wavelength conversion experiment.
Abstract:
A classical question in combinatorics is the following: given a partial Latin square P, when can we complete P to a Latin square L? In this paper, we investigate the class of ε-dense partial Latin squares: partial Latin squares in which each symbol, row, and column contains no more than εn-many nonblank cells. Based on a conjecture of Nash-Williams, Daykin and Häggkvist conjectured that all 1/4-dense partial Latin squares are completable. In this paper, we will discuss the proof methods and results used in previous attempts to resolve this conjecture, introduce a novel technique derived from a paper by Jacobson and Matthews on generating random Latin squares, and use this novel technique to study ε-dense partial Latin squares that contain no more than δn^2 filled cells in total.
In Chapter 2, we construct completions for all ε-dense partial Latin squares containing no more than δn^2 filled cells in total, given that ε < 1/12 and δ < (1 − 12ε)^2/10409. In particular, we show that all 9.8 · 10^-5-dense partial Latin squares are completable. In Chapter 4, we augment these results by roughly a factor of two using some probabilistic techniques. These results improve prior work by Gustavsson, which required ε = δ ≤ 10^-7, as well as Chetwynd and Häggkvist, which required ε = δ = 10^-5, with n even and greater than 10^7.
If we omit the probabilistic techniques noted above, we further show that such completions can always be found in polynomial time. This contrasts with a result of Colbourn, which states that completing arbitrary partial Latin squares is an NP-complete task. In Chapter 3, we strengthen Colbourn's result to the claim that completing an arbitrary (1/2 + ε)-dense partial Latin square is NP-complete, for any ε > 0.
Colbourn's result hinges heavily on a connection between triangulations of tripartite graphs and Latin squares. Motivated by this, we use our results on Latin squares to prove that any tripartite graph G = (V_1, V_2, V_3) such that
(i) |V_1| = |V_2| = |V_3| = n,
(ii) for every vertex v ∈ V_i, deg_+(v) = deg_-(v) ≥ (1 − ε)n, and
(iii) |E(G)| > (1 − δ) · 3n^2
admits a triangulation, if ε < 1/132 and δ < (1 − 132ε)^2/83272. In particular, this holds when ε = δ = 1.197 · 10^-5.
This strengthens results of Gustavsson, which require ε = δ = 10^-7.
In an unrelated vein, Chapter 6 explores the class of quasirandom graphs, a notion first introduced by Chung, Graham and Wilson in 1989. Roughly speaking, a sequence of graphs is called "quasirandom" if it has a number of properties possessed by the random graph, all of which turn out to be equivalent. In this chapter, we study possible extensions of these results to random k-edge colorings, and create an analogue of Chung, Graham and Wilson's result for such colorings.
Abstract:
Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.
When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
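For reference, the classical bound that serves as the baseline here is Hoeffding's inequality in its standard form:

```latex
% For independent X_i with a_i \le X_i \le b_i and S_n = X_1 + \cdots + X_n:
\Pr\big[ S_n - \mathbb{E}[S_n] \ge t \big]
  \le \exp\!\left( - \frac{2 t^{2}}{\sum_{i=1}^{n} (b_i - a_i)^{2}} \right).
```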
Abstract:
The energy loss of protons and deuterons in D_2O ice has been measured over the energy range E_p = 18–541 keV. The double focusing magnetic spectrometer was used to measure the energy of the particles after they had traversed a known thickness of the ice target. One method of measurement is used to determine relative values of the stopping cross section as a function of energy; another method measures absolute values. The results are in very good agreement with the values calculated from Bethe's semi-empirical formula. Possible sources of error are considered and the accuracy of the measurements is estimated to be ± 4%.
The D(d,p)H^3 cross section has been measured by two methods. For E_D = 200–500 keV the spectrometer was used to obtain the momentum spectrum of the protons and tritons. From the yield and stopping cross section the reaction cross section at 90° has been obtained.
For E_D = 35–550 keV the proton yield from a thick target was differentiated to obtain the cross section. Both thin and thick target methods were used to measure the yield at each of ten angles. The angular distribution is expressed in terms of a Legendre polynomial expansion. The various sources of experimental error are considered in detail, and the probable error of the cross section measurements is estimated to be ± 5%.
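The angular distribution mentioned above is represented in the standard way; the notation below is illustrative.

```latex
% Differential cross section expanded in Legendre polynomials P_n:
\sigma(\theta; E) = \sum_{n=0}^{N} A_n(E)\, P_n(\cos\theta),
% with the coefficients A_n(E) fitted to the yields measured at the ten angles.
```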
Abstract:
We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and which scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivative, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.
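A sketch of the kind of stored energy meant here, under the assumption of a standard quadratic form (the thesis's exact energy may differ): with deformation gradient F and polar decomposition F = RU, the Biot strain is U − I.

```latex
% Quadratic energy in the Biot strain E_B = U - I (Lame-style parameters \mu, \lambda):
\Psi_{\mathrm{Biot}}(F) = \mu\, \| U - I \|_F^{2} + \frac{\lambda}{2}\, \operatorname{tr}^{2}(U - I),
% and the Hencky variant replaces U - I with the logarithmic strain \log U.
```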
Abstract:
This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.
The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.
The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
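A standard building block of such mixed-integer encodings is the big-M constraint linking a binary variable to membership of the state in a region; the sketch below is generic, not necessarily the thesis's exact encoding.

```latex
% Atomic proposition \pi holds at time t iff x_t lies in the polytope \{ x : H x \le h \}.
% With binary z_t^{\pi} \in \{0, 1\}, vector of ones \mathbf{1}, and a large constant M:
H x_t \le h + M (1 - z_t^{\pi})\, \mathbf{1},
% so z_t^{\pi} = 1 forces x_t into the region; the Boolean and temporal structure of the
% LTL formula is then imposed through linear constraints over the z variables.
```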
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process it can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.