26 results for First principle
in CaltechTHESIS
Abstract:
Part I.
We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection, and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of penetration experiments with 4140 steel projectiles into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.
We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under 10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage of recovered target and projectile materials, and theoretical analysis, we find:
1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.
2. Final penetration depth, P_max, scales linearly with the initial projectile energy per unit cross-section area, e_s, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is P_max(mm) = 1.15 e_s(J/mm^2) + 16.39.
3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.
4. Based on the fact that the penetration duration, t_max, increases slowly with e_s and is approximately independent of projectile radius, the dependence of t_max on projectile length is suggested to be described by t_max(μs) = 2.08 e_s(J/mm^2) + 349.0 × m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.
5. Deduced penetration velocity-time histories suggest that the whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to the very small contact area between the projectile and target materials; (2) A steady penetration stage in which the projectile velocity continues to decrease smoothly; (3) A penetration-stop stage in which the projectile deceleration jumps up when the velocity is close to a critical value of ~35 m/s.
6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a(g) = 192.4v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be comparable to the shock wave pressure.
7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/P_max, and normalized penetration time, t/t_max, as P/P_max = f(t/t_max), where f is a function of t/t_max. After f(t/t_max) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with the experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.
8. Based on the penetration process analysis and the present data, a first-principles model for rigid-body penetration is suggested. The model incorporates models for the contact area between projectile and target materials, the friction coefficient, a penetration-stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) The penetration process can be treated as a series of impact events; therefore, the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) The necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) The friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.
9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σ_f/σ_f0 = exp(0.0905(log(ε̇/ε̇_0))^1.14) in the strain-rate range of 10^-7/s to 10^3/s (σ_f0 and ε̇_0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
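As a rough illustration, the empirical fits quoted in items 2, 4, and 6 above can be evaluated together. The projectile mass, radius, and velocity below are hypothetical values chosen for the example, not data from the experiments.

```python
# Toy evaluation of the empirical penetration fits quoted above
# (projectile values are hypothetical, for illustration only).
import math

def p_max_mm(es):
    """Final penetration depth, item 2: P_max(mm) = 1.15*e_s + 16.39 (e_s in J/mm^2)."""
    return 1.15 * es + 16.39

def t_max_us(es, m_g, r_mm):
    """Penetration duration, item 4: t_max(us) = 2.08*e_s + 349.0*m/(pi R^2) (m in g, R in mm)."""
    return 2.08 * es + 349.0 * m_g / (math.pi * r_mm**2)

def decel_g(v):
    """Average steady-stage deceleration, item 6: a(g) = 192.4*v + 1.89e4 (v in m/s)."""
    return 192.4 * v + 1.89e4

# Hypothetical 500 g, 20 mm radius projectile at 300 m/s.
m, R, v = 500.0, 20.0, 300.0
es = 0.5 * (m / 1000.0) * v**2 / (math.pi * R**2)   # initial energy per area, J/mm^2
print(round(es, 2), round(p_max_mm(es), 1), round(t_max_us(es, m, R), 1), round(decel_g(v), 0))
```

These fits are stated for mortar targets that remain intact after impact, so the sketch should only be read in that regime.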
Part II.
Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s. Wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa. Wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], when the shock wave pressure is between 6 and 40 GPa for ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge4+ with O2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with that for vitreous GeO2 demonstrates that the transformation to the rutile structure in both media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressure in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
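The linear Hugoniot fit quoted above determines the shock pressure through the standard Rankine-Hugoniot momentum jump condition P = ρ0 D u. A minimal sketch, using the fitted constants from the abstract (the example particle velocity is arbitrary):

```python
# Shock pressure from the linear Hugoniot fit D = c0 + s*u for vitreous GeO2
# (c0 = 0.917 km/s, s = 1.711, rho0 = 3.655 g/cm^3, as quoted above),
# combined with the Rankine-Hugoniot momentum jump P = rho0 * D * u.
RHO0 = 3.655e3        # initial density, kg/m^3
C0, S = 917.0, 1.711  # Hugoniot fit constants: m/s and dimensionless

def shock_pressure_gpa(u):
    """Pressure (GPa) behind a shock with particle velocity u (m/s)."""
    D = C0 + S * u                # shock velocity from the linear fit
    return RHO0 * D * u / 1e9     # P = rho0 * D * u, converted Pa -> GPa

# Example: u = 1000 m/s gives a pressure within the 6-40 GPa fit range.
print(round(shock_pressure_gpa(1000.0), 2))
```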
Abstract:
The field of plasmonics exploits the unique optical properties of metallic nanostructures to concentrate and manipulate light at subwavelength length scales. Metallic nanostructures derive their unique properties from their ability to support surface plasmons: coherent wave-like oscillations of the free electrons at the interface between a conductive and a dielectric medium. Recent advancements in the ability to fabricate metallic nanostructures with subwavelength length scales have created new possibilities in technology and research across a broad range of applications.
In the first part of this thesis, we present two investigations of the relationship between the charge state and optical state of plasmonic metal nanoparticles. Using experimental bias-dependent extinction measurements, we derive a potential-dependent dielectric function for Au nanoparticles that accounts for changes in the physical properties due to an applied bias that contribute to the optical extinction. We also present theory and experiment for the reverse effect: the manipulation of the carrier density of Au nanoparticles via controlled optical excitation. This plasmoelectric effect takes advantage of the strong resonant properties of plasmonic materials and the relationship between charge state and optical properties to elucidate a new avenue for conversion of optical power to electrical potential.
The second topic of this thesis is the non-radiative decay of plasmons to a hot-carrier distribution, and the distribution's subsequent relaxation. We present first-principles calculations that capture all of the significant microscopic mechanisms underlying surface plasmon decay and predict the initial excited carrier distributions so generated. We also perform ab initio calculations of the electron-temperature dependent heat capacities and electron-phonon coupling coefficients of plasmonic metals. We extend these first-principles methods to calculate the electron-temperature dependent dielectric response of hot electrons in plasmonic metals, including direct interband and phonon-assisted intraband transitions. Finally, we combine these first-principles calculations of carrier dynamics and optical response to produce a complete theoretical description of ultrafast pump-probe measurements, free of the fitting parameters typical of previous analyses.
Abstract:
This thesis is mainly concerned with the application of groups of transformations to differential equations and in particular with the connection between the group structure of a given equation and the existence of exact solutions and conservation laws. In this respect the Lie-Bäcklund groups of tangent transformations, particular cases of which are the Lie tangent and the Lie point groups, are extensively used.
In Chapter I we first review the classical results of Lie, Bäcklund and Bianchi as well as the more recent ones due mainly to Ovsjannikov. We then concentrate on the Lie-Bäcklund groups (or more precisely on the corresponding Lie-Bäcklund operators), as introduced by Ibragimov and Anderson, and prove some lemmas about them which are useful for the following chapters. Finally we introduce the concept of a conditionally admissible operator (as opposed to an admissible one) and show how this can be used to generate exact solutions.
In Chapter II we establish the group nature of all separable solutions and conserved quantities in classical mechanics by analyzing the group structure of the Hamilton-Jacobi equation. It is shown that consideration of only Lie point groups is insufficient. For this purpose a special type of Lie-Bäcklund groups, those equivalent to Lie tangent groups, is used. It is also shown how these generalized groups induce Lie point groups on Hamilton's equations. The generalization of the above results to any first order equation, where the dependent variable does not appear explicitly, is obvious. In the second part of this chapter we investigate admissible operators (or equivalently constants of motion) of the Hamilton-Jacobi equation with polynomial dependence on the momenta. The form of the most general constant of motion linear, quadratic and cubic in the momenta is explicitly found. Emphasis is given to the quadratic case, where the particular case of a fixed (say zero) energy state is also considered; it is shown that in the latter case additional symmetries may appear. Finally, some potentials of physical interest admitting higher symmetries are considered. These include potentials due to two centers and limiting cases thereof. The most general two-center potential admitting a quadratic constant of motion is obtained, as well as the corresponding invariant. Also some new cubic invariants are found.
In Chapter III we first establish the group nature of all separable solutions of any linear, homogeneous equation. We then concentrate on the Schrödinger equation and look for an algorithm which generates a quantum invariant from a classical one. The problem of an isomorphism between functions in classical observables and quantum observables is studied concretely and constructively. For functions at most quadratic in the momenta an isomorphism is possible which agrees with Weyl's transform and which takes invariants into invariants. It is not possible to extend the isomorphism indefinitely. The requirement that an invariant goes into an invariant may necessitate variants of Weyl's transform. This is illustrated for the case of cubic invariants. Finally, the case of a specific value of energy is considered; in this case Weyl's transform does not yield an isomorphism even for the quadratic case. However, for this case a correspondence mapping a classical invariant to a quantum one is explicitly found.
Chapters IV and V are concerned with the general group structure of evolution equations. In Chapter IV we establish a one to one correspondence between admissible Lie-Bäcklund operators of evolution equations (derivable from a variational principle) and conservation laws of these equations. This correspondence takes the form of a simple algorithm.
In Chapter V we first establish the group nature of all Bäcklund transformations (BT) by proving that any solution generated by a BT is invariant under the action of some conditionally admissible operator. We then use an algorithm based on invariance criteria to rederive many known BT and to derive some new ones. Finally, we propose a generalization of BT which, among other advantages, clarifies the connection between the wave-train solution and a BT in the sense that a BT may be thought of as a variation of parameters of some special case of the wave-train solution (usually the solitary wave one). Some open problems are indicated.
Most of the material of Chapters II and III is contained in [I], [II], [III] and [IV] and the first part of Chapter V in [V].
Abstract:
The question of finding variational principles for coupled systems of first order partial differential equations is considered. Using a potential representation for solutions of the first order system a higher order system is obtained. Existence of a variational principle follows if the original system can be transformed to a self-adjoint higher order system. Existence of variational principles for all linear wave equations with constant coefficients having real dispersion relations is established. The method of adjoining some of the equations of the original system to a suitable Lagrangian function by the method of Lagrange multipliers is used to construct new variational principles for a class of linear systems. The equations used as side conditions must satisfy highly-restrictive integrability conditions. In the more difficult nonlinear case the system of two equations in two independent variables can be analyzed completely. For systems determined by two conservation laws the side condition must be a conservation law in addition to satisfying the integrability conditions.
Abstract:
In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that for the boundary value problem either a solution exists, except for a discrete set of eigenvalues, whose asymptotic behaviour is found, or the solution is non-unique. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.
Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes to enable higher order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is first derived. The specific form of the first order correction terms is derived for several examples. The stability of constant solutions to these equations is considered and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected set of equations are nonlinear dispersive equations and their stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions. The adiabatic invariants for a separable mechanical system are derived by this method.
Abstract:
The nonlinear partial differential equations for dispersive waves have special solutions representing uniform wavetrains. An expansion procedure is developed for slowly varying wavetrains, in which full nonlinearity is retained but in which the scale of the nonuniformity introduces a small parameter. The first order results agree with the results that Whitham obtained by averaging methods. The perturbation method provides a detailed description and deeper understanding, as well as a consistent development to higher approximations. This method for treating partial differential equations is analogous to the "multiple time scale" methods for ordinary differential equations in nonlinear vibration theory. It may also be regarded as a generalization of geometrical optics to nonlinear problems.
To apply the expansion method to the classical water wave problem, it is crucial to find an appropriate variational principle. It was found in the present investigation that a Lagrangian function equal to the pressure yields the full set of equations of motion for the problem. After this result is derived, the Lagrangian is compared with the more usual expression formed from kinetic minus potential energy. The water wave problem is then examined by means of the expansion procedure.
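In modern notation, the pressure Lagrangian described above is conventionally written as follows; the symbols below are the standard ones for the water wave problem (velocity potential φ, free surface η, depth h) and are an assumption, not taken from the text:

```latex
\delta \iint_{D} L \,\mathrm{d}x\,\mathrm{d}t = 0, \qquad
L = -\rho \int_{-h(x)}^{\eta(x,t)}
      \left( \varphi_t + \tfrac{1}{2}\lvert\nabla\varphi\rvert^{2} + g z \right) \mathrm{d}z .
```

Varying φ recovers Laplace's equation in the fluid together with the kinematic boundary conditions, while varying η recovers the Bernoulli condition on the free surface, so the single functional yields the full set of equations of motion.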
Abstract:
This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.
Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored; the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.
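As a toy illustration of the two filter architectures (the scalar arm models, noise levels, and measurements below are hypothetical, not the thesis's robot model), the augmented filter and the two parallel filters coincide exactly when the arms are uncoupled:

```python
# Augmented vs. parallel Kalman filters on two independent scalar "arm" states.
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One predict + update step of a linear Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

I1, I2 = np.eye(1), np.eye(2)
z1, z2 = np.array([1.0]), np.array([2.0])          # one position measurement per arm

# (a) Augmented filter: both arms stacked in one state vector.
xa, Pa = kf_step(np.zeros(2), I2, np.concatenate([z1, z2]), I2, 0.01 * I2, I2, 0.1 * I2)

# (b) Two parallel filters, one per arm.
xb1, Pb1 = kf_step(np.zeros(1), I1, z1, I1, 0.01 * I1, I1, 0.1 * I1)
xb2, Pb2 = kf_step(np.zeros(1), I1, z2, I1, 0.01 * I1, I1, 0.1 * I1)

# With block-diagonal models (no coupling), the architectures agree.
print(np.allclose(xa, np.concatenate([xb1, xb2])))
```

The architectures differ precisely when the task couples the arms (shared object, closed kinematic chain), which is when the augmented filter's cross-covariance terms carry information the parallel filters discard.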
This thesis also presents a new method for action selection involving touch. This next-best-touch method selects the available action for interacting with an object that gains the most information. The algorithm employs information theory to compute an information gain metric that is based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically a door handle on a door. The next best touch theory is extended for model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best both localizes the object and estimates these parameters. Simulation results are then presented involving localizing and determining a parameter of a screwdriver.
Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.
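The selection rule underlying next best touch can be sketched with a discrete belief: choose the action whose expected entropy reduction is largest. The two-pose belief, the two candidate actions, and the binary contact sensor model below are hypothetical stand-ins for the thesis's task-specific models:

```python
# Toy next-best-touch selection: maximize expected information gain
# (expected entropy reduction) of a discrete belief over object poses.
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(belief, likelihoods):
    """likelihoods[s][o] = P(observation o | state s) for one action."""
    h0 = entropy(belief)
    gain = 0.0
    for o in range(len(likelihoods[0])):
        p_o = sum(b * likelihoods[s][o] for s, b in enumerate(belief))
        if p_o == 0:
            continue
        post = [b * likelihoods[s][o] / p_o for s, b in enumerate(belief)]
        gain += p_o * (h0 - entropy(post))         # Bayes update per outcome
    return gain

belief = [0.5, 0.5]                                # two candidate object poses
actions = {                                        # P(contact | pose) per action
    "touch_left":  [[0.9, 0.1], [0.1, 0.9]],       # outcome discriminates the poses
    "touch_right": [[0.5, 0.5], [0.5, 0.5]],       # outcome is uninformative
}
best = max(actions, key=lambda a: expected_info_gain(belief, actions[a]))
print(best)
```

The uninformative action yields zero expected gain, so the discriminative touch is selected; the same metric extends to parameter and class determination by enlarging the belief's state space, as the text describes.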
Abstract:
We consider the following singularly perturbed linear two-point boundary-value problem:
Ly(x) ≡ Ω(ε)D_x y(x) - A(x,ε)y(x) = f(x,ε),  0 ≤ x ≤ 1,  (1a)
By ≡ L(ε)y(0) + R(ε)y(1) = g(ε),  ε → 0^+.  (1b)
Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and last m elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, g, we assume the lower right m × m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0^+ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a) and initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b) it is shown that for the stepsize much larger than ε the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.
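The stepsize claim can be illustrated on a scalar model problem. The equation ε y' = -y + f(x), its backward Euler discretization, and the parameter values below are an illustrative sketch, not the thesis's scheme or problem class:

```python
# Toy illustration: for eps*y' = -y + f(x), an implicit (backward Euler)
# scheme with stepsize h >> eps lands on the reduced solution y = f(x)
# away from the boundary layer at x = 0.
import math

eps, h = 1e-6, 0.1            # note h is 10^5 times larger than eps
f = lambda x: math.cos(x)     # smooth forcing; reduced solution is y = f(x)

y, x = 0.0, 0.0               # y(0) = 0 gives an O(1) boundary-layer mismatch
for _ in range(10):           # march to x = 1
    x += h
    y = (eps * y + h * f(x)) / (eps + h)   # backward Euler update, exact solve

print(abs(y - f(1.0)) < 1e-4)              # agrees with the reduced solution
```

Each implicit step damps the previous value by a factor ε/(ε + h), so the layer mismatch is annihilated in one step and the iterate tracks f(x) to O(ε/h), mirroring the behavior established for the difference scheme above.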
Furthermore, the existence of a similarity transformation which block diagonalizes a matrix is presented as well as exponential bounds on certain fundamental solution matrices associated with the problem (1).
Abstract:
The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depths --- this intersection takes place in gravitational-wave physics.
Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.
The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational detectors, thereby making them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.
The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects - as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.
In Chapters 4-5, we build theoretical tools for analyzing the so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.
Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.
The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes - a type of object predicted by general relativity whose properties depend highly on the strong-field regime of the theory. Although black holes have been inferred to exist at centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black-holes' strong-field region.
The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.
Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting around a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze oscillation modes (quasi-normal modes or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 11 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra for certain ranges of (l,m) (the angular quantum numbers) as a/M → 1. With the tools prepared in Chapters 9 and 10, in Chapter 11 we obtain an analytical approximation to the scalar Green function in Kerr spacetime.
Abstract:
Underlying matter and light are their building blocks of tiny atoms and photons. The ability to control and utilize matter-light interactions down to the elementary single atom and photon level at the nano-scale opens up exciting studies at the frontiers of science with applications in medicine, energy, and information technology. Of these, an intriguing front is the development of quantum networks where N >> 1 single-atom nodes are coherently linked by single photons, forming a collective quantum entity potentially capable of performing quantum computations and simulations. Here, a promising approach is to use optical cavities within the setting of cavity quantum electrodynamics (QED). However, since its first realization in 1992 by Kimble et al., current proof-of-principle experiments have involved just one or two conventional cavities. To move beyond to N >> 1 nodes, in this thesis we investigate a platform born from the marriage of cavity QED and nanophotonics, where single atoms at ~100 nm near the surfaces of lithographically fabricated dielectric photonic devices can strongly interact with single photons, on a chip. Particularly, we experimentally investigate three main types of devices: microtoroidal optical cavities, optical nanofibers, and nanophotonic crystal based structures. With a microtoroidal cavity, we realized a robust and efficient photon router where single photons are extracted from an incident coherent state of light and redirected to a separate output with high efficiency. We achieved strong single atom-photon coupling with atoms located ~100 nm near the surface of a microtoroid, which revealed important aspects in the atom dynamics and QED of these systems including atom-surface interaction effects. We present a method to achieve state-insensitive atom trapping near optical nanofibers, critical in nanophotonic systems where electromagnetic fields are tightly confined. 
We developed a system that fabricates high quality nanofibers with high controllability, with which we experimentally demonstrate a state-insensitive atom trap. We present initial investigations on nanophotonic crystal based structures as a platform for strong atom-photon interactions. The experimental advances and theoretical investigations carried out in this thesis provide a framework for and open the door to strong single atom-photon interactions using nanophotonics for chip-integrated quantum networks.
Abstract:
Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.
We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a fast Fourier transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and plastic slip using their respective stick-slip-type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, looking at the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several others.
We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the $\alpha$--$\gamma$ and $\alpha$--$\varepsilon$ transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the $\alpha$--$\varepsilon$ model.
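As a heavily simplified illustration of the explicit half of such a staggered scheme, a stick-slip kinetic update for an internal variable (here the martensite volume fraction) might look like the following sketch. The threshold, mobility, and linear overstress form are illustrative assumptions, not the thesis's actual kinetic potentials:

```python
import numpy as np

def stick_slip_update(lam, f, f_crit=1.0, mobility=0.1, dt=1e-2):
    """Explicit stick-slip update for a martensite volume fraction
    lam in [0, 1]: lam evolves only when the thermodynamic driving
    force f exceeds a critical threshold f_crit (illustrative form,
    not the thesis's exact kinetic law)."""
    # "stick": no evolution while |f| <= f_crit; "slip": evolution
    # driven by the overstress |f| - f_crit beyond the threshold
    excess = np.sign(f) * np.maximum(np.abs(f) - f_crit, 0.0)
    lam_new = lam + dt * mobility * excess
    # a volume fraction must remain physical
    return np.clip(lam_new, 0.0, 1.0)
```

In a full staggered step, the implicit equilibrium solve would supply the driving force `f` at each material point before this explicit update is applied.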
Abstract:
Adaptive optics (AO) corrects distortions created by atmospheric turbulence and delivers diffraction-limited images on ground-based telescopes. The vastly improved spatial resolution and sensitivity have been utilized for studying everything from the magnetic fields of sunspots to the internal dynamics of high-redshift galaxies. This thesis on AO science with small and large telescopes is divided into two parts: Robo-AO and magnetar kinematics.
In the first part, I discuss the construction and performance of the world’s first fully autonomous visible-light AO system, Robo-AO, at the Palomar 60-inch telescope. Robo-AO operates extremely efficiently with an overhead < 50 s, typically observing about 22 targets every hour. We have performed large AO programs observing a total of over 7,500 targets since May 2012. In the visible band, the images have a Strehl ratio of about 10% and achieve a contrast of up to 6 magnitudes at a separation of 1′′. The full width at half maximum achieved is 110–130 milli-arcseconds. I describe how Robo-AO is used to constrain the evolutionary models of low-mass pre-main-sequence stars by measuring resolved spectral energy distributions of stellar multiples in the visible band, more than doubling the current sample. I conclude this part with a discussion of possible future improvements to the Robo-AO system.
In the second part, I describe a study of magnetar kinematics using high-resolution near-infrared (NIR) AO imaging from the 10-meter Keck II telescope. Measuring the proper motions of five magnetars with a precision of up to 0.7 milli-arcsecond/yr, we have more than tripled the previously known sample of magnetar proper motions and shown that magnetar kinematics are equivalent to those of radio pulsars. We conclusively showed that SGR 1900+14 and SGR 1806-20 were ejected from the stellar clusters with which they were traditionally associated. The inferred kinematic ages of these two magnetars are 6±1.8 kyr and 650±300 yr, respectively. These ages are a factor of three to four greater than their respective characteristic ages. The calculated braking index is close to unity, as compared to three for the vacuum dipole model and the 2.5–2.8 measured for young pulsars. I conclude this section by describing a search for NIR counterparts of new magnetars and the future promise of polarimetric investigation of magnetars’ NIR emission mechanism.
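The link between the age discrepancy and the braking index follows from the standard pulsar spin-down relations (textbook relations, not reproduced from the thesis):

```latex
% True spin-down age for braking index n, assuming birth period P_0 << P:
t \;\simeq\; \frac{P}{(n-1)\,\dot{P}},
\qquad
\tau_c \;\equiv\; \frac{P}{2\dot{P}} \quad \text{(the characteristic age, i.e. } n = 3\text{)}
% Hence t/\tau_c \simeq 2/(n-1): a kinematic age 3--4 times \tau_c
% implies n well below the vacuum-dipole value of 3, consistent with
% the near-unity braking index quoted above.
```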
Abstract:
Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. There are a very limited number of moving mesh methods designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the Sine-Gordon equation are presented.
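For orientation, the fixed-mesh variational integrator that the r-adaptive schemes above generalize (by additionally letting the mesh points move) can be sketched for the sine-Gordon equation. This is an illustrative baseline on a uniform periodic mesh, not the thesis's scheme:

```python
import numpy as np

def sine_gordon_step(u_prev, u_curr, dt, dx):
    """One step of the variational (leapfrog) integrator for the
    sine-Gordon equation u_tt = u_xx - sin(u) on a uniform mesh with
    periodic boundaries; derived from a discrete Lagrangian, it is
    the fixed-mesh baseline for the r-adaptive schemes (sketch)."""
    # centered discrete Laplacian, periodic boundary conditions
    lap = (np.roll(u_curr, -1) - 2.0 * u_curr + np.roll(u_curr, 1)) / dx**2
    # discrete Euler--Lagrange update: three-level leapfrog in time
    return 2.0 * u_curr - u_prev + dt**2 * (lap - np.sin(u_curr))
```

An r-adaptive version would carry the mesh positions as additional degrees of freedom, updated either by a coupled mesh equation or, as in the second method above, by including the pseudo-particle mesh dynamics in the variational principle itself.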
In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
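As a point of reference, the standard Monte Carlo estimator that the two methods above improve upon can be sketched for a linear single-degree-of-freedom oscillator under discretized Gaussian white noise. All parameters here are illustrative; note that each sample already involves one uncertain noise value per time step, which is exactly the high-dimensional setting described above:

```python
import numpy as np

def first_excursion_prob_mc(n_samples=500, n_steps=500, dt=0.01,
                            wn=2 * np.pi, zeta=0.05, sigma=1.0,
                            limit=3.0, seed=0):
    """Standard Monte Carlo estimate of the first excursion probability
    P( max_t |x(t)| > limit ) for a linear SDOF oscillator
    x'' + 2*zeta*wn*x' + wn^2*x = f(t), with f(t) discretized Gaussian
    white noise (illustrative parameters, not from the thesis)."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_samples):
        # one uncertain parameter per time step: the source of the
        # problem's high dimensionality
        f = rng.normal(0.0, sigma / np.sqrt(dt), size=n_steps)
        x, v = 0.0, 0.0
        exceeded = False
        for k in range(n_steps):
            a = f[k] - 2.0 * zeta * wn * v - wn**2 * x
            v += a * dt
            x += v * dt
            if abs(x) > limit:
                exceeded = True
                break
        failures += exceeded
    return failures / n_samples
```

The cost of this estimator grows rapidly as the failure probability shrinks (roughly as 1/p samples for a given relative error), which is the regime the importance-sampling-style methods developed in the dissertation are designed to handle.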
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process it can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.