15 results for Fictional space
in CaltechTHESIS
Abstract:
The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their respective spatial extents, to constrain their temporal longevity, and to develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation, whereby we introduce an empirical algorithm to correct for interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their associated temporal evolution; here we use observations from Long Valley Caldera (California) as our test bed. In the third chapter I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw=8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies frictional properties associated with inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography. The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry only, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M=8.1 earthquake appears to be more highly coupled.
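For readers unfamiliar with such empirical corrections, the simplest variant regresses interferometric phase against topographic elevation over pixels assumed free of deformation and removes the fitted, elevation-correlated (stratified) delay. The sketch below illustrates only that generic idea, with hypothetical names, and is not the specific algorithm developed in the thesis.

```python
import numpy as np

def remove_stratified_delay(phase, elevation, mask=None):
    """Remove an elevation-correlated (stratified) tropospheric delay from an
    unwrapped interferogram by linear regression (illustrative only)."""
    if mask is None:
        mask = np.isfinite(phase) & np.isfinite(elevation)
    a, b = np.polyfit(elevation[mask], phase[mask], 1)   # fit phase = a*elevation + b
    # Subtract the fitted trend everywhere; the residual is interpreted as
    # deformation plus turbulent (non-stratified) atmosphere.
    return phase - (a * elevation + b)

# Synthetic check: a purely elevation-correlated phase screen is removed entirely.
dem = np.random.default_rng(0).uniform(0.0, 2000.0, (128, 128))   # elevation (m)
ifg = 2.3e-3 * dem + 0.5                                          # phase (rad)
print(np.allclose(remove_stratified_delay(ifg, dem), 0.0, atol=1e-6))   # True
```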
Abstract:
This thesis presents a concept for ultra-lightweight deformable mirrors based on a thin substrate of optical surface quality coated with continuous active piezopolymer layers that provide modes of actuation and shape correction. This concept eliminates any kind of stiff backing structure for the mirror surface and exploits micro-fabrication technologies to provide a tight integration of the active materials into the mirror structure, avoiding actuator print-through effects. Proof-of-concept, 10-cm-diameter mirrors with a low areal density of about 0.5 kg/m² have been designed, built, and tested to measure their shape-correction performance and to verify the models used for design. The low-cost manufacturing scheme uses replication techniques and strives to minimize residual stresses that cause the optical figure to deviate from the master mandrel. It does not require precision tolerancing, is lightweight, and is therefore potentially scalable to larger diameters for use in large, modular space telescopes. Other potential applications for such a laminate could include ground-based mirrors for solar energy collection, adaptive optics for atmospheric turbulence, laser communications, and other shape-control applications.
The immediate application for these mirrors is the Autonomous Assembly and Reconfiguration of a Space Telescope (AAReST) mission, which is a university mission under development by Caltech, the University of Surrey, and JPL. The design concept, fabrication methodology, material behaviors and measurements, mirror modeling, mounting and control electronics design, shape control experiments, predictive performance analysis, and remaining challenges are presented herein. The experiments have validated numerical models of the mirror, and the mirror models have been used within a model of the telescope in order to predict the optical performance. A demonstration of this mirror concept, along with other new telescope technologies, is planned to take place during the AAReST mission.
Abstract:
The concept of a "projection function" in a finite-dimensional real or complex normed linear space H (the function P_M which carries every element into the closest element of a given subspace M) is set forth and examined.
If dim M = dim H - 1, then P_M is linear. If P_N is linear for all k-dimensional subspaces N, where 1 ≤ k < dim M, then P_M is linear.
The projective bound Q, defined to be the supremum of the operator norm of P_M over all subspaces, is in the range 1 ≤ Q < 2, and these limits are the best possible. For norms with Q = 1, P_M is always linear, and a characterization of those norms is given.
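In symbols, the objects just defined can be written as follows (a restatement of the abstract's definitions, with the closest element assumed unique):

```latex
% Nearest-point ("projection") map onto a subspace M of (H, \|\cdot\|):
P_M(x) = \operatorname*{arg\,min}_{m \in M} \|x - m\|, \qquad x \in H.

% Projective bound of the norm: the supremum of the operator norm of P_M
% over all subspaces M of H,
Q = \sup_{M} \; \sup_{x \neq 0} \frac{\|P_M(x)\|}{\|x\|}, \qquad 1 \le Q < 2.
```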
If H also has an inner product (defined independently of the norm), so that a dual norm can be defined, then when P_M is linear its adjoint P_M^H is the projection onto (kernel P_M)^⊥ by the dual norm. The projective bounds of a norm and its dual are equal.
The notion of a pseudo-inverse F^+ of a linear transformation F is extended to non-Euclidean norms. The distance from F to the set of linear transformations G of lower rank (in the sense of the operator norm ∥F - G∥) is c/∥F^+∥, where c = 1 if the range of F fills its space, and 1 ≤ c < Q otherwise. The norms on both domain and range spaces have Q = 1 if and only if (F^+)^+ = F for every F. This condition is also sufficient to prove that (F^+)^H = (F^H)^+, where the latter pseudo-inverse is taken using dual norms.
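For the Euclidean norm, where Q = 1 and c = 1 whenever the range of F fills its space, the distance formula above reduces to the familiar fact that the nearest lower-rank matrix lies at spectral-norm distance equal to the smallest singular value, i.e., 1/∥F^+∥. The snippet below is a numerical illustration of that special case only, not of the general non-Euclidean result.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((5, 5))                     # invertible with probability 1

sigma = np.linalg.svd(F, compute_uv=False)
dist_to_lower_rank = sigma[-1]                      # Eckart-Young: smallest singular value
pinv_norm = np.linalg.norm(np.linalg.pinv(F), 2)    # spectral norm of F^+

# c = 1 here because the range of F fills its space (F is invertible).
print(np.isclose(dist_to_lower_rank, 1.0 / pinv_norm))   # True
```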
In all results, the real and complex cases are handled in a completely parallel fashion.
Abstract:
The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
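As a toy illustration of the cascade noted above (a Kalman filter feeding a bounded, bang-bang style controller acting against an Ornstein-Uhlenbeck disturbance), the following sketch uses arbitrary scalar parameters that are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, theta, sigma_w, sigma_v = 0.1, 0.5, 0.3, 0.2   # arbitrary toy parameters
u_max = 0.4                                        # bound on the control (thrust) deviation

a = np.exp(-theta * dt)                            # OU transition factor over one step
q = sigma_w**2 * (1.0 - a**2) / (2.0 * theta)      # discrete-time process-noise variance

x, x_hat, P = 0.0, 0.0, 1.0                        # true state, estimate, estimate variance
for _ in range(200):
    u = -u_max * np.sign(x_hat)                    # bang-bang style correction from the estimate only
    x = a * x + u * dt + rng.normal(0.0, np.sqrt(q))   # true OU dynamics plus control
    z = x + rng.normal(0.0, sigma_v)               # noisy partial observation of the state
    # Kalman filter: predict, then update.
    x_hat = a * x_hat + u * dt
    P = a * P * a + q
    K = P / (P + sigma_v**2)
    x_hat += K * (z - x_hat)
    P *= (1.0 - K)

print(f"final state {x:+.3f}, final estimate {x_hat:+.3f}")
```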
Abstract:
Motivated by recent Mars Science Laboratory (MSL) results, in which the ablation rate of the PICA heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Due to the difficulties in performing flight tests in the hypervelocity regime, a new ground testing facility called the vertical expansion tunnel (VET) is proposed. The adverse effects from secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure and the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be utilized in hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were determined: 1) a reduction in the number of equations that need to be solved to model a reacting flow; 2) a reduction in the stiffness of the system of equations to be solved; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, along with the ability to use the same table for multiple simulations.
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures within a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases created is provided. This link, combined with a detailed investigation into a reacting pyrolysis gas mixture, allows a much-needed consistent and thorough description of many of the physical phenomena occurring in a PICA heatshield, and their implications, to be presented.
Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.
Abstract:
DC and transient measurements of space-charge-limited currents through alloyed and symmetrical n⁺-ν-n⁺ structures made of nominally 75 kΩ·cm ν-type silicon are studied before and after the introduction of defects by 14 MeV neutron irradiation. In the transient measurements, the current response to a large turn-on voltage step is analyzed. Right after the voltage step is applied, the current transient reaches a value which we shall call the "initial current". At longer times, the transient current decays from this initial value if traps are present.
Before the irradiation, the initial current density-voltage characteristics J(V) agree quantitatively with the theory of trap-free space-charge-limited current in solids. We obtain for the electron mobility a temperature dependence which indicates that scattering due to impurities is weak, as expected for the high-purity silicon used. The drift velocity-field relationships for electrons at room temperature and 77 K, derived from the initial current density-voltage characteristics, are shown to fit the relationships obtained with other methods by other workers. The transient current response for t > 0 remains practically constant at the initial value, thus indicating negligible trapping.
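For reference, the trap-free theory invoked here is commonly summarized by the Mott-Gurney relation for space-charge-limited current through a solid of thickness L (quoted as standard background; the thesis analysis itself extends to field-dependent drift velocities):

```latex
J = \frac{9}{8}\,\varepsilon \mu \frac{V^{2}}{L^{3}}
```

with ε the permittivity, μ the (low-field) carrier mobility, V the applied voltage, and L the sample thickness; a transient that decays below this trap-free "initial current" then signals carrier trapping.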
Measurement of the initial (trap-free) current density-voltage characteristics after the irradiation indicates that the drift velocity-field relationship of electrons in silicon is affected by the radiation only at low temperature in the low field range. The effect is not sufficiently pronounced to be readily analyzed and no formal description of it is offered. In the transient response after irradiation for t > 0, the current decays from its initial value, thus revealing the presence of traps. To study these traps, in addition to transient measurements, the DC current characteristics were measured and shown to follow the theory of trap-dominated space-charge-limited current in solids. This theory was applied to a model consisting of two discrete levels in the forbidden band gap. Calculations and experiments agreed and the capture cross-sections of the trapping levels were obtained. This is the first experimental case known to us through which the flow of space-charge-limited current is so simply representable.
These results demonstrate the sensitivity of space-charge-limited current flow as a tool to detect traps and changes in the drift velocity-field relationship of carriers caused by radiation. They also establish that devices based on the mode of space-charge-limited current flow will be affected considerably by any type of radiation capable of introducing traps. This point has generally been overlooked so far, but is obviously quite significant.
Abstract:
Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.
Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are compared with actual traverses to test their validity.
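The least-squares procedure referred to above is, in textbook form, Geiger's method: arrival-time residuals are linearized about a trial hypocenter and the normal equations are solved iteratively. The sketch below (hypothetical names, uniform-velocity medium) illustrates only that generic idea; the thesis's multiregional travel times and crustal corrections are not reproduced.

```python
import numpy as np

def locate(stations, t_obs, v=6.0, n_iter=15):
    """Geiger's method with a homogeneous velocity model (illustrative only).

    stations : (n, 3) array of station coordinates (km)
    t_obs    : (n,) observed arrival times (s)
    Returns the model vector (x, y, z, origin_time).
    """
    m = np.array([0.0, 0.0, 10.0, 0.0])   # trial hypocenter; nonzero depth keeps the
                                          # depth derivative from vanishing at the start
    for _ in range(n_iter):
        d = np.linalg.norm(stations - m[:3], axis=1)      # source-station distances
        r = t_obs - (m[3] + d / v)                        # travel-time residuals
        # Jacobian of predicted arrival times w.r.t. (x, y, z, t0).
        G = np.column_stack([-(stations - m[:3]) / (v * d[:, None]),
                             np.ones(len(t_obs))])
        dm, *_ = np.linalg.lstsq(G, r, rcond=None)        # least-squares update
        m += dm
    return m

# Synthetic test: noise-free picks from 8 surface stations; the known
# hypocenter (10, -5, 12 km, origin time 3 s) should be recovered approximately.
rng = np.random.default_rng(0)
sta = rng.uniform(-50.0, 50.0, (8, 3)); sta[:, 2] = 0.0
true = np.array([10.0, -5.0, 12.0, 3.0])
t_syn = true[3] + np.linalg.norm(sta - true[:3], axis=1) / 6.0
print(np.round(locate(sta, t_syn), 3))
```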
It is shown that several crustal phases provide enough control to obtain good depth solutions for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of the European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.
A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.
Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on the use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall on the order of one hundred-thousandth of the length of the domain) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. Further, as demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
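As a minimal illustration of implicit BDF time-marching of the kind invoked above (not the thesis's ADI-based Navier-Stokes scheme), the sketch below applies BDF2 to a stiff linear model problem with a step size far beyond the explicit stability limit; the update nonetheless remains stable, mirroring the quasi-unconditional stability discussed in the abstract.

```python
import numpy as np

# Stiff linear model problem y' = A y, with widely separated decay rates.
A = np.diag([-1.0, -1000.0])
y0 = np.array([1.0, 1.0])

dt, n_steps = 0.05, 200        # dt is ~25x the forward-Euler stability limit of 2/1000
I = np.eye(2)

# BDF2: (3/2) y_{n+1} - 2 y_n + (1/2) y_{n-1} = dt * A y_{n+1},
# so each step solves (3/2 I - dt A) y_{n+1} = 2 y_n - (1/2) y_{n-1}.
lhs = 1.5 * I - dt * A
y_prev = y0
y_curr = np.exp(np.diag(A) * dt) * y0   # bootstrap the two-step method with the exact solution
for _ in range(n_steps):
    y_prev, y_curr = y_curr, np.linalg.solve(lhs, 2.0 * y_curr - 0.5 * y_prev)

print("bounded, decaying solution:", y_curr)   # no blow-up despite the large step
```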
Abstract:
An exciting frontier in quantum information science is the integration of otherwise "simple" quantum elements into complex quantum networks. The laboratory realization of even small quantum networks enables the exploration of physical systems that have not heretofore existed in the natural world. Within this context, there is active research to achieve nanoscale quantum optical circuits, in which atoms are trapped near nanoscopic dielectric structures and "wired" together by photons propagating through the circuit elements. Single atoms and atomic ensembles endow otherwise linear optical circuits with quantum functionality and thereby enable the capability of building quantum networks component by component. Toward these goals, we have experimentally investigated three different systems, from conventional to rather exotic: free-space atomic ensembles, optical nanofibers, and photonic crystal waveguides. First, we demonstrate measurement-induced quadripartite entanglement among four quantum memories. Next, following the landmark realization of a nanofiber trap, we demonstrate the implementation of a state-insensitive, compensated nanofiber trap. Finally, we reach more exotic systems based on photonic crystal devices. Beyond conventional topologies of resonators and waveguides, new opportunities emerge from the powerful capabilities of dispersion and modal engineering in photonic crystal waveguides. We have implemented an integrated optical circuit with a photonic crystal waveguide capable of both trapping atoms and interfacing them with guided photons, and have observed the collective effect of superradiance mediated by the guided photons. These advances provide an important capability for engineered light-matter interactions, enabling explorations of novel quantum transport and quantum many-body phenomena.
Abstract:
We will prove that, for a 2- or 3-component L-space link L, HFL^- is completely determined by the multi-variable Alexander polynomials of all the sub-links of L, as well as the pairwise linking numbers of all the components of L. We will also give some restrictions on the multi-variable Alexander polynomial of an L-space link. Finally, we use the methods in this paper to prove a conjecture of Yajing Liu classifying all 2-bridge L-space links.
Abstract:
The propagation of cosmic rays through interstellar space has been investigated with the view of determining which particles can traverse astronomical distances without serious loss of energy. The principal mechanism of energy loss for high-energy particles is interaction with radiation. It is found that high-energy (10^13-10^18 eV) electrons drop to one-tenth of their energy within 10^8 light years in the radiation density of the galaxy, and that protons are not significantly affected over this distance. The origin of the cosmic rays is not known, so various hypotheses as to their origin are examined. If the source is near a star, it is found that the interaction of electrons and photons with the stellar radiation field and the interaction of electrons with the stellar magnetic field limit the amount of energy which these particles can carry away from the star. However, the interaction is not strong enough to affect the energy of protons or light nuclei appreciably. The chief uncertainty in the results is due to the possible existence of a general galactic magnetic field. The main conclusion reached is that if there is a general galactic magnetic field, then the primary spectrum has very few photons and only low-energy (< 10^13 eV) electrons, and the higher-energy particles are primarily protons regardless of the source mechanism; if there is no general galactic magnetic field, then the source of cosmic rays accelerates mainly protons and the present rate of production is much less than that in the past.
Abstract:
In this thesis an extensive study is made of the set P of all paranormal operators in B(H), the set of all bounded endomorphisms on the complex Hilbert space H. T ∈ B(H) is paranormal if for each z contained in the resolvent set of T, d(z, σ(T))·‖(T - zI)^{-1}‖ = 1, where d(z, σ(T)) is the distance from z to σ(T), the spectrum of T. P contains the set N of normal operators and P contains the set of hyponormal operators. However, P is contained in L, the set of all T ∈ B(H) such that the convex hull of the spectrum of T is equal to the closure of the numerical range of T. Thus, N ⊆ P ⊆ L.
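For a normal operator the defining identity holds exactly, since then ‖(T - zI)^{-1}‖ = 1/d(z, σ(T)); the snippet below is only a finite-dimensional numerical illustration of that fact and is not part of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
eigs = rng.standard_normal(5) + 1j * rng.standard_normal(5)
U, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
T = U @ np.diag(eigs) @ U.conj().T           # a normal matrix with spectrum `eigs`

z = 2.0 + 2.0j                               # a point in the resolvent set (almost surely)
dist = np.min(np.abs(eigs - z))              # d(z, sigma(T))
resolvent_norm = np.linalg.norm(np.linalg.inv(T - z * np.eye(5)), 2)

print(np.isclose(dist * resolvent_norm, 1.0))   # True: the identity holds for normal T
```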
If the uniform operator (norm) topology is placed on B(H), then the relative topological properties of N, P, and L can be discussed. In Section IV, it is shown that: 1) N, P, and L are arc-wise connected and closed; 2) N, P, and L are nowhere dense subsets of B(H) when dim H ≥ 2; 3) N = P when dim H < ∞; 4) N is a nowhere dense subset of P when dim H = ∞; 5) P is not a nowhere dense subset of L when dim H < ∞; and 6) it is not known whether P is a nowhere dense subset of L when dim H = ∞.
The spectral properties of paranormal operators are of current interest in the literature. Putnam [22, 23] has shown that certain points on the boundary of the spectrum of a paranormal operator are either normal eigenvalues or normal approximate eigenvalues. Stampfli [26] has shown that a hyponormal operator with countable spectrum is normal. However, in Theorem 3.3, it is shown that a paranormal operator T with countable spectrum can be written as the direct sum, N ⊕ A, of a normal operator N with σ(N) = σ(T) and of an operator A with σ(A) a subset of the derived set of σ(T). It is then shown that A need not be normal. If we restrict the countable spectrum of T ∈ P to lie on a C^2-smooth rectifiable Jordan curve G_0, then T must be normal [see Theorem 3.5 and its Corollary]. If T is a scalar paranormal operator with countable spectrum, then in order to conclude that T is normal the condition σ(T) ⊆ G_0 can be relaxed [see Theorem 3.6]. In Theorem 3.7 it is then shown that the above result is not true when T is not assumed to be scalar. It was then conjectured that if T ∈ P with σ(T) ⊆ G_0, then T is normal. The proof of Theorem 3.5 relies heavily on the assumption that T has countable spectrum and cannot be generalized. However, the corollary to Theorem 3.9 states that if T ∈ P with σ(T) ⊆ G_0, then T has a non-trivial lattice of invariant subspaces. After the completion of most of the work on this thesis, Stampfli [30, 31] published a proof that a paranormal operator T with σ(T) ⊆ G_0 is normal. His proof uses some rather deep results concerning numerical ranges, whereas the proof of Theorem 3.5 uses relatively elementary methods.
Abstract:
Let L be the algebra of all linear transformations on an n-dimensional vector space V over a field F and let A, B ∈ L. Let A_{i+1} = A_iB - BA_i, i = 0, 1, 2, …, with A_0 = A. Let f_k(A, B; σ) = A_{2K+1} - σ_1A_{2K-1} + σ_2A_{2K-3} - … + (-1)^K σ_K A_1, where σ = (σ_1, σ_2, …, σ_K), the σ_i belong to F, and K = k(k-1)/2. Taussky and Wielandt [Proc. Amer. Math. Soc., 13(1962), 732-735] showed that f_n(A, B; σ) = 0 if σ_i is the i-th elementary symmetric function of (β_r - β_s)^2, 1 ≤ r < s ≤ n, i = 1, 2, …, N, with N = n(n-1)/2, where the β_r are the characteristic roots of B. In this thesis we discuss relations involving f_k(X, Y; σ) where X, Y ∈ L and 1 ≤ k < n. We show: 1. If F is infinite and if for each X ∈ L there exists σ so that f_k(A, X; σ) = 0, where 1 ≤ k < n, then A is a scalar transformation. 2. If F is algebraically closed, a necessary and sufficient condition that there exists a basis of V with respect to which the matrices of A and B are both in block upper triangular form, where the blocks on the diagonals are either one- or two-dimensional, is that certain products X_1X_2…X_r belong to the radical of the algebra generated by A and B over F, where X_i has the form f_2(A, P(A,B); σ), for all polynomials P(x, y). We partially generalize this to the case where the blocks have dimensions ≤ k. 3. If A and B generate L, if the characteristic of F does not divide n, and if there exists σ so that f_k(A, B; σ) = 0 for some k with 1 ≤ k < n, then the characteristic roots of B belong to the splitting field of g_k(w; σ) = w^{2K+1} - σ_1w^{2K-1} + σ_2w^{2K-3} - … + (-1)^K σ_K w over F. We use this result to prove a theorem involving a generalized form of property L [cf. Motzkin and Taussky, Trans. Amer. Math. Soc., 73(1952), 108-114]. 4. We also give mild generalizations of results of McCoy [Amer. Math. Soc. Bull., 42(1936), 592-600] and Drazin [Proc. London Math. Soc., 1(1951), 222-231].
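The Taussky-Wielandt identity f_n(A, B; σ) = 0 quoted above lends itself to a quick numerical spot-check; the script below does so for random complex 3 x 3 matrices (n = 3, so N = 3) and is offered purely as an illustration of the statement, with hypothetical variable names.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

beta = np.linalg.eigvals(B)
# sigma_i = i-th elementary symmetric function of (beta_r - beta_s)^2, 1 <= r < s <= n.
diffs_sq = [(beta[r] - beta[s]) ** 2 for r, s in combinations(range(n), 2)]
N = len(diffs_sq)                          # N = n(n-1)/2
coeffs = np.poly(diffs_sq)                 # prod(w - d_j) = w^N - e1 w^(N-1) + e2 w^(N-2) - ...
sigma = [(-1) ** i * coeffs[i] for i in range(1, N + 1)]

# Iterated commutators: A_0 = A, A_{i+1} = A_i B - B A_i.
A_seq = [A]
for _ in range(2 * N + 1):
    A_seq.append(A_seq[-1] @ B - B @ A_seq[-1])

# f_n(A, B; sigma) = A_{2N+1} - sigma_1 A_{2N-1} + ... + (-1)^N sigma_N A_1.
f = A_seq[2 * N + 1].copy()
for i in range(1, N + 1):
    f = f + (-1) ** i * sigma[i - 1] * A_seq[2 * N + 1 - 2 * i]

print(np.linalg.norm(f))                   # ~ 0 up to rounding error
```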
Abstract:
Let M be an Abelian W*-algebra of operators on a Hilbert space H. Let M_0 be the set of all linear, closed, densely defined transformations in H which commute with every unitary operator in the commutant M' of M. A well-known result of R. Pallu de la Barrière states that if φ is a normal positive linear functional on M, then φ is of the form T → (Tx, x) for some x in H, where T is in M. An elementary proof of this result is given, using only those properties which are consequences of the fact that Re M is a Dedekind complete Riesz space with plenty of normal integrals. The techniques used lead to a natural construction of the class M_0, and an elementary proof is given of the fact that a positive self-adjoint transformation in M_0 has a unique positive square root in M_0. It is then shown that when the algebraic operations are suitably defined, M_0 becomes a commutative algebra. If Re M_0 denotes the set of all self-adjoint elements of M_0, then it is proved that Re M_0 is a Dedekind complete, universally complete Riesz space which contains Re M as an order dense ideal. A generalization of the result of R. Pallu de la Barrière is obtained for the Riesz space Re M_0 which characterizes the normal integrals on the order dense ideals of Re M_0. It is then shown that Re M_0 may be identified with the extended order dual of Re M, and that Re M_0 is perfect in the extended sense.
Some secondary questions related to the Riesz space Re M are also studied. In particular, it is shown that Re M is a perfect Riesz space, and that every integral is normal under the assumption that every decomposition of the identity operator has non-measurable cardinality. The presence of atoms in Re M is examined briefly, and it is shown that Re M is finite-dimensional if and only if every order bounded linear functional on Re M is a normal integral.
Abstract:
In a paper published in 1961, L. Cesari [1] introduces a method which extends certain earlier existence theorems of Cesari and Hale ([2] to [6]) for perturbation problems to strictly nonlinear problems. Various authors ([1], [7] to [15]) have now applied this method to nonlinear ordinary and partial differential equations. The basic idea of the method is to use the contraction principle to reduce an infinite-dimensional fixed point problem to a finite-dimensional problem which may be attacked using the methods of fixed point indexes.
The following is my formulation of the Cesari fixed point method:
Let B be a Banach space and let S be a finite-dimensional linear subspace of B. Let P be a projection of B onto S and suppose Γ ⊆ B is such that PΓ is compact and such that for every x in PΓ, P^{-1}x ∩ Γ is closed. Let W be a continuous mapping from Γ into B. The Cesari method gives sufficient conditions for the existence of a fixed point of W in Γ.
Let I denote the identity mapping in B. Clearly y = Wy for some y in Γ if and only if both of the following conditions hold:
(i) Py = PWy.
(ii) y = (P + (I - P)W)y.
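A one-line verification of this equivalence, added for readability:

```latex
% Forward: if y = Wy, applying P gives (i), and then
%   y = Py + (I - P)y = Py + (I - P)Wy,
% which is (ii).
% Converse: (ii) gives (I - P)y = (I - P)Wy; combining with (i),
%   y = Py + (I - P)y = PWy + (I - P)Wy = Wy.
```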
Definition. The Cesari fixed point method applies to (Γ, W, P) if and only if the following three conditions are satisfied:
(1) For each x in PΓ, P + (I - P)W is a contraction from P^{-1}x ∩ Γ into itself. Let y(x) be that element (uniqueness follows from the contraction principle) of P^{-1}x ∩ Γ which satisfies the equation y(x) = Py(x) + (I - P)Wy(x).
(2) The function y just defined is continuous from PΓ into B.
(3) There are no fixed points of PWy on the boundary of PΓ, so that the (finite-dimensional) fixed point index i(PWy, int PΓ) is defined.
Definition. If the Cesari fixed point method applies to (Γ, W, P), then define i(Γ, W, P) to be the index i(PWy, int PΓ).
The three theorems of this thesis can now be easily stated.
Theorem 1 (Cesari). If i(Γ, W, P) is defined and i(Γ, W, P) ≠ 0, then there is a fixed point of W in Γ.
Theorem 2. Let the Cesari fixed point method apply to both (Γ, W, P_1) and (Γ, W, P_2). Assume that P_2P_1 = P_1P_2 = P_1 and assume that either of the following two conditions holds:
(1) For every b in B and every z in the range of P_2, we have that ‖b - P_2b‖ ≤ ‖b - z‖.
(2) P_2Γ is convex.
Then i(Γ, W, P_1) = i(Γ, W, P_2).
Theorem 3. If Ω is a bounded open set and W is a compact operator defined on Ω so that the (infinite-dimensional) Leray-Schauder index i_LS(W, Ω) is defined, and if the Cesari fixed point method applies to (Ω, W, P), then i(Ω, W, P) = i_LS(W, Ω).
Theorems 2 and 3 are proved using mainly a homotopy theorem and a reduction theorem for the finite-dimensional and the Leray-Schauder indexes. These and other properties of indexes will be listed before the theorem in which they are used.