9 results for New Order
in CaltechTHESIS
Abstract:
This thesis presents a new approach for the numerical solution of three-dimensional problems in elastodynamics. The new methodology, which is based on a recently introduced Fourier continuation (FC) algorithm for the solution of partial differential equations on the basis of accurate Fourier expansions of possibly non-periodic functions, enables fast, high-order solutions of the time-dependent elastic wave equation in a nearly dispersionless manner, subject to CFL constraints that scale only linearly with the spatial discretization. A new FC operator is introduced to treat Neumann and traction boundary conditions, and a block-decomposed (sub-patch) overset strategy is presented for the implementation of general, complex geometries in distributed-memory parallel computing environments. Our treatment of the elastic wave equation, which is formulated as a complex system of variable-coefficient PDEs that includes possibly heterogeneous and spatially varying material constants, represents the first fully realized three-dimensional extension of FC-based solvers to date. Challenges for three-dimensional elastodynamics simulations, such as the treatment of corners and edges in three-dimensional geometries, variable coefficients arising from physical configurations and/or the use of curvilinear coordinate systems, and the treatment of boundary conditions, are all addressed. The broad applicability of our new FC elasticity solver is demonstrated through application to realistic problems concerning seismic wave motion on three-dimensional topographies, as well as applications to non-destructive evaluation where, for the first time, we present three-dimensional simulations for comparison to experimental studies of guided-wave scattering by through-thickness holes in thin plates.
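To convey the core idea behind Fourier continuation, the following toy sketch (an illustrative cubic blend, not the FC(Gram) operator construction used in the thesis; all parameters are assumptions) extends a non-periodic function to a smoothly periodic one before applying FFT-based differentiation:

```python
import numpy as np

# Toy sketch of the Fourier-continuation idea: extend a non-periodic f
# on [0,1) so the result is smoothly periodic on a slightly longer
# interval, which tames the Gibbs error of naive FFT differentiation.
N, d = 128, 32                       # samples on [0,1); extension samples
h = 1.0 / N
x = np.arange(N) * h
f = np.exp(x) * np.sin(3 * x)        # deliberately non-periodic on [0,1)
exact = np.exp(x) * (np.sin(3 * x) + 3 * np.cos(3 * x))

# Naive FFT differentiation treats f as periodic -> large Gibbs error.
k = 2j * np.pi * np.fft.fftfreq(N, d=h)
df_naive = np.fft.ifft(k * np.fft.fft(f)).real

# Cubic Hermite bridge across the gap from (f(1-h), f') back to (f(0), f').
L = (d + 1) * h                      # gap between last sample and the wrap
s = np.arange(1, d + 1) / (d + 1)    # interior bridge coordinates
fp_r = (f[-1] - f[-2]) / h           # one-sided slope as x -> 1
fp_l = (f[1] - f[0]) / h             # one-sided slope at x = 0
bridge = ((2 * s**3 - 3 * s**2 + 1) * f[-1]
          + (s**3 - 2 * s**2 + s) * L * fp_r
          + (-2 * s**3 + 3 * s**2) * f[0]
          + (s**3 - s**2) * L * fp_l)

fext = np.concatenate([f, bridge])   # C^1-periodic on the extended grid
kext = 2j * np.pi * np.fft.fftfreq(N + d, d=h)
df_fc = np.fft.ifft(kext * np.fft.fft(fext)).real[:N]

print("naive FFT max error:", np.abs(df_naive - exact).max())
print("continued max error:", np.abs(df_fc - exact).max())
```

The actual FC(Gram) procedure matches many boundary values with Gram polynomials to reach high-order accuracy; the cubic blend above only illustrates why a smooth periodic extension restores the usefulness of Fourier differentiation.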
Abstract:
The question of finding variational principles for coupled systems of first-order partial differential equations is considered. Using a potential representation for solutions of the first-order system, a higher-order system is obtained. Existence of a variational principle follows if the original system can be transformed to a self-adjoint higher-order system. Existence of variational principles for all linear wave equations with constant coefficients having real dispersion relations is established. The method of adjoining some of the equations of the original system to a suitable Lagrangian function by the method of Lagrange multipliers is used to construct new variational principles for a class of linear systems; the equations used as side conditions must satisfy highly restrictive integrability conditions. In the more difficult nonlinear case, the system of two equations in two independent variables can be analyzed completely. For systems determined by two conservation laws, the side condition must be a conservation law in addition to satisfying the integrability conditions.
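As a schematic illustration of the multiplier construction (generic notation assumed here, not the specific systems treated in the thesis): given a base Lagrangian $L_0$ and a side condition $G = 0$ drawn from the original system, one forms
$$ L[u, \lambda] = L_0(u, u_x, u_t) + \lambda(x, t)\, G(u, u_x, u_t), $$
so that variation with respect to the multiplier $\lambda$ recovers $G = 0$, while variation with respect to $u$ must reproduce the remaining equations of the system; the integrability conditions mentioned above are precisely what make this second requirement attainable.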
Abstract:
Cosmic birefringence (CB), a rotation of the photon-polarization plane in vacuum, is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB-rotation angle larger than a degree.
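In terms of the Stokes parameters $Q$ and $U$ of the CMB polarization, a rotation of the polarization plane by an angle $\alpha$ (possibly dependent on the direction $\hat{n}$) acts as
$$ (Q \pm iU)'(\hat{n}) = e^{\mp 2i\alpha(\hat{n})}\,(Q \pm iU)(\hat{n}) $$
(up to sign convention), mixing E- and B-mode polarization and thereby generating the EB and TB correlations targeted by the estimators discussed next.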
In this thesis, we develop a minimum-variance estimator formalism for reconstructing a direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with incoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of the cross-correlation between CMB temperature and the rotation for detecting the CB signal for different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.
We then consider other parity-violating physics beyond the standard models, in particular a chiral inflationary gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance-limited experiment would be capable of detecting only a large parity violation. In the case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.
We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should begin to approach interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.
In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that PDFs are insensitive to details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of ionized bubbles.
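As a concrete illustration of these statistics (a minimal sketch using a synthetic two-phase field as a stand-in for a real EoR simulation; grid size, amplitudes, and binning are placeholder choices), the one-point PDF and the difference PDF at a chosen separation can be computed directly from a gridded brightness-temperature box:

```python
import numpy as np

# Minimal sketch: one-point PDF and difference PDF of a simulated 21-cm
# brightness-temperature box. A random two-phase field stands in for a
# real EoR simulation: ionized bubbles give dT_b = 0, neutral gas dT_b > 0.
rng = np.random.default_rng(0)
box = rng.normal(20.0, 5.0, size=(64, 64, 64))   # neutral signal, in mK
ionized = rng.random(box.shape) < 0.3            # toy ionized regions
box[ionized] = 0.0                               # ionized: no 21-cm signal

# One-point PDF: histogram of field values; the spike at zero tracks the
# ionized fraction, the rest reflects the neutral-gas distribution.
pdf, edges = np.histogram(box, bins=60, density=True)

# Difference PDF at separation r (here r = 5 cells along one axis):
# histogram of dT_b(x) - dT_b(x + r), sensitive to bubbles of size ~ r.
r = 5
diff = box - np.roll(box, r, axis=0)
dpdf, dedges = np.histogram(diff, bins=60, density=True)

print("ionized fraction (mass of the zero spike):", (box == 0.0).mean())
```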
Finally, we discuss prospects for related future investigations.
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model that is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) and propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to extend to multiple dimensions and to exploit different kinds of priors in the model, such as correlation, higher-order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed; the proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
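For intuition, the following small sketch (toy parameters, not code from the thesis) shows how a two-level nested geometry turns a handful of physical sensors into a much longer virtual array of correlation lags, which is exactly the resource the correlation-aware framework exploits:

```python
import numpy as np

# Sketch of the two-level nested-array geometry: n1 sensors at unit
# spacing plus n2 sensors at spacing (n1 + 1). Its difference set (the
# lags at which spatial correlations are observed) forms a filled
# virtual array of O(n1 * n2) consecutive lags.
def nested_array(n1, n2):
    dense = np.arange(1, n1 + 1)                 # level 1: unit spacing
    sparse = (n1 + 1) * np.arange(1, n2 + 1)     # level 2: wide spacing
    return np.concatenate([dense, sparse])

pos = nested_array(3, 3)                          # 6 physical sensors
lags = np.unique(pos[:, None] - pos[None, :])     # difference coarray
print(pos)    # [ 1  2  3  4  8 12]
print(lags)   # every integer lag from -11 to 11: 23 virtual sensors
```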
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.
For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (and potentially reinforced concrete shear-wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.
To predict whether a building will collapse in response to a given ground motion, we first extract the long-period components of the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building's natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building, which is a constant for two-dimensional buildings and a limit domain for three-dimensional buildings. If the filtered acceleration exceeds the building's capacity, the building is predicted to collapse; otherwise, it is expected to survive the ground motion.
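A minimal sketch of this procedure for the two-dimensional (scalar-capacity) case, using standard SciPy filtering; the filter order, cutoff frequency, capacity, and synthetic record below are placeholder values rather than those recommended by the thesis:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pfa_collapse_check(accel, dt, cutoff_hz, order, capacity):
    """Low-pass filter a ground-motion record and compare the peak
    filtered acceleration (PFA) with a scalar lateral capacity."""
    b, a = butter(order, cutoff_hz, btype="low", fs=1.0 / dt)
    filtered = filtfilt(b, a, accel)          # zero-phase filtering
    pfa = np.max(np.abs(filtered))
    return pfa > capacity, pfa

# Placeholder example: a synthetic decaying-sinusoid record at 100 Hz.
dt = 0.01
t = np.arange(0, 30, dt)
accel = 0.3 * 9.81 * np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)
collapse, pfa = pfa_collapse_check(accel, dt, cutoff_hz=1.0, order=4,
                                   capacity=0.25 * 9.81)
print(f"PFA = {pfa:.2f} m/s^2, collapse predicted: {collapse}")
```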
The parameters used in the PFA model, namely the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.
The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified against FEM simulations of 13 frame building models and 150 ground motion records.
Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground-motion intensity measure for collapse prediction. We compare PFA with the traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA performs best among all of these intensity measures.
We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.
Abstract:
This thesis describes the use of multiply-substituted stable isotopologues of carbonate minerals and methane gas to better understand how these environmentally significant minerals and gases form and are modified throughout their geological histories. Stable isotopes have a long tradition in earth science as a tool for providing quantitative constraints on how molecules, in or on the earth, formed in both the present and the past. Until recently, nearly all studies have measured only the bulk concentrations of stable isotopes in a phase or species. However, the abundances of various isotopologues within a phase, for example the concentrations of isotopologues with multiple rare isotopes (multiply substituted or 'clumped' isotopologues), also carry potentially useful information. Specifically, the abundances of clumped isotopologues in an equilibrated system are a function of temperature, and thus knowledge of their abundances can be used to calculate a sample's formation temperature. In this thesis, measurements of clumped isotopologues are made on both carbonate-bearing minerals and methane gas in order to better constrain the environmental and geological histories of various samples.
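Clumped-isotope abundances are conventionally reported as an excess relative to the stochastic (random) distribution of isotopes among isotopologues; schematically, for isotopologue $i$,
$$ \Delta_i = \left( \frac{R_i}{R_i^{*}} - 1 \right) \times 1000 \ \text{(per mil)}, $$
where $R_i$ is the measured abundance of isotopologue $i$ relative to the unsubstituted species and $R_i^{*}$ is the value expected for a stochastic distribution. At equilibrium, $\Delta_i$ decreases toward zero with increasing temperature, which is what makes these measurements a thermometer. (This is the standard community convention; the thesis's specific notation may differ.)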
Clumped-isotope-based measurements of ancient carbonate-bearing minerals, including apatites, have opened up paleotemperature reconstructions to a variety of systems and time periods. However, a critical issue when using clumped-isotope-based measurements to reconstruct ancient mineral formation temperatures is whether the samples being measured have faithfully recorded their original internal isotopic distributions. These original distributions can be altered, for example, by diffusion of atoms in the mineral lattice or through diagenetic reactions. Understanding these processes quantitatively is critical for the use of clumped isotopes to reconstruct past temperatures, quantify diagenesis, and calculate time-temperature burial histories of carbonate minerals. In order to help orient this part of the thesis, Chapter 2 provides a broad overview and history of clumped-isotope-based measurements in carbonate minerals.
In Chapter 3, the effects of elevated temperatures on a sample’s clumped-isotope composition are probed in both natural and experimental apatites (which contain structural carbonate groups) and calcites. A quantitative model is created that is calibrated by the experiments and consistent with the natural samples. The model allows for calculations of the change in a sample’s clumped isotope abundances as a function of any time-temperature history.
In Chapter 4, the effects of diagenesis on the stable isotopic compositions of apatites are explored on samples from a variety of sedimentary phosphorite deposits. Clumped isotope temperatures and bulk isotopic measurements from carbonate and phosphate groups are compared for all samples. These results demonstrate that samples have experienced isotopic exchange of oxygen atoms in both the carbonate and phosphate groups. A kinetic model is developed that allows for the calculation of the amount of diagenesis each sample has experienced and yields insight into the physical and chemical processes of diagenesis.
The thesis then shifts gears and turns its attention to clumped-isotope measurements of methane. Methane is a critical greenhouse gas, energy resource, and microbial metabolic product and substrate. Despite its importance both environmentally and economically, much about methane's formational mechanisms and the relative sources of methane to various environments remains poorly constrained. In order to add new constraints to our understanding of the formation of methane in nature, I describe the development and application of methane clumped-isotope measurements to environmental deposits of methane. To help orient the reader, a brief overview of the formation of methane in both high- and low-temperature settings is given in Chapter 5.
In Chapter 6, a method for the measurement of methane clumped isotopologues via mass spectrometry is described. This chapter demonstrates that the measurement is precise and accurate. Additionally, the measurement is calibrated experimentally such that measurements of methane clumped isotope abundances can be converted into equivalent formational temperatures. This study represents the first time that methane clumped isotope abundances have been measured at useful precisions.
In Chapter 7, the methane clumped-isotope method is applied to natural samples from a variety of settings. These settings include thermogenic gases formed and reservoired in shales, migrated thermogenic gases, biogenic gases, mixed biogenic and thermogenic gas deposits, and experimentally generated gases. In all cases, calculated clumped-isotope temperatures make geological sense as formation temperatures or as mixtures of high- and low-temperature gases. Based on these observations, we propose that the clumped-isotope temperature of an unmixed gas represents its formation temperature; this was neither an obvious nor an expected result and has important implications for how methane forms in nature. Additionally, these results demonstrate that methane clumped-isotope compositions provide valuable additional constraints for studying natural methane deposits.
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) a novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating-direction-implicit approach (ADI); 2) a fast explicit solver; 3) dispersionless spectral spatial discretizations; and 4) a domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy; previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall on the order of one hundred-thousandth of the length of the domain) was successfully tackled in a relatively short, approximately thirty-hour, single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. As demonstrated via a variety of numerical experiments in two and three dimensions, further, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
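For reference (a standard formula, not a derivation specific to this thesis), the second-order BDF scheme applied to a semi-discretized system $u_t = F(u)$ advances the solution implicitly via
$$ \frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\,\Delta t} = F\!\left(u^{n+1}\right), $$
with the higher-order BDF formulae extending the left-hand side to additional history levels; in the proposed methodology the resulting implicit systems are rendered linear-cost by the ADI-type directional factorization.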
Abstract:
Let $\{Z_n\}_{n=-\infty}^{\infty}$ be a stochastic process with state space $S_1 = \{0, 1, \ldots, D-1\}$. Such a process is called a chain of infinite order. The transitions of the chain are described by the functions
$$ Q_i(i^{(0)}) = P\left(Z_n = i \mid Z_{n-1} = i^{(0)}_1,\ Z_{n-2} = i^{(0)}_2,\ \ldots\right) \qquad (i \in S_1), $$
where $i^{(0)} = (i^{(0)}_1, i^{(0)}_2, \ldots)$ ranges over infinite sequences from $S_1$. If $i^{(n)} = (i^{(n)}_1, i^{(n)}_2, \ldots)$ for $n = 1, 2, \ldots$, then $i^{(n)} \to i^{(0)}$ means that for each $k$, $i^{(n)}_k = i^{(0)}_k$ for all $n$ sufficiently large.
Given functions $Q_i(i^{(0)})$ such that
(i) $0 \le Q_i(i^{(0)}) \le \xi < 1$,
(ii) $\sum_{i=0}^{D-1} Q_i(i^{(0)}) \equiv 1$,
(iii) $Q_i(i^{(n)}) \to Q_i(i^{(0)})$ whenever $i^{(n)} \to i^{(0)}$,
we prove the existence of a stationary chain of infinite order $\{Z_n\}$ whose transitions are given by
$$ P\left(Z_n = i \mid Z_{n-1}, Z_{n-2}, \ldots\right) = Q_i(Z_{n-1}, Z_{n-2}, \ldots) $$
with probability 1. The method also yields stationary chains $\{Z_n\}$ for which (iii) does not hold but whose transition probabilities are, in a sense, "locally Markovian." These and similar results extend a paper by T. E. Harris [Pac. J. Math. 5 (1955), 707-724].
Included is a new proof of the existence and uniqueness of a stationary absolute distribution for an $N$th-order Markov chain in which all transitions are possible. This proof allows us to achieve our main results without the use of limit-theorem techniques.
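To illustrate the finite-memory special case in the last paragraph (a toy example, not from the thesis): a second-order chain on $\{0,1\}$ with all transitions possible can be lifted to a first-order chain on pairs, whose unique stationary distribution is the eigenvector of the lifted transition matrix at eigenvalue 1:

```python
import numpy as np
from itertools import product

# Toy example: a 2nd-order Markov chain on {0, 1}, lifted to a 1st-order
# chain on pairs of consecutive symbols. With all transitions possible,
# the lifted chain has a unique stationary distribution.
states = list(product([0, 1], repeat=2))
q = {  # hypothetical P(Z_n = 1 | previous two symbols), all in (0, 1)
    (0, 0): 0.2, (0, 1): 0.6, (1, 0): 0.4, (1, 1): 0.8,
}
T = np.zeros((4, 4))
for a, s in enumerate(states):           # s = (Z_{n-2}, Z_{n-1})
    for b, t in enumerate(states):       # t = (Z_{n-1}, Z_n)
        if t[0] == s[1]:                 # histories must overlap
            T[a, b] = q[s] if t[1] == 1 else 1.0 - q[s]
# Stationary distribution: left eigenvector of T with eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
print(dict(zip(states, np.round(pi, 4))))
```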
Solar flare particle propagation: comparison of a new analytic solution with spacecraft measurements
Abstract:
A new analytic solution has been obtained to the complete Fokker-Planck equation for solar flare particle propagation, including the effects of convection, energy change, corotation, and diffusion with $\kappa_r = \text{const}$ and $\kappa_\theta \propto r^2$. It is assumed that the particles are injected impulsively at a single point in space, and that a boundary exists beyond which the particles are free to escape. Several solar flare particle events have been observed with the Caltech Solar and Galactic Cosmic Ray Experiment aboard OGO-6. Detailed comparisons of the predictions of the new solution with these observations of 1-70 MeV protons show that the model adequately describes both the rise and decay times, indicating that $\kappa_r = \text{const}$ is a better description of conditions inside 1 AU than is $\kappa_r \propto r$. With an outer boundary at 2.7 AU, a solar wind velocity of 400 km/sec, and a radial diffusion coefficient $\kappa_r \approx 2$-$8 \times 10^{20}\ \mathrm{cm^2/sec}$, the model gives reasonable fits to the time profiles of 1-10 MeV protons from "classical" flare-associated events. It is not necessary to invoke a scatter-free region near the sun in order to reproduce the fast rise times observed for directly-connected events. The new solution also yields a time evolution of the vector anisotropy which agrees well with previously reported observations.
In addition, the new solution predicts that, during the decay phase, a typical convex spectral feature initially at energy $T_0$ will move to lower energies at an exponential rate given by $T_{\text{kink}} = T_0 \exp(-t/\tau_{\text{kink}})$. Assuming adiabatic deceleration and a boundary at 2.7 AU, the solution yields $\tau_{\text{kink}} \approx 100$ h, which is faster than the measured $\sim$200 h time constant and slower than the adiabatic rate of $\sim$78 h at 1 AU. Two possible explanations are that the boundary is at $\sim$5 AU or that some other energy-change process is operative.
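For context, a representative spherically symmetric form of the transport (Fokker-Planck) equation for solar particles, written here in standard textbook notation rather than the thesis's exact variables, combines the diffusion, convection, and adiabatic-deceleration effects mentioned above:
$$ \frac{\partial U}{\partial t} = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 \kappa_r \frac{\partial U}{\partial r}\right) - \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 V U\right) + \frac{1}{3r^2}\frac{\partial (r^2 V)}{\partial r}\,\frac{\partial}{\partial T}\!\left(\alpha T U\right), $$
where $U(r, T, t)$ is the differential number density, $V$ the solar wind speed, and $\alpha T$ encodes the adiabatic energy-loss rate; corotation enters through an azimuthal convection term omitted in this simplified form.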