12 results for First Love

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Abstract to Part I

The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting S-to-P amplitude ratios. The effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful both in solving for spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.

Back-projection attenuation tomography is applied to two cases in southern California: the Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than in the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.

No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.

Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, the East Rift Zone, and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.

Abstract to Part II

Long period seismograms recorded at Pasadena of earthquakes occurring along a profile to Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena that are due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both timing and waveforms of records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Those events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.

Relevance: 20.00%

Abstract:

Adaptive optics (AO) corrects distortions created by atmospheric turbulence and delivers diffraction-limited images on ground-based telescopes. The vastly improved spatial resolution and sensitivity have been utilized for studying everything from the magnetic fields of sunspots up to the internal dynamics of high-redshift galaxies. This thesis on AO science from small and large telescopes is divided into two parts: Robo-AO and magnetar kinematics.

In the first part, I discuss the construction and performance of the world’s first fully autonomous visible-light AO system, Robo-AO, at the Palomar 60-inch telescope. Robo-AO operates extremely efficiently, with an overhead < 50 s, typically observing about 22 targets every hour. We have performed large AO programs observing a total of over 7,500 targets since May 2012. In the visible band, the images have a Strehl ratio of about 10% and achieve a contrast of up to 6 magnitudes at a separation of 1′′. The full width at half maximum achieved is 110–130 milliarcseconds. I describe how Robo-AO is used to constrain the evolutionary models of low-mass pre-main-sequence stars by measuring resolved spectral energy distributions of stellar multiples in the visible band, more than doubling the current sample. I conclude this part with a discussion of possible future improvements to the Robo-AO system.
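As a back-of-envelope check (values assumed here, not quoted from the thesis), the achieved 110–130 mas FWHM can be compared with the diffraction limit λ/D of a 60-inch (≈ 1.52 m) aperture at a visible wavelength of 600 nm:

```python
import math

wavelength = 600e-9                       # m, assumed visible wavelength
D = 1.52                                  # m, approx. 60-inch aperture
rad_to_mas = 180 / math.pi * 3600 * 1000  # radians -> milliarcseconds
fwhm_mas = wavelength / D * rad_to_mas    # diffraction-limited FWHM ~ lambda/D
print(round(fwhm_mas))                    # 81 (mas)
```

The measured 110–130 mas is thus within a factor of ~1.5 of the diffraction limit, consistent with the quoted Strehl ratio of about 10%.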

In the second part, I describe a study of magnetar kinematics using high-resolution near-infrared (NIR) AO imaging from the 10-meter Keck II telescope. Measuring the proper motions of five magnetars with a precision of up to 0.7 milliarcseconds/yr, we have more than tripled the previously known sample of magnetar proper motions and shown that magnetar kinematics are equivalent to those of radio pulsars. We conclusively showed that SGR 1900+14 and SGR 1806-20 were ejected from the stellar clusters with which they were traditionally associated. The inferred kinematic ages of these two magnetars are 6 ± 1.8 kyr and 650 ± 300 yr, respectively. These ages are a factor of three to four greater than their respective characteristic ages. The calculated braking index is close to unity, as compared to three for the vacuum dipole model and 2.5-2.8 as measured for young pulsars. I conclude this section by describing a search for NIR counterparts of new magnetars and the future promise of polarimetric investigation of the magnetars’ NIR emission mechanism.

Relevance: 20.00%

Abstract:

In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.

The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
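A standard Monte Carlo sketch of the first excursion problem for a linear SDOF oscillator under Gaussian white noise (parameters illustrative, not the dissertation's models). The cost of resolving small probabilities with this estimator is what motivates the more efficient samplers described above.

```python
import numpy as np

rng = np.random.default_rng(0)
wn, zeta = 2 * np.pi, 0.05          # natural frequency (rad/s), damping ratio
dt, T = 0.01, 10.0                  # time step (s), duration (s)
n_steps = int(T / dt)
sigma = 1.0                         # white-noise intensity

def excursion_prob(threshold, n_samples=2000):
    """P(max |x(t)| > threshold on [0, T]) by direct simulation."""
    x = np.zeros(n_samples)
    v = np.zeros(n_samples)
    peak = np.zeros(n_samples)
    for _ in range(n_steps):        # semi-implicit Euler time stepping
        a = rng.standard_normal(n_samples) * sigma / np.sqrt(dt)
        v += (a - 2 * zeta * wn * v - wn**2 * x) * dt
        x += v * dt
        peak = np.maximum(peak, np.abs(x))
    return float(np.mean(peak > threshold))

p = excursion_prob(0.4)             # moderate threshold: estimable directly
```

Halving the coefficient of variation of this estimator requires quadrupling n_samples, so a failure probability near 10^-6 needs on the order of 10^8 histories, which is the bottleneck the dissertation's two methods address.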

Relevance: 20.00%

Abstract:

The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first-order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process it can be carried out with the aid of the Reduce algebra manipulation computer program.

The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at the annihilation-in-flight threshold, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.

Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.

Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.

A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.

The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.

Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.

Relevance: 20.00%

Abstract:

The attitude of the medieval church towards violence before the First Crusade in 1095 underwent a significant institutional evolution, from the peaceful tradition of the New Testament and the Roman persecution, through the prelate-led military campaigns of the Carolingian period and the Peace of God era. It would be superficially easy to characterize this transformation as the pragmatic and entirely secular response of a growing power to the changing world. However, such a simplification does not fully do justice to the underlying theology. While church leaders from the 5th Century to the 11th had vastly different motivations and circumstances under which to develop their responses to a variety of violent activities, the teachings of Augustine of Hippo provided a unifying theme. Augustine’s just war theology, in establishing which conflicts are acceptable in the eyes of God, focused on determining whether a proper causa belli or basis for war exists, and then whether a legitimate authority declares and leads the war. Augustine masterfully integrated aspects of the Old and New Testaments to create a lasting and compelling case for his definition of justified violence. Although at different times and places his theology has been used to support a variety of different attitudes, the profound influence of his work on the medieval church’s evolving position on violence is clear.

Relevance: 20.00%

Abstract:

Electronic structures and dynamics are the key to linking the material composition and structure to functionality and performance.

An essential issue in developing semiconductor devices for photovoltaics is to design materials with optimal band gaps and relative positioning of band levels. Approximate DFT methods have been justified for predicting band gaps from KS/GKS eigenvalues, but the accuracy depends decisively on the choice of XC functional. We show here that for CuInSe2 and CuGaSe2, the parent compounds of the promising CIGS solar cells, conventional LDA and GGA obtain gaps of 0.0-0.01 and 0.02-0.24 eV (versus experimental values of 1.04 and 1.67 eV), while the historically first global hybrid functional, B3PW91, is surprisingly the best, with band gaps of 1.07 and 1.58 eV. Furthermore, we show that for 27 related binary and ternary semiconductors, B3PW91 predicts gaps with a mean absolute deviation (MAD) of only 0.09 eV, substantially better than all modern hybrid functionals, including B3LYP (MAD of 0.19 eV) and the screened hybrid functional HSE06 (MAD of 0.18 eV).

The laboratory performance of CIGS solar cells (> 20% efficiency) makes them promising candidate photovoltaic devices. However, there remains little understanding of how defects at the CIGS/CdS interface affect the band offsets and interfacial energies, and hence the performance of manufactured devices. To determine these relationships, we use the B3PW91 hybrid functional of DFT with the AEP method, which we validate to provide very accurate descriptions of both band gaps and band offsets. This confirms the weak dependence of band offsets on surface orientation observed experimentally. We predict that the conduction band offset (CBO) of the perfect CuInSe2/CdS interface is large, 0.79 eV, which would dramatically degrade performance. Moreover, we show that band gap widening induced by Ga adjusts only the valence band offset (VBO), and we find that Cd impurities do not significantly affect the CBO. Thus we show that Cu vacancies at the interface play the key role in enabling the tunability of the CBO. We predict that Na further improves the CBO by electrostatically elevating the valence levels, explaining the observed essential role of Na for high performance. Moreover, we find that K leads to a dramatic decrease in the CBO to 0.05 eV, much better than Na. We suggest that the efficiency of CIGS devices might be improved substantially by tuning the ratio of Na to K, with the improved phase stability from Na balancing the phase instability from K. All these defects reduce interfacial stability slightly, but not significantly.

A number of exotic structures have been formed through high-pressure chemistry, but applications have been hindered by difficulties in recovering the high-pressure phase at ambient conditions (i.e., one atmosphere and room temperature). Here we use dispersion-corrected DFT (PBE-ulg flavor) to predict that above 60 GPa the most stable form of N2O (laughing gas in its molecular form) is a 1D polymer with an all-nitrogen backbone, analogous to cis-polyacetylene, in which alternate N atoms are bonded (ionic covalent) to O. The analogous trans-polymer is only 0.03-0.10 eV/molecular unit less stable. Upon relaxation toward ambient conditions, both polymers transform below 14 GPa to the same stable non-planar trans-polymer, accompanied by possible electronic structure transitions. The predicted phonon spectrum and dissociation kinetics validate the stability of this trans-poly-NNO at ambient conditions, which has potential applications as a new type of conducting polymer with all-nitrogen chains and as a high-energy oxidizer for rocket propulsion. This work illustrates in silico materials discovery, particularly in the realm of extreme conditions.

Modeling non-adiabatic electron dynamics has been a long-standing challenge for computational chemistry and materials science, and the eFF method presents a cost-efficient alternative. However, due to the deficiency of the FSG representation, eFF is limited to low-Z elements with electrons of predominantly s-character. To overcome this, we introduce a formal set of ECP extensions that enable an accurate description of p-block elements. The extensions consist of a model representing the core electrons together with the nucleus as a single pseudo-particle, represented by an FSG, which interacts with the valence electrons through ECPs. We demonstrate and validate the ECP extensions for complex bonding structures, geometries, and energetics of systems with p-block character (C, O, Al, Si) and apply them to study materials under extreme mechanical loading conditions.

Despite its success, the eFF framework has some limitations, originating from both the design of the Pauli potentials and the FSG representation. To overcome these, we develop a new two-level framework that is a more rigorous and accurate successor to the eFF method. The fundamental level, GHA-QM, is based on a new set of Pauli potentials that renders QM-level accuracy for any FSG-represented electron system. To achieve this, we start with exactly derived energy expressions for the same-spin electron pair and fit a simple functional form, inspired by DFT, against open-singlet electron pair curves (H2 systems). Symmetric and asymmetric scaling factors are then introduced at this level to recover the QM total energies of multiple-electron-pair systems from the sum of local interactions. To compensate for the imperfect FSG representation, the AMPERE extension is implemented, aiming to embed the interactions associated with both the cusp condition and explicit nodal structures. The whole GHA-QM+AMPERE framework is tested on hydrogen systems, and the preliminary results are promising.

Relevance: 20.00%

Abstract:

Part I

Numerical solutions to the S-limit equations for the helium ground state and excited triplet state and the hydride ion ground state are obtained with the second and fourth difference approximations. The results for the ground states are superior to previously reported values. The coupled equations resulting from the partial wave expansion of the exact helium atom wavefunction were solved giving accurate S-, P-, D-, F-, and G-limits. The G-limit is -2.90351 a.u. compared to the exact value of the energy of -2.90372 a.u.

Part II

The pair functions which determine the exact first-order wavefunction for the ground state of the three-electron atom are found with the matrix finite-difference method. The second- and third-order energies for the (1s1s) ¹S, (1s2s) ³S, and (1s2s) ¹S states of the two-electron atom are presented along with contour and perspective plots of the pair functions. The total energy for the three-electron atom with a nuclear charge Z is found to be E(Z) = -1.125 Z² + 1.022805 Z - 0.408138 - 0.025515 (1/Z) + O(1/Z²) a.u.
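As a quick numerical check of the truncated expansion (dropping the O(1/Z²) remainder): for neutral lithium (Z = 3) the series gives about -7.4732 a.u., reasonably close to the accepted nonrelativistic ground-state energy near -7.478 a.u.

```python
def E(Z):
    # Truncated 1/Z expansion quoted above; the O(1/Z^2) remainder is dropped.
    return -1.125 * Z**2 + 1.022805 * Z - 0.408138 - 0.025515 / Z

print(E(3))   # about -7.4732 a.u. for neutral lithium (Z = 3)
```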

Relevance: 20.00%

Abstract:

The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first-order systems. This class consists of systems satisfying stochastic differential equations of the form ẋ + f(x) = Σ_{j=1}^{m} h_j(x) n_j(t), where f and the h_j are piecewise linear functions (not necessarily continuous), and the n_j are stationary Gaussian white noise. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32.11.

This method is applied to four subclasses: (1) m = 1, h_1 = const. (forcing-function excitation); (2) m = 1, h_1 = f (parametric excitation); (3) m = 2, h_1 = const., h_2 = f, n_1 and n_2 correlated; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed.
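A simulation cross-check of the simplest member of subclass (1), with linear f(x) = a·x and m = 1 (parameters arbitrary, not from the thesis): the stationary response is then the Ornstein-Uhlenbeck process, whose spectral density is the Lorentzian S(ω) ∝ 1/(a² + ω²), with autocorrelation R(τ) = (D/2a)·e^(-a|τ|) and stationary variance D/(2a).

```python
import numpy as np

rng = np.random.default_rng(1)
a, D = 1.0, 2.0                  # drift rate, white-noise intensity
dt, n = 0.005, 400_000           # step size, number of steps
x = np.zeros(n)
noise = rng.standard_normal(n) * np.sqrt(D * dt)
for k in range(n - 1):           # Euler-Maruyama for x' + a*x = n(t)
    x[k + 1] = x[k] - a * x[k] * dt + noise[k]

var_theory = D / (2 * a)         # R(0) = D/(2a) = 1.0 here
var_sim = x[n // 10:].var()      # discard the start-up transient
```

Taking a periodogram of the same trace would recover the Lorentzian shape of S(ω); the variance check above is the zero-lag special case.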

Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math. Phys., 44.3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as t → ∞).

Relevance: 20.00%

Abstract:

This investigation is concerned with various fundamental aspects of the linearized dynamical theory for mechanically homogeneous and isotropic elastic solids. First, the uniqueness and reciprocal theorems of dynamic elasticity are extended to unbounded domains with the aid of a generalized energy identity and a lemma on the prolonged quiescence of the far field, which are established for this purpose. Next, the basic singular solutions of elastodynamics are studied and used to generate systematically Love's integral identity for the displacement field, as well as an associated identity for the field of stress. These results, in conjunction with suitably defined Green's functions, are applied to the construction of integral representations for the solution of the first and second boundary-initial value problems. Finally, a uniqueness theorem for dynamic concentrated-load problems is obtained.

Relevance: 20.00%

Abstract:

The pattern of energy release during the Imperial Valley, California, earthquake of 1940 is studied by analysing the El Centro strong motion seismograph record and records from the Tinemaha seismograph station, 546 km from the epicenter. The earthquake was a multiple event sequence with at least 4 events recorded at El Centro in the first 25 seconds, followed by 9 events recorded in the next 5 minutes. Clear P, S and surface waves were observed on the strong motion record. Although the main part of the earthquake energy was released during the first 15 seconds, some of the later events were as large as M = 5.8 and thus are important for earthquake engineering studies. The moment calculated using Fourier analysis of surface waves agrees with the moment estimated from field measurements of fault offset after the earthquake. The earthquake engineering significance of the complex pattern of energy release is discussed. It is concluded that a cumulative increase in amplitudes of building vibration resulting from the present sequence of shocks would be significant only for structures with relatively long natural periods of vibration. However, progressive weakening effects may also lead to greater damage for multiple event earthquakes.

A model with Love waves propagating in a single surface layer acting as a wave guide is studied. The properties derived for this simple model are expected to illustrate several phenomena associated with strong earthquake ground motion. First, it is shown that a surface layer, or several layers, will cause the main part of the high-frequency energy radiated from a nearby earthquake to be confined to the layer as a wave guide. The existence of the surface layer will thus increase the rate of energy transfer into man-made structures on or near the surface of the layer. Secondly, the surface amplitude of the guided SH waves will decrease if the energy of the wave is essentially confined to the layer and the wave propagates towards increasing layer thickness. It is also shown that the constructive interference of SH waves will cause the zeroes and peaks in the Fourier amplitude spectrum of the surface ground motion to be displaced continuously towards longer periods as the distance from the source of energy release increases.
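For the single-layer wave guide, the textbook Love-wave dispersion relation (a standard result, not taken from the thesis; parameter values here are hypothetical) is tan(ωH·p) = μ₂q/(μ₁p), with p = √(1/β₁² - 1/c²), q = √(1/c² - 1/β₂²), and layer shear speed β₁ < c < β₂. A bisection sketch for the fundamental-mode phase velocity:

```python
import math

b1, b2 = 2.0, 3.5            # shear speeds: layer, half-space (km/s)
mu1, mu2 = 1.0, 2.5          # relative rigidities (hypothetical)
H = 1.0                      # layer thickness (km)

def mismatch(c, omega):
    p = math.sqrt(1 / b1**2 - 1 / c**2)
    q = math.sqrt(1 / c**2 - 1 / b2**2)
    return math.tan(omega * H * p) - mu2 * q / (mu1 * p)

def phase_velocity(omega):
    # mismatch goes from -inf (c -> b1) to positive (c -> b2) for this
    # frequency, so the fundamental-mode root is bracketed; bisect it
    lo, hi = b1 * 1.000001, b2 * 0.999999
    flo = mismatch(lo, omega)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mismatch(mid, omega) * flo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = phase_velocity(3.0)      # fundamental-mode phase velocity, b1 < c < b2
```

Sweeping omega and plotting c(omega) reproduces the dispersion that shifts spectral peaks toward longer periods with distance, as described above.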

Relevance: 20.00%

Abstract:

The effect of intermolecular coupling in molecular energy levels (electronic and vibrational) has been investigated in neat and isotopic mixed crystals of benzene. In the isotopic mixed crystals of C6H6, C6H5D, m-C6H4D2, p-C6H4D2, sym-C6H3D3, C6D5H, and C6D6 in either a C6H6 or C6D6 host, the following phenomena have been observed and interpreted in terms of a refined Frenkel exciton theory: a) site shifts; b) site group splittings of the degenerate ground state vibrations of C6H6, C6D6, and sym-C6H3D3; c) the orientational effect for the isotopes without a trigonal axis in both the ¹B₂u electronic state and the ground state vibrations; d) intrasite Fermi resonance between molecular fundamentals due to the reduced symmetry of the crystal site; and e) intermolecular or intersite Fermi resonance between nearly degenerate states of the host and guest molecules. In the neat crystal experiments on the ground state vibrations it was possible to observe many of these phenomena in conjunction with and in addition to the exciton structure.

To theoretically interpret these diverse experimental data, the concepts of interchange symmetry, the ideal mixed crystal, and site wave functions have been developed and are presented in detail. In the interpretation of the exciton data the relative signs of the intermolecular coupling constants have been emphasized, and in the limit of the ideal mixed crystal a technique is discussed for locating the exciton band center or unobserved exciton components. A differentiation between static and dynamic interactions is made in the Frenkel limit which enables the concepts of site effects and exciton coupling to be sharpened. It is thus possible to treat the crystal induced effects in such a fashion as to make their similarities and differences quite apparent.

A calculation of the ground-state vibrational phenomena (site shifts and splittings, orientational effects, and exciton structure) and of the crystal lattice modes has been carried out for these systems. This calculation serves as a test of the approximations of first-order Frenkel theory and of the atom-atom, pairwise interaction model for the intermolecular potentials. The general form of the potential employed was V(r) = B e^(-Cr) - A/r⁶; the force constants were obtained from the potential by assuming the atoms undergo simple harmonic motion.
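The harmonic force constant comes from the curvature of the potential at its minimum. A sketch with hypothetical constants (B, C, A below are NOT the fitted values from the thesis, and the units are arbitrary):

```python
import math

B, C, A = 1000.0, 3.6, 30.0   # hypothetical Buckingham-type parameters

def dV(r):                    # V'(r) for V(r) = B*exp(-C*r) - A/r^6
    return -B * C * math.exp(-C * r) + 6 * A / r**7

def d2V(r):                   # V''(r): the harmonic force constant at r0
    return B * C**2 * math.exp(-C * r) - 42 * A / r**8

lo, hi = 2.0, 4.0             # bracket of the minimum: dV(lo) < 0 < dV(hi)
for _ in range(80):           # bisection on V'(r) = 0
    mid = 0.5 * (lo + hi)
    if dV(mid) < 0:
        lo = mid
    else:
        hi = mid
r0 = 0.5 * (lo + hi)          # equilibrium atom-atom separation
k = d2V(r0)                   # positive at the minimum
```

In the simple-harmonic treatment described above, each atom-atom contact contributes a spring of stiffness k = V''(r0) to the lattice-mode calculation.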

In Part II the location and identification of the benzene first and second triplet states (³B₁u and ³E₁u) are given.

Relevance: 20.00%

Abstract:

Proper encoding of transmitted information can improve the performance of a communication system. To recover the information at the receiver, it is necessary to decode the received signal. For many codes the complexity and slowness of the decoder are so severe that the code is not feasible for practical use. This thesis considers the decoding problem for one such class of codes: the comma-free codes related to the first-order Reed-Muller codes.

A factorization of the code matrix is found which leads to a simple, fast, minimum-memory decoder. The decoder is modular, and only n modules are needed to decode a code of length 2^n. The relevant factorization is extended to any code defined by a sequence of Kronecker products.
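A standard instance of such a Kronecker factorization is the fast Walsh-Hadamard transform used to correlate a received vector against every first-order Reed-Muller codeword at once: the 2^n x 2^n Hadamard matrix factors into n sparse stages, one butterfly pass per factor. A sketch (the received vector below is a hypothetical example, Hadamard row 3 of length 8 with its last symbol flipped):

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform: one butterfly pass per Kronecker factor."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):                        # n passes for length 2^n
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i+h].copy(), v[i+h:i+2*h].copy()
            v[i:i+h], v[i+h:i+2*h] = a + b, a - b
        h *= 2
    return v

r = np.array([1, -1, -1, 1, 1, -1, -1, -1], dtype=float)  # noisy codeword
scores = fwht(r)                         # correlations with all codewords
best = int(np.argmax(np.abs(scores)))    # decoded index; sign gives complement
```

Each pass touches every sample once, so the cost is n·2^n additions instead of the 4^n of a dense correlation, which is the efficiency the modular decoder exploits.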

The problem of monitoring the correct synchronization position is also considered. A general answer seems to depend upon more detailed knowledge of the structure of comma-free codes. However, a technique is presented which gives useful results in many specific cases.