23 results for moments exacts
Abstract:
The magnetic moments of amorphous ternary alloys containing Pd, Co and Si in atomic concentrations corresponding to Pd_(80-x)Co_xSi_(20) in which x is 3, 5, 7, 9, 10 and 11, have been measured between 1.8 and 300°K and in magnetic fields up to 8.35 kOe. The alloys were obtained by rapid quenching of a liquid droplet and their structures were analyzed by X-ray diffraction. The measurements were made in a null-coil pendulum magnetometer in which the temperature could be varied continuously without immersing the sample in a cryogenic liquid. The alloys containing 9 at.% Co or less obeyed Curie's Law over certain temperature ranges, and had negligible permanent moments at room temperature. Those containing 10 and 11 at.% Co followed Curie's Law only above approximately 200°K and had significant permanent moments at room temperature. For all alloys, the moments calculated from Curie's Law were too high to be accounted for by the moments of individual Co atoms. To explain these findings, a model based on the existence of superparamagnetic clustering is proposed. The cluster sizes calculated from the model are consistent with the rapid onset of ferromagnetism in the alloys containing 10 and 11 at.% Co and with the magnetic moments in an alloy containing 7 at.% Co heat treated in such a manner as to contain a small amount of a crystalline phase. In alloys containing 7 at.% Co or less, a maximum in the magnetization vs temperature curve was observed around 10°K. This maximum was eliminated by cooling the alloy in a magnetic field, and an explanation for this observation is suggested.
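The superparamagnetic-clustering argument can be made quantitative with a short sketch. If n Co atoms lock into a rigid cluster, the per-atom effective moment returned by Curie's Law grows by a factor of sqrt(n), so the implied cluster size is the squared ratio of the measured effective moment to the single-atom moment. The numbers below (1.7 Bohr magnetons per Co atom, an 8 Bohr-magneton effective moment) are illustrative assumptions, not values taken from the thesis.

```python
def cluster_size(mu_eff, mu_atom=1.7):
    """Superparamagnetic clustering: grouping atoms of moment mu_atom into
    rigid n-atom clusters raises the per-atom Curie-law effective moment to
    sqrt(n) * mu_atom, so the implied cluster size is
    n = (mu_eff / mu_atom)**2.
    Moments are in Bohr magnetons; mu_atom = 1.7 is an assumed Co value."""
    return (mu_eff / mu_atom) ** 2

# an effective moment of 8 Bohr magnetons per Co atom would imply
# clusters of roughly 22 atoms under this simple coherent-moment model
n_atoms = cluster_size(8.0)
```

A moment "too high to be accounted for by individual Co atoms" thus translates directly into a cluster-size estimate of tens of atoms.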
Abstract:
This thesis consists of two parts. In Part I, we develop a multipole moment formalism in general relativity and use it to analyze the motion and precession of compact bodies. More specifically, the generic, vacuum, dynamical gravitational field of the exterior universe in the vicinity of a freely moving body is expanded in positive powers of the distance r away from the body's spatial origin (i.e., in the distance r from its timelike-geodesic world line). The expansion coefficients, called "external multipole moments,'' are defined covariantly in terms of the Riemann curvature tensor and its spatial derivatives evaluated on the body's central world line. In a carefully chosen class of de Donder coordinates, the expansion of the external field involves only integral powers of r ; no logarithmic terms occur. The expansion is used to derive higher-order corrections to previously known laws of motion and precession for black holes and other bodies. The resulting laws of motion and precession are expressed in terms of couplings of the time derivatives of the body's quadrupole and octopole moments to the external moments, i.e., to the external curvature and its gradient.
In part II, we study the interaction of magnetohydrodynamic (MHD) waves in a black-hole magnetosphere with the "dragging of inertial frames" effect of the hole's rotation - i.e., with the hole's "gravitomagnetic field." More specifically: we first rewrite the laws of perfect general relativistic magnetohydrodynamics (GRMHD) in 3+1 language in a general spacetime, in terms of quantities (magnetic field, flow velocity, ...) that would be measured by the "fiducial observers" whose world lines are orthogonal to (arbitrarily chosen) hypersurfaces of constant time. We then specialize to a stationary spacetime and MHD flow with one arbitrary spatial symmetry (e.g., the stationary magnetosphere of a Kerr black hole); and for this spacetime we reduce the GRMHD equations to a set of algebraic equations. The general features of the resulting stationary, symmetric GRMHD magnetospheric solutions are discussed, including the Blandford-Znajek effect in which the gravitomagnetic field interacts with the magnetosphere to produce an outflowing jet. Then in a specific model spacetime with two spatial symmetries, which captures the key features of the Kerr geometry, we derive the GRMHD equations which govern weak, linearized perturbations of a stationary magnetosphere with outflowing jet. These perturbation equations are then Fourier analyzed in time t and in the symmetry coordinate x, and subsequently solved numerically. The numerical solutions describe the interaction of MHD waves with the gravitomagnetic field. It is found that, among other features, when an oscillatory external force is applied to the region of the magnetosphere where plasma (e+e-) is being created, the magnetosphere responds especially strongly at a particular, resonant, driving frequency. The resonant frequency is that for which the perturbations appear to be stationary (time independent) in the common rest frame of the freshly created plasma and the rotating magnetic field lines.
The magnetosphere of a rotating black hole, when buffeted by nonaxisymmetric magnetic fields anchored in a surrounding accretion disk, might exhibit an analogous resonance. If so then the hole's outflowing jet might be modulated at resonant frequencies ω=(m/2) ΩH where m is an integer and ΩH is the hole's angular velocity.
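The scale of the predicted modulation frequencies can be sketched from the standard Kerr horizon angular velocity, Omega_H = a* c^3 / (2 G M (1 + sqrt(1 - a*^2))) for dimensionless spin a*. The 10-solar-mass, a* = 0.9 example below is purely illustrative and not a system considered in the thesis.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def horizon_angular_velocity(mass_kg, spin):
    """Omega_H = a* c^3 / (2 G M (1 + sqrt(1 - a*^2))) for a Kerr hole of
    mass M and dimensionless spin a* (0 <= a* < 1), in rad/s."""
    return spin * C**3 / (2.0 * G * mass_kg * (1.0 + math.sqrt(1.0 - spin**2)))

def resonant_frequencies(mass_kg, spin, m_max=4):
    """The modulation frequencies omega = (m/2) * Omega_H for m = 1..m_max."""
    omega_h = horizon_angular_velocity(mass_kg, spin)
    return [0.5 * m * omega_h for m in range(1, m_max + 1)]

# illustrative: a rapidly spinning 10-solar-mass hole
freqs = resonant_frequencies(10.0 * M_SUN, 0.9)
```

For stellar-mass holes this lands in the kilohertz range; for supermassive holes the same formula scales inversely with mass.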
Abstract:
The determination of the energy levels and the probabilities of transition between them, by the formal analysis of observed electronic, vibrational, and rotational band structures, forms the direct goal of all investigations of molecular spectra, but the significance of such data lies in the possibility of relating them theoretically to more concrete properties of molecules and the radiation field. From the well developed electronic spectra of diatomic molecules, it has been possible, with the aid of the non-relativistic quantum mechanics, to obtain accurate moments of inertia, molecular potential functions, electronic structures, and detailed information concerning the coupling of spin and orbital angular momenta with the angular momentum of nuclear rotation. The silicon fluoride molecule has been investigated in this laboratory, and is found to emit bands whose vibrational and rotational structures can be analyzed in this detailed fashion.
Like silicon fluoride, however, the great majority of diatomic molecules are formed only under the unusual conditions of electrical discharge, or in high temperature furnaces, so that although their spectra are of great theoretical interest, the chemist is eager to proceed to a study of polyatomic molecules, in the hope that their more practically interesting structures might also be determined with the accuracy and assurance which characterize the spectroscopic determinations of the constants of diatomic molecules. Some progress has been made in the determination of molecular potential functions from the vibrational term values deduced from Raman and infrared spectra, but in no case can the calculations be carried out with great generality, since the number of known term values is always small compared with the total number of potential constants in even so restricted a potential function as the simple quadratic type. For the determination of nuclear configurations and bond distances, however, a knowledge of the rotational terms is required. The spectra of about twelve of the simpler polyatomic molecules have been subjected to rotational analyses, and a number of bond distances are known with considerable accuracy, yet the number of molecules whose rotational fine structure has been resolved even with the most powerful instruments is small. Consequently, it was felt desirable to investigate the spectra of a number of other promising polyatomic molecules, with the purpose of carrying out complete rotational analyses of all resolvable bands, and ascertaining the value of the unresolved band envelopes in determining the structures of such molecules, in the cases in which resolution is no longer possible.
Although many of the compounds investigated absorbed too feebly to be photographed under high dispersion with the present infrared sensitizations, the location and relative intensities of their bands, determined by low dispersion measurements, will be reported in the hope that these compounds may be reinvestigated in the future with improved techniques.
Abstract:
This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.
In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.
In chapter three, I examine Community-Driven Development. Community-Driven Development is considered a tool empowering communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player--the targeted community member--decides between whistleblowing or not. Despite the elite capture present in my model, I find conditions under which money reaches targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies which claim otherwise.
In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
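The Method of Simulated Moments loop described above follows a standard skeleton: pick candidate structural parameters, simulate the model, and minimize the distance between simulated and observed moment vectors. The sketch below uses a toy normal data-generating process and a grid search in place of the thesis's limit-order pseudo-model and recursive equilibrium computation; all names and numbers are illustrative.

```python
import numpy as np

def simulate(theta, n, seed):
    """Toy data-generating process standing in for the structural model:
    i.i.d. normal draws with mean theta[0] and std theta[1] (hypothetical)."""
    rng = np.random.default_rng(seed)
    return rng.normal(theta[0], theta[1], size=n)

def moments(x):
    """Moment vector matched between data and simulation."""
    return np.array([x.mean(), x.var()])

def msm_objective(theta, data_moments, n_sim):
    """Quadratic distance between data and simulated moments (identity
    weighting matrix); a fixed seed gives common random numbers across
    candidate parameter values."""
    diff = moments(simulate(theta, n_sim, seed=1)) - data_moments
    return float(diff @ diff)

# 'observed' data, then a crude grid search over the structural parameters
dm = moments(simulate((1.0, 2.0), 10_000, seed=0))
grid = [(m, s) for m in np.linspace(0.0, 2.0, 21)
               for s in np.linspace(1.0, 3.0, 21)]
best = min(grid, key=lambda th: msm_objective(th, dm, 10_000))
```

In practice the grid search would be replaced by a proper optimizer and the identity weighting matrix by an efficient one, but the estimator's structure is the same.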
Abstract:
A novel spectroscopy of trapped ions is proposed which will bring single-ion detection sensitivity to the observation of magnetic resonance spectra. The approaches developed here are aimed at resolving one of the fundamental problems of molecular spectroscopy, the apparent incompatibility in existing techniques between high information content (and therefore good species discrimination) and high sensitivity. Methods for studying both electron spin resonance (ESR) and nuclear magnetic resonance (NMR) are designed. They assume established methods for trapping ions in high magnetic field and observing the trapping frequencies with high resolution (<1 Hz) and sensitivity (single ion) by electrical means. The introduction of a magnetic bottle field gradient couples the spin and spatial motions together and leads to a small spin-dependent force on the ion, which has been exploited by Dehmelt to observe directly the perturbation of the ground-state electron's axial frequency by its spin magnetic moment.
A series of fundamental innovations is described in order to extend magnetic resonance to the higher masses of molecular ions (100 amu ≈ 2x10^5 electron masses) and smaller magnetic moments (nuclear moments ≈ 10^(-3) of the electron moment). First, it is demonstrated how time-domain trapping frequency observations before and after magnetic resonance can be used to make cooling of the particle to its ground state unnecessary. Second, adiabatic cycling of the magnetic bottle off between detection periods is shown to be practical and to allow high-resolution magnetic resonance to be encoded pointwise as the presence or absence of trapping frequency shifts. Third, methods of inducing spin-dependent work on the ion orbits with magnetic field gradients and Larmor frequency irradiation are proposed which greatly amplify the attainable shifts in trapping frequency.
The dissertation explores the basic concepts behind ion trapping, adopting a variety of classical, semiclassical, numerical, and quantum mechanical approaches to derive spin-dependent effects, design experimental sequences, and corroborate results from one approach with those from another. The first proposal presented builds on Dehmelt's experiment by combining a "before and after" detection sequence with novel signal processing to reveal ESR spectra. A more powerful technique for ESR is then designed which uses axially synchronized spin transitions to perform spin-dependent work in the presence of a magnetic bottle, which also converts axial amplitude changes into cyclotron frequency shifts. A third use of the magnetic bottle is to selectively trap ions with small initial kinetic energy. A dechirping algorithm corrects for undesired frequency shifts associated with damping by the measurement process.
The most general approach presented is spin-locked internally resonant ion cyclotron excitation, a true continuous Stern-Gerlach effect. A magnetic field gradient modulated at both the Larmor and cyclotron frequencies is devised which leads to cyclotron acceleration proportional to the transverse magnetic moment of a coherent state of the particle and radiation field. A preferred method of using this to observe NMR as an axial frequency shift is described in detail. In the course of this derivation, a new quantum mechanical description of ion cyclotron resonance is presented which is easily combined with spin degrees of freedom to provide a full description of the proposals.
Practical, technical, and experimental issues surrounding the feasibility of the proposals are addressed throughout the dissertation. Numerical ion trajectory simulations and analytical models are used to predict the effectiveness of the new designs as well as their sensitivity and resolution. These checks on the methods proposed provide convincing evidence of their promise in extending the wealth of magnetic resonance information to the study of collisionless ions via single-ion spectroscopy.
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns less than the number of equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (number of unknowns more than the number of equations). However, in recent times, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search etc., their information content is often much smaller compared to the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
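The gain from correlation-aware sampling can be seen in a few lines: a coprime array of a handful of physical sensors yields a "difference co-array" with many more distinct lags, and it is these virtual sensors that second-order (correlation) priors expose. The (M, N) = (2, 3) geometry below is a textbook-style illustration of the general construction, not a specific array from the thesis.

```python
def coprime_array(M, N):
    """Sensor positions (in units of the base spacing) of a coprime array:
    multiples of M up to (N-1)*M and multiples of N up to (2M-1)*N."""
    return sorted(set([M * n for n in range(N)] +
                      [N * m for m in range(2 * M)]))

def difference_coarray(positions):
    """Set of pairwise position differences; a correlation-aware method
    effectively sees one 'virtual sensor' per distinct lag."""
    return sorted({p - q for p in positions for q in positions})

pos = coprime_array(2, 3)          # 6 physical sensors
lags = difference_coarray(pos)     # 17 distinct lags in the virtual array
```

The virtual aperture exceeding the physical one is what lets these geometries identify more sources than sensors.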
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
Ternary alloys of nickel-palladium-phosphorus and iron-palladium-phosphorus containing 20 atomic % phosphorus were rapidly quenched from the liquid state. The structure of the quenched alloys was investigated by X-ray diffraction. Broad maxima in the diffraction patterns, indicative of a glass-like structure, were obtained for 13 to 73 atomic % nickel and 13 to 44 atomic % iron, with palladium adding up to 80%.
Radial distribution functions were computed from the diffraction data and yielded average interatomic distances and coordination numbers. The structure of the amorphous alloys could be explained in terms of structural units analogous to those existing in the crystalline Pd3P, Ni3P and Fe3P phases, with iron or nickel substituting for palladium. A linear relationship between interatomic distances and composition, similar to Vegard's law, was shown for these metallic glasses.
Electrical resistivity measurements showed that the quenched alloys were metallic. Measurements were performed from liquid helium temperatures (4.2°K) up to the vicinity of the melting points (900°K-1000°K). The temperature coefficient in the glassy state was very low, of the order of 10^(-4)/°K. A resistivity minimum was found at low temperature, varying between 9°K and 14°K for Ni_x-Pd_(80-x)-P_(20) and between 17°K and 96°K for Fe_x-Pd_(80-x)-P_(20), indicating the presence of a Kondo effect. Resistivity measurements, with a constant heating rate of about 1.5°C/min, showed progressive crystallization above approximately 600°K.
The magnetic moments of the amorphous Fe-Pd-P alloys were measured as a function of magnetic field and temperature. True ferromagnetism was found for the alloys Fe_(32)-Pd_(48)-P_(20) and Fe_(44)-Pd_(36)-P_(20) with Curie points at 165°K and 380°K respectively. Extrapolated values of the saturation magnetic moments to 0°K were 1.70 µB and 2.10 µB respectively. The amorphous alloy Fe_(23)-Pd_(57)-P_(20) was assumed to be superparamagnetic. The experimental data indicate that phosphorus contributes to the decrease of moments by electron transfer, whereas palladium atoms probably have a small magnetic moment. A preliminary investigation of the Ni-Pd-P amorphous alloys showed that these alloys are weakly paramagnetic.
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compare favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
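For scalar gradient systems the Fokker-Planck equation has a closed-form stationary solution, which makes a useful reference point for the approximation methods discussed above: for dX = -U'(X) dt + sigma dW, the stationary density is proportional to exp(-2 U(x) / sigma^2). The quartic potential below is an illustrative stand-in, not a system analyzed in the thesis.

```python
import numpy as np

def stationary_pdf(potential, sigma, x):
    """Exact stationary Fokker-Planck density for the scalar gradient system
    dX = -U'(X) dt + sigma dW:  p(x) proportional to exp(-2 U(x) / sigma**2),
    normalized here by a Riemann sum on the grid x."""
    w = np.exp(-2.0 * potential(x) / sigma**2)
    dx = x[1] - x[0]
    return w / (w.sum() * dx)

# illustrative quartic (Duffing-type) potential
U = lambda x: 0.25 * x**4
x = np.linspace(-4.0, 4.0, 4001)
p = stationary_pdf(U, 1.0, x)
dx = x[1] - x[0]

mean_x = (x * p).sum() * dx       # ~0 by symmetry
var_x = (x**2 * p).sum() * dx     # stationary mean-square response
```

Statistical quantities of interest (moments, outcrossing rates) then follow by integrating against this density.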
Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
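In one dimension the idea reduces to a single formula: the integral of exp(-f(x)/eps) concentrates around the minimizer x* of f, and expanding f to second order there gives a Gaussian integral. The toy integrand below is illustrative, not a reliability integral from the thesis.

```python
import math

def laplace_1d(f, fpp, x_star, eps):
    """Laplace asymptotic approximation of I(eps) = integral exp(-f(x)/eps) dx:
    I ~ exp(-f(x*)/eps) * sqrt(2*pi*eps / f''(x*)), where x* minimizes f.
    The multidimensional analogue replaces f'' by the Hessian determinant."""
    return math.exp(-f(x_star) / eps) * math.sqrt(2.0 * math.pi * eps / fpp(x_star))

# toy integrand with minimizer x* = 0
f = lambda x: 0.5 * x**2 + 0.1 * x**4
fpp = lambda x: 1.0 + 1.2 * x**2
eps = 0.05

approx = laplace_1d(f, fpp, 0.0, eps)

# brute-force Riemann-sum check of the exact integral
n, L = 100_001, 4.0
h = 2.0 * L / (n - 1)
exact = sum(math.exp(-f(-L + i * h) / eps) for i in range(n)) * h
rel_error = abs(approx - exact) / exact
```

Even at this moderate eps the relative error is a couple of percent, and it shrinks as eps (the modeling uncertainty) goes to zero, which is the asymptotic-exactness property quoted above.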
Abstract:
This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to a modulated stationary excitation in both time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as a product of a constant system matrix and a time-dependent matrix; the latter can be explicitly evaluated for most envelopes currently prevailing in engineering. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. The reliability analysis can then be performed based on the first two moments of the response obtained.
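The unit-step-envelope (stationary) limit mentioned above is easy to sketch in state-variable form: for a linear system dX = A X dt driven by white noise of intensity B, the stationary covariance P solves the Lyapunov equation A P + P A^T + B = 0. The sketch below solves it by Kronecker vectorization for a single-degree-of-freedom oscillator and checks against the classical closed forms; the numerical parameters are illustrative.

```python
import numpy as np

def stationary_covariance(A, B):
    """Stationary response covariance P of dX = A X dt + white noise of
    intensity B: solves the Lyapunov equation A P + P A^T + B = 0 via
    Kronecker vectorization, (I(x)A + A(x)I) vec(P) = -vec(B)."""
    n = A.shape[0]
    I = np.eye(n)
    lhs = np.kron(I, A) + np.kron(A, I)
    return np.linalg.solve(lhs, -B.flatten()).reshape(n, n)

# SDOF oscillator x'' + 2*zeta*w0*x' + w0**2*x = w(t), intensity S0
w0, zeta, S0 = 2.0, 0.05, 1.0
A = np.array([[0.0, 1.0], [-w0**2, -2.0 * zeta * w0]])
B = np.array([[0.0, 0.0], [0.0, S0]])
P = stationary_covariance(A, B)

# classical results: Var[x] = S0/(4*zeta*w0**3), Var[x'] = S0/(4*zeta*w0)
```

Under a modulating envelope the constant matrix A stays fixed and only the time-dependent factor changes, which is the structure the thesis exploits.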
The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation such as the stationary models, modulated stationary models, filtered stationary models, and filtered modulated stationary models and their stochastic equivalents including the random pulse train model, filtered shot noise, and some ARMA models in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.
A set of explicit solutions for the response of simple linear structures subjected to modulated white noise earthquake models with four different envelopes are presented as illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and related structural reliability analysis, and the effect of the vertical components on seismic performance of structures. For all three cases, explicit solutions are obtained, dynamic characteristics of structures are investigated, and some suggestions are given for aseismic design of structures.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage, and based on the response updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that surprisingly these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
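The sequential select-test-then-update loop has a compact skeleton. The sketch below uses the Information Gain criterion (one of the baselines named above, not the EC2 objective) with made-up theories, binary tests, and likelihoods, and a noiseless simulated subject for brevity; everything named here is a hypothetical stand-in for the thesis's design.

```python
import math

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0.0)

def update(prior, lik_1, outcome):
    """Bayes update over theories; lik_1[h] is P(outcome = 1 | theory h)."""
    post = [pr * (l if outcome == 1 else 1.0 - l)
            for pr, l in zip(prior, lik_1)]
    z = sum(post)
    return [q / z for q in post]

def info_gain(prior, lik_1):
    """Expected entropy reduction from running the test."""
    p1 = sum(pr * l for pr, l in zip(prior, lik_1))
    expected = p1 * entropy(update(prior, lik_1, 1)) \
        + (1.0 - p1) * entropy(update(prior, lik_1, 0))
    return entropy(prior) - expected

# made-up P(outcome = 1 | theory) for three tests over three 'theories'
tests = [[0.9, 0.2, 0.6], [0.3, 0.8, 0.6], [0.2, 0.3, 0.9]]
truth = 0
belief = [1.0 / 3.0] * 3
for _ in range(30):                        # adaptive testing loop
    t = max(tests, key=lambda lk: info_gain(belief, lk))
    outcome = 1 if t[truth] > 0.5 else 0   # noiseless 'subject' for brevity
    belief = update(belief, t, outcome)
```

EC2 replaces the entropy objective with a weighted edge-cutting objective over equivalence classes of hypotheses, which is what restores near-optimality guarantees under noise.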
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
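The temporal choice inconsistency produced by hyperbolic discounting can be demonstrated in a few lines: adding a common front-end delay to both options flips a preference under hyperbolic discounting but never under exponential discounting. The reward amounts, delays, and discount parameters below are illustrative, not stimuli from the experiment.

```python
def exponential(delay, r=0.1):
    """Exponential discount factor (time-consistent)."""
    return (1.0 + r) ** (-delay)

def hyperbolic(delay, k=0.1):
    """Hyperbolic discount factor 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay)

def prefers_larger_later(discount, small, large):
    """True if the discounted larger-later reward (value, delay) beats the
    smaller-sooner one."""
    (vs, ts), (vl, tl) = small, large
    return vl * discount(tl) > vs * discount(ts)

# same two rewards, with and without a common 50-period front-end delay
reversal = (
    not prefers_larger_later(hyperbolic, (100.0, 0.0), (110.0, 5.0))
    and prefers_larger_later(hyperbolic, (100.0, 50.0), (110.0, 55.0))
)
consistent = (
    prefers_larger_later(exponential, (100.0, 0.0), (110.0, 5.0))
    == prefers_larger_later(exponential, (100.0, 50.0), (110.0, 55.0))
)
```

Under the subjective-time account above, the hyperbolic shape (and hence the reversal) emerges from the statistics of perceived time rather than being assumed directly.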
We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predicts that consumers will behave in a uniquely distinct way than the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
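The discount-and-return pattern predicted by loss aversion can be sketched with a reference-dependent utility inside a binary logit demand model. The loss-aversion coefficient 2.25 is the conventional Tversky-Kahneman estimate, and the prices, valuations, and reference-price dynamics below are illustrative assumptions, not the thesis's estimated discrete choice model.

```python
import math

def reference_dependent_utility(price, ref_price, value, loss_aversion=2.25):
    """Consumption utility minus price, plus gain-loss utility around a
    reference price: paying more than the reference is a loss, weighted
    by lambda > 1."""
    gain = ref_price - price
    return value - price + (gain if gain >= 0.0 else loss_aversion * gain)

def buy_probability(u_buy, u_outside=0.0):
    """Binary-logit purchase probability."""
    return math.exp(u_buy) / (math.exp(u_buy) + math.exp(u_outside))

value, regular, discounted = 10.0, 6.0, 5.0

# demand at the regular price, during a discount, and just after the
# discount ends (the reference price has adapted to the discounted level)
normal = buy_probability(reference_dependent_utility(regular, regular, value))
during = buy_probability(reference_dependent_utility(discounted, regular, value))
after = buy_probability(reference_dependent_utility(regular, discounted, value))
```

Demand during the discount exceeds its normal level by more than the price cut alone implies, and once the discount ends demand dips below normal, pushing consumers toward substitutes, which is the signature tested in the retail data.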
In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
This thesis consists of two separate parts. Part I (Chapter 1) is concerned with seismotectonics of the Middle America subduction zone. In this chapter, stress distribution and Benioff zone geometry are investigated along almost 2000 km of this subduction zone, from the Rivera Fracture Zone in the north to Guatemala in the south. Particular emphasis is placed on the effects on stress distribution of two aseismic ridges, the Tehuantepec Ridge and the Orozco Fracture Zone, which subduct at seismic gaps. Stress distribution is determined by studying seismicity distribution, and by analysis of 190 focal mechanisms, both new and previously published, which are collected here. In addition, two recent large earthquakes that have occurred near the Tehuantepec Ridge and the Orozco Fracture Zone are discussed in more detail. A consistent stress release pattern is found along most of the Middle America subduction zone: thrust events at shallow depths, followed down-dip by an area of low seismic activity, followed by a zone of normal events at over 175 km from the trench and 60 km depth. The zone of low activity is interpreted as showing decoupling of the plates, and the zone of normal activity as showing the breakup of the descending plate. The portion of subducted lithosphere containing the Orozco Fracture Zone does not differ significantly, in Benioff zone geometry or in stress distribution, from adjoining segments. The Playa Azul earthquake of October 25, 1981, M_s = 7.3, occurred in this area. Body and surface wave analysis of this event shows a simple source with a shallow thrust mechanism and gives M_o = 1.3x10^(27) dyne-cm. A stress drop of about 45 bars is calculated; this is slightly higher than that of other thrust events in this subduction zone. In the Tehuantepec Ridge area, only minor differences in stress distribution are seen relative to adjoining segments.
For both ridges, the only major difference from adjoining areas is the infrequency or lack of occurrence of large interplate thrust events.
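The stress drop quoted above for the Playa Azul event can be sanity-checked against its seismic moment. A common choice for this is the circular-crack relation Δσ = 7M₀/(16a³); that particular model, and the fault radius solved for below, are my assumptions for illustration — the thesis does not state which relation it used.

```python
# Invert the circular-crack stress-drop relation for an implied fault radius.
# Model choice (Eshelby circular crack) is an assumption, not the thesis's method.
M0 = 1.3e27          # seismic moment, dyne cm (Playa Azul earthquake)
delta_sigma = 45e6   # stress drop: 45 bars, in dyne/cm^2 (1 bar = 1e6 dyne/cm^2)

a = (7.0 * M0 / (16.0 * delta_sigma)) ** (1.0 / 3.0)  # fault radius in cm
print(f"implied circular-fault radius ~ {a / 1e5:.0f} km")  # ~23 km
```

A radius of a few tens of kilometres is the expected scale for an M_s 7.3 thrust event, so the quoted moment and stress drop are mutually consistent.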
Part II involves upper mantle P wave structure studies for the Canadian shield and eastern North America. In Chapter 2, the P wave structure of the Canadian shield is determined through forward waveform modeling of the phases Pnl, P, and PP. Effects of lateral heterogeneity are kept to a minimum by using earthquakes just outside the shield as sources, with propagation paths largely within the shield. Previous mantle structure studies have used recordings of P waves in the upper mantle triplication range of 15-30°; however, the lack of large earthquakes in the shield region makes compilation of a complete P wave dataset difficult. By using the phase PP, which undergoes triplications at 30-60°, much more information becomes available. The WKBJ technique is used to calculate synthetic seismograms for PP, and these records are modeled almost as well as the P waves. A new velocity model, designated S25, is proposed for the Canadian shield. This model contains a thick, high-Q, high-velocity lid to 165 km and a deep low-velocity zone. These features combine to produce seismograms that are markedly different from those generated by other shield structure models. The upper mantle discontinuities in S25 are placed at 405 and 660 km, with a simple linear gradient in velocity between them. Details of the shape of the discontinuities are not well constrained. Below 405 km, this model is not very different from many proposed P wave models for both shield and tectonic regions.
Chapter 3 looks in more detail at recordings of Pnl in eastern North America. First, seismograms from four eastern North American earthquakes are analyzed, and seismic moments for the events are calculated. These earthquakes are important in that they are among the largest to have occurred in eastern North America in the last thirty years, yet in some cases were not large enough to produce many good long-period teleseismic records. A simple layer-over-a-halfspace model is used for the initial modeling, and is found to provide an excellent fit for many features of the observed waveforms. The effects on Pnl of varying lid structure are then investigated. A thick lid with a positive gradient in velocity, such as that proposed for the Canadian shield in Chapter 2, will have a pronounced effect on the waveforms, beginning at distances of 800 or 900 km. Pnl records from the same eastern North American events are recalculated for several lid structure models, to survey what kinds of variations might be seen. For several records it is possible to see likely effects of lid structure in the data. However, the dataset is too sparse to make any general observations about variations in lid structure. This type of modeling is expected to be important in the future, as the analysis is extended to more recent eastern North American events, and as broadband instruments make more high-quality regional recordings available.
Resumo:
The problem of s-d exchange scattering of conduction electrons off localized magnetic moments in dilute magnetic alloys is considered by employing formal methods of quantum field theoretical scattering. It is shown that such a treatment not only allows, for the first time, the inclusion of multiparticle intermediate states in single particle scattering equations, but also results in extremely simple and straightforward mathematical analysis. These equations are proved to be exact in the thermodynamic limit. A self-consistent integral equation for the electron self-energy is derived and approximately solved. The ground state and physical parameters of dilute magnetic alloys are discussed in terms of the theoretical results. Within the approximation of single particle intermediate states, our results reduce to earlier versions. The following additional features are found as a consequence of the inclusion of multiparticle intermediate states:
(i) A non-analytic binding energy is present for both antiferromagnetic (J < 0) and ferromagnetic (J > 0) couplings of the electron-plus-impurity system.
(ii) The correct behavior of the energy difference between the conduction-electron-plus-impurity system and the free-electron system is found, which is free of the unphysical singularities present in earlier versions of the theories.
(iii) The ground state of the conduction-electron-plus-impurity system is shown to be a many-body condensate state for both J < 0 and J > 0. However, a distinction is made between the usual terminology of "singlet" and "triplet" ground states and the nature of our ground state.
(iv) It is shown that long-range ordering of the magnetic moments can result from a contact interaction such as the s-d exchange interaction.
(v) The explicit temperature dependence of the excess specific heat of Kondo systems is obtained; it is found to be linear in temperature as T → 0 and to vary as T ln T for 0.3 T_K ≤ T ≤ 0.6 T_K. A rise in (ΔC/T) for temperatures in the region 0 < T ≤ 0.1 T_K is predicted. These results are found to be in excellent agreement with experiments.
(vi) The existence of a critical temperature for ferromagnetic coupling (J > 0) is shown. On this basis, the apparent contradiction of the simultaneous existence of giant moments and the Kondo effect is resolved.
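The specific-heat behaviour in item (v) can be summarized compactly (the notation is mine; the limits are those stated above):

```latex
\Delta C(T) \;\propto\;
\begin{cases}
T, & T \to 0,\\[2pt]
T \ln T, & 0.3\,T_K \le T \le 0.6\,T_K,
\end{cases}
\qquad\text{with } \frac{\Delta C}{T}\ \text{rising for } 0 < T \le 0.1\,T_K.
```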
Resumo:
The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary (for example, in a basin with a low-velocity lid).
The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks to the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events with the average strike being east-west, but with many variations. Of the six earthquakes which had sufficient short-period data to accurately determine the source time history, five were complex events. That is, they could not be modeled as a simple point source, but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.
The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.
In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.
In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms implying similar source parameters. By comparing recent well studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.
The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.
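The long-period moment quoted for the Lompoc event can be checked against its magnitude using the standard Hanks-Kanamori moment-magnitude relation Mw = (2/3) log10(M0) − 10.7, with M0 in dyne cm. This conversion is a consistency check of my own, not a method used in the thesis.

```python
import math

# Standard Hanks-Kanamori moment-magnitude relation (M0 in dyne cm).
def moment_magnitude(M0_dyne_cm: float) -> float:
    return (2.0 / 3.0) * math.log10(M0_dyne_cm) - 10.7

# M0 ~ 1e26 dyne cm (1927 Lompoc earthquake, long-period estimate)
print(f"Mw for M0 = 1e26 dyne cm: {moment_magnitude(1e26):.2f}")  # ≈ 6.63
```

The result is close to 7, so the quoted moment is in line with the M = 7 magnitude cited for the event.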
Historic earthquakes in the western Imperial Valley were also studied. These events include the 1937, 1942, and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to those of recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations which recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected values. An aftershock of the 1942 earthquake appears to be larger than previously thought.
Resumo:
A hydromechanical theory is developed for cycloidal propellers for two limiting modes of operation wherein U » ΩR and U « ΩR, with U the rectilinear propeller speed (speed of advance) and ΩR the rotational blade speed. A first order theory is developed from the basic principles of the kinematics and dynamics of fluid motion and proceeds from the point of view of unsteady hydrofoil theory.
Explicit expressions for the instantaneous forces and moments produced by blade motions are presented. On the basis of these results an optimization procedure is carried out which minimizes the energy loss under the constraint of specified mean thrust. Under optimal conditions the propeller is found to possess high Froude efficiencies in both the high and low speed modes of propulsion. This efficiency is defined as the ratio of the average useful work obtained during one cycle of propeller operation to the average power input required to sustain the motion of the propeller during the cycle.
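The Froude efficiency defined in words above can be written symbolically (the notation is mine; the second equality assumes the useful work is the mean thrust acting through the speed of advance U):

```latex
\eta_F \;=\; \frac{\overline{W}_{\text{useful}}}{\overline{P}_{\text{in}}}
\;=\; \frac{\overline{T}\,U}{\overline{P}_{\text{in}}},
```

where \(\overline{T}\) is the mean thrust and \(\overline{P}_{\text{in}}\) the average power input over one cycle of propeller operation.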
Resumo:
An attempt is made to provide a theoretical explanation of the effect of the positive column on the voltage-current characteristic of a glow or an arc discharge. Such theories have been developed before, and all are based on balancing the production and loss of charged particles and accounting for the energy supplied to the plasma by the applied electric field. Differences among the theories arise from the approximations and omissions made in selecting processes that affect the particle and energy balances. This work is primarily concerned with the deviation from the ambipolar description of the positive column caused by space charge, electron-ion volume recombination, and temperature inhomogeneities.
The presentation is divided into three parts, the first of which involves the derivation of the final macroscopic equations from kinetic theory. The final equations are obtained by taking the first three moments of the Boltzmann equation for each of the three species in the plasma. Although the method used and the equations obtained are not novel, the derivation is carried out in detail in order to appraise the validity of numerous approximations and to justify the use of data from other sources. The equations are applied to a molecular hydrogen discharge contained between parallel walls. The applied electric field is parallel to the walls, and the dependent variables—electron and ion flux to the walls, electron and ion densities, transverse electric field, and gas temperature—vary only in the direction perpendicular to the walls. The mathematical description is given by a sixth-order nonlinear two-point boundary value problem which contains the applied field as a parameter. The amount of neutral gas and its temperature at the walls are held fixed, and the relation between the applied field and the electron density at the center of the discharge is obtained in the process of solving the problem. This relation corresponds to that between current and voltage and is used to interpret the effect of space charge, recombination, and temperature inhomogeneities on the voltage-current characteristic of the discharge.
The complete solution of the equations is impractical both numerically and analytically, and in Part II the gas temperature is assumed uniform so as to focus on the combined effects of space charge and recombination. The terms representing these effects are treated as perturbations to equations that would otherwise describe the ambipolar situation. However, the term representing space charge is not negligible in a thin boundary layer or sheath near the walls, and consequently the perturbation problem is singular. Separate solutions must be obtained in the sheath and in the main region of the discharge, and the relation between the electron density and the applied field is not determined until these solutions are matched.
In Part III the electron and ion densities are assumed equal, and the complicated space-charge calculation is thereby replaced by the ambipolar description. Recombination and temperature inhomogeneities are both important at high values of the electron density. However, the formulation of the problem permits a comparison of the relative effects, and temperature inhomogeneities are shown to be important at lower values of the electron density than recombination. The equations are solved by a direct numerical integration and by treating the term representing temperature inhomogeneities as a perturbation.
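The two-point boundary-value structure described above can be illustrated with a deliberately simplified toy problem. The sketch below is entirely my own construction, not the thesis's sixth-order system: it keeps only an ambipolar-diffusion-like balance with a quadratic (recombination-style) loss term, n'' = -k2·n + α·n², with symmetry at the centre (n'(0) = 0) and zero density at the wall (n(1) = 0), and solves it by shooting on the centre density. The coefficients k2 = 4 and α = 1 are illustrative choices.

```python
# Toy two-point BVP solved by shooting: n'' = -k2*n + alpha*n**2,
# n'(0) = 0 (centre symmetry), n(1) = 0 (wall). Equation and coefficients
# are illustrative stand-ins, not the thesis's equations.
def shoot(A, k2=4.0, alpha=1.0, steps=2000):
    """RK4-integrate from the centre with n(0)=A, n'(0)=0; return n at the wall."""
    h = 1.0 / steps
    n, v = A, 0.0
    def f(n, v):
        return v, -k2 * n + alpha * n * n
    for _ in range(steps):
        k1n, k1v = f(n, v)
        k2n, k2v = f(n + 0.5 * h * k1n, v + 0.5 * h * k1v)
        k3n, k3v = f(n + 0.5 * h * k2n, v + 0.5 * h * k2v)
        k4n, k4v = f(n + h * k3n, v + h * k3v)
        n += h * (k1n + 2 * k2n + 2 * k3n + k4n) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return n

# Bisect on the centre density A until the wall condition n(1) = 0 is met
# (shoot(A) is negative for small A and positive for large A here).
lo, hi = 0.5, 3.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 0.0:
        lo = mid
    else:
        hi = mid
A = 0.5 * (lo + hi)
print(f"centre density satisfying the wall condition: n(0) ~ {A:.3f}")
```

The real problem couples six such equations and treats the applied field as the parameter traced out to build the voltage-current relation; the toy version only shows the shooting/matching mechanics.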
The conclusions reached in the study are primarily concerned with the association of the relation between electron density and axial field with the voltage-current characteristic. It is known that the effect of space charge can account for the subnormal glow discharge and that the normal glow corresponds to a close approach to an ambipolar situation. The effect of temperature inhomogeneities helps explain the decreasing characteristic of the arc, and the effect of recombination is not expected to appear except at very high electron densities.