10 results for Anomalies in field and string theories
in CaltechTHESIS
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.
We look first at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
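As a rough illustration of the adaptive loop described above, the sketch below greedily selects whichever test most reduces expected posterior uncertainty over the candidate theories. It uses plain information gain (one of the baselines BROAD is compared against) rather than the EC2 criterion, and the hypotheses, tests, and predicted choices are made-up placeholders, not data from the experiments.

```python
import math

def expected_posterior_entropy(prior, predictions, test):
    """Expected Shannon entropy of the posterior over theories after
    observing the subject's response to `test`.
    predictions[h][test] is the choice theory h predicts."""
    outcome_mass = {}
    for h, p in prior.items():
        o = predictions[h][test]
        outcome_mass[o] = outcome_mass.get(o, 0.0) + p
    expected = 0.0
    for o, p_o in outcome_mass.items():
        # posterior over the theories consistent with outcome o
        post = [prior[h] / p_o for h in prior if predictions[h][test] == o]
        expected += p_o * -sum(q * math.log2(q) for q in post if q > 0)
    return expected

def choose_test(prior, predictions, tests):
    """Greedily pick the test that minimizes expected posterior entropy,
    i.e. maximizes information gain."""
    return min(tests, key=lambda t: expected_posterior_entropy(prior, predictions, t))

# Two candidate theories and two lottery-pair tests; only t1 separates them.
prior = {"prospect": 0.5, "expected_value": 0.5}
predictions = {"prospect":       {"t1": "A", "t2": "A"},
               "expected_value": {"t1": "B", "t2": "A"}}
print(choose_test(prior, predictions, ["t1", "t2"]))  # -> t1
```

After each response, the posterior would be renormalized over the surviving theories and the selection repeated, which is the sequential structure the abstract describes.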
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
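For concreteness, the discount-function families named above can be sketched as follows. Parameter names and values are illustrative defaults, not estimates from the experiment, and the quasi-hyperbolic form is written in the common (β, δ) parameterization rather than the (α, β) notation used in the thesis.

```python
def exponential(t, delta=0.9):
    # constant per-period discount factor: D(t) = delta**t
    return delta ** t

def hyperbolic(t, k=0.25):
    # D(t) = 1 / (1 + k*t); the implied discount rate falls with delay
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # "present bias": full weight at t = 0, an extra penalty beta thereafter
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta=2.0):
    # D(t) = (1 + alpha*t) ** (-beta/alpha)
    return (1.0 + alpha * t) ** (-beta / alpha)

# Exponential discounting is time-consistent; the hyperbolic families
# discount near-term delays much more steeply than distant ones.
for t in (0, 1, 5, 10):
    print(t, round(exponential(t), 3), round(hyperbolic(t), 3),
          round(quasi_hyperbolic(t), 3), round(generalized_hyperbolic(t), 3))
```

The steep near-term drop of the hyperbolic forms is what produces the preference reversals (temporal choice inconsistency) mentioned above.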
We also test the predictions of behavioral theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain. Even more importantly, when the item is no longer discounted, demand for its close substitute should increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
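A minimal sketch of the kind of reference-dependent, loss-averse valuation this prediction rests on is shown below. The piecewise-linear form and the λ value are illustrative conventions from the prospect theory literature; the thesis's discrete choice specification is richer than this.

```python
def value(x, reference=0.0, lam=2.25):
    """Piecewise-linear gain-loss utility relative to a reference point;
    lam > 1 makes losses loom larger than equal-sized gains."""
    gain = x - reference
    return gain if gain >= 0 else lam * gain

# Relative to a discounted reference price, a return to the full price is
# coded as a loss and hurts more than the symmetric gain helped.
print(value(10))   # -> 10.0
print(value(-10))  # -> -22.5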
In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Theoretical and experimental studies of a gas laser amplifier are presented, assuming the amplifier is operating with a saturating optical frequency signal. The analysis is primarily concerned with the effects of the gas pressure and the presence of an axial magnetic field on the characteristics of the amplifying medium. Semiclassical radiation theory is used, along with a density matrix description of the atomic medium which relates the motion of single atoms to the macroscopic observables. A two-level description of the atom, using phenomenological source rates and decay rates, forms the basis of our analysis of the gas laser medium. Pressure effects are taken into account to a large extent through suitable choices of decay rate parameters.
Two methods for calculating the induced polarization of the atomic medium are used. The first method utilizes a perturbation expansion which is valid for signal intensities which barely reach saturation strength, and it is quite general in applicability. The second method is valid for arbitrarily strong signals, but it yields tractable solutions only for zero magnetic field or for axial magnetic fields large enough such that the Zeeman splitting is much larger than the power broadened homogeneous linewidth of the laser transition. The effects of pressure broadening of the homogeneous spectral linewidth are included in both the weak-signal and strong-signal theories; however the effects of Zeeman sublevel-mixing collisions are taken into account only in the weak-signal theory.
The behavior of a He-Ne gas laser amplifier in the presence of an axial magnetic field has been studied experimentally by measuring gain and Faraday rotation of linearly polarized resonant laser signals for various values of input signal intensity, and by measuring nonlinearity-induced anisotropy for elliptically polarized resonant laser signals of various input intensities. Two high-gain transitions in the 3.39-μm region were used for study: a J = 1 to J = 2 (3s2 → 3p4) transition and a J = 1 to J = 1 (3s2 → 3p2) transition. The input signals were tuned to the centers of their respective resonant gain lines.
The experimental results agree quite well with corresponding theoretical expressions which have been developed to include the nonlinear effects of saturation strength signals. The experimental results clearly show saturation of Faraday rotation, and for the J = 1 to J = 1 transition a Faraday rotation reversal and a traveling wave gain dip are seen for small values of axial magnetic field. The nonlinearity-induced anisotropy shows a marked dependence on the gas pressure in the amplifier tube for the J = 1 to J = 2 transition; this dependence agrees with the predictions of the general perturbational or weak signal theory when allowances are made for the effects of Zeeman sublevel-mixing collisions. The results provide a method for measuring the upper (neon 3s2) level quadrupole moment decay rate, the dipole moment decay rates for the 3s2 → 3p4 and 3s2 → 3p2 transitions, and the effects of various types of collision processes on these decay rates.
Abstract:
This thesis examines foundational questions in behavioral economics—also called psychology and economics—and the neural foundations of varied sources of utility. We have three primary aims: first, to provide the field of behavioral economics with psychological theories of behavior that are derived from neuroscience and to use those theories to identify novel evidence for behavioral biases; second, to provide neural and micro foundations for behavioral preferences that give rise to well-documented empirical phenomena in behavioral economics; and finally, to show how a deep understanding of the neural foundations of these behavioral preferences can feed back into our theories of social preferences and reference-dependent utility.
The first chapter focuses on classical conditioning and its application in identifying the psychological underpinnings of a pricing phenomenon. We return to classical conditioning again in the third chapter where we use fMRI to identify varied sources of utility—here, reference dependent versus direct utility—and cross-validate our interpretation with a conditioning experiment. The second chapter engages social preferences and, more broadly, causative utility (wherein the decision-maker derives utility from making or avoiding particular choices).
Abstract:
The various singularities and instabilities which arise in the modulation theory of dispersive wavetrains are studied. Primary interest is in the theory of nonlinear waves, but a study of associated questions in linear theory provides background information and is of independent interest.
The full modulation theory is developed in general terms. In the first approximation for slow modulations, the modulation equations are solved. In both the linear and nonlinear theories, singularities and regions of multivalued modulations are predicted. Higher order effects are considered to evaluate this first order theory. An improved approximation is presented which gives the true behavior in the singular regions. For the linear case, the end result can be interpreted as the overlap of elementary wavetrains. In the nonlinear case, it is found that a sufficiently strong nonlinearity prevents this overlap. Transition zones with a predictable structure replace the singular regions.
For linear problems, exact solutions are found by Fourier integrals and other superposition techniques. These show the true behavior when breaking modulations are predicted.
A numerical study is made for the anharmonic lattice to assess the nonlinear theory. This confirms the theoretical predictions of nonlinear group velocities, group splitting, and wavetrain instability, as well as higher order effects in the singular regions.
Abstract:
Optical microscopy is an essential tool in biological science and one of the gold standards for medical examinations. Miniaturization of microscopes can be a crucial stepping stone towards realizing compact, cost-effective, and portable platforms for biomedical research and healthcare. This thesis reports on implementations of bright-field and fluorescence chip-scale microscopes for a variety of biological imaging applications. The term “chip-scale microscopy” refers to lensless imaging techniques realized in the form of mass-producible semiconductor devices, which transform the fundamental design of optical microscopes.
Our strategy for chip-scale microscopy involves the utilization of low-cost complementary metal-oxide-semiconductor (CMOS) image sensors, computational image processing, and micro-fabricated structural components. First, the sub-pixel resolving optofluidic microscope (SROFM) is presented, which combines microfluidics and pixel super-resolution image reconstruction to perform high-throughput imaging of fluidic samples, such as blood cells. We discuss the design parameters and construction of the device, as well as the resulting images and the resolution of the device, which was 0.66 µm at the highest acuity. The potential applications of SROFM for clinical diagnosis of malaria in resource-limited settings are discussed.
Next, the implementations of ePetri, a self-imaging Petri dish platform with microscopy resolution, are presented. Here, we simply place the sample of interest on the surface of the image sensor and capture the direct shadow images under illumination. By taking advantage of the inherent motion of the microorganisms, we achieve high-resolution (~1 µm) imaging and long-term culture of motile microorganisms over an ultra-large field-of-view (5.7 mm × 4.4 mm) in a specialized ePetri platform. We apply pixel super-resolution reconstruction to a set of low-resolution shadow images of the microorganisms as they move across the sensing area of an image sensor chip and render an improved-resolution image. We perform a longitudinal study of Euglena gracilis cultured in an ePetri platform and image-based analysis of the motion and morphology of the cells. An ePetri device for imaging non-motile cells is also demonstrated, using the sweeping illumination of a light-emitting diode (LED) matrix for pixel super-resolution reconstruction of sub-pixel shifted shadow images. Using this prototype device, we demonstrate the detection of waterborne parasites for the effective diagnosis of enteric parasite infection in resource-limited settings.
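The pixel super-resolution step can be illustrated with a deliberately simplified one-dimensional shift-and-add toy. This is a stand-in only: the actual SROFM and ePetri reconstructions operate on 2-D shadow images and solve a more sophisticated estimation problem.

```python
def shift_and_add(low_res_frames, shifts, factor):
    """Interleave several low-resolution frames, each offset by a known
    sub-pixel shift (expressed in high-resolution pixel units), onto a
    finer grid, averaging where samples coincide."""
    n_hi = len(low_res_frames[0]) * factor
    acc = [0.0] * n_hi
    cnt = [0] * n_hi
    for frame, s in zip(low_res_frames, shifts):
        for i, v in enumerate(frame):
            j = i * factor + s          # position on the fine grid
            if 0 <= j < n_hi:
                acc[j] += v
                cnt[j] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]

# Two frames sampled at half-pixel offsets recover a 2x finer grid.
hi = shift_and_add([[1.0, 3.0], [2.0, 4.0]], shifts=[0, 1], factor=2)
print(hi)  # -> [1.0, 2.0, 3.0, 4.0]
```

In the ePetri platform the sub-pixel shifts come from the motion of the organisms themselves or from the sweeping LED illumination, rather than being known in advance.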
Then, we demonstrate the adaptation of a smartphone’s camera to function as a compact lensless microscope, which uses ambient illumination as its light source and does not require a dedicated light source. The method is also based on image reconstruction with the sweeping illumination technique, where a sequence of images is captured while the user manually tilts the device around any ambient light source, such as the sun or a lamp. Image acquisition and reconstruction are performed on the device using a custom-built Android application, constituting a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
Finally, we report on the implementation of a fluorescence chip-scale microscope, based on a silo-filter structure fabricated on the pixel array of a CMOS image sensor. The extruded pixel design with metal walls between neighboring pixels successfully guides fluorescence emission through the thick absorptive filter to the photodiode layer of a pixel. Our silo-filter CMOS image sensor prototype achieves 13-µm resolution for fluorescence imaging over a wide field-of-view (4.8 mm × 4.4 mm). Here, we demonstrate bright-field and fluorescence longitudinal imaging of living cells in a compact, low-cost configuration.
Abstract:
This thesis puts forth a theory-directed approach coupled with spectroscopy aimed at the discovery and understanding of light-matter interactions in semiconductors and metals.
The first part of the thesis presents the discovery and development of Zn-IV nitride materials. The commercial prominence in the optoelectronics industry of tunable semiconductor alloy materials based on nitride semiconductor devices, specifically InGaN, motivates the search for earth-abundant alternatives for use in efficient, high-quality optoelectronic devices. II-IV-N2 compounds, which are closely related to the wurtzite-structured III-N semiconductors, have electronic and optical properties similar to those of InGaN, namely direct band gaps, high quantum efficiencies, and large optical absorption coefficients. The choice of different group II and group IV elements provides chemical diversity that can be exploited to tune the structural and electronic properties across the series of alloys. The first theoretical and experimental investigation of the ZnSnxGe1−xN2 series as a replacement for III-nitrides is discussed here.
The second half of the thesis presents ab initio calculations of surface plasmons and plasmonic hot-carrier dynamics. Surface plasmons, electromagnetic modes confined to the surface of a conductor-dielectric interface, have sparked renewed interest because of their quantum nature and their broad range of applications. The decay of surface plasmons is usually a detriment in the field of plasmonics, but the possibility of capturing the energy normally lost to heat would open new opportunities in photon sensors, energy conversion devices, and switching. A theoretical understanding of plasmon-driven hot carrier generation and relaxation dynamics in the ultrafast regime is presented here. Additionally, calculations for plasmon-mediated upconversion as well as an energy-dependent transport model for these non-equilibrium carriers are shown.
Finally, this thesis gives an outlook on the potential of non-equilibrium phenomena in metals and semiconductors for future light-based technologies.
Abstract:
This thesis aims at enhancing our fundamental understanding of the East Asian summer monsoon (EASM), and mechanisms implicated in its climatology in present-day and warmer climates. We focus on the most prominent feature of the EASM, i.e., the so-called Meiyu-Baiu (MB), which is characterized by a well-defined, southwest to northeast elongated quasi-stationary rainfall band, spanning from eastern China to Japan and into the northwestern Pacific Ocean in June and July.
We begin with an observational study of the energetics of the MB front in present-day climate. Analyses of the moist static energy (MSE) budget of the MB front indicate that horizontal advection of moist enthalpy, primarily of dry enthalpy, sustains the front in a region of otherwise negative net energy input into the atmospheric column. A decomposition of the horizontal dry enthalpy advection into mean, transient, and stationary eddy fluxes identifies the longitudinal thermal gradient due to zonal asymmetries and the meridional stationary eddy velocity as the most influential factors determining the pattern of horizontal moist enthalpy advection. Numerical simulations in which the Tibetan Plateau (TP) is either retained or removed show that the TP influences the stationary enthalpy flux, and hence the MB front, primarily by changing the meridional stationary eddy velocity, with reinforced southerly wind on the northwestern flank of the north Pacific subtropical high (NPSH) over the MB region and northerly wind to its north. Changes in the longitudinal thermal gradient are mainly confined to the near downstream of the TP, with the resulting changes in zonal warm air advection having a lesser impact on the rainfall in the extended MB region.
Similar mechanisms are shown to be implicated in present-climate simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5). We find that the spatial distribution of the EASM precipitation simulated by different models is highly correlated with the meridional stationary eddy velocity. The correlation becomes more robust when energy fluxes into the atmospheric column are considered, consistent with the observational analyses. The spread in the area-averaged rainfall amount can be partially explained by the spread in the simulated globally averaged precipitation, with the rest primarily due to lower-level meridional wind convergence. Clear relationships between precipitation and zonal and meridional eddy velocities are observed.
Finally, the response of the EASM to greenhouse gas forcing is investigated at different time scales in CMIP5 model simulations. The reduction of radiative cooling and the increase in continental surface temperature occur much more rapidly than changes in sea surface temperatures (SSTs). Without changes in SSTs, rainfall in the monsoon region decreases (increases) over ocean (land) in most models. On longer time scales, as SSTs increase, the rainfall changes are reversed. The total response to atmospheric CO2 forcing and subsequent SST warming is a large (modest) increase in rainfall over ocean (land) in the EASM region. Dynamic changes, in spite of significant contributions from the thermodynamic component, play an important role in setting up the spatial pattern of precipitation changes. Rainfall anomalies over East China are a direct consequence of local land-sea contrast, while changes in the larger-scale oceanic rainfall band are closely associated with the displacement of the larger-scale NPSH. Numerical simulations show that topography and SST patterns play an important role in rainfall changes in the EASM region.
Abstract:
This thesis is the culmination of field and laboratory studies aimed at assessing processes that affect the composition and distribution of atmospheric organic aerosol. An emphasis is placed on measurements conducted using compact and high-resolution Aerodyne Aerosol Mass Spectrometers (AMS). The first three chapters summarize results from aircraft campaigns designed to evaluate anthropogenic and biogenic impacts on marine aerosol and clouds off the coast of California. Subsequent chapters describe laboratory studies intended to evaluate gas and particle-phase mechanisms of organic aerosol oxidation.
The 2013 Nucleation in California Experiment (NiCE) was a campaign designed to study environments impacted by nucleated and/or freshly formed aerosol particles. Terrestrial biogenic aerosol with > 85% organic mass was observed to reside in the free troposphere above marine stratocumulus. This biogenic organic aerosol (BOA) originated from the Northwestern United States and was transported to the marine atmosphere during periodic cloud-clearing events. Spectra recorded by a cloud condensation nuclei counter demonstrated that BOA is CCN active. BOA enhancements at latitudes north of San Francisco, CA coincided with enhanced cloud water concentrations of organic species such as acetate and formate.
Airborne measurements conducted during the 2011 Eastern Pacific Emitted Aerosol Cloud Experiment (E-PEACE) were aimed at evaluating the contribution of ship emissions to the properties of marine aerosol and clouds off the coast of central California. In one study, analysis of organic aerosol mass spectra during periods of enhanced shipping activity yielded unique tracers indicative of cloud-processed ship emissions (m/z 42 and 99). The variation of their organic fraction (f42 and f99) was found to coincide with periods of heavy (f42 > 0.15; f99 > 0.04), moderate (0.05 < f42 < 0.15; 0.01 < f99 < 0.04), and negligible (f42 < 0.05; f99 < 0.01) ship influence. Application of these conditions to all measurements conducted during E-PEACE demonstrated that a large fraction of cloud droplet (72%) and dry aerosol mass (12%) sampled in the California coastal study region was heavily or moderately influenced by ship emissions. Another study investigated the chemical and physical evolution of a controlled organic plume emitted from the R/V Point Sur. Under sunny conditions, nucleated particles composed of oxidized organic compounds contributed nearly an order of magnitude more cloud condensation nuclei (CCN) than less oxidized particles formed under cloudy conditions. The processing time necessary for particles to become CCN active was short ( < 1 hr) compared to the time needed for particles to become hygroscopic at sub-saturated humidity ( > 4 hr).
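The f42/f99 thresholds quoted above can be transcribed into a small classifier. The handling of the case where the two markers disagree is an assumption, since the abstract does not say how such disagreements are resolved.

```python
def ship_influence(f42, f99):
    """Classify ship influence from the organic mass fractions f42 and f99,
    requiring both markers to agree (an assumption; the abstract does not
    specify how disagreements are resolved)."""
    if f42 > 0.15 and f99 > 0.04:
        return "heavy"
    if 0.05 < f42 < 0.15 and 0.01 < f99 < 0.04:
        return "moderate"
    if f42 < 0.05 and f99 < 0.01:
        return "negligible"
    return "mixed"

print(ship_influence(0.20, 0.05))   # -> heavy
print(ship_influence(0.02, 0.005))  # -> negligible
```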
Laboratory chamber experiments were also conducted to evaluate particle-phase processes influencing aerosol phase and composition. In one study, ammonium sulfate seed was coated with a layer of secondary organic aerosol (SOA) from toluene oxidation followed by a layer of SOA from α-pinene oxidation. The system exhibited different evaporative properties than ammonium sulfate seed initially coated with α-pinene SOA followed by a layer of toluene SOA. This behavior is consistent with a shell-and-core model and suggests limited mixing among different SOA types. Another study investigated the reactive uptake of isoprene epoxy diols (IEPOX) onto non-acidified aerosol. It was demonstrated that particle acidity has limited influence on organic aerosol formation onto ammonium sulfate seed, and that the chemical system is limited by the availability of nucleophiles such as sulfate.
Flow tube experiments were conducted to examine the role of iron in the reactive uptake and chemical oxidation of glycolaldehyde. Aerosol particles doped with iron and hydrogen peroxide were mixed with gas-phase glycolaldehyde and photochemically aged in a custom-built flow reactor. Compared to particles free of iron, iron-doped aerosols significantly enhanced the oxygen to carbon (O/C) ratio of accumulated organic mass. The primary oxidation mechanism is suggested to be a combination of Fenton and photo-Fenton reactions which enhance particle-phase OH radical concentrations.
Abstract:
This study concerns the longitudinal dispersion of fluid particles which are initially distributed uniformly over one cross section of a uniform, steady, turbulent open channel flow. The primary focus is on developing a method to predict the rate of dispersion in a natural stream.
Taylor's method of determining a dispersion coefficient, previously applied to flow in pipes and two-dimensional open channels, is extended to a class of three-dimensional flows which have large width-to-depth ratios, and in which the velocity varies continuously with lateral cross-sectional position. Most natural streams are included. The dispersion coefficient for a natural stream may be predicted from measurements of the channel cross-sectional geometry, the cross-sectional distribution of velocity, and the overall channel shear velocity. Tracer experiments are not required.
Large values of the dimensionless dispersion coefficient D/rU* are explained by lateral variations in downstream velocity. In effect, the characteristic length of the cross section is shown to be proportional to the width, rather than the hydraulic radius. The dimensionless dispersion coefficient depends approximately on the square of the width to depth ratio.
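The approximate quadratic dependence on the width-to-depth ratio described above can be sketched as follows; the prefactor c is a placeholder, not a constant fitted in the thesis.

```python
def dimensionless_dispersion(width, depth, c=0.1):
    """Rough D/(r U*) estimate from channel geometry alone, using the
    approximate quadratic dependence on the width-to-depth ratio.
    The prefactor c is a placeholder, not a fitted constant."""
    return c * (width / depth) ** 2

# Doubling the width-to-depth ratio quadruples the predicted coefficient.
wide = dimensionless_dispersion(60.0, 1.0)
narrow = dimensionless_dispersion(30.0, 1.0)
print(round(wide / narrow))  # -> 4
```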
A numerical program is given which is capable of generating the entire dispersion pattern downstream from an instantaneous point or plane source of pollutant. The program is verified by the theory for two-dimensional flow, and gives results in good agreement with laboratory and field experiments.
Both laboratory and field experiments are described. Twenty-one laboratory experiments were conducted: thirteen in two-dimensional flows, over both smooth and roughened bottoms; and eight in three-dimensional flows, formed by adding extreme side roughness to produce lateral velocity variations. Four field experiments were conducted in the Green-Duwamish River, Washington.
Both laboratory and field experiments prove that in three-dimensional flow the dominant mechanism for dispersion is lateral velocity variation. For instance, in one laboratory experiment the dimensionless dispersion coefficient D/rU* (where r is the hydraulic radius and U* the shear velocity) was increased by a factor of ten by roughening the channel banks. In three-dimensional laboratory flows, D/rU* varied from 190 to 640, a typical range for natural streams. For each experiment, the measured dispersion coefficient agreed with that predicted by the extension of Taylor's analysis within a maximum error of 15%. For the Green-Duwamish River, the average experimentally measured dispersion coefficient was within 5% of the prediction.
Abstract:
The isotopic and elemental abundances of noble gases in the solar system are investigated, using simple mixing models and mass-spectrometric measurements of the noble gases in meteorites and terrestrial rocks and minerals.
Primordial neon is modeled by two isotopically distinct components from the interstellar gas and dust. Neon from the gas dominates solar neon, which contains about ten times more 20Ne than 22Ne. Neon from the dust is represented in meteorites by neon-E, with 20Ne/22Ne less than 0.6. Isotopic variations in meteorites require neon from both dust and gas to be present. Mixing dust and gas without neon loss generates linear correlation lines on three-isotope and composition-concentration diagrams. A model for solar wind implantation predicts small deviations from linear mixing, due to preferential sputtering of the lighter neon isotopes.
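The linear-mixing arithmetic described above can be sketched as follows. The component concentrations and the dust-end ratio in the example are illustrative placeholders; 13.7 is the solar-wind 20Ne/22Ne value reported later in this abstract.

```python
def mixture_ratio(f_gas, conc_gas, r_gas, conc_dust, r_dust):
    """20Ne/22Ne ratio of a dust-gas mixture.
    f_gas: mass fraction of the gas component; conc_*: 22Ne concentration
    per unit mass of each component; r_*: 20Ne/22Ne ratio of each end-member."""
    ne22 = f_gas * conc_gas + (1.0 - f_gas) * conc_dust
    ne20 = f_gas * conc_gas * r_gas + (1.0 - f_gas) * conc_dust * r_dust
    return ne20 / ne22

# Endpoints reproduce the pure components; intermediate mixtures fall on a
# straight line in a three-isotope diagram when no neon is lost.
print(mixture_ratio(1.0, 1.0, 13.7, 2.0, 0.6))  # -> 13.7 (pure gas)
print(mixture_ratio(0.0, 1.0, 13.7, 2.0, 0.6))  # -> 0.6  (pure dust)
```

The deviations from this straight line predicted by the solar wind implantation model arise because sputtering preferentially removes the lighter isotope, which this simple mass balance ignores.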
Neon in meteorites consists of galactic cosmic ray spallation neon and at least two primordial components, neon-E and neon-S. Neon was measured in several meteorites to investigate these end-members. Cosmogenic neon produced from sodium is found to be strongly enriched in 22Ne. Neon measurements on sodium-rich samples must be interpreted with care so as not to confuse this source of 22Ne with neon-E, which is also rich in 22Ne.
Neon data for the carbonaceous chondrite Mokoia show that the end-member composition of neon-S in meteorites is 20Ne/22Ne = 13.7, the same as the present solar wind. The solar wind composition has evidently remained constant since before the compaction of Mokoia.
Ca, Al-rich inclusions from the Allende meteorite were examined for correlation between neon-E and oxygen or magnesium isotopic anomalies. 22Ne and 36Ar enrichments found in some inclusions are attributed to cosmic-ray-induced reactions on Na and Cl, not to a primordial component. Neon-E is not detectably enriched in Allende.
Measurements were made to determine the noble gas contents of various terrestrial rocks and minerals, and to investigate the cycling of noble gases between different terrestrial reservoirs. Beryl crystals contain a characteristic suite of magmatic gases including nucleogenic 21Ne and 22Ne from (α,n) reactions, radiogenic 40Ar, and fissiogenic 131-136Xe from the decay of K and U in the continental crust. Significant concentrations of atmospheric noble gases are also present in beryl.
Both juvenile and atmospheric noble gases are found in rocks from the Skaergaard intrusion. The ratio 40Ar/36Ar (corrected for in situ decay of 40K) correlates with δ18O in plagioclase. Atmospheric argon has been introduced into samples that have experienced oxygen-isotope exchange with circulating meteoric hydrothermal fluids. Unexchanged samples contain juvenile argon with 40Ar/36Ar greater than 6000 that was trapped from the Skaergaard magma.
Juvenile and atmospheric gases have been measured in the glassy rims of mid-ocean ridge (MOR) pillow basalts. Evidence is presented that three samples contain excess radiogenic 129Xe and fission xenon, in addition to the excess radiogenic 40Ar found in all samples. These juvenile gases are being outgassed from the upper-mantle source region of the MOR magma. No isotopic evidence has been found here for juvenile primordial noble gases accompanying the juvenile radiogenic gases in the MOR glasses. Large argon isotopic variations in a single specimen provide a clear indication of the late-stage addition of atmospheric argon, probably from seawater.
The Skaergaard data demonstrate that atmospheric noble gases dissolved in ground water can be transferred into crustal rocks. Subduction of oceanic crust altered by seawater can transport atmospheric noble gases into the upper mantle. A substantial portion of the noble gases in mantle derived rocks may represent subducted gases, not a primordial component as is often assumed.