22 results for Laboratory techniques

in CaltechTHESIS


Relevance: 20.00%

Publisher:

Abstract:

The determination of the energy levels and the probabilities of transition between them, by the formal analysis of observed electronic, vibrational, and rotational band structures, forms the direct goal of all investigations of molecular spectra, but the significance of such data lies in the possibility of relating them theoretically to more concrete properties of molecules and the radiation field. From the well-developed electronic spectra of diatomic molecules, it has been possible, with the aid of the non-relativistic quantum mechanics, to obtain accurate moments of inertia, molecular potential functions, electronic structures, and detailed information concerning the coupling of spin and orbital angular momenta with the angular momentum of nuclear rotation. The silicon fluoride molecule has been investigated in this laboratory, and is found to emit bands whose vibrational and rotational structures can be analyzed in this detailed fashion.

Like silicon fluoride, however, the great majority of diatomic molecules are formed only under the unusual conditions of electrical discharge, or in high temperature furnaces, so that although their spectra are of great theoretical interest, the chemist is eager to proceed to a study of polyatomic molecules, in the hope that their more practically interesting structures might also be determined with the accuracy and assurance which characterize the spectroscopic determinations of the constants of diatomic molecules. Some progress has been made in the determination of molecular potential functions from the vibrational term values deduced from Raman and infrared spectra, but in no case can the calculations be carried out with great generality, since the number of known term values is always small compared with the total number of potential constants in even so restricted a potential function as the simple quadratic type. For the determination of nuclear configurations and bond distances, however, a knowledge of the rotational terms is required. The spectra of about twelve of the simpler polyatomic molecules have been subjected to rotational analyses, and a number of bond distances are known with considerable accuracy, yet the number of molecules whose rotational fine structure has been resolved even with the most powerful instruments is small. Consequently, it was felt desirable to investigate the spectra of a number of other promising polyatomic molecules, with the purpose of carrying out complete rotational analyses of all resolvable bands, and ascertaining the value of the unresolved band envelopes in determining the structures of such molecules, in the cases in which resolution is no longer possible.
Although many of the compounds investigated absorbed too feebly to be photographed under high dispersion with the present infrared sensitizations, the location and relative intensities of their bands, determined by low dispersion measurements, will be reported in the hope that these compounds may be reinvestigated in the future with improved techniques.

Relevance: 20.00%

Publisher:

Abstract:

Secondary organic aerosol (SOA) is produced in the atmosphere by oxidation of volatile organic compounds. Laboratory chambers are used to understand the formation mechanisms and evolution of SOA formed under controlled conditions. This thesis presents studies of SOA formed from anthropogenic and biogenic precursors and discusses the effects of chamber walls on suspended vapors and particles.

During a chamber experiment, suspended vapors and particles can interact with the chamber walls. Particle wall loss is relatively well-understood, but vapor wall losses have received little study. Vapor wall loss of 2,3-epoxy-1,4-butanediol (BEPOX) and glyoxal was identified, quantified, and found to depend on chamber age and relative humidity.
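Vapor deposition to chamber walls is commonly treated as a first-order loss process. The sketch below illustrates that treatment; the rate constant and concentrations are illustrative assumptions, not the measured BEPOX or glyoxal values.

```python
import math

def vapor_concentration(c0, k_wall, t):
    """First-order wall-loss decay: C(t) = C0 * exp(-k_wall * t).

    c0     : initial suspended vapor concentration (e.g., ppb)
    k_wall : first-order wall-loss rate constant (1/h), assumed value
    t      : elapsed time (h)
    """
    return c0 * math.exp(-k_wall * t)

# Illustrative numbers only: 10 ppb of vapor, k_wall = 0.2 / h.
c_6h = vapor_concentration(10.0, 0.2, 6.0)  # ~3 ppb remains after 6 h
```

In practice k_wall would be fit to the observed decay of a vapor in a particle-free chamber, which is how a dependence on chamber age and relative humidity would show up.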

Particles reside in the atmosphere for a week or more and can evolve chemically during that time period, a process termed aging. Simulating aging in laboratory chambers has proven to be challenging. A protocol was developed to extend the duration of a chamber experiment to 36 h of oxidation and was used to evaluate aging of SOA produced from m-xylene. Total SOA mass concentration increased and then decreased with increasing photooxidation, suggesting a transition from functionalization to fragmentation chemistry driven by photochemical processes. SOA oxidation, measured as the bulk particle elemental oxygen-to-carbon ratio and fraction of organic mass at m/z 44, increased continuously starting after 5 h of photooxidation.

The physical state and chemical composition of an organic aerosol affect the mixing of aerosol components and its interactions with condensing species. A laboratory chamber protocol was developed to evaluate the mixing of SOA produced sequentially from two different sources by heating the chamber to induce particle evaporation. Using this protocol, SOA produced from toluene was found to be less volatile than that produced from α-pinene. When the two types of SOA were formed sequentially, the evaporation behavior most closely represented that of SOA from the second parent hydrocarbon, suggesting that the structure of the mixed SOA particles resembles a core of SOA from the first precursor coated by a layer of SOA from the second precursor, indicative of limited mixing.

Relevance: 20.00%

Publisher:

Abstract:

The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.

Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide indication of the damage location. This is of significance for sparsely-instrumented civil structures.
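The premise that a hammer blow can approximate a damage event can be illustrated with discrete convolution: for a linear structure, the response to any short force history is the impulse response function (IRF) convolved with that history. The IRF and force histories below are hypothetical, not data from the experiments described.

```python
import math

def convolve(irf, force):
    """Discrete convolution: a linear structure's response equals its
    impulse response function (IRF) convolved with the force history."""
    out = [0.0] * (len(irf) + len(force) - 1)
    for i, h in enumerate(irf):
        for j, f in enumerate(force):
            out[i + j] += h * f
    return out

# Hypothetical decaying-oscillation IRF (not measured data):
irf = [math.exp(-0.1 * k) * math.sin(0.5 * k) for k in range(50)]
hammer = [1.0]          # idealized short hammer blow
fracture = [0.8, 0.2]   # assumed slightly longer damage-event force

resp_hammer = convolve(irf, hammer)
resp_damage = convolve(irf, fracture)
# the two responses are nearly identical, which is the premise the
# nondestructive-force approximation exploits
```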

The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that inhibits the amount of motion between two floors. The method is successfully applied and is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. 
Acceleration records recorded after the date of damage show a clear increase in high-frequency short-duration pulses compared to those previously recorded. One undamaged pulse and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
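The identification step can be sketched as a simple amplitude-threshold pulse picker. The threshold, merge gap, and synthetic record below are illustrative assumptions, not the classifier developed in the thesis.

```python
def detect_pulses(accel, threshold, min_gap=5):
    """Flag short-duration high-amplitude pulses: samples whose absolute
    value exceeds `threshold`, merging detections closer than `min_gap`
    samples into one pulse. Returns the index of each pulse onset."""
    pulses = []
    last = -min_gap - 1
    for i, a in enumerate(accel):
        if abs(a) > threshold:
            if i - last > min_gap:
                pulses.append(i)
            last = i
    return pulses

# Quiet record with two injected spikes (synthetic, for illustration):
record = [0.01] * 100
record[30] = 1.5
record[70] = -2.0
print(detect_pulses(record, threshold=0.5))  # -> [30, 70]
```

A real implementation would also compare pulse signatures (amplitudes, arrival times across sensors) between the known-undamaged and potentially damaged states.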

Relevance: 20.00%

Publisher:

Abstract:

Metallic glasses have typically been treated as a “one size fits all” type of material. Every alloy is considered to have high strength, high hardness, large elastic limits, corrosion resistance, etc. However, similar to traditional crystalline materials, properties are strongly dependent upon the constituent elements, how the material was processed, and the conditions under which it will be used. An important distinction which can be made is between metallic glasses and their composites. Charpy impact toughness measurements are performed to determine the effect processing and microstructure have on bulk metallic glass matrix composites (BMGMCs). Samples are suction cast, machined from commercial plates, and semi-solidly forged (SSF). The SSF specimens have been found to have the highest impact toughness due to the coarsening of the dendrites, which occurs during the semi-solid processing stages. Ductile to brittle transition (DTBT) temperatures are measured for a BMGMC. While at room temperature the BMGMC is highly toughened compared to a fully glassy alloy, it undergoes a DTBT by 250 K. At this point, its impact toughness mirrors that of the constituent glassy matrix. In the following chapter, BMGMCs are shown to have the capability of being capacitively welded to form single, monolithic structures. Shear measurements are performed across welded samples, and, at sufficient weld energies, are found to retain the strength of the parent alloy. Cross-sections are inspected via SEM and no visible crystallization of the matrix occurs.

Next, metallic glasses and BMGMCs are formed into sheets, and eggbox structures are tested in hypervelocity impacts. Metallic glasses are ideal candidates for protection against micrometeorite orbital debris due to their high hardness and relatively low density. A flat, single-layer BMG is compared to a BMGMC eggbox, and the latter creates a more diffuse projectile cloud after penetration. A three-tiered eggbox structure is also tested by firing a 3.17 mm aluminum sphere at it at 2.7 km/s. The projectile penetrates the first two layers, but is successfully contained by the third.

A large series of metallic glass alloys are created and their wear loss is measured in a pin-on-disk test. Wear is found to vary dramatically among different metallic glasses, with some considerably outperforming the current state-of-the-art crystalline material (most notably Cu₄₃Zr₄₃Al₇Be₇). Others, on the other hand, suffer extensive wear loss. Commercially available Vitreloy 1 lost nearly three times as much mass in wear as the same alloy prepared in a laboratory setting. No conclusive correlations can be found between any set of mechanical properties (hardness, density, elastic, bulk, or shear modulus, Poisson’s ratio, frictional force, and run-in time) and wear loss. Heat treatments are performed on Vitreloy 1 and Cu₄₃Zr₄₃Al₇Be₇. Anneals near the glass transition temperature are found to increase hardness slightly, but decrease wear loss significantly. Crystallization of both alloys leads to dramatic increases in wear resistance. Finally, wear tests under vacuum are performed on the two alloys above. Vitreloy 1 experiences a dramatic decrease in wear loss, while Cu₄₃Zr₄₃Al₇Be₇ has a moderate increase. Meanwhile, gears are fabricated through three techniques: electrical discharge machining of 1 cm by 3 mm cylinders, semi-solid forging, and copper mold suction casting. Initial testing finds the pin-on-disk test to be an accurate predictor of wear performance in gears.
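For context, pin-on-disk wear loss is often summarized with Archard's wear law, which relates volume loss to load, sliding distance, and hardness. Archard's law is not derived in the text above, and every number in the sketch is an illustrative assumption rather than a measured value.

```python
def archard_volume_loss(k, load_n, sliding_m, hardness_pa):
    """Archard wear law: V = k * F * s / H  (volume loss in m^3).

    k           : dimensionless wear coefficient (assumed)
    load_n      : normal load on the pin (N)
    sliding_m   : total sliding distance (m)
    hardness_pa : hardness of the softer surface (Pa)
    """
    return k * load_n * sliding_m / hardness_pa

# Hypothetical glass: H = 6 GPa, 10 N load, 100 m of sliding, k = 1e-4
v = archard_volume_loss(1e-4, 10.0, 100.0, 6e9)  # ~1.7e-11 m^3
```

The thesis's finding that hardness alone does not predict wear loss is, in these terms, the observation that k itself varies strongly from alloy to alloy.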

The final chapter explores an exciting technique in the field of additive manufacturing. Laser engineered net shaping (LENS) is a method whereby small amounts of metallic powders are melted by a laser such that shapes and designs can be built layer by layer into a final part. The technique is extended to mixing different powders during melting, so that compositional gradients can be created across a manufactured part. Two compositional gradients are fabricated and characterized. Ti-6Al-4V to pure vanadium was chosen for its combination of high strength and light weight on one end, and high melting point on the other. It was inspected by cross-sectional x-ray diffraction, and only the anticipated phases were present. 304L stainless steel to Invar 36 was created both as a pillar and as a radial gradient. It combines strength and weldability with a zero-coefficient-of-thermal-expansion material. Only the austenite phase is found to be present via x-ray diffraction. The coefficient of thermal expansion is measured for four compositions and is found to be tunable with composition.

Relevance: 20.00%

Publisher:

Abstract:

The study of the strength of a material is relevant to a variety of applications including automobile collisions, armor penetration and inertial confinement fusion. Although dynamic behavior of materials at high pressures and strain-rates has been studied extensively using plate impact experiments, the results provide measurements in one direction only. Material behavior that is dependent on strength is unaccounted for. The research in this study proposes two novel configurations to mitigate this problem.

The first configuration introduced is the oblique wedge experiment, which comprises a driver material, an angled target of interest and a backing material used to measure in-situ velocities. Upon impact, a shock wave is generated in the driver material. As the shock encounters the angled target, it is reflected back into the driver and transmitted into the target. Due to the angle of obliquity of the incident wave, a transverse wave is generated that allows the target to be subjected to shear while being compressed by the initial longitudinal shock such that the material does not slip. Using numerical simulations, this study shows that a variety of oblique wedge configurations can be used to study the shear response of materials and this can be extended to strength measurement as well. Experiments were performed on an oblique wedge setup with a copper impactor, polymethylmethacrylate driver, aluminum 6061-T6 target, and a lithium fluoride window. Particle velocities were measured using laser interferometry and results agree well with the simulations.

The second novel configuration is the y-cut quartz sandwich design, which uses the anisotropic properties of y-cut quartz to generate a shear wave that is transmitted into a thin sample. By using an anvil material to back the thin sample, particle velocities measured at the rear surface of the backing plate can be implemented to calculate the shear stress in the material and subsequently the strength. Numerical simulations were conducted to show that this configuration has the ability to measure the strength for a variety of materials.
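The step from a measured rear-surface velocity to a shear stress can be sketched with a simplified one-dimensional elastic relation: shear stress is the shear impedance (density times shear wave speed) multiplied by the in-material transverse particle velocity, which at a free surface is roughly half the measured velocity. This relation, the factor-of-two free-surface correction, and all numbers below are illustrative assumptions, not the thesis's actual analysis.

```python
def shear_stress_from_free_surface(rho, c_s, u_free_surface):
    """Simplified 1-D elastic estimate of shear stress from a measured
    free-surface transverse velocity:

        tau = rho * c_s * (u_free_surface / 2)

    rho            : density of the backing plate (kg/m^3), assumed
    c_s            : shear wave speed in the backing plate (m/s), assumed
    u_free_surface : measured transverse free-surface velocity (m/s)
    """
    return rho * c_s * (u_free_surface / 2.0)

# Hypothetical anvil: 4000 kg/m^3, shear speed 5000 m/s, measured
# transverse free-surface velocity 20 m/s:
tau = shear_stress_from_free_surface(4000.0, 5000.0, 20.0)  # 2.0e8 Pa
```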

Relevance: 20.00%

Publisher:

Abstract:

This thesis reports on a method to improve in vitro diagnostic assays that detect immune response, with specific application to HIV-1. The inherent polyclonal diversity of the humoral immune response was addressed by using sequential in situ click chemistry to develop a cocktail of peptide-based capture agents, the components of which were raised against different, representative anti-HIV antibodies that bind to a conserved epitope of the HIV-1 envelope protein gp41. The cocktail was used to detect anti-HIV-1 antibodies from a panel of sera collected from HIV-positive patients, with improved signal-to-noise ratio relative to the gold standard commercial recombinant protein antigen. The capture agents were stable when stored as a powder for two months at temperatures close to 60°C.

Relevance: 20.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD) that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
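The posterior bookkeeping at the heart of such a sequential design can be sketched as a Bayesian update over candidate theories after each observed choice. The theories, prior, and likelihood numbers below are toy assumptions; this is not BROAD's EC2 test-selection machinery, only the update it performs between tests.

```python
def bayes_update(prior, likelihoods):
    """One sequential-testing step: combine a prior P(theory) with
    P(observed choice | theory) for each theory and renormalize."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(post)
    return [x / z for x in post]

# Three hypothetical theories under a uniform prior; the observed choice
# is likely under theory 0 and unlikely under theory 2 (made-up numbers):
posterior = bayes_update([1 / 3, 1 / 3, 1 / 3], [0.9, 0.5, 0.1])
# posterior ~ [0.60, 0.33, 0.07]: theory 0 gains mass, theory 2 loses it
```

Repeating this after each test, with tests chosen to maximally separate the surviving theories, is what lets inconsistent theories be eliminated with few trials.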

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
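The competing discount functions can be written out directly. The sketch below uses the common (β, δ) notation for the quasi-hyperbolic model (the thesis writes it as (α, β); the parameters play the same roles), and all parameter values are illustrative, not estimates from the experiment.

```python
def exponential(delta, t):
    """Exponential discounting: weight delta**t on a payoff t periods away."""
    return delta ** t

def hyperbolic(k, t):
    """Hyperbolic discounting: weight 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(beta, delta, t):
    """'Present bias' (beta-delta) discounting: full weight on an
    immediate payoff, beta * delta**t on any delayed one."""
    return 1.0 if t == 0 else beta * delta ** t

# Illustrative parameters only (not estimates from the experiment):
for t in (0, 1, 10):
    print(t, exponential(0.95, t), hyperbolic(0.1, t),
          quasi_hyperbolic(0.7, 0.95, t))
```

The qualitative signature the experiment looks for is visible here: quasi-hyperbolic weights drop sharply between t = 0 and t = 1, while exponential weights decay at a constant rate.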

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
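A minimal version of a discrete choice model with a loss-averse utility function might look like the following. The functional form, the λ value, and the prices are illustrative assumptions, not the model estimated on the retailer's data.

```python
import math

def loss_averse_utility(price, ref_price, lam=2.25):
    """Reference-dependent utility of buying at `price`: paying less than
    the reference price is a gain; paying more is a loss weighted `lam`
    times as heavily (lam > 1 encodes loss aversion). Toy functional
    form and lambda value, chosen for illustration only."""
    gain = ref_price - price
    return gain if gain >= 0 else lam * gain

def logit_choice_prob(utils):
    """Standard multinomial-logit choice probabilities over the options."""
    exps = [math.exp(u) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

# A discounted item (price below its reference) vs. a substitute priced
# at its reference:
p = logit_choice_prob([loss_averse_utility(8.0, 10.0),
                       loss_averse_utility(10.0, 10.0)])
# the discounted item's share exceeds what a reference-free model implies
```

Removing the discount resets the comparison: the old price now registers as a loss against the lowered reference, shifting demand toward the substitute, which is the prediction tested in the text above.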

In future work, BROAD can be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Publisher:

Abstract:

Experimental studies were conducted with the goals of 1) determining the origin of Pt-group element (PGE) alloys and associated mineral assemblages in refractory inclusions from meteorites and 2) developing a new ultrasensitive method for the in situ chemical and isotopic analysis of PGE. A general review of the geochemistry and cosmochemistry of the PGE is given, and specific research contributions are presented within the context of this broad framework.

An important step toward understanding the cosmochemistry of the PGE is the determination of the origin of PGE-rich metallic phases (most commonly εRu-Fe) that are found in Ca,Al-rich refractory inclusions (CAI) in C3V meteorites. These metals occur along with γNi-Fe metals, Ni-Fe sulfides and Fe oxides in multiphase opaque assemblages. Laboratory experiments were used to show that the mineral assemblages and textures observed in opaque assemblages could be produced by sulfidation and oxidation of once homogeneous Ni-Fe-PGE metals. Phase equilibria, partitioning and diffusion kinetics were studied in the Ni-Fe-Ru system in order to quantify the conditions of opaque assemblage formation. Phase boundaries and tie lines in the Ni-Fe-Ru system were determined at 1273, 1073 and 873 K using an experimental technique that allowed the investigation of a large portion of the Ni-Fe-Ru system with a single experiment at each temperature by establishing a concentration gradient within which local equilibrium between coexisting phases was maintained. A wide miscibility gap was found to be present at each temperature, separating a hexagonal close-packed εRu-Fe phase from a face-centered cubic γNi-Fe phase. Phase equilibria determined here for the Ni-Fe-Ru system, and phase equilibria from the literature for the Ni-Fe-S and Ni-Fe-O systems, were compared with analyses of minerals from opaque assemblages to estimate the temperature and chemical conditions of opaque assemblage formation. It was determined that opaque assemblages equilibrated at a temperature of ~770 K, a sulfur fugacity 10 times higher than that of an equilibrium solar gas, and an oxygen fugacity 10⁶ times higher than that of an equilibrium solar gas.

Diffusion rates between γNi-Fe and εRu-Fe metal play a critical role in determining the time (with respect to CAI petrogenesis) and duration of the opaque assemblage equilibration process. The diffusion coefficient for Ru in Ni (DRuNi) was determined as an analog for the Ni-Fe-Ru system by the thin-film diffusion method in the temperature range of 1073 to 1673 K and is given by the expression:

DRuNi (cm² sec⁻¹) = 5.0(±0.7) × 10⁻³ exp(−2.3(±0.1) × 10¹² erg mole⁻¹/RT), where R is the gas constant and T is the temperature in K. Based on the rates of dissolution and exsolution of metallic phases in the Ni-Fe-Ru system it is suggested that opaque assemblages equilibrated after the melting and crystallization of host CAI during a metamorphic event of ≥ 10³ years duration. It is inferred that opaque assemblages originated as immiscible metallic liquid droplets in the CAI silicate liquid. The bulk compositions of PGE in these precursor alloys reflect an early stage of condensation from the solar nebula, and the partitioning of V between the precursor alloys and CAI silicate liquid reflects the reducing nebular conditions under which CAI were melted. The individual mineral phases now observed in opaque assemblages do not preserve an independent history prior to CAI melting and crystallization, but instead provide important information on the post-accretionary history of C3V meteorites and allow the quantification of the temperature, sulfur fugacity and oxygen fugacity of cooling planetary environments. This contrasts with previous models that called upon the formation of opaque assemblages by aggregation of phases that formed independently under highly variable conditions in the solar nebula prior to the crystallization of CAI.
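The fitted Arrhenius expression can be evaluated directly at the endpoints of the measured temperature range. The only added assumption is the value of the gas constant in erg units; the pre-exponential factor and activation energy are those quoted above.

```python
import math

R_ERG = 8.314e7  # gas constant in erg mole^-1 K^-1 (assumed CODATA value)

def d_ru_in_ni(temp_k):
    """Diffusion coefficient of Ru in Ni (cm^2/s) from the quoted fit:
    D = 5.0e-3 * exp(-2.3e12 erg/mole / (R * T))."""
    return 5.0e-3 * math.exp(-2.3e12 / (R_ERG * temp_k))

# Endpoints of the measured 1073-1673 K range:
d_hot = d_ru_in_ni(1673.0)   # ~3e-10 cm^2/s
d_cool = d_ru_in_ni(1073.0)  # several orders of magnitude slower
```

The steep drop in D with falling temperature is what ties the inferred equilibration to a long (≥ 10³ yr) metamorphic event rather than to the brief melting of the host CAI.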

Analytical studies were carried out on PGE-rich phases from meteorites and the products of synthetic experiments using traditional electron microprobe x-ray analytical techniques. The concentrations of PGE in common minerals from meteorites and terrestrial rocks are far below the ~100 ppm detection limit of the electron microprobe. This has limited the scope of analytical studies to the very few cases where PGE are unusually enriched. To study the distribution of PGE in common minerals will require an in situ analytical technique with much lower detection limits than any methods currently in use. To overcome this limitation, resonance ionization of sputtered atoms was investigated for use as an ultrasensitive in situ analytical technique for the analysis of PGE. The mass spectrometric analysis of Os and Re was investigated using a pulsed primary Ar+ ion beam to provide sputtered atoms for resonance ionization mass spectrometry. An ionization scheme for Os that utilizes three resonant energy levels (including an autoionizing energy level) was investigated and found to have superior sensitivity and selectivity compared to nonresonant and one- and two-energy-level resonant ionization schemes. An elemental selectivity for Os over Re of ≥ 10³ was demonstrated. It was found that detuning the ionizing laser from the autoionizing energy level to an arbitrary region in the ionization continuum resulted in a five-fold decrease in signal intensity and a ten-fold decrease in elemental selectivity. Osmium concentrations in synthetic metals and iron meteorites were measured to demonstrate the analytical capabilities of the technique. A linear correlation between Os+ signal intensity and the known Os concentration was observed over a range of nearly 10⁴ in Os concentration with an accuracy of ~ ±10%, a minimum detection limit of 7 parts per billion atomic, and a useful yield of 1%.
Resonance ionization of sputtered atoms samples the dominant neutral-fraction of sputtered atoms and utilizes multiphoton resonance ionization to achieve high sensitivity and to eliminate atomic and molecular interferences. Matrix effects should be small compared to secondary ion mass spectrometry because ionization occurs in the gas phase and is largely independent of the physical properties of the matrix material. Resonance ionization of sputtered atoms can be applied to in situ chemical analysis of most high ionization potential elements (including all of the PGE) in a wide range of natural and synthetic materials. The high useful yield and elemental selectivity of this method should eventually allow the in situ measurement of Os isotope ratios in some natural samples and in sample extracts enriched in PGE by fire assay fusion.

Phase equilibria and diffusion experiments have provided the basis for a reinterpretation of the origin of opaque assemblages in CAI and have yielded quantitative information on conditions in the primitive solar nebula and cooling planetary environments. Development of the method of resonance ionization of sputtered atoms for the analysis of Os has shown that this technique has wide applications in geochemistry and will for the first time allow in situ studies of the distribution of PGE at the low concentration levels at which they occur in common minerals.

Relevance: 20.00%

Publisher:

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss versus residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 28. Contributions to the mass resolution due to uncertainties in measuring the path-length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 28).
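The quoted degradation from the Landau limit is what one expects if the independent error sources add in quadrature. The sketch below assumes that error model; the individual path-length and energy-loss contributions used are hypothetical, chosen only to be consistent with the quoted totals.

```python
import math

def total_mass_resolution(sigma_landau, sigma_path, sigma_energy):
    """Combine independent mass-resolution contributions in quadrature
    (an assumed error model, consistent with the quoted numbers)."""
    return math.sqrt(sigma_landau**2 + sigma_path**2 + sigma_energy**2)

# For Z ~ 3: a ~0.07 AMU Landau limit plus hypothetical path-length and
# energy-loss terms of ~0.05 AMU each yields roughly the quoted ~0.1 AMU:
sigma = total_mass_resolution(0.07, 0.05, 0.05)  # ~0.10 AMU
```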

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
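The tracer logic can be sketched as follows: the purely secondary isotope fixes the normalization of the calculated secondary production, which is then subtracted from the other isotopes of the element. All abundances and production terms below are illustrative numbers, not measured values:

```python
# Hedged sketch of the purely-secondary-tracer idea: isotope names and
# numbers are illustrative, not the thesis's measured abundances.
def source_abundances(observed, secondary, tracer):
    """observed: isotope -> local interstellar abundance;
    secondary: isotope -> calculated secondary production (same units,
    up to one common normalization); tracer: the purely secondary isotope.
    The tracer fixes the normalization, which is then subtracted from
    every other isotope to estimate its abundance at the sources."""
    scale = observed[tracer] / secondary[tracer]
    return {iso: obs - scale * secondary[iso]
            for iso, obs in observed.items() if iso != tracer}
```

The propagated uncertainty of the result is then dominated by the fragmentation cross sections entering `secondary` and by the statistics of `observed`, as discussed below.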

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Resumo:

With continuing advances in CMOS technology, feature sizes of modern Silicon chip-sets have gone down drastically over the past decade. In addition to desktops and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels leading to billions of transistors co-existing on a single chip, it also makes these Silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly due to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF or millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high performance RF/mm-wave Silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting edge processes are geared towards digital system implementation and as such there is little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error Silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique that counters the detrimental effects of these variations, improving both performance and yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.
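The sense/optimize/actuate loop can be sketched with a simple gradient-free search over digital actuator codes; the quadratic "sensor" below is a stand-in for an on-chip performance measurement, not the dissertation's hardware or its actual optimization algorithm:

```python
# Minimal self-healing loop sketch: greedy coordinate search over a small
# digital actuator space, maximizing a sensed figure of merit.
def self_heal(sense, n_knobs, levels=8, sweeps=2):
    """Sweep each actuator knob in turn, keeping the code that maximizes
    the sensed metric; repeat for a few sweeps to settle."""
    setting = [levels // 2] * n_knobs              # start mid-scale
    for _ in range(sweeps):
        for k in range(n_knobs):
            setting[k] = max(range(levels),
                             key=lambda c: sense(setting[:k] + [c] + setting[k + 1:]))
    return setting

# Toy sensor whose optimum sits at actuator codes (3, 5):
sense = lambda s: -((s[0] - 3) ** 2 + (s[1] - 5) ** 2)
```

An exhaustive search would also work for two knobs, but the coordinate sweep scales linearly rather than exponentially with the number of actuators, which is what makes on-chip implementation plausible.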

We demonstrate a high-power mm-wave segmented power mixer array based transmitter architecture that is capable of generating high-speed and non-constant envelope modulations at higher efficiencies compared to existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully-integrated self-healing in the context of another mm-wave power amplifier, where measurements were performed across several chips, showing significant improvements in performance as well as reduced variability in the presence of process variations and load impedance mismatch, as well as catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase synthesis scheme is demonstrated in conjunction with a wide-band voltage controlled oscillator to generate phase shifter local oscillator (LO) signals for a phased array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.

Resumo:

Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects that employ optical channels have negligible frequency-dependent loss and offer a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on the generation and distribution of multi-phase, high-speed, high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques encounter design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique for addressing these challenges, but its small locking range has been a major obstacle to its widespread adoption.

In the first part of this dissertation we describe a wideband injection-locking scheme in an LC oscillator. A phase-locked loop (PLL) and injection-locking elements are combined symbiotically to achieve a wide locking range while retaining the simplicity of the latter. This method does not require a phase-frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for the new locking range is derived. A locking range of 13.4–17.2 GHz (25%) and an average jitter-tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded-clock receivers.
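For context, the bare injection-locking mechanism being extended here has the classical Adler locking range (quoted as the standard textbook result, not the new expression derived in this work):

```latex
\Delta\omega_L \approx \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}},
\qquad I_{\mathrm{inj}} \ll I_{\mathrm{osc}},
```

which makes explicit why a high-Q LC tank on its own locks only over a narrow band, and why PLL assistance is needed to widen it.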

To improve the locking range of an injection-locked ring oscillator, a quadrature locked loop (QLL) is introduced. The inherent dynamics of an injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7–7.4 GHz) to 90% (4–11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter rate. The QLL drives an injection-locked oscillator (ILO) at each channel, without any repeaters, for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps to achieve a reliable data rate of 16–32 Gb/s, and adaptive body biasing aids in maintaining an ultra-low power consumption of 153 pJ/bit.

From the optical receiver we move on to a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, enabling low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modelling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimal for the non-linear optical response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. The equalization technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.
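For reference, the conventional FIR pre-emphasis that the text identifies as the LTI baseline (and as sub-optimal for the VCSEL) can be sketched as a 2-tap filter; the tap weights below are illustrative, not optimized values:

```python
import numpy as np

# Conventional 2-tap FIR pre-emphasis: each drive level is the current
# symbol plus a fraction of the previous one, boosting transitions to
# pre-compensate a low-pass LTI channel. Weights are illustrative.
def preemphasize(symbols, main=1.0, post=-0.25):
    """symbols: +/-1 data sequence; returns the pre-emphasized drive levels."""
    x = np.asarray(symbols, dtype=float)
    prev = np.concatenate(([0.0], x[:-1]))   # previous symbol, zero-padded
    return main * x + post * prev
```

Because the VCSEL's response depends on its operating point (a non-linear effect), a fixed linear tap set like this cannot equalize it optimally, which motivates the model-derived methodology described above.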

Resumo:

1. The effect of 2,2'-bis-[α-(trimethylammonium)methyl]azobenzene (2BQ), a photoisomerizable competitive antagonist, was studied at the nicotinic acetylcholine receptor of Electrophorus electroplaques using voltage-jump and light-flash techniques.

2. 2BQ, at concentrations below 3 μM, reduced the amplitude of voltage-jump relaxations but had little effect on the voltage-jump relaxation time constants under all experimental conditions. At higher concentrations and at voltages more negative than -150 mV, 2BQ caused significant open-channel blockade.

3. Dose-ratio studies showed that the cis and trans isomers of 2BQ have equilibrium binding constants (K) of 0.33 and 1.0 μM, respectively. The binding constants determined for both isomers are independent of temperature, voltage, agonist concentration, and the nature of the agonist.
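The arithmetic behind a dose-ratio determination is the Schild relation: a competitive antagonist at concentration B shifts the agonist dose-response curve by a ratio r = 1 + B/K, so K = B / (r - 1). A minimal sketch (the example concentrations are illustrative, not the experimental conditions):

```python
# Schild relation for a competitive antagonist: dose ratio r = 1 + B/K.
def binding_constant(B_uM, dose_ratio):
    """Equilibrium binding constant K (same units as B) from a dose ratio."""
    return B_uM / (dose_ratio - 1.0)

# e.g. 1 uM antagonist producing a dose ratio of 4 implies K = 0.33 uM,
# the value quoted above for the cis isomer:
K_cis = binding_constant(1.0, 4.0)
```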

4. In a solution of predominantly cis-2BQ, visible-light flashes led to a net cis→trans isomerization and caused an increase in the agonist-induced current. This increase had at least two exponential components; the larger amplitude component had the same time constant as a subsequent voltage-jump relaxation; the smaller amplitude component was investigated using ultraviolet light flashes.

5. In a solution of predominantly trans-2BQ, UV-light flashes led to a net trans→cis isomerization and caused a net decrease in the agonist-induced current. This effect had at least two exponential components. The smaller and faster component was an increase in agonist-induced current with a time constant similar to that of the voltage-jump relaxation. The larger component was a slow decrease in the agonist-induced current, with a rate constant approximately an order of magnitude smaller than that of the voltage-jump relaxation. This slow component provided a measure of the rate constant for dissociation of cis-2BQ (k₋ = 60/s at 20°C). Simple modelling of the slope of the dose-ratio curves yields an association rate constant of 1.6 × 10^8 /M/s, which agrees with the association rate constant of 1.8 × 10^8 /M/s estimated from the binding constant (K). The Q10 of the dissociation rate constant of cis-2BQ was 3.3 between 6°C and 20°C. The rate constants for association and dissociation of cis-2BQ at receptors are independent of voltage, agonist concentration, and the nature of the agonist.
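The consistency check implicit above is that association, dissociation, and equilibrium constants are linked by k_on = k_off / K; plugging in the quoted numbers reproduces the flash-derived value:

```python
# Rate-constant consistency check using the values quoted above.
k_off = 60.0          # /s, dissociation of cis-2BQ at 20 C
K = 0.33e-6           # M, equilibrium binding constant of the cis isomer
k_on = k_off / K      # ~1.8e8 /M/s, matching the estimate in the text

# Q10 temperature scaling of a rate constant between T1 and T2 (deg C):
def scale_rate(k1, T1, T2, Q10=3.3):
    """Rate at T2 given the rate k1 at T1, under Q10 scaling."""
    return k1 * Q10 ** ((T2 - T1) / 10.0)
```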

6. We have measured the molecular rate constants of a competitive antagonist which has roughly the same K as d-tubocurarine but interacts more slowly with the receptor. This leads to the conclusion that curare itself has an association rate constant of 4 × 10^9 /M/s, roughly as fast as possible for an encounter-limited reaction.

Resumo:

Our understanding of the processes and mechanisms by which secondary organic aerosol (SOA) is formed is derived from laboratory chamber studies. In the atmosphere, SOA formation is primarily driven by progressive photooxidation of SOA precursors, coupled with their gas-particle partitioning. In the chamber environment, SOA-forming vapors undergo multiple chemical and physical processes that involve production and removal via gas-phase reactions; partitioning onto suspended particles vs. particles deposited on the chamber wall; and direct deposition on the chamber wall. The main focus of this dissertation is to characterize the interactions of organic vapors with suspended particles and the chamber wall and explore how these intertwined processes in laboratory chambers govern SOA formation and evolution.

A Functional Group Oxidation Model (FGOM) that represents SOA formation and evolution in terms of the competition between functionalization and fragmentation, the extent of oxygen atom addition, and the change of volatility, is developed. The FGOM contains a set of parameters that are to be determined by fitting of the model to laboratory chamber data. The sensitivity of the model prediction to variation of the adjustable parameters allows one to assess the relative importance of various pathways involved in SOA formation.
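The parameter-estimation step described above can be sketched generically as a least-squares fit of adjustable model parameters to chamber observations; the toy growth curve below stands in for the actual FGOM equations, and all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the FGOM: a two-parameter SOA-mass growth curve.
def model(p, t):
    a, k = p
    return a * (1.0 - np.exp(-k * t))       # asymptote a, rate k

t_obs = np.linspace(0.0, 10.0, 20)          # hours (illustrative)
y_obs = model([5.0, 0.4], t_obs)            # synthetic "chamber data"

# Fit the adjustable parameters by minimizing the residuals:
fit = least_squares(lambda p: model(p, t_obs) - y_obs, x0=[1.0, 1.0])
```

Once fitted, perturbing each parameter and re-evaluating the residuals gives the sensitivity information used to rank the pathways.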

A critical aspect of the environmental chamber is the presence of the wall, which can induce deposition of SOA-forming vapors and promote heterogeneous reactions. An experimental protocol and model framework are first developed to constrain the vapor-wall interactions. By optimally fitting the model predictions to the observed wall-induced decay profiles of 25 oxidized organic compounds, the dominant parameter governing the extent of wall deposition of a compound is identified: the wall accommodation coefficient. By correlating this parameter with the molecular properties of a compound via its volatility, the wall-induced deposition rate of an organic compound can be predicted from the numbers of carbon and oxygen atoms in the molecule.
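Vapor wall deposition of this kind is commonly treated as first-order decay, C(t) = C0·exp(-k_w·t), so k_w follows from the slope of ln C versus t. A minimal sketch with a synthetic decay profile standing in for a measured compound:

```python
import numpy as np

# Recover a first-order wall-loss rate k_w from a concentration decay.
def fit_kw(t, C):
    """Fit ln C = ln C0 - k_w * t and return k_w (units of 1/t)."""
    slope, _intercept = np.polyfit(t, np.log(C), 1)
    return -slope

t = np.linspace(0.0, 3600.0, 12)            # s
C = 10.0 * np.exp(-2.5e-4 * t)              # ppb, true k_w = 2.5e-4 /s
```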

Heterogeneous transformation of δ-hydroxycarbonyl, a major first-generation product from long-chain alkane photochemistry, is observed on the surface of particles and walls. The uniqueness of this reaction scheme is the production of substituted dihydrofuran, which is highly reactive towards ozone, OH, and NO3, thereby opening a reaction pathway that is not usually accessible to alkanes. A spectrum of highly-oxygenated products with carboxylic acid, ester, and ether functional groups is produced from the substituted dihydrofuran chemistry, thereby affecting the average oxidation state of the alkane-derived SOA.

The vapor wall loss correction is applied to several chamber-derived SOA systems generated from both anthropogenic and biogenic sources. Experimental and modeling approaches are employed to constrain the partitioning behavior of SOA-forming vapors onto suspended particles vs. chamber walls. It is demonstrated that deposition of SOA-forming vapors to the chamber wall during photooxidation experiments can lead to substantial and systematic underestimation of SOA. Therefore, a lack of proper accounting for vapor wall losses, which suppress chamber-derived SOA yields, likely contributes substantially to the underprediction of ambient SOA concentrations in atmospheric models.

Resumo:

Stable isotope geochemistry is a valuable toolkit for addressing a broad range of problems in the geosciences. Recent technical advances provide information that was previously unattainable or provide unprecedented precision and accuracy. Two such techniques are site-specific stable isotope mass spectrometry and clumped isotope thermometry. In this thesis, I use site-specific isotope and clumped isotope data to explore natural gas development and carbonate reaction kinetics. In the first chapter, I develop an equilibrium thermodynamics model to calculate equilibrium constants for isotope exchange reactions in small organic molecules. This equilibrium data provides a framework for interpreting the more complex data in the later chapters. In the second chapter, I demonstrate a method for measuring site-specific carbon isotopes in propane using high-resolution gas source mass spectrometry. This method relies on the characteristic fragments created during electron ionization, in which I measure the relative isotopic enrichment of separate parts of the molecule. My technique will be applied to a range of organic compounds in the future. For the third chapter, I use this technique to explore diffusion, mixing, and other natural processes in natural gas basins. As time progresses and the mixture matures, different components like kerogen and oil contribute to the propane in a natural gas sample. Each component imparts a distinct fingerprint on the site-specific isotope distribution within propane that I can observe to understand the source composition and maturation of the basin. Finally, in Chapter Four, I study the reaction kinetics of clumped isotopes in aragonite. Despite its frequent use as a clumped isotope thermometer, the aragonite blocking temperature is not known. Using laboratory heating experiments, I determine that the aragonite clumped isotope thermometer has a blocking temperature of 50-100°C. 
I compare this result to natural samples from the San Juan Islands that exhibit a maximum clumped isotope temperature that matches this blocking temperature. This thesis presents a framework for measuring site-specific carbon isotopes in organic molecules and new constraints on aragonite reaction kinetics. This study represents the foundation of a future generation of geochemical tools for the study of complex geologic systems.
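The logic of inferring a blocking temperature from heating experiments can be sketched with an Arrhenius extrapolation: the blocking temperature is roughly where the clumped-isotope reordering timescale 1/k matches the geologic timescale of interest. The pre-exponential factor and activation energy below are illustrative placeholders, not the fitted aragonite values:

```python
import math

R = 8.314                                    # gas constant, J/mol/K

# Arrhenius reordering rate k = A * exp(-Ea / (R*T)); illustrative A, Ea.
def reorder_timescale_yr(T_kelvin, A=1e12, Ea=2.0e5):
    """Timescale (years) for clumped-isotope reordering at temperature T."""
    k = A * math.exp(-Ea / (R * T_kelvin))   # rate, 1/s
    return 1.0 / k / 3.15e7                  # seconds -> years
```

The key qualitative behavior is the steep temperature dependence: cooling by a few tens of degrees lengthens the reordering timescale by orders of magnitude, which is why a well-defined blocking temperature window (here 50–100°C) emerges at all.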