Abstract:
An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy loss -- residual energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ~ 3 and ~0.2 AMU for Z ~ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ~ 3) and ~0.3 AMU (Z ~ 26).
A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
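As a rough numerical illustration of the tracer idea (a minimal sketch, not the thesis's full leaky-box formalism, which also accounts for escape, total inelastic losses, and spectral shapes), the purely secondary isotope's measured abundance can be scaled by fragmentation production ratios to estimate and subtract the secondary contribution to a sibling isotope:

```python
# Minimal sketch of the purely-secondary-tracer idea. All numbers are
# made-up placeholders; the thesis's leaky-box formalism additionally
# folds in escape lengths, total inelastic cross sections, and spectra.

def source_abundance(measured, prod_xsec, tracer_measured, tracer_xsec):
    """Subtract the secondary component of an isotope's abundance.

    The tracer isotope is absent at the source, so its entire measured
    abundance is secondary; scaling by the ratio of fragmentation
    production cross sections estimates the secondary part of a sibling.
    """
    secondary = tracer_measured * (prod_xsec / tracer_xsec)
    return measured - secondary

# Placeholder example for two isotopes of one element (arbitrary units):
print(source_abundance(measured=1.00, prod_xsec=30.0,
                       tracer_measured=0.20, tracer_xsec=25.0))  # 0.76
```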
The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.
These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.
Abstract:
Various aspects of the earthquake source are studied in four chapters.
Chapter I
Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.
Chapter II
The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface-wave and S-wave spectral analysis. The seismic moment, M0, was found to be related to local magnitude, ML, by log M0 = 1.7 ML + 15.1. The source length vs. magnitude relation for the San Andreas system was found to be ML = 1.9 log L - 6.7. The surface-wave envelope parameter AR(300) gives the moment according to log M0 = log AR(300) + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between surface-wave magnitude MS and ML is proposed to be MS = 1.7 ML - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region from the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
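The quoted regressions are simple to evaluate directly. A short script (a sketch only: units and log conventions are not fully specified in the abstract, and the comments flag the assumptions made) gives a feel for the numbers:

```python
# The empirical scaling relations quoted above, transcribed directly.
# Assumptions (not stated in the abstract): M0 in dyne-cm, which fits the
# 15.1 intercept, and source length L in cm, which makes a 10 km source
# (log L = 6) come out near ML = 4.7.

def log_moment(ml):
    """log10 M0 = 1.7 ML + 15.1."""
    return 1.7 * ml + 15.1

def ml_from_length(log_L_cm):
    """ML = 1.9 log L - 6.7."""
    return 1.9 * log_L_cm - 6.7

def ms_from_ml(ml):
    """MS = 1.7 ML - 4.1."""
    return 1.7 * ml - 4.1

for ml in (3.0, 4.5, 6.0):
    print(f"ML={ml}: log M0={log_moment(ml):.1f}, MS={ms_from_ml(ml):.1f}")
print(ml_from_length(6.0))  # ~4.7 for a 10 km source, under the cm assumption
```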
Chapter III
Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964, and distant fault slippage observed after the Borrego Mountain, California earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."
Chapter IV
The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.
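The stated efficiency makes the conversion explicit: with seismic efficiency $\eta \approx 0.1$, the true stress follows from the apparent stress $\sigma_a$ as

$$\sigma \approx \frac{\sigma_a}{\eta} = \frac{\sigma_a}{0.1} = 10\,\sigma_a,$$

giving roughly 200 bars for shallow events, about 3.8 kbar between 50 and 150 km depth, and roughly 200 bars again near 600 km.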
Abstract:
This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game-theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.
In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.
In chapter three, I examine Community-Driven Development (CDD), a tool intended to empower communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player, the targeted community member, decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches the targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies that claim otherwise.
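As a hedged sketch of a game in this spirit (the model's actual payoffs, detection technology, and whistle-blowing incentives are not given in the abstract, so the reservation value w below is an illustrative assumption), backward induction is a few lines of Python:

```python
# Illustrative subgame-perfect sketch of a sequential "split a dollar" game
# with a final whistle-blower, loosely in the spirit of the chapter's model.
# The reservation value w (what silence must be worth to the community
# member) is a made-up parameter, not taken from the thesis.

def spne_shares(w, dollar=1.0):
    """Backward induction: player 3 blows the whistle (destroying all
    payoffs) unless offered at least w, so player 2 passes along exactly w;
    player 1, facing no veto from player 2, keeps the residual minus an
    arbitrarily small sweetener eps for player 2."""
    eps = 1e-6
    x3 = w                  # just enough to keep the whistle silent
    x2 = eps                # player 2 accepts any positive amount here
    x1 = dollar - x2 - x3   # elite capture: player 1 keeps the residual
    return x1, x2, x3

print(spne_shares(w=0.05))  # (~0.95, 1e-06, 0.05): money still reaches player 3
```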
In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
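A generic Method of Simulated Moments loop, sketched under assumed stand-ins (the real simulator would be the equilibrium limit-order model of chapter two; here a toy two-moment model takes its place):

```python
# Generic MSM skeleton: simulate the model at candidate parameters, compare
# simulated moments to data moments, and minimize the weighted gap.
import numpy as np
from scipy.optimize import minimize

def msm_objective(theta, data_moments, simulate, weight):
    g = simulate(theta) - data_moments   # moment discrepancies
    return g @ weight @ g                # quadratic form in the weight matrix

# Placeholder simulator: a stand-in returning a mean and a variance. The
# fixed seed implements common random numbers so the objective stays smooth.
def simulate(theta, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    draws = theta[0] + theta[1] * rng.standard_normal(n)
    return np.array([draws.mean(), draws.var()])

data_moments = np.array([0.1, 2.0])
W = np.eye(2)
res = minimize(msm_objective, x0=[0.0, 1.0],
               args=(data_moments, simulate, W), method="Nelder-Mead")
print(res.x)  # approaches mean 0.1 and |sd| = sqrt(2)
```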
Abstract:
The field of cavity optomechanics explores the interaction of light with sound in an ever-increasing array of devices. This interaction allows the mechanical system to be both sensed and controlled by the optical system, opening up a wide variety of experiments, including the cooling of the mechanical resonator to its quantum mechanical ground state and the squeezing of the optical field upon interaction with the mechanical resonator, to name two.
In this work we explore two very different systems with different types of optomechanical coupling. The first system consists of two microdisk optical resonators stacked on top of each other and separated by a very small slot. The interaction of the disks causes their optical resonance frequencies to be extremely sensitive to the gap between them. By careful control of this gap, the optomechanical coupling can be made quadratic to first order, which is uncommon in optomechanical systems. With this quadratic coupling the light field is sensitive to the energy of the mechanical resonator and can directly control the potential energy trapping the mechanical motion. This ability to directly control the spring constant without modifying the energy of the mechanical system, unlike in linear optomechanical coupling, is explored.
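The distinction can be made compact. Expanding the optical resonance frequency in the mechanical displacement $x$,

$$\omega(x) \approx \omega_0 + \frac{\partial\omega}{\partial x}\,x + \frac{1}{2}\frac{\partial^2\omega}{\partial x^2}\,x^2,$$

tuning the gap so that the linear term vanishes leaves a leading-order coupling to $x^2$; the light then reads out the square of the displacement (hence the resonator's energy) and shifts the effective spring constant. (This is the standard expansion, stated here as context rather than quoted from the thesis.)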
Next, the bulk of this thesis deals with a high-mechanical-frequency optomechanical crystal which is used to coherently convert photons between different frequencies. This is accomplished via the engineered linear optomechanical coupling in these devices. Both classical and quantum systems utilize the interaction of light and matter across a wide range of energies. These systems are often not naturally compatible with one another and require a means of converting photons of dissimilar wavelengths to combine and exploit their different strengths. Here we theoretically propose and experimentally demonstrate coherent wavelength conversion of optical photons using photon-phonon translation in a cavity-optomechanical system. For an engineered silicon optomechanical crystal nanocavity supporting a 4 GHz localized phonon mode, optical signals in a 1.5 MHz bandwidth are coherently converted over an 11.2 THz frequency span between one cavity mode at a wavelength of 1460 nm and a second cavity mode at 1545 nm with a 93% internal (2% external) peak efficiency. The thermal and quantum limiting noise involved in the conversion process is also analyzed and, in terms of an equivalent photon number signal level, is found to correspond to internal noise levels of only 6 × 10^-3 and 4 × 10^-3 quanta, respectively.
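The quoted span is consistent with the two cavity wavelengths:

$$\Delta\nu = c\left(\frac{1}{\lambda_1} - \frac{1}{\lambda_2}\right) = (2.998\times 10^{8}\ \mathrm{m/s})\left(\frac{1}{1460\ \mathrm{nm}} - \frac{1}{1545\ \mathrm{nm}}\right) \approx 11.3\ \mathrm{THz},$$

matching the stated 11.2 THz to within the rounding of the wavelengths.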
We begin by developing the requisite theoretical background to describe the system. A significant amount of time is then spent describing the fabrication of these silicon nanobeams, with an emphasis on understanding the specifics and the motivation behind them. The experimental demonstration of wavelength conversion is then described and analyzed. The method of coupling photons into and collecting them from the cavity is determined to be a fundamental limiting factor in the overall efficiency. Finally, a new coupling scheme is designed, fabricated, and tested that couples greater than 90% of photons into and out of the cavity, addressing one of the largest obstacles in the initial wavelength conversion experiment.
Abstract:
Theoretical and experimental studies of a gas laser amplifier are presented, assuming the amplifier is operating with a saturating optical frequency signal. The analysis is primarily concerned with the effects of the gas pressure and the presence of an axial magnetic field on the characteristics of the amplifying medium. Semiclassical radiation theory is used, along with a density matrix description of the atomic medium which relates the motion of single atoms to the macroscopic observables. A two-level description of the atom, using phenomenological source rates and decay rates, forms the basis of our analysis of the gas laser medium. Pressure effects are taken into account to a large extent through suitable choices of decay rate parameters.
Two methods for calculating the induced polarization of the atomic medium are used. The first method utilizes a perturbation expansion which is valid for signal intensities that barely reach saturation strength, and it is quite general in applicability. The second method is valid for arbitrarily strong signals, but it yields tractable solutions only for zero magnetic field or for axial magnetic fields large enough that the Zeeman splitting is much larger than the power-broadened homogeneous linewidth of the laser transition. The effects of pressure broadening of the homogeneous spectral linewidth are included in both the weak-signal and strong-signal theories; however, the effects of Zeeman sublevel-mixing collisions are taken into account only in the weak-signal theory.
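For context (standard two-level saturation results, not quoted from the thesis): the width entering the strong-signal condition is the power-broadened homogeneous linewidth,

$$\gamma' = \gamma\sqrt{1 + I/I_{\mathrm{sat}}},$$

so the tractable large-field regime of the second method requires the Zeeman splitting to exceed $\gamma'$, not merely the unsaturated width $\gamma$.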
The behavior of a He-Ne gas laser amplifier in the presence of an axial magnetic field has been studied experimentally by measuring the gain and Faraday rotation of linearly polarized resonant laser signals for various values of input signal intensity, and by measuring nonlinearity-induced anisotropy for elliptically polarized resonant laser signals of various input intensities. Two high-gain transitions in the 3.39-μm region were used for study: a J = 1 to J = 2 (3s2 → 3p4) transition and a J = 1 to J = 1 (3s2 → 3p2) transition. The input signals were tuned to the centers of their respective resonant gain lines.
The experimental results agree quite well with the corresponding theoretical expressions, which have been developed to include the nonlinear effects of saturation-strength signals. The experimental results clearly show saturation of Faraday rotation, and for the J = 1 to J = 1 transition a Faraday rotation reversal and a traveling-wave gain dip are seen for small values of axial magnetic field. The nonlinearity-induced anisotropy shows a marked dependence on the gas pressure in the amplifier tube for the J = 1 to J = 2 transition; this dependence agrees with the predictions of the general perturbational (weak-signal) theory when allowances are made for the effects of Zeeman sublevel-mixing collisions. The results provide a method for measuring the upper (neon 3s2) level quadrupole moment decay rate, the dipole moment decay rates for the 3s2 → 3p4 and 3s2 → 3p2 transitions, and the effects of various types of collision processes on these decay rates.
Abstract:
The cytolytic interaction of Polyoma virus with mouse embryo cells has been studied by radiobiological methods known to distinguish temperate from virulent bacteriophage. No evidence for "temperate" properties of Polyoma was found. During the course of these studies, it was observed that the curve of inactivation of Polyoma virus by ultraviolet light had two components: a more sensitive one at low doses and a less sensitive one at higher doses. Virus which survives a low dose has an eclipse period similar to that of unirradiated virus, while virus surviving higher doses shows a significantly longer eclipse period. If puromycin is present during the early part of the eclipse period, the survival curve becomes a single exponential with the sensitivity of the less sensitive component. These results suggest a repair mechanism in mouse cells which operates more effectively if virus development is delayed.
A comparison of the rates of inactivation of the cytolytic and transforming abilities of Polyoma by ultraviolet light, X-rays, nitrous acid treatment, or the decay of incorporated P32, showed that the transforming ability has a target size roughly 60% of that of the plaque-forming ability. It is thus concluded that only a fraction of the viral genes are necessary for causing transformation.
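In single-hit target theory (the standard framework behind such comparisons, used here only to unpack the 60% figure), each ability decays exponentially with dose, $S(D) = e^{-\sigma D}$, with $\sigma$ proportional to target size, so

$$\frac{\sigma_{\mathrm{transform}}}{\sigma_{\mathrm{plaque}}} \approx 0.6 \quad\Longrightarrow\quad S_{\mathrm{transform}}(D) = \left[S_{\mathrm{plaque}}(D)\right]^{0.6},$$

i.e., the transforming ability survives any given dose better, consistent with only a fraction of the viral genes being needed for transformation.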
The appearance of virus-specific RNA in productively infected mouse kidney cells has been followed by means of hybridization between pulse-labelled RNA from the infected cells and the purified virus DNA. The results show a sharp increase in the amount of virus-specific RNA around the time of virus DNA synthesis. The presence of a small amount of virus-specific RNA in virus-free transformed cells has also been shown. This result offers strong evidence for the persistence of at least part of the viral genome in transformed cells.
Abstract:
Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.
This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.
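A crude numerical illustration of the contact-space object (a sketch only, not the thesis's graph construction): parameterize the polygon boundary by arc length, tabulate the inter-finger distance over the resulting two-dimensional contact space, and inspect it for critical points.

```python
# Sketch: sample the boundary of a polygon by arc length and tabulate the
# inter-finger distance d(s1, s2) over contact space. Saddle points of D
# correspond (per the thesis) to critical grasps where the cage topology
# changes; here we only do a sanity check on the tabulated function.
import numpy as np

def boundary_points(vertices, n=200):
    """Return n points spaced by arc length along a closed polygon."""
    verts = np.asarray(vertices, dtype=float)
    edges = np.roll(verts, -1, axis=0) - verts
    lengths = np.linalg.norm(edges, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(lengths)])
    s = np.linspace(0.0, cum[-1], n, endpoint=False)
    pts = []
    for si in s:
        i = np.searchsorted(cum, si, side="right") - 1
        t = (si - cum[i]) / lengths[i]
        pts.append(verts[i] + t * edges[i])
    return np.array(pts)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
P = boundary_points(square)
# Inter-finger distance over the (s1, s2) contact-space grid:
D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
print(D.max())  # the square's diagonal, ~1.414, as expected
```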
A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers using a lower-dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps at which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.
This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes the robustness of caging grasps, that is, how far the relative positions of the fingers may vary without breaking the cage. Using a simple decomposition of the free space around the polygon, an algorithm is presented which gives all caging placements of the fingers and a characterization of the robustness of these cages.