13 results for Bombing and gunnery ranges.

in CaltechTHESIS


Relevance: 100.00%

Publisher:

Abstract:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-half-space velocity model is used, with additional layers added if necessary, for example, in a basin with a low-velocity lid.
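The ray-summation idea behind this forward modeling can be illustrated with a minimal numerical sketch: each ray is a delayed, scaled impulse, and the synthetic seismogram is their sum convolved with a source time function. This is an illustration only, not the modeling code used in the thesis; the arrival delays and amplitudes below are hypothetical.

```python
# Sketch of waveform forward modeling by ray summation (illustrative only):
# each ray arrival is a delayed, scaled impulse; the synthetic seismogram is
# their sum convolved with a triangular source time function.

def triangle(width, dt):
    """Unit-area triangular source time function (rise and fall each `width` s)."""
    n = int(width / dt)
    half = [i / n for i in range(n)]
    pulse = half + [1.0] + half[::-1]
    area = sum(pulse) * dt
    return [p / area for p in pulse]

def ray_sum(arrivals, stf, dt, duration):
    """arrivals: list of (delay_s, amplitude) for direct and reflected rays."""
    n = int(duration / dt)
    impulses = [0.0] * n
    for delay, amp in arrivals:
        k = int(delay / dt)
        if k < n:
            impulses[k] += amp
    # discrete convolution with the source time function
    out = [0.0] * (n + len(stf) - 1)
    for i, x in enumerate(impulses):
        if x:
            for j, s in enumerate(stf):
                out[i + j] += x * s * dt
    return out

dt = 0.05
stf = triangle(0.5, dt)             # triangle with 0.5 s rise and fall
arrivals = [(1.0, 1.0),             # direct ray (hypothetical values)
            (2.2, -0.4)]            # a later reflection, opposite polarity
seis = ray_sum(arrivals, stf, dt, 10.0)
```

Adding layers to the velocity model simply adds more (delay, amplitude) pairs to the summation.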

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks of the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with earthquakes there). These earthquakes are predominantly thrust-faulting events whose average strike is east-west, but with many variations. Of the six earthquakes that had sufficient short-period data to accurately determine the source time history, five were complex events; that is, they could not be modeled as simple point sources but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases the subevents appear to be identical, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model because of the sparse data set. It has been noticed that earthquakes occurring near each other often produce similar waveforms, implying similar source parameters. By comparing recent, well-studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1937, 1942, and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to those of recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations that recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected values. An aftershock of the 1942 earthquake appears to be larger than previously thought.
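The moment comparison described above amounts to scaling a reference moment by the ratio of long-period amplitudes recorded at a common station. A worked sketch, with entirely hypothetical amplitude values:

```python
# Hypothetical illustration of moment estimation by amplitude comparison:
# at a station that recorded both events, the long-period amplitude ratio
# scales the well-determined moment of a recent reference event.

M0_reference = 1.0e26   # dyne cm, recent well-studied event (assumed)
A_reference = 12.0      # peak long-period amplitude, arbitrary units (assumed)
A_historic = 4.8        # same station and instrument, historic event (assumed)

M0_historic = M0_reference * (A_historic / A_reference)
print(f"M0(historic) = {M0_historic:.2e} dyne cm")   # 4.00e+25
```

This works only when the station and instrument response are common to both events, which is why stations that recorded both are required.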

Relevance: 100.00%

Publisher:

Abstract:

In the first part I perform Hartree-Fock calculations to show that quantum dots (i.e., two-dimensional systems of up to twenty interacting electrons in an external parabolic potential) undergo a gradual transition to a spin-polarized Wigner crystal with increasing magnetic field strength. The phase diagram and ground state energies have been determined. I tried to improve the ground state of the Wigner crystal by introducing a Jastrow ansatz for the wave function and performing a variational Monte Carlo calculation. The existence of so-called magic numbers was also investigated. Finally, I calculated the heat capacity associated with the rotational degree of freedom of deformed many-body states and suggest an experimental method to detect Wigner crystals.
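The variational Monte Carlo procedure mentioned above can be sketched on a toy problem. The block below is not the many-electron quantum-dot calculation; it is a minimal Metropolis loop for a 1D harmonic oscillator (ħ = m = ω = 1) with trial wave function ψ_α(x) = exp(-αx²/2), whose local energy is E_L = α/2 + x²(1 - α²)/2.

```python
import random, math

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=1):
    """Metropolis sampling of |psi|^2 for psi = exp(-alpha x^2 / 2);
    returns the averaged local energy E_L = alpha/2 + x^2 (1 - alpha^2)/2."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new)/psi(x)|^2
        if rng.random() < math.exp(-alpha * (x_new**2 - x**2)):
            x = x_new
        e_sum += alpha / 2 + x**2 * (1 - alpha**2) / 2
    return e_sum / n_steps

# At alpha = 1 the trial function is exact and E_L is constant 1/2;
# any other alpha gives a higher energy, per the variational principle.
print(vmc_energy(1.0))
print(vmc_energy(0.6))
```

Optimizing the Jastrow parameters in the thesis follows the same logic: sample |ψ|², average the local energy, and minimize over the variational parameters.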

The second part of the thesis investigates infinite nuclear matter on a cubic lattice. The exact thermal formalism describes nucleons with a Hamiltonian that accommodates on-site and next-neighbor parts of the central, spin-exchange, and isospin-exchange interactions. Using auxiliary-field Monte Carlo methods, I show that the energy and basic saturation properties of nuclear matter can be reproduced. A first-order phase transition from an uncorrelated Fermi gas to a clustered system is observed by computing mechanical and thermodynamical quantities such as compressibility, heat capacity, entropy, and grand potential. The structure of the clusters is investigated with the help of two-body correlations. I compare the symmetry energy and first sound velocities with values from the literature and find reasonable agreement. I also calculate the energy of pure neutron matter and search for a similar phase transition, but the survey is restricted by the infamous Monte Carlo sign problem. Finally, a regularization scheme to extract potential parameters from scattering lengths and effective ranges is investigated.

Relevance: 30.00%

Publisher:

Abstract:

The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth; their intersection takes place in gravitational-wave physics.

Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.

The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby bringing them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.

The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects, as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.

In Chapters 4-5, we build theoretical tools for analyzing the so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.

Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.

The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes - a type of object predicted by general relativity whose properties depend highly on the strong-field regime of the theory. Although black holes have been inferred to exist at centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black-holes' strong-field region.

The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.

Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze the oscillation modes (quasi-normal modes, or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 11 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra for certain ranges of (l,m) (the angular quantum numbers) as a/M → 1. With tools prepared in Chapters 9 and 10, we also obtain in Chapter 11 an analytical approximation to the scalar Green function in Kerr spacetime.
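The QNM/photon-orbit correspondence can be illustrated in the non-spinning (Schwarzschild) limit, a standard textbook relation rather than the Kerr analysis of Chapter 9: the real part of an eikonal QNM frequency is set by the orbital frequency of the circular photon orbit, found at the peak of the null-geodesic radial potential V(r) = (1 - 2M/r)/r².

```python
import math

M = 1.0  # Schwarzschild mass (geometric units, G = c = 1)

def V(r):
    """Radial potential for null geodesics; its maximum marks the photon sphere."""
    return (1 - 2 * M / r) / r**2

# grid search for the potential peak (a minimal sketch; a root-finder would do)
r_grid = [2.1 + 0.001 * i for i in range(8000)]
r_ph = max(r_grid, key=V)
omega_c = math.sqrt(V(r_ph))   # orbital angular frequency of the photon orbit

print(r_ph)      # ~ 3.0: photon sphere at r = 3M
print(omega_c)   # ~ 0.1925 = 1/(3*sqrt(3))
# Eikonal estimate: Re(omega) ~ omega_c * (l + 1/2) for large l
```

The Kerr case in Chapter 9 generalizes this to spherical photon orbits, which is why the QNM frequencies there connect to geometric features of those orbits.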

Relevance: 30.00%

Publisher:

Abstract:

Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, we discuss on-chip links with communication ranges of a few millimeters.

Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. IO data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth problem of the electrical channels. A linear and low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which results in high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
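Whatever the circuit implementation (current-mode or charge-domain), the DFE summation itself is simple: subtract weighted past decisions from the incoming sample before slicing. A bit-level behavioral sketch, with a hypothetical one-tap channel (the tap value 1.2 is invented for illustration):

```python
# Bit-level sketch of one-tap decision feedback equalization (illustrative;
# the receiver in the text performs this summation in the charge domain).

def transmit(bits, tap=1.2):
    """Channel adds post-cursor ISI: r[n] = b[n] + tap * b[n-1] (hypothetical)."""
    out = []
    prev = bits[0]   # assume the first bit is known (training)
    for b in bits[1:]:
        out.append(b + tap * prev)
        prev = b
    return out

def dfe_receive(samples, first_bit, tap=1.2):
    """Subtract the weighted previous decision before slicing."""
    decisions = [first_bit]
    for r in samples:
        y = r - tap * decisions[-1]   # the DFE summer: sample minus feedback
        decisions.append(1 if y > 0 else -1)
    return decisions

bits = [1, -1, -1, 1, 1, -1, 1, -1, -1, -1, 1]
samples = transmit(bits)
raw = [1 if r > 0 else -1 for r in samples]   # slicer without DFE
dfe = dfe_receive(samples, bits[0])

print(dfe == bits)        # True: DFE cancels the post-cursor ISI
print(raw == bits[1:])    # False: raw slicing fails when ISI flips the sign
```

With a tap heavier than the main cursor remnant, the raw slicer makes errors while the DFE, fed correct past decisions, recovers every bit; this is the behavior the summer circuit must realize with low power.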

An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.

As the technology scales, the number of transistors on the chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and a dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/µm) with better than 136fJ/b of power efficiency.

Relevance: 30.00%

Publisher:

Abstract:

In this thesis I apply paleomagnetic techniques to paleoseismological problems. I investigate the use of secular-variation magnetostratigraphy to date prehistoric earthquakes; I identify liquefaction remanent magnetization (LRM); and I quantify coseismic deformation within a fault zone by measuring the rotation of paleomagnetic vectors.

In Chapter 2 I construct a secular-variation reference curve for southern California. For this curve I measure three new well-constrained paleomagnetic directions: two from the Pallett Creek paleoseismological site at A.D. 1397-1480 and A.D. 1465-1495, and one from Panum Crater at A.D. 1325-1365. To these three directions I add the best nine data points from the Sternberg secular-variation curve, five data points from Champion, and one point from the A.D. 1480 eruption of Mt. St. Helens. I derive the error due to the non-dipole field that is added to these data by the geographical correction to southern California. Combining these yields a secular variation curve for southern California covering the period A.D. 670 to 1910, with the best coverage in the range A.D. 1064 to 1505.
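Combining well-constrained paleomagnetic directions into a reference curve rests on averaging unit vectors. A minimal sketch of a Fisher-style vector mean follows; the `sites` values are invented for illustration, not data from the thesis.

```python
import math

def to_xyz(dec, inc):
    """Declination/inclination (degrees) to a unit vector (north, east, down)."""
    d, i = math.radians(dec), math.radians(inc)
    return (math.cos(i) * math.cos(d), math.cos(i) * math.sin(d), math.sin(i))

def mean_direction(directions):
    """Vector (Fisher) mean of (dec, inc) pairs; R/N measures clustering."""
    xs = [to_xyz(d, i) for d, i in directions]
    sx, sy, sz = (sum(c) for c in zip(*xs))
    R = math.sqrt(sx**2 + sy**2 + sz**2)
    dec = math.degrees(math.atan2(sy, sx)) % 360
    inc = math.degrees(math.asin(sz / R))
    return dec, inc, R / len(directions)

# hypothetical site directions (deg); tightly clustered, so R/N is near 1
sites = [(352.0, 58.0), (355.5, 60.5), (358.0, 57.0), (354.0, 59.0)]
dec, inc, rn = mean_direction(sites)
print(round(dec, 1), round(inc, 1), round(rn, 4))
```

The resultant length R/N is what quantifies how well-constrained a direction is; a value near 1 indicates the tight clustering required of points on the reference curve.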

In Chapter 3 I apply this curve to a problem in southern California. Two paleoseismological sites in the Salton trough of southern California have sediments deposited by prehistoric Lake Cahuilla. At the Salt Creek site I sampled sediments from three different lakes, and at the Indio site I sampled sediments from four different lakes. Based upon the coinciding paleomagnetic directions I correlate the oldest lake sampled at Salt Creek with the oldest lake sampled at Indio. Furthermore, the penultimate lake at Indio does not appear to be present at Salt Creek. Using the secular variation curve I can assign the lakes at Salt Creek to broad age ranges of A.D. 800 to 1100, A.D. 1100 to 1300, and A.D. 1300 to 1500. This example demonstrates the large uncertainties in the secular variation curve and the need to construct curves from a limited geographical area.

Chapter 4 demonstrates that seismically induced liquefaction can cause resetting of detrital remanent magnetization and acquisition of a liquefaction remanent magnetization (LRM). I sampled three different liquefaction features: a sandbody formed in the Elsinore fault zone, diapirs from sediments of Mono Lake, and a sandblow in these same sediments. In every case the liquefaction features showed stable magnetization despite substantial physical disruption. In addition, in the case of the sandblow and the sandbody, the intensity of the natural remanent magnetization increased by up to an order of magnitude.

In Chapter 5 I apply paleomagnetics to measuring the tectonic rotations in a 52-meter-long transect across the San Andreas fault zone at the Pallett Creek paleoseismological site. This site has presented a significant problem because the brittle long-term average slip-rate across the fault is significantly less than the slip-rate from other nearby sites. I find sections adjacent to the fault with tectonic rotations of up to 30°. If interpreted as block rotations, the non-brittle offset was 14.0 (+2.8/-2.1) meters in the last three earthquakes and 8.5 (+1.0/-0.9) meters in the last two. Combined with the brittle offset in these events, the last three events all had about 6 meters of total fault offset, even though the intervals between them were markedly different.

In Appendix 1 I present a detailed description of my standard sampling and demagnetization procedure.

In Appendix 2 I present a detailed discussion of the study at Panum Crater that yielded the well-constrained paleomagnetic direction used in developing the secular variation curve in Chapter 2. In addition, from sampling two distinctly different clast types in a block-and-ash flow deposit from Panum Crater, I find that this flow had a complex emplacement and cooling history. Angular, glassy "lithic" blocks were emplaced at temperatures above 600° C. Some of these had cooled nearly completely, whereas others had cooled only to 450° C, when settling in the flow rotated the blocks slightly. The partially cooled blocks then finished cooling without further settling. Highly vesicular, breadcrusted pumiceous clasts had not yet cooled to 600° C at the time of these rotations, because they show a stable, well-clustered, unidirectional magnetic vector.

Relevance: 30.00%

Publisher:

Abstract:

Secondary-ion mass spectrometry (SIMS), electron probe analysis (EPMA), analytical scanning electron microscopy (SEM) and infrared (IR) spectroscopy were used to determine the chemical composition and the mineralogy of sub-micrometer inclusions in cubic diamonds and in overgrowths (coats) on octahedral diamonds from Zaire, Botswana, and some unknown localities.

The inclusions are sub-micrometer in size. The typical diameter encountered during transmission electron microscope (TEM) examination was 0.1-0.5 µm. The micro-inclusions are sub-rounded and their shape is crystallographically controlled by the diamond. Normally they are not associated with cracks or dislocations and appear to be well isolated within the diamond matrix. The number density of inclusions is highly variable on any scale and may reach 10^(11) inclusions/cm^3 in the most densely populated zones. The total concentration of metal oxides in the diamonds varies between 20 and 1270 ppm (by weight).

SIMS analysis yields the average composition of about 100 inclusions contained in the sputtered volume. Comparison of analyses of different volumes of an individual diamond shows roughly uniform composition (typically ±10% relative). The variation among the average compositions of different diamonds is somewhat greater (typically ±30%). Nevertheless, all diamonds exhibit similar characteristics, being rich in water, carbonate, SiO_2, and K_2O, and depleted in MgO. The composition of micro-inclusions in most diamonds varies within the following ranges: SiO_2, 30-53%; K_2O, 12-30%; CaO, 8-19%; FeO, 6-11%; Al_2O_3, 3-6%; MgO, 2-6%; TiO_2, 2-4%; Na_2O, 1-5%; P_2O_5, 1-4%; and Cl, 1-3%. In addition, BaO, 1-4%; SrO, 0.7-1.5%; La_2O_3, 0.1-0.3%; Ce_2O_3, 0.3-0.5%; smaller amounts of other rare-earth elements (REE), as well as Mn, Th, and U were also detected by instrumental neutron activation analysis (INAA). Mg/(Fe+Mg) ratios of 0.40-0.62 are low compared with other mantle-derived phases; K/Al ratios of 2-7 are very high; and the chondrite-normalized Ce/Eu ratios of 10-21 are also high, indicating extremely fractionated REE patterns.
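The Mg/(Fe+Mg) ratio quoted above is molar, computed from the oxide weight percentages. A worked example using values inside the reported oxide ranges (the specific 4% and 8% inputs are chosen for illustration):

```python
# Molar Mg/(Fe+Mg) ("Mg#") from oxide weight percent, using values within
# the micro-inclusion ranges quoted above (MgO 2-6 %, FeO 6-11 %).

MW_MGO, MW_FEO = 40.30, 71.84   # molar masses, g/mol

def mg_number(wt_mgo, wt_feo):
    mol_mg = wt_mgo / MW_MGO
    mol_fe = wt_feo / MW_FEO
    return mol_mg / (mol_mg + mol_fe)

print(round(mg_number(4.0, 8.0), 2))   # 0.47, inside the 0.40-0.62 range
```

Because FeO is roughly 1.8 times heavier per mole than MgO, comparable weight fractions of the two oxides still give an Mg# near 0.5.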

SEM analyses indicate that individual inclusions within a single diamond are roughly of similar composition. The average composition of individual inclusions as measured with the SEM is similar to that measured by SIMS. Compositional variations revealed by the SEM are larger than those detected by SIMS and indicate a small variability in the composition of individual inclusions. No compositions of individual inclusions were determined that might correspond to mono-mineralic inclusions.

IR spectra of inclusion-bearing zones exhibit characteristic absorption due to: (1) pure diamond; (2) nitrogen and hydrogen in the diamond matrix; and (3) mineral phases in the micro-inclusions. Nitrogen concentrations of 500-1100 ppm, typical of the micro-inclusion-bearing zones, are higher than the average nitrogen content of diamonds. Only type IaA centers were detected by IR. A yellow coloration may indicate a small concentration of type Ib centers.

The absorption due to the micro-inclusions in all diamonds produces similar spectra and indicates the presence of hydrated sheet silicates (most likely, Fe-rich clay minerals), carbonates (most likely calcite), and apatite. Small quantities of molecular CO_2 are also present in most diamonds. Water is probably associated with the silicates but the possibility of its presence as a fluid phase cannot be excluded. Characteristic lines of olivine, pyroxene and garnet were not detected and these phases cannot be significant components of the inclusions. Preliminary quantification of the IR data suggests that water and carbonate account for, on average, 20-40 wt% of the micro-inclusions.

The composition and mineralogy of the micro-inclusions are completely different from those of the more common, larger inclusions of the peridotitic or eclogitic assemblages. Their bulk composition resembles that of potassic magmas, such as kimberlites and lamproites, but is enriched in H_2O, CO_3, K_2O, and incompatible elements, and depleted in MgO.

It is suggested that the composition of the micro-inclusions represents a volatile-rich fluid or a melt trapped by the diamond during its growth. The high content of K, Na, P, and incompatible elements suggests that the trapped material found in the micro-inclusions may represent an effective metasomatizing agent. It may also be possible that fluids of similar composition are responsible for the extreme enrichment of incompatible elements documented in garnet and pyroxene inclusions in diamonds.

The origin of the fluid trapped in the micro-inclusions is still uncertain. It may have been formed by incipient melting of highly metasomatized mantle rocks. More likely, it is the result of fractional crystallization of a potassic parental magma at depth. In either case, the micro-inclusions document the presence of highly potassic fluids or melts at depths corresponding to the diamond stability field in the upper mantle. The phases presently identified in the inclusions are believed to be the result of closed-system reactions at lower pressures.

Relevance: 30.00%

Publisher:

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their implications for core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided, and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. 
Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance: 30.00%

Publisher:

Abstract:

From studies of protoplanetary disks to extrasolar planets and planetary debris, we aim to understand the full evolution of a planetary system. Observational constraints from ground- and space-based instrumentation allow us to measure the properties of objects near and far and are central to developing this understanding. We present here three observational campaigns that, when combined with theoretical models, reveal characteristics of different stages and remnants of planet formation. The Kuiper Belt provides evidence of chemical and dynamical activity that reveals clues to its primordial environment and subsequent evolution. Large samples of this population can only be assembled at optical wavelengths, with thermal measurements at infrared and sub-mm wavelengths currently available for only the largest and closest bodies. We measure the size and shape of one particular object precisely here, in hopes of better understanding its unique dynamical history and layered composition.

Molecular organic chemistry is one of the most fundamental and widespread facets of the universe, and plays a key role in planet formation. A host of carbon-containing molecules vibrationally emit in the near-infrared when excited by warm gas, T~1000 K. The NIRSPEC instrument at the W.M. Keck Observatory is uniquely configured to study large ranges of this wavelength region at high spectral resolution. Using this facility we present studies of warm CO gas in protoplanetary disks, with a new code for precise excitation modeling. A parameterized suite of models demonstrates the abilities of the code and matches observational constraints such as line strength and shape. We use the models to probe various disk parameters as well; the approach is easily extensible to other species with known disk emission spectra, such as water, carbon dioxide, acetylene, and hydrogen cyanide.
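A first ingredient of any such excitation model is the distribution of molecules over energy levels. As a hedged sketch (LTE only, ignoring the vibrational ladder and radiative effects the thesis code would handle), the Boltzmann populations of CO rotational levels at T ~ 1000 K peak at moderate J; the rotational constant B ≈ 1.92 cm⁻¹ is the known CO value.

```python
import math

B = 1.9225        # CO rotational constant, cm^-1
KT_HC = 695.0     # k*T/(h*c) in cm^-1 at T ~ 1000 K (0.695 cm^-1 per kelvin)

def population(J):
    """Relative LTE population of rotational level J (Boltzmann, degeneracy 2J+1)."""
    return (2 * J + 1) * math.exp(-B * J * (J + 1) / KT_HC)

pops = [population(J) for J in range(40)]
J_peak = pops.index(max(pops))
print(J_peak)   # the distribution peaks near J = 13 at T ~ 1000 K
```

The competition between the (2J+1) degeneracy and the Boltzmann factor is what makes warm-gas CO band emission spread over many rotational lines, which is why NIRSPEC's broad high-resolution coverage is valuable.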

Lastly, the existence of molecules in extrasolar planets can also be studied with NIRSPEC and reveals a great deal about the evolution of the protoplanetary gas. The species we observe in protoplanetary disks are also often present in exoplanet atmospheres, and are abundant in Earth's atmosphere as well. Thus, a sophisticated telluric removal code is necessary to analyze these high dynamic range, high-resolution spectra. We present observations of a hot Jupiter, revealing water in its atmosphere and demonstrating a new technique for exoplanet mass determination and atmospheric characterization. We will also be applying this atmospheric removal code to the aforementioned disk observations, to improve our data analysis and probe less abundant species. Guiding models using observations is the only way to develop an accurate understanding of the timescales and processes involved. Both the modeling and the observations have bright futures, and the end goal of realizing a unified model of planet formation will require both theory and data, from a diverse collection of sources.

Relevance:

30.00%

Publisher:

Abstract:

The influence of (i) freestream turbulence level and (ii) the injection of small amounts of a drag-reducing polymer (Polyox WSR 301) into the test model boundary layer on the basic viscous flow about two axisymmetric bodies was investigated using the schlieren flow visualization technique. The changes in the type and occurrence of cavitation inception caused by the resulting modifications in the viscous flow were studied. A nuclei counter based on the holographic technique was built to monitor freestream nuclei populations, and a few preliminary tests investigating the consequences of different populations on cavitation inception were carried out.

Both test models were observed to have a laminar separation over their respective test Reynolds number ranges. The separation on one test model was found to be insensitive to freestream turbulence levels of up to 3.75 percent. The second model was found to be very susceptible, with its critical velocity reduced from 30 feet per second at a 0.04 percent turbulence level to 10 feet per second at a 3.75 percent turbulence level. Cavitation tests on both models at the lowest turbulence level showed that the value of the incipient cavitation number and the type of cavitation were controlled by the presence of the laminar separation. Cavitation tests on the second model at a 0.65 percent turbulence level showed no change in the inception index, but the appearance of the developed cavitation was altered.

The presence of Polyox in the boundary layer resulted in a cavitation suppression comparable to that found by other investigators. The elimination of the normally occurring laminar separation on these bodies by a polymer-induced instability in the laminar boundary layer was found to be responsible for the suppression of inception.

Freestream nuclei populations at test conditions were measured, and it was found that if there were many freestream gas bubbles the normally present laminar separation was eliminated and travelling-bubble-type cavitation occurred; the value of the inception index then depended upon the nuclei population. In cases where the laminar separation was present, the value of the inception index was found to be insensitive to the freestream nuclei populations.
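The inception index referred to above is the standard cavitation number. A minimal sketch of its definition follows; the pressures and velocity below are made-up example inputs, not measurements from this study.

```python
# Standard cavitation-number definition; the input values are invented
# examples, not data from the experiments described above.
RHO = 998.0  # water density, kg/m^3

def cavitation_number(p_inf, p_v, U):
    """sigma = (p_inf - p_v) / (0.5 * rho * U^2)."""
    return (p_inf - p_v) / (0.5 * RHO * U**2)

# Example: 50 kPa tunnel pressure, 2.3 kPa vapor pressure, 10 m/s freestream.
sigma = cavitation_number(50e3, 2.3e3, 10.0)
# Inception occurs when the pressure is lowered until sigma falls to the
# incipient value sigma_i, which depends on the body and, as found above,
# on the boundary-layer state and nuclei population.
```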

Relevance:

30.00%

Publisher:

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy loss -- residual energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 28. Contributions to the mass resolution due to uncertainties in measuring the path-length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 28).
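Since the contributions are independent, they combine in quadrature. The sketch below uses the quoted Landau limit at the light end of the charge range (~0.07 AMU at Z ≈ 3); the split between the path-length and energy-loss terms is an assumed illustration, not taken from the text.

```python
import math

# Independent mass-resolution contributions add in quadrature.  The Landau
# term is the thesis value at Z ~ 3; the other two terms are assumed for
# illustration only.
def total_mass_resolution(sigma_landau, sigma_path, sigma_dE):
    """Combine independent mass-resolution contributions in quadrature (AMU)."""
    return math.sqrt(sigma_landau**2 + sigma_path**2 + sigma_dE**2)

sigma_m = total_mass_resolution(0.07, 0.05, 0.05)  # ~0.1 AMU, as quoted
```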

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
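The tracer idea can be caricatured in a few lines. The real formalism solves the full leaky box propagation equations with energy dependence and many fragmentation channels; the toy below keeps only the core step, and every number in it is invented.

```python
# Toy illustration of the secondary-tracer idea (schematic only; the thesis
# formalism is far more complete).  A purely secondary isotope S of the same
# element calibrates secondary production during propagation; scaling by the
# ratio of production cross sections removes the secondary part of the
# observed abundance of isotope P.  All values below are invented.
def source_abundance(obs_P, obs_S, xsec_P, xsec_S):
    """Estimate the source contribution to isotope P (arbitrary units)."""
    secondary_P = obs_S * (xsec_P / xsec_S)  # secondary production of P
    return obs_P - secondary_P

# Invented example: observed abundances (arbitrary units) and production
# cross sections (mb) for isotope P and a purely secondary tracer S.
src = source_abundance(obs_P=10.0, obs_S=2.0, xsec_P=30.0, xsec_S=40.0)
```

This also makes clear why, as discussed below, uncorrelated fragmentation cross-section errors dominate the uncertainty: they enter directly in the subtracted term.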

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the H → ZZ(*) → 4l channel, also referred to as the golden channel. We use analytic expressions of the full decay differential cross sections for the H → VV' → 4l process and the dominant irreducible standard model qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector responses as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings and includes systematics. All potential anomalous couplings of HVV', where V = Z, γ, are considered, allowing for general CP-even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level HZZ coupling and higher-order CP-odd couplings to be compatible with zero and in the range [−0.40, 0.43], and the mixing between the HZZ tree-level coupling and higher-order CP-even couplings to be in the ranges [−0.66, −0.57] ∪ [−0.15, 1.00]; namely, compatible with a standard model Higgs. We discuss the expected precision in determining the various HVV' couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that with 100-400 fb⁻¹, the golden channel will be able to start probing the couplings of the Higgs boson to diphotons in the 4l channel. We discuss the implications and further optimization of the methods for the next LHC runs.
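The core machinery is an unbinned maximum-likelihood fit that is continuous in the coupling parameters. The toy below reduces this to one observable x on [-1, 1] with pdf p(x|a) = (1 + a x²) / (2 + 2a/3), where a plays the role of a coupling; the real analysis is 8-dimensional with detector transfer functions, and everything here is illustrative.

```python
import math, random

# Minimal unbinned maximum-likelihood sketch in the spirit of the golden-
# channel analysis, reduced to one toy observable and one toy "coupling" a.
random.seed(1)

def pdf(x, a):
    """Toy normalized pdf on [-1, 1], continuous in the coupling a >= 0."""
    return (1.0 + a * x * x) / (2.0 + 2.0 * a / 3.0)

def sample(a, n):
    """Accept-reject sampling from p(x|a)."""
    out, fmax = [], pdf(1.0, a)  # for a >= 0 the pdf peaks at |x| = 1
    while len(out) < n:
        x = random.uniform(-1.0, 1.0)
        if random.uniform(0.0, fmax) < pdf(x, a):
            out.append(x)
    return out

def nll(data, a):
    """Unbinned negative log-likelihood."""
    return -sum(math.log(pdf(x, a)) for x in data)

data = sample(a=2.0, n=5000)          # pseudo-data with true coupling a = 2
grid = [i * 0.05 for i in range(81)]  # coarse scan of a in [0, 4]
a_hat = min(grid, key=lambda a: nll(data, a))
```

In practice a continuous minimizer and profile-likelihood intervals replace the coarse scan, but the structure, an event-by-event likelihood continuous in the couplings, is the same.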

Relevance:

30.00%

Publisher:

Abstract:

Non-classical properties and quantum interference (QI) in two-photon excitation of a three-level atom (|1〉, |2〉, |3〉) in a ladder configuration, illuminated by multiple fields in non-classical (squeezed) and/or classical (coherent) states, are studied. Fundamentally new effects associated with quantum correlations in the squeezed fields and QI due to multiple excitation pathways have been observed. Theoretical studies and extrapolations of these findings have revealed possible applications which are far beyond any current capabilities, including ultrafast nonlinear mixing, ultrafast homodyne detection and frequency metrology. The atom used throughout the experiments was Cesium, which was magneto-optically trapped in a vapor cell to produce a Doppler-free sample. For the first part of the work the |1〉 → |2〉 → |3〉 transition (corresponding to the 6S1/2F = 4 → 6P3/2F' = 5 → 6D5/2F" = 6 transition) was excited by using the quantum-correlated signal (Ɛs) and idler (Ɛi) output fields of a subthreshold non-degenerate optical parametric oscillator, which was tuned so that the signal and idler fields were resonant with the |1〉 → |2〉 and |2〉 → |3〉 transitions, respectively. In contrast to excitation with classical fields, for which the excitation rate as a function of intensity always has an exponent greater than or equal to two, excitation with squeezed fields has been theoretically predicted to have an exponent that approaches unity for small enough intensities. This was verified experimentally by probing the exponent down to a slope of 1.3, demonstrating for the first time a purely non-classical effect associated with the interaction of squeezed fields and atoms. In the second part, excitation of the two-photon transition by three phase-coherent fields Ɛ1, Ɛ2 and Ɛ0, resonant with the dipole |1〉 → |2〉 and |2〉 → |3〉 and quadrupole |1〉 → |3〉 transitions, respectively, is studied. QI in the excited state population is observed due to two alternative excitation pathways.
This is equivalent to nonlinear mixing of the three excitation fields by the atom. Realizing that in the experiment the three fields are spaced in frequency over a range of 25 THz, and extending this scheme to other energy triplets and atoms, leads to the discovery that ranges of up to hundreds of THz can be bridged in a single mixing step. Motivated by these results, a master equation model has been developed for the system and its properties have been extensively studied.
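The excitation-rate exponent discussed above is extracted as the slope of log(rate) versus log(intensity). The sketch below fits that slope on synthetic data generated with exponent 1.3, the smallest value reached in the experiment; the points are not measured values.

```python
import math

# Power-law exponent extraction from a log-log slope.  The data here are
# synthetic, generated to follow rate = intensity**1.3; they are not the
# measured two-photon excitation rates.
intensities = [0.1, 0.2, 0.5, 1.0, 2.0]   # arbitrary units
rates = [I ** 1.3 for I in intensities]

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) vs log(x)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

exponent = loglog_slope(intensities, rates)
# Classical two-photon excitation gives an exponent >= 2; with squeezed
# light the exponent approaches 1 at low intensity.
```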

Relevance:

30.00%

Publisher:

Abstract:

In the first part of the study, an RF coupled, atmospheric pressure, laminar plasma jet of argon was investigated for thermodynamic equilibrium and some rate processes.

Improved values of transition probabilities for 17 lines of argon I were developed from known values for 7 lines. The effect of inhomogeneity of the source was pointed out.

The temperatures, T, and the electron densities, n_e, were determined spectroscopically from the population densities of the higher excited states, assuming the Saha-Boltzmann relationship to be valid for these states. The axial velocities, v_z, were measured by tracing the paths of particles of boron nitride using a three-dimensional mapping technique. The above quantities varied over the following ranges: 10^12 < n_e < 10^15 particles/cm^3, 3500 < T < 11,000 °K, and 200 < v_z < 1200 cm/sec.
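A standard way to obtain T from excited-state population densities is a Boltzmann plot: ln(N_i/g_i) against level energy E_i has slope -1/kT. The sketch below recovers a temperature from synthetic level data generated at an assumed 8000 K; the energies and populations are not the argon measurements from this work.

```python
import math

# Boltzmann-plot sketch: excitation temperature from the slope of
# ln(N_i/g_i) vs level energy E_i.  The level data are synthetic, generated
# at T_TRUE to demonstrate the recovery; they are not the thesis data.
K_EV = 8.617e-5   # Boltzmann constant, eV/K
T_TRUE = 8000.0   # assumed temperature for the synthetic data, K

energies = [13.0, 13.5, 14.0, 14.5]   # level energies, eV (illustrative)
pops_over_g = [math.exp(-E / (K_EV * T_TRUE)) for E in energies]

def boltzmann_temperature(energies, pops_over_g):
    """T from the least-squares slope of ln(N/g) vs E (slope = -1/kT)."""
    ys = [math.log(p) for p in pops_over_g]
    n = len(energies)
    mx, my = sum(energies) / n, sum(ys) / n
    num = sum((E - mx) * (y - my) for E, y in zip(energies, ys))
    den = sum((E - mx) ** 2 for E in energies)
    slope = num / den
    return -1.0 / (K_EV * slope)

T_fit = boltzmann_temperature(energies, pops_over_g)
```

The same populations, combined with the Saha relation, yield n_e; testing whether the lower levels fall on the same line is exactly the excitation-equilibrium check described in the next paragraph.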

The absence of excitation equilibrium for the lower excitation population including the ground state under certain conditions of T and ne was established and the departure from equilibrium was examined quantitatively. The ground state was shown to be highly underpopulated for the decaying plasma.

Rates of recombination between electrons and ions were obtained by solving the steady-state equation of continuity for electrons. The observed rates were consistent with a dissociative-molecular ion mechanism with a steady-state assumption for the molecular ions.

In the second part of the study, decomposition of NO was studied in the plasma at lower temperatures. The mole fractions of NO, denoted by x_NO, were determined gas-chromatographically and varied between 0.0012 < x_NO < 0.0055. The temperatures were measured pyrometrically and varied between 1300 < T < 1750 °K. The observed rates of decomposition were orders of magnitude greater than those obtained by previous workers under purely thermal reaction conditions. The overall activation energy was about 9 kcal/g-mol, considerably lower than the value under thermal conditions. The effect of excess nitrogen was to reduce the rate of decomposition of NO and to increase the order of the reaction with respect to NO from 1.33 to 1.85. The observed rates were consistent with a chain mechanism in which atomic nitrogen and oxygen act as chain carriers. The increased rates of decomposition and the reduced activation energy in the presence of the plasma could be explained by the observed large amount of atomic nitrogen, which was probably formed by reactions between excited atoms and ions of argon and molecular nitrogen.
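The quoted activation energy follows from an Arrhenius fit: the slope of ln(rate constant) versus 1/T equals -Ea/R. The sketch below recovers Ea from synthetic rate constants generated with the quoted 9 kcal/g-mol over the measured 1300-1750 °K range; the rate values themselves are not data from this study.

```python
import math

# Arrhenius sketch: activation energy from the slope of ln(k) vs 1/T.
# The rate constants are synthetic, generated with EA_TRUE and a unit
# pre-exponential factor; they are not the measured NO decomposition rates.
R_CAL = 1.987     # gas constant, cal/(g-mol K)
EA_TRUE = 9000.0  # assumed activation energy, cal/g-mol (9 kcal/g-mol)

temps = [1300.0, 1450.0, 1600.0, 1750.0]                # K
ks = [math.exp(-EA_TRUE / (R_CAL * T)) for T in temps]  # arbitrary units

def activation_energy(temps, ks):
    """Ea (cal/g-mol) from the least-squares slope of ln(k) vs 1/T."""
    xs = [1.0 / T for T in temps]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return -R_CAL * slope

Ea = activation_energy(temps, ks)  # recovers ~9000 cal/g-mol
```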