4 results for work histories

in CaltechTHESIS


Relevance: 30.00%

Abstract:

The Kwoiek Area of British Columbia contains a pendant or screen of metamorphosed sedimentary and volcanic rocks almost entirely surrounded by a portion of the Coast Range Batholith, and intruded by several dozen stocks. The major metamorphic effects were produced by the quartz diorite batholithic rocks, with minor and later effects by the quartz diorite stocks. The sequence of important metamorphic reactions in the metasedimentary and metavolcanic rocks, ranging in grade from chlorite to sillimanite, is:

1. chlorite + carbonate + muscovite → epidote + biotite

2. chlorite + carbonate → actinolite + epidote

3. chlorite + muscovite → garnet + biotite

4. chlorite + epidote → garnet + hornblende

5. chlorite + muscovite → garnet + staurolite + biotite

6. chlorite + muscovite → aluminum silicate + biotite

7. muscovite + staurolite → garnet + aluminum silicate + biotite

8. staurolite → garnet + aluminum silicate

Continuous reactions, occurring between reactions 5 and 7, are:

A. chlorite + (high Ti) biotite + Al2O3 (from plagioclase?) → garnet + staurolite + (low Ti) biotite + O2

B. muscovite (phengitic) → garnet + staurolite + muscovite (less phengitic) + O2 (?)

Detailed electron microprobe work on garnet, staurolite, biotite, and chlorite shows that:

(1) The garnet porphyroblasts are zoned according to a Rayleigh depletion model, which assumes equilibrium between the edge of a growing garnet and the unzoned minerals (notably biotite, chlorite, and muscovite) but disequilibrium within the garnet itself (a minimal numerical sketch of such a depletion law follows this list).

(2) The staurolite porphyroblasts are also zoned, and from their zoning patterns reactions A, B, and 5 are documented. Progressive reduction of iron with increasing grade of metamorphism is also inferred from the staurolite zoning patterns.

(3) During a late period of falling temperature, garnet continued to grow and the biotite and chlorite re-equilibrated. The biotite, chlorite, and garnet edge compositions can vary from point to point in a given thin section, indicating that the volume of equilibrium at the final stage of metamorphism was only a few cubic microns.

(4) The horizon within the garnet that grew at maximum temperature can be identified. The Mg/Fe ratio of this horizon, if the garnet composition is a limiting composition in the Al2O3-K2O-FeO-MgO tetrahedron, increases systematically with increasing metamorphic grade. Biotite and chlorite compositions also show a general increase in Mg/Fe ratio with increasing metamorphic grade, but staurolite appears to show the reverse effect.

(5) The Mg/Fe ratio at the maximum-temperature horizon of the garnet porphyroblasts is a function of its Mn content, as evidenced by the study of five garnet-bearing rocks, collected from one outcrop area, with the same assemblage but with differing proportions of minerals.
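As a rough numerical illustration of the Rayleigh depletion model invoked in point (1) above (a minimal sketch, not the thesis's calibration), a Rayleigh growth law predicts how the rim of a growing porphyroblast becomes depleted in a compatible element as the matrix reservoir is consumed; the partition coefficient and concentrations below are hypothetical.

```python
import numpy as np

# Minimal sketch of Rayleigh depletion during porphyroblast growth.
# D and C0 are assumed values, chosen only to show the characteristic
# core-enriched, rim-depleted zoning (e.g., Mn in garnet).
D = 25.0                        # hypothetical garnet/matrix partition coefficient
C0 = 1.0                        # initial matrix concentration (arbitrary units)
F = np.linspace(0.0, 0.3, 50)   # fraction of the matrix reservoir consumed

rim = D * C0 * (1.0 - F) ** (D - 1.0)   # instantaneous rim composition
matrix = C0 * (1.0 - F) ** (D - 1.0)    # remaining, unzoned matrix reservoir

print(f"core/rim concentration ratio after 30% growth: {rim[0] / rim[-1]:.0f}")
```

The steep core-to-rim decrease is the zoning signature the model attributes to disequilibrium retention inside the garnet while its rim stays in equilibrium with the matrix.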

An important implication of zoned minerals is that, when a zoned mineral such as garnet or staurolite is present in the assemblage, the effective composition of the system (as represented in a phase diagram) lies on the join between the homogeneous minerals (if there are two) and not within three- or four-phase fields.

Study of the three aluminum silicates found in the Kwoiek Area showed that, at constant pressure, a change in polymorph from andalusite to kyanite to sillimanite took place with increasing temperature. This transition series is best explained by the metastable formation of andalusite.

Photographic materials on pages 15, 121, 160, 162, and 164 are essential and will not reproduce clearly on Xerox copies. Photographic copies should be ordered.

Relevance: 20.00%

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to what we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a strongly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean–sea ice–ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice–ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained by lower ocean temperature. To extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, against traditional squeezing methods and show that, despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
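As a schematic of that Bayesian MCMC recovery (a minimal sketch, not the thesis's transport model, priors, or data), the code below diffuses a piecewise-linear bottom water history into a synthetic pore fluid profile and then recovers the history's nodes with a Metropolis-Hastings sampler; the grid, diffusivity, noise level, and prior bounds are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative grid: 60 m of sediment, 20 kyr of bottom water history.
z = np.linspace(0.0, 60.0, 61)       # depth below seafloor [m]
dz = z[1] - z[0]
D = 5.0                              # assumed effective diffusivity [m^2/kyr]
dt = 0.4 * dz**2 / D                 # stable explicit time step [kyr]
n_steps = int(20.0 / dt)

def forward(bw_nodes):
    """Diffuse a piecewise-linear bottom water d18O history into the pore fluid."""
    t_nodes = np.linspace(0.0, 20.0, len(bw_nodes))
    c = np.full_like(z, bw_nodes[0])                  # equilibrated with oldest value
    for k in range(n_steps):
        c[0] = np.interp(k * dt, t_nodes, bw_nodes)   # seafloor boundary condition
        c[1:-1] += D * dt / dz**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[-1] = c[-2]                                 # no-flux base of the profile
    return c

# Synthetic "observed" profile from a known history, plus measurement noise.
truth = np.array([1.0, 1.0, 0.2, 0.0])   # d18O at four evenly spaced times
sigma = 0.05
obs = forward(truth) + rng.normal(0.0, sigma, z.size)

# Metropolis-Hastings over the history nodes, with flat priors on [-1, 2].
def log_like(m):
    return -0.5 * np.sum((forward(m) - obs) ** 2) / sigma**2

current = np.zeros(4)
ll = log_like(current)
samples = []
for _ in range(5000):
    prop = current + rng.normal(0.0, 0.05, 4)
    if np.all((prop > -1.0) & (prop < 2.0)):
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:
            current, ll = prop, ll_prop
    samples.append(current.copy())

post = np.array(samples[1000:])          # discard burn-in
print("posterior mean history:", post.mean(axis=0))
print("truth:                 ", truth)
```

Even in this toy, the value of sampling the full posterior is visible: older parts of the history come back with wider spread, because diffusion smears the old boundary signal.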

Relevance: 20.00%

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beat with a much-reduced laser power: h/h_SQL ~ √(W_circ^SQL e^(-2R) / W_circ). For realistic parameters (e^(2R) ≃ 10 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
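Since the quoted scaling is plain arithmetic, it can be evaluated directly; the sketch below just plugs numbers into h/h_SQL ~ √(W_circ^SQL e^(-2R) / W_circ) and is not a model of the interferometer (the narrowbanding caveat above is ignored).

```python
import math

W_SQL = 800.0  # kW; circulating power needed to reach the SQL at 100 Hz (quoted above)

def sql_beat_factor(w_circ_kw, e_minus_2r=1.0):
    """h/h_SQL ~ sqrt(W_SQL * e^{-2R} / W_circ); e_minus_2r = 1 means no squeezing."""
    return math.sqrt(W_SQL * e_minus_2r / w_circ_kw)

print(sql_beat_factor(800.0))        # 1.0: LIGO-II-scale power just reaches the SQL
print(sql_beat_factor(800.0, 0.1))   # ~0.32: e^{2R} = 10 squeezing beats the SQL by ~3
print(sql_beat_factor(2000.0, 0.1))  # ~0.20 by this raw scaling; bandwidth
                                     # re-optimization reduces the realized gain
                                     # to the ~3 to 4 quoted above
```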

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance: 20.00%

Abstract:

This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.

As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake, a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.

Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s to 2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) synthetics computed from kinematic source models using the spectral element method, producing broadband seismograms.
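The superposition step is essentially a crossover filter at the 2.0 s period boundary; a generic sketch of such a hybrid merge is below (the filter type, order, and sample rate are assumptions, not the thesis's matching scheme).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 20.0   # sample rate [Hz] (assumed)
f_c = 0.5   # crossover frequency [Hz] = 1 / (2.0 s period)

def merge_broadband(long_period, short_period, order=4):
    """Keep the spectral element synthetic below f_c and the empirical
    Green's function synthetic above f_c, then sum to one broadband trace."""
    lo = sosfiltfilt(butter(order, f_c, "lowpass", fs=fs, output="sos"), long_period)
    hi = sosfiltfilt(butter(order, f_c, "highpass", fs=fs, output="sos"), short_period)
    return lo + hi

# Usage with noise stand-ins for the two synthetic seismograms.
rng = np.random.default_rng(0)
n = int(120 * fs)                    # 120 s record
bb = merge_broadband(rng.standard_normal(n), rng.standard_normal(n))
```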

Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes, using the PEER performance-based earthquake engineering framework, to determine the probability of exceedance of these limit states over the next 30 years.
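One way to read that last step: given each scenario's 30-year occurrence probability and whether the building model exceeds a limit state under that scenario's ground motion, treating scenarios as independent yields the 30-year exceedance probability directly. The inputs below are placeholders, not the thesis's results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder inputs for 60 scenario earthquakes: assumed 30-year occurrence
# probabilities, and whether a given limit state (e.g., Collapse Prevention)
# was exceeded in the nonlinear analysis for that scenario at one site.
p_occ = rng.uniform(0.001, 0.02, 60)   # hypothetical 30-yr scenario probabilities
exceeds = rng.random(60) < 0.15        # hypothetical per-scenario outcomes

# Probability that at least one scenario both occurs and drives the model
# past the limit state within 30 years, assuming scenario independence.
p_30yr = 1.0 - np.prod(1.0 - p_occ * exceeds)
print(f"P(limit state exceeded in 30 yr) ~ {p_30yr:.3f}")
```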