15 results for Los Angeles (Calif.). Dept. of Water and Power

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

The rapid growth and development of Los Angeles City and County has been one of the phenomena of the present age. The growth of a city from 50,600 to 576,000, an increase of over 1000% in thirty years, is an unprecedented occurrence. It has given rise to a variety of problems of increasing magnitude.
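
The quoted percentage follows directly from the two population figures given above (a simple check, not a figure taken from the thesis):

(576,000 − 50,600) / 50,600 × 100% ≈ 1,038%, i.e. "over 1000%" in thirty years.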

Chief among these are: supply of food, water, and shelter; development of industry and markets; prevention and removal of downtown congestion; and protection of life and property. These, of course, are the problems that any city must face. But in the case of a community which doubles its population every ten years, radical and heroic measures must often be taken.

Relevance:

100.00%

Publisher:

Abstract:

In work of this nature it is advisable to state definitely the problem attempted in order that the reader may have a clear understanding of the object of the work undertaken. The problem involved is to determine the efficiency and inefficiency in the operation of the Bureau of Power and Light of Los Angeles, California, as it exists at the present time. This will be more on the order of a government investigation than a purely engineering thesis. An engineering thesis consists of reports based on experiments and tests, etc., while the present undertaking will consist of an investigation of the facts concerning the organization, operation and conduct of the business of the Bureau of Power and Light. The facts presented were obtained from several sources: (1) the writer's knowledge of the business; (2) books written on municipal ownership; (3) reports published by the Bureau; and (4) personal interviews with men connected with the organization. I have endeavored to draw conclusions from the facts thus obtained as to the present status of the Bureau of Power and Light.

Relevance:

100.00%

Publisher:

Abstract:

On October 24, 1871, a massacre of eighteen Chinese in Los Angeles brought the small southern California settlement into the national spotlight. Within a few days, news of this “night of horrors” was reported in newspapers across the country. This massacre has been cited in Asian American narratives as the first documented outbreak of ethnic violence against a Chinese community in the United States. This is ironic because Los Angeles’ small population has generally placed it on the periphery in historical studies of the California anti-Chinese movement. Because the massacre predated Los Angeles’ organized Chinese exclusion movements of the late 1870s, it has often been erroneously dismissed as an aberration in the history of the city.

The violence of 1871 was an outburst highlighting existing community tensions that would become part of public debate by decade’s close. The purpose of this study is to insert the massacre into a broader context of anti-Chinese sentiments, legal discrimination, and dehumanization in nineteenth-century Los Angeles. While a second incident of widespread anti-Chinese violence never occurred, brutal attacks directed at Chinese small businessmen and others highlighted continued community conflict. Similarly, economic rivalries and concerns over Chinese prostitution that underlay the 1871 massacre were manifest in later campaigns of economic discrimination and vice suppression that sought to minimize Chinese influence within municipal limits. An analysis of the massacre in terms of anti-Chinese legal, social and economic strategies in nineteenth-century Los Angeles will elucidate these important continuities.

Relevance:

100.00%

Publisher:

Abstract:

I. Foehn winds of southern California.
An investigation of the hot, dry, and dust-laden winds occurring in the late fall and early winter in the Los Angeles Basin, and attributed in the past to the influences of the desert regions to the north, revealed that these currents were of a foehn nature. Their properties were found to be entirely due to dynamical heating produced in the descent from the high level areas in the interior to the lower Los Angeles Basin. Any dust associated with the phenomenon was found to be acquired from the Los Angeles area rather than transported from the desert. It was found that the frequency of occurrence of a mild type of foehn of this nature during this season was sufficient to warrant its classification as a winter monsoon. This results from the topography of the Los Angeles region, which allows an easy entrance to the air from the interior by virtue of the low level mountain passes north of the area. This monsoon provides the mild winter climate of southern California, since temperatures associated with the foehn currents are far higher than those experienced when maritime air from the adjacent Pacific Ocean occupies the region.
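
For orientation (an illustrative figure, not a number from the thesis), the "dynamical heating" referred to here is dry-adiabatic compression of the descending air:

ΔT ≈ Γ_d × Δz ≈ 9.8 ˚C per kilometer of descent,

so air sinking on the order of a kilometer from the interior plateaus into the Los Angeles Basin arrives roughly 10 ˚C warmer, and correspondingly drier, than it started.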

II. Foehn wind cyclo-genesis.
Intense anticyclones frequently build up over the high level regions of the Great Basin and Columbia Plateau, which lie between the Sierra Nevada and Cascade Mountains to the west and the Rocky Mountains to the east. The outflow from these anticyclones produces extensive foehns east of the Rockies in the comparatively low level areas of the middle west and the Canadian provinces of Alberta and Saskatchewan. Normally at this season of the year very cold polar continental air masses are present over this territory, and with the occurrence of these foehns marked discontinuity surfaces arise between the warm foehn current, which is obliged to slide over a colder mass, and the Pc air to the east. Cyclones are easily produced from this phenomenon and take the form of unstable waves which propagate along the discontinuity surface between the two dissimilar masses. A continual series of such cyclones was found to occur as long as the Great Basin anticyclone was maintained with undiminished intensity.

III. Weather conditions associated with the Akron disaster.
This situation illustrates the speedy development and propagation of young disturbances in the eastern United States during the spring of the year under the influence of the conditionally unstable tropical maritime air masses which characterize the region. It also furnishes an excellent example of the superiority of air mass and frontal methods of weather prediction for aircraft operation over the older methods based upon pressure distribution.

IV. The Los Angeles storm of December 30, 1933 to January 1, 1934.
This discussion points out some of the fundamental interactions occurring between air masses of the North Pacific Ocean in connection with Pacific Coast storms and the value of topographic and aerological considerations in predicting them. Estimates of rainfall intensity and duration from analyses of this type may be made and would prove very valuable in the Los Angeles area in connection with flood control problems.

Relevance:

100.00%

Publisher:

Abstract:

The Los Angeles Harbor at San Pedro, with its natural advantages and the big development of these now underway, will very soon be the key to the traffic routes of Southern California. The Atchison, Topeka, and Santa Fe Railway Company, realizing this and not wishing to be caught asleep, has planned to build a line from El Segundo to the harbor. The developments of the harbor are not the only developments taking place in these localities, and the proposed new line is intended to include these also.

Relevance:

100.00%

Publisher:

Abstract:

The Pacoima area is located on an isolated hill in the northeast section of the San Fernando Valley, in the northeast portion of the Pacoima Quadrangle, Los Angeles County, California. Within it are exposed more than 2300 feet of Tertiary rocks, which comprise three units of Middle Miocene (?) age, and approximately 950 feet of Jurassic (?) granite basement. The formations are characterized by their mode of occurrence, marine and terrestrial origin, diverse lithology, and structural features.

The basement complex is composed of intrusive granite, small masses of granodiorite and a granodiorite gneiss with the development of schistosity in sections. During the long period of erosion of the metamorphics, the granitic rocks were exposed and may have provided clastic constituents for the overlying formations.

As a result of rapid sedimentation in a transitional environment, the Middle Miocene Twin Peaks formation was laid down unconformably on the granite. This formation is essentially a large thinning bed of gray to buff pebble and cobble conglomerate grading to coarse yellow sandstone. The contact of conglomerate and granite is characterized by its faulted and depositional nature.

Beds of extrusive andesite, basalt porphyry, compact vesicular amygdaloidal basalts, andesite breccia, interbedded feldspathic sands and clays of terrestrial origin, and mudflow breccia comprise the Pacoima formation, which overlies the Twin Peaks formation unconformably. A transgressing shallow sea accompanied settling of the region and initiated deposition of fine clastic sediments.

The marine Topanga (?) formation is composed of brown to gray coarse sandstone grading into interbedded buff sandstones and gray shales. Intrusions of rhyolite-dacite and ash beds mark continued but sporadic volcanism during this period.

The area mapped represents an arch in the Tertiary sediments. Forces that produced the uplift of the granite structural high created stresses that were relieved by jointing and faulting. Vertical and horizontal movement along these faults has displaced beds, offset contacts and complicated their structure. Uplift and erosion have exposed the present sequence of beds which dip gently to the northeast. The isolated hill is believed to be in an early stage of maturity.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents composition measurements for atmospherically relevant inorganic and organic aerosol from laboratory and ambient measurements using the Aerodyne aerosol mass spectrometer. Studies include the oxidation of dodecane in the Caltech environmental chambers and several aircraft- and ground-based field studies, which include the quantification of wildfire emissions off the coast of California and of Los Angeles urban emissions.

The oxidation of dodecane by OH under low-NO conditions and the formation of secondary organic aerosol (SOA) were explored using a gas-phase chemical model, gas-phase CIMS measurements, and high molecular weight ion traces from particle-phase HR-TOF-AMS mass spectra. The combination of these measurements supports the hypothesis that particle-phase chemistry leading to peroxyhemiacetal formation is important. Positive matrix factorization (PMF) was applied to the AMS mass spectra, which revealed three factors representing a combination of gas-particle partitioning, chemical conversion in the aerosol, and wall deposition.
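
As a rough illustration of the factorization step (not the thesis code), PMF decomposes the matrix of AMS mass spectra into non-negative factor time series and factor profiles; the sketch below uses scikit-learn's NMF as a simplified stand-in, since true PMF additionally weights residuals by measurement uncertainties. The data, the m/z dimension, and the parameters are hypothetical placeholders; the three-factor choice mirrors the abstract.

# Sketch: non-negative factorization of an AMS organic mass-spectral matrix.
# NMF stands in for PMF here (PMF also applies per-point uncertainty weighting).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((500, 120))      # hypothetical: 500 time steps x 120 m/z channels

model = NMF(n_components=3, init="nndsvda", max_iter=500)  # three factors, as in the study
G = model.fit_transform(X)      # factor contributions (time series)
F = model.components_           # factor mass-spectral profiles

print(G.shape, F.shape)         # (500, 3) (3, 120)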

Airborne measurements of biomass burning emissions from a chaparral fire on the central Californian coast were carried out in November 2009. Physical and chemical changes were reported for smoke ages of 0–4 h. CO2-normalized ammonium, nitrate, and sulfate increased, whereas the normalized OA decreased sharply in the first 1.5–2 h and then slowly increased for the remaining 2 h (a net decrease in normalized OA). Comparison to wildfire samples from the Yucatan revealed that factors such as relative humidity, incident UV radiation, age of smoke, and concentration of emissions are important for wildfire evolution.
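
For context, the "CO2-normalized" quantities here are standard excess mixing ratios, i.e. background-subtracted species amounts divided by the background-subtracted CO2 in the same plume sample:

ΔX/ΔCO2 = (X_plume − X_background) / (CO2_plume − CO2_background),

so trends in these ratios with smoke age reflect chemistry and gas-particle partitioning rather than simple dilution. (This is the conventional definition; the thesis may state it in its own notation.)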

Ground-based aerosol composition is reported for Pasadena, CA during the summer of 2009. The OA component, which dominated the submicron aerosol mass, was deconvolved into hydrocarbon-like organic aerosol (HOA), semi-volatile oxidized organic aerosol (SVOOA), and low-volatility oxidized organic aerosol (LVOOA). The HOA/OA ratio was only 0.08–0.23, indicating that most of the Pasadena OA in the summer months is dominated by oxidized OA resulting from transported emissions that have undergone photochemistry and/or moisture-influenced processing, as opposed to only primary organic aerosol emissions. Airborne measurements and model predictions of aerosol composition are reported for the 2010 CalNex field campaign.

Relevance:

100.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
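
A minimal sketch of the sparse inverse-covariance step described above, using scikit-learn's GraphicalLasso on synthetic nodal signals (the thesis's modification for ill-conditioned covariance matrices is not reproduced here, and all data and parameters are hypothetical):

# Sketch: estimate a sparse precision (inverse covariance) matrix from nodal signals,
# then read a graph topology off its non-zero off-diagonal entries.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
signals = rng.standard_normal((200, 10))   # hypothetical: 200 samples of 10 nodal signals

est = GraphicalLasso(alpha=0.1)            # alpha sets the sparsity of the estimate
est.fit(signals)
precision = est.precision_

adjacency = (np.abs(precision) > 1e-3) & ~np.eye(10, dtype=bool)
print(adjacency.astype(int))               # inferred graph (circuit) topology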

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
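
For orientation, the classical fluid model this builds on (a standard formulation, not the refined model derived in the thesis) evolves each source rate x_s against link prices p_l as

dx_s/dt = k_s ( w_s − x_s(t) Σ_{l in path of s} p_l(t) ),   p_l(t) = f_l( Σ_{s using l} x_s(t) ),

and it implicitly assumes that every link on the path of flow s sees the same rate x_s; the thesis replaces that assumption with a model in which buffering at upstream links shapes the rate seen by links further downstream.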

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
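
Schematically (not necessarily the thesis's exact notation), the relaxation allows each bus to receive more power than it demands: the nodal balance

P_i^gen − Σ_{(i,k) in lines} P_ik = P_i^load      for every bus i

is replaced by

P_i^gen − Σ_{(i,k) in lines} P_ik ≥ P_i^load      for every bus i.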

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
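
In schematic form (a sketch consistent with the description above, not the thesis's exact notation), GNF reads:

minimize    Σ_i c_i(p_i)
subject to  p_i = Σ_{j: (i,j) in E} f_ij          (injection at node i equals the sum of its line flows)
            f_ji = φ_ij(f_ij)                     (the two end flows of each line are coupled nonlinearly)
            p_i and f_ij restricted to given box constraints,

and the proposed convex relaxation replaces the nonlinear equalities by convex sets while still recovering the optimal injections p_i.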

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
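
As a flavor of the requirements involved (hypothetical examples, not specifications taken from the thesis), a safety requirement such as "the two generator contactors c1 and c2 are never closed at the same time" and a liveness requirement such as "the essential bus is powered infinitely often" would be written in LTL as

G ¬(c1 ∧ c2)        and        G F powered_essential_bus,

where G is "always", F is "eventually", and c1, c2, powered_essential_bus are hypothetical atomic propositions over the system state.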

The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real-time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a set placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.

Relevance:

100.00%

Publisher:

Abstract:

Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle number distributions were measured with a Coulter Counter. Number counts in excess of 10^12 particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.

Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.

Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.

The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation for a round buoyant jet in a density stratified medium showed that discharges of sludge effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the point discharge.

Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.

The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.

Relevance:

100.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
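
In schematic form (consistent with the description above, not necessarily the thesis's notation), the model is

minimize    C(E)                      (cost of attaining the emission vector E)
subject to  Q(E) ≤ S                  (resulting air quality meets the chosen standards)
            0 ≤ E ≤ E_uncontrolled,

where E collects the primary contaminant emission levels, C is the control cost function, and Q maps emissions to the air quality measure being constrained.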

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969] and [(670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance:

100.00%

Publisher:

Abstract:

Part I

A study of the thermal reaction of water vapor and parts-per-million concentrations of nitrogen dioxide was carried out at ambient temperature and at atmospheric pressure. Nitric oxide and nitric acid vapor were the principal products. The initial rate of disappearance of nitrogen dioxide was first order with respect to water vapor and second order with respect to nitrogen dioxide. An initial third-order rate constant of 5.5 (± 0.29) x 10^4 liter^2 mole^-2 sec^-1 was found at 25˚C. The rate of reaction decreased with increasing temperature. In the temperature range of 25˚C to 50˚C, an activation energy of -978 (± 20) calories was found.
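
The stated reaction orders correspond to the rate law (with the overall reaction commonly written as 3 NO2 + H2O → 2 HNO3 + NO):

−d[NO2]/dt = k [NO2]^2 [H2O],      k ≈ 5.5 x 10^4 liter^2 mole^-2 sec^-1 at 25˚C.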

The reaction did not go to completion. From measurements as the reaction approached equilibrium, the free energy of nitric acid vapor was calculated. This value was -18.58 (± 0.04) kilocalories at 25˚C.

The initial rate of reaction was unaffected by the presence of oxygen and was retarded by the presence of nitric oxide. There were no appreciable effects due to the surface of the reactor. Nitric oxide and nitrogen dioxide were monitored by gas chromatography during the reaction.

Part II

The air oxidation of nitric oxide, and the oxidation of nitric oxide in the presence of water vapor, were studied in a glass reactor at ambient temperatures and at atmospheric pressure. The concentration of nitric oxide was less than 100 parts-per-million. The concentration of nitrogen dioxide was monitored by gas chromatography during the reaction.

For the dry oxidation, the third-order rate constant was 1.46 (± 0.03) x 10^4 liter^2 mole^-2 sec^-1 at 25˚C. The activation energy, obtained from measurements between 25˚C and 50˚C, was -1.197 (±0.02) kilocalories.
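
This is the familiar termolecular air oxidation of nitric oxide, 2 NO + O2 → 2 NO2, with a rate law of the form

rate = k [NO]^2 [O2],      k ≈ 1.46 x 10^4 liter^2 mole^-2 sec^-1 at 25˚C

(the placement of stoichiometric factors in the definition of k follows whatever convention the thesis adopts).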

The presence of water vapor during the oxidation caused the formation of nitrous acid vapor when nitric oxide, nitrogen dioxide and water vapor combined. By measuring the difference between the concentrations of nitrogen dioxide during the wet and dry oxidations, the rate of formation of nitrous acid vapor was found. The third-order rate constant for the formation of nitrous acid vapor was equal to 1.5 (± 0.5) x 10^5 liter^2 mole^-2 sec^-1 at 40˚C. The reaction rate did not change measurably when the temperature was increased to 50˚C. The formation of nitric acid vapor was prevented by keeping the concentration of nitrogen dioxide low.
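
The chemistry described is consistent with the well-known termolecular step NO + NO2 + H2O → 2 HONO, for which the reported constant corresponds to a rate law of the form

d[HONO]/dt ≈ k [NO] [NO2] [H2O],      k ≈ 1.5 x 10^5 liter^2 mole^-2 sec^-1 at 40˚C

(again up to the stoichiometric-factor convention used in the thesis).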

Surface effects were appreciable for the wet tests. Below 35˚C, the rate of appearance of nitrogen dioxide increased with increasing surface. Above 40˚C, the effect of surface was small.

Relevance:

100.00%

Publisher:

Abstract:

No abstract.

Relevance:

100.00%

Publisher:

Abstract:

I report the solubility and diffusivity of water in lunar basalt and an iron-free basaltic analogue at 1 atm and 1350 °C. Such parameters are critical for understanding the degassing histories of lunar pyroclastic glasses. Solubility experiments have been conducted over a range of fO2 conditions from three log units below to five log units above the iron-wüstite buffer (IW) and over a range of pH2/pH2O from 0.03 to 24. Quenched experimental glasses were analyzed by Fourier transform infrared spectroscopy (FTIR) and secondary ionization mass spectrometry (SIMS) and were found to contain up to ~420 ppm water. Results demonstrate that, under the conditions of our experiments: (1) hydroxyl is the only H-bearing species detected by FTIR; (2) the solubility of water is proportional to the square root of pH2O in the furnace atmosphere and is independent of fO2 and pH2/pH2O; (3) the solubility of water is very similar in both melt compositions; (4) the concentration of H2 in our iron-free experiments is <3 ppm, even at oxygen fugacities as low as IW-2.3 and pH2/pH2O as high as 24; and (5) SIMS analyses of water in iron-rich glasses equilibrated under variable fO2 conditions can be strongly influenced by matrix effects, even when the concentrations of water in the glasses are low. Our results can be used to constrain the entrapment pressure of the lunar melt inclusions of Hauri et al. (2011).
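
Point (2) above is the classic signature of water dissolving as hydroxyl: if vapor-phase H2O reacts with oxygen in the melt to form two OH groups,

H2O (vapor) + O (melt) ⇌ 2 OH (melt)   ⟹   [OH] ∝ (pH2O)^(1/2),

then at these low concentrations the total dissolved water (all of it present as OH) scales with the square root of pH2O, as observed.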

Diffusion experiments were conducted over a range of fO2 conditions from IW-2.2 to IW+6.7 and over a range of pH2/pH2O from nominally zero to ~10. The water concentrations measured in our quenched experimental glasses by SIMS and FTIR vary from a few ppm to ~430 ppm. Water concentration gradients are well described by models in which the diffusivity of water (D*water) is assumed to be constant. The relationship between D*water and water concentration is well described by a modified speciation model (Ni et al. 2012) in which both molecular water and hydroxyl are allowed to diffuse. The success of this modified speciation model for describing our results suggests that we have resolved the diffusivity of hydroxyl in basaltic melt for the first time. Best-fit values of D*water for our experiments on lunar basalt vary within a factor of ~2 over a range of pH2/pH2O from 0.007 to 9.7, a range of fO2 from IW-2.2 to IW+4.9, and a water concentration range from ~80 ppm to ~280 ppm. The relative insensitivity of our best-fit values of D*water to variations in pH2 suggests that H2 diffusion was not significant during degassing of the lunar glasses of Saal et al. (2008). D*water during dehydration and hydration in H2/CO2 gas mixtures are approximately the same, which supports an equilibrium boundary condition for these experiments. However, dehydration experiments into CO2 and CO/CO2 gas mixtures leave some scope for the importance of kinetics during dehydration into H-free environments. The value of D*water chosen by Saal et al. (2008) for modeling the diffusive degassing of the lunar volcanic glasses is within a factor of three of our measured value in our lunar basaltic melt at 1350 °C.
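
For reference, speciation-based treatments express the apparent water diffusivity schematically as a weighted sum of the molecular-water and hydroxyl diffusivities (this is the general form of such models, not necessarily the exact expression of Ni et al. 2012):

D*water ≈ D_H2Omol (∂C_H2Omol/∂C_water) + D_OH (∂C_OH/∂C_water),

so in melts this water-poor, where hydroxyl dominates the speciation, D*water is controlled largely by D_OH, which is why these experiments can resolve the hydroxyl diffusivity.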

In Chapter 4 of this thesis, I document significant zonation in major, minor, trace, and volatile elements in naturally glassy olivine-hosted melt inclusions from the Siqueiros Fracture Zone and the Galapagos Islands. Components with a higher concentration in the host olivine than in the melt (MgO, FeO, Cr2O3, and MnO) are depleted at the edges of the zoned melt inclusions relative to their centers, whereas except for CaO, H2O, and F, components with a lower concentration in the host olivine than in the melt (Al2O3, SiO2, Na2O, K2O, TiO2, S, and Cl) are enriched near the melt inclusion edges. This zonation is due to formation of an olivine-depleted boundary layer in the adjacent melt in response to cooling and crystallization of olivine on the walls of the melt inclusions concurrent with diffusive propagation of the boundary layer toward the inclusion center.

Concentration profiles of some components in the melt inclusions exhibit multicomponent diffusion effects such as uphill diffusion (CaO, FeO) or slowing of the diffusion of typically rapidly diffusing components (Na2O, K2O) by coupling to slow diffusing components such as SiO2 and Al2O3. Concentrations of H2O and F decrease towards the edges of some of the Siqueiros melt inclusions, suggesting either that these components have been lost from the inclusions into the host olivine late in their cooling histories and/or that these components are exhibiting multicomponent diffusion effects.

A model has been developed of the time-dependent evolution of MgO concentration profiles in melt inclusions due to simultaneous depletion of MgO at the inclusion walls due to olivine growth and diffusion of MgO in the melt inclusions in response to this depletion. Observed concentration profiles were fit to this model to constrain their thermal histories. Cooling rates determined by a single-stage linear cooling model are 150–13,000 °C hr^-1 from the liquidus down to ~1000 °C, consistent with previously determined cooling rates for basaltic glasses; compositional trends with melt inclusion size observed in the Siqueiros melt inclusions are described well by this simple single-stage linear cooling model. Despite the overall success of the modeling of MgO concentration profiles using a single-stage cooling history, MgO concentration profiles in some melt inclusions are better fit by a two-stage cooling history with a slower-cooling first stage followed by a faster-cooling second stage; the inferred total duration of cooling from the liquidus down to ~1000 °C is 40 s to just over one hour.
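
A minimal numerical sketch of this kind of model (not the thesis code): explicit finite differences for radial diffusion in a spherical inclusion, with a prescribed linear drawdown of MgO at the wall standing in for olivine crystallization. All parameter values below are hypothetical placeholders.

# Sketch: diffusive response of MgO in a spherical melt inclusion to a linear
# drawdown of MgO at the wall (a stand-in for olivine growth); values are hypothetical.
import numpy as np

R = 50e-6                      # inclusion radius, m
N = 100                        # radial grid points
r = np.linspace(0.0, R, N)
dr = r[1] - r[0]

D = 5e-12                      # MgO diffusivity in the melt, m^2/s
C0, C_wall_final = 10.0, 7.0   # initial and final wall MgO, wt%
C = np.full(N, C0)

t_total = 600.0                # total cooling duration, s
dt = 0.2 * dr**2 / D           # explicit-scheme stability limit
steps = int(t_total / dt)

for n in range(steps):
    d2 = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dr**2            # radial second derivative
    d1 = (C[2:] - C[:-2]) / (2.0 * dr)                        # radial first derivative
    C_new = C.copy()
    C_new[1:-1] = C[1:-1] + dt * D * (d2 + 2.0 * d1 / r[1:-1])
    C_new[0] = C_new[1]                                       # symmetry at the center
    frac = min((n + 1) * dt / t_total, 1.0)
    C_new[-1] = C0 + (C_wall_final - C0) * frac               # prescribed wall drawdown
    C = C_new

print(C[0], C[-1])             # center vs. wall MgO at the end of cooling; fitting such
                               # profiles to measured data is what constrains the cooling history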

Based on our observations and models, compositions of zoned melt inclusions (even if measured at the centers of the inclusions) will typically have been diffusively fractionated relative to the initially trapped melt; for such inclusions, the initial composition cannot be simply reconstructed based on olivine-addition calculations, so caution should be exercised in application of such reconstructions to correct for post-entrapment crystallization of olivine on inclusion walls. Off-center analyses of a melt inclusion can also give results significantly fractionated relative to simple olivine crystallization.

All melt inclusions from the Siqueiros and Galapagos sample suites exhibit zoning profiles, and this feature may be nearly universal in glassy, olivine-hosted inclusions. If so, zoning profiles in melt inclusions could be widely useful to constrain late-stage syneruptive processes and as natural diffusion experiments.