13 results for volatile
in CaltechTHESIS
Abstract:
Isoprene (ISO), the most abundant non-methane VOC, is the major contributor to secondary organic aerosol (SOA) formation. The mechanisms involved in this transformation, however, are not fully understood. Current mechanisms, which are based on the oxidation of ISO in the gas phase, underestimate SOA yields. The heightened awareness that ISO is only partially processed in the gas phase has turned attention to heterogeneous processes as alternative pathways toward SOA.
During my research project, I investigated the photochemical oxidation of isoprene in bulk water. Below, I will report on the λ > 305 nm photolysis of H2O2 in dilute ISO solutions. This process yields C10H15OH species as primary products, whose formation both requires and is inhibited by O2. Several isomers of C10H15OH were resolved by reverse-phase high-performance liquid chromatography and detected as MH+ (m/z = 153) and MH+-18 (m/z = 135) signals by electrospray ionization mass spectrometry. This finding is consistent with the addition of ·OH to ISO, followed by HO-ISO· reactions with ISO (in competition with O2) leading to second generation HO(ISO)2· radicals that terminate as C10H15OH via β-H abstraction by O2.
It is not generally realized that chemistry on the surface of water cannot be deduced or extrapolated from that in the bulk gas and liquid phases. The water density drops a thousand-fold within a few Angstroms across the gas-liquid interfacial region, so hydrophobic VOCs such as ISO will likely remain in these relatively 'dry' interfacial water layers rather than proceed into bulk water. Previous experiments from our laboratory found that gas-phase olefins can be protonated on the surface of pH < 4 water. This phenomenon increases the residence time of gases at the interface, making them increasingly susceptible to interaction with gaseous atmospheric oxidants such as ozone and hydroxyl radicals.
In order to test this hypothesis, I carried out experiments in which ISO(g) collides with the surface of aqueous microdroplets of various compositions. Herein I report that ISO(g) is oxidized into soluble species via Fenton chemistry on the surface of aqueous Fe(II)Cl2 solutions simultaneously exposed to H2O2(g). Monomeric and oligomeric species (ISO)1-8H+ were detected via online electrospray ionization mass spectrometry (ESI-MS) on the surface of pH ~ 2 water, and were then oxidized into a suite of products whose combined yields exceed ~ 5% of (ISO)1-8H+. MS/MS analysis revealed that the products mainly consisted of alcohols, ketones, epoxides, and acids. Our experiments demonstrate that olefins in ambient air may be oxidized upon impact on the surface of Fe-containing aqueous acidic media, such as those typical of tropospheric aerosols.
Related experiments involving the reaction of ISO(g) with ·OH radicals from the photolysis of dissolved H2O2 were also carried out to test the surface oxidation of ISO(g) by photolyzing H2O2(aq) at 266 nm at various pH. The products were analyzed via online electrospray ionization mass spectrometry. Similar to our Fenton experiments, we detected (ISO)1-7H+ at pH < 4, and new m/z+ = 271 and m/z- = 76 products at pH > 5.
Abstract:
The photooxidation of volatile organic compounds (VOCs) in the atmosphere can lead to the formation of secondary organic aerosol (SOA), a major component of fine particulate matter. Improvements to air quality require insight into the many reactive intermediates that lead to SOA formation, of which only a small fraction have been measured at the molecular level. This thesis describes the chemistry of SOA formation from several atmospherically relevant hydrocarbon precursors. Photooxidation experiments on methoxyphenol and phenolic compounds and on C12 alkanes were conducted in the Caltech Environmental Chamber. These experiments include the first photooxidation studies of these precursors run under sufficiently low NOx levels that RO2 + HO2 chemistry dominates, an important chemical regime in the atmosphere. Using online chemical ionization mass spectrometry (CIMS), key gas-phase intermediates that lead to SOA formation in these systems were identified. With complementary particle-phase analyses, chemical mechanisms elucidating SOA formation from these compounds are proposed.
Three methoxyphenol species (phenol, guaiacol, and syringol) were studied to model potential photooxidation schemes of biomass burning intermediates. SOA yields (ratio of the mass of SOA formed to the mass of primary organic reacted) exceeding 25% are observed. Aerosol growth is rapid and linear with organic conversion, consistent with the formation of essentially non-volatile products. Gas- and aerosol-phase oxidation products from the guaiacol system show that the chemical mechanism produces highly oxidized aromatic species in the particle phase. Syringol SOA yields are lower than those of phenol and guaiacol, likely due to unique chemistry dependent on methoxy group position.
The photooxidation of several C12 alkanes of varying structure (n-dodecane, 2-methylundecane, cyclododecane, and hexylcyclohexane) was studied under extended OH exposure to investigate the effect of molecular structure on SOA yields and photochemical aging. Peroxyhemiacetal formation from the reactions of several multifunctional hydroperoxide and aldehyde intermediates was found to be central to organic growth in all systems, and SOA yields increased with the cyclic character of the starting hydrocarbon. All of these studies provide direction for future experiments and modeling aimed at lessening the outstanding discrepancies between predicted and measured SOA.
Abstract:
Using neuromorphic analog VLSI techniques to model large neural systems has several advantages over software techniques. By employing the massively parallel analog circuit arrays that are ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to those faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built which simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking the brainstem and cortical circuits that control eye movements. In this framework, a visually triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, which allows the non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth-pursuit eye movements to maintain the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.
Abstract:
This thesis presents composition measurements of atmospherically relevant inorganic and organic aerosol from laboratory and ambient studies using the Aerodyne aerosol mass spectrometer. These include the oxidation of dodecane in the Caltech environmental chambers and several aircraft- and ground-based field studies, among them the quantification of wildfire emissions off the coast of California and of Los Angeles urban emissions.
The oxidation of dodecane by OH under low-NO conditions and the resulting formation of secondary organic aerosol (SOA) were explored using a gas-phase chemical model, gas-phase CIMS measurements, and high-molecular-weight ion traces from particle-phase HR-ToF-AMS mass spectra. The combination of these measurements supports the hypothesis that particle-phase chemistry leading to peroxyhemiacetal formation is important. Positive matrix factorization (PMF) applied to the AMS mass spectra revealed three factors representing a combination of gas-particle partitioning, chemical conversion in the aerosol, and wall deposition.
Airborne measurements of biomass burning emissions from a chaparral fire on the central Californian coast were carried out in November 2009. Physical and chemical changes were reported for smoke ages of 0–4 h. CO2-normalized ammonium, nitrate, and sulfate increased, whereas normalized OA decreased sharply in the first 1.5–2 h and then slowly increased for the remaining 2 h (a net decrease in normalized OA). Comparison to wildfire samples from the Yucatan revealed that factors such as relative humidity, incident UV radiation, age of smoke, and concentration of emissions are important for the evolution of wildfire emissions.
Ground-based aerosol composition is reported for Pasadena, CA, during the summer of 2009. The OA component, which dominated the submicron aerosol mass, was deconvolved into hydrocarbon-like organic aerosol (HOA), semi-volatile oxidized organic aerosol (SVOOA), and low-volatility oxidized organic aerosol (LVOOA). The HOA/OA ratio was only 0.08–0.23, indicating that most Pasadena OA in the summer months is dominated by oxidized OA resulting from transported emissions that have undergone photochemical and/or moisture-influenced processing, as opposed to only primary organic aerosol emissions. Airborne measurements and model predictions of aerosol composition are also reported for the 2010 CalNex field campaign.
Abstract:
Secondary organic aerosol (SOA) is produced in the atmosphere by the oxidation of volatile organic compounds. Laboratory chambers are used to understand the formation mechanisms and evolution of SOA formed under controlled conditions. This thesis presents studies of SOA formed from anthropogenic and biogenic precursors and discusses the effects of chamber walls on suspended vapors and particles.
During a chamber experiment, suspended vapors and particles can interact with the chamber walls. Particle wall loss is relatively well-understood, but vapor wall losses have received little study. Vapor wall loss of 2,3-epoxy-1,4-butanediol (BEPOX) and glyoxal was identified, quantified, and found to depend on chamber age and relative humidity.
Particles reside in the atmosphere for a week or more and can evolve chemically during that time, a process termed aging. Simulating aging in laboratory chambers has proven challenging. A protocol was developed to extend the duration of a chamber experiment to 36 h of oxidation and was used to evaluate aging of SOA produced from m-xylene. Total SOA mass concentration increased and then decreased with increasing photooxidation, suggesting a transition from functionalization to fragmentation chemistry driven by photochemical processes. SOA oxidation, measured as the bulk particle elemental oxygen-to-carbon ratio and the fraction of organic mass at m/z 44, increased continuously after 5 h of photooxidation.
The physical state and chemical composition of an organic aerosol affect the mixing of aerosol components and its interactions with condensing species. A laboratory chamber protocol was developed to evaluate the mixing of SOA produced sequentially from two different sources by heating the chamber to induce particle evaporation. Using this protocol, SOA produced from toluene was found to be less volatile than that produced from α-pinene. When the two types of SOA were formed sequentially, the evaporation behavior most closely resembled that of SOA from the second parent hydrocarbon, suggesting that the structure of the mixed SOA particles resembles a core of SOA from the first precursor coated by a layer of SOA from the second precursor, indicative of limited mixing.
Abstract:
Storage systems are widely used and play a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems face issues of cost, limited lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory. Information theory, on the other hand, provides fundamental bounds and solutions for fully utilizing resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.
At the system level, we study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. When two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
At the device level, we study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only part of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem: finding a universal cycle, a sequence of integers that generates all possible partial permutations.
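As a minimal illustration of the rank modulation idea (our own sketch with a hypothetical n = 3 example; the function names are not from the thesis), the permutation induced by a set of cell charges, and the push-to-the-top programming primitive, can be modeled as:

```python
import itertools

def induced_permutation(charges):
    """Return the permutation induced by analog charge levels:
    cell indices ranked from highest to lowest charge."""
    return tuple(sorted(range(len(charges)), key=lambda i: -charges[i]))

def push_to_top(perm, cell):
    """The only programming primitive in rank modulation: inject charge
    into `cell` until it exceeds all others, moving it to rank 0."""
    return (cell,) + tuple(c for c in perm if c != cell)

# A set of n = 3 cells stores one of n! = 6 symbols.
perm = induced_permutation([0.7, 0.2, 0.9])   # cell 2 highest, then 0, then 1
assert perm == (2, 0, 1)

# Rewriting never lowers a charge, so there is no block erase or overshoot:
assert push_to_top(perm, 1) == (1, 2, 0)

# All n! permutation states are reachable by push-to-the-top alone.
states, frontier = {perm}, [perm]
while frontier:
    p = frontier.pop()
    for c in p:
        q = push_to_top(p, c)
        if q not in states:
            states.add(q)
            frontier.append(q)
assert len(states) == len(list(itertools.permutations(range(3))))
```

The final check reflects the property that push-to-the-top operations alone reach every permutation state, which is what makes Gray codes over these operations possible.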
Abstract:
Secondary-ion mass spectrometry (SIMS), electron probe microanalysis (EPMA), analytical scanning electron microscopy (SEM), and infrared (IR) spectroscopy were used to determine the chemical composition and mineralogy of sub-micrometer inclusions in cubic diamonds and in overgrowths (coats) on octahedral diamonds from Zaire, Botswana, and some unknown localities.
The inclusions are sub-micrometer in size. The typical diameter encountered during transmission electron microscope (TEM) examination was 0.1-0.5 µm. The micro-inclusions are sub-rounded and their shape is crystallographically controlled by the diamond. Normally they are not associated with cracks or dislocations and appear to be well isolated within the diamond matrix. The number density of inclusions is highly variable on any scale and may reach 10^(11) inclusions/cm^3 in the most densely populated zones. The total concentration of metal oxides in the diamonds varies between 20 and 1270 ppm (by weight).
SIMS analysis yields the average composition of about 100 inclusions contained in the sputtered volume. Comparison of analyses of different volumes of an individual diamond shows roughly uniform composition (typically ±10% relative). The variation among the average compositions of different diamonds is somewhat greater (typically ±30%). Nevertheless, all diamonds exhibit similar characteristics, being rich in water, carbonate, SiO_2, and K_2O, and depleted in MgO. The compositions of micro-inclusions in most diamonds vary within the following ranges: SiO_2, 30-53%; K_2O, 12-30%; CaO, 8-19%; FeO, 6-11%; Al_2O_3, 3-6%; MgO, 2-6%; TiO_2, 2-4%; Na_2O, 1-5%; P_2O_5, 1-4%; and Cl, 1-3%. In addition, BaO, 1-4%; SrO, 0.7-1.5%; La_2O_3, 0.1-0.3%; Ce_2O_3, 0.3-0.5%; smaller amounts of other rare-earth elements (REE); and Mn, Th, and U were detected by instrumental neutron activation analysis (INAA). Mg/(Fe+Mg) ratios of 0.40-0.62 are low compared with other mantle-derived phases; K/Al ratios of 2-7 are very high; and chondrite-normalized Ce/Eu ratios of 10-21 are also high, indicating extremely fractionated REE patterns.
SEM analyses indicate that individual inclusions within a single diamond are roughly of similar composition. The average composition of individual inclusions as measured with the SEM is similar to that measured by SIMS. Compositional variations revealed by the SEM are larger than those detected by SIMS and indicate a small variability in the composition of individual inclusions. No compositions of individual inclusions were determined that might correspond to mono-mineralic inclusions.
IR spectra of inclusion-bearing zones exhibit characteristic absorption due to: (1) pure diamond; (2) nitrogen and hydrogen in the diamond matrix; and (3) mineral phases in the micro-inclusions. Nitrogen concentrations of 500-1100 ppm, typical of the micro-inclusion-bearing zones, are higher than the average nitrogen content of diamonds. Only type IaA centers were detected by IR. A yellow coloration may indicate a small concentration of type Ib centers.
The absorption due to the micro-inclusions in all diamonds produces similar spectra and indicates the presence of hydrated sheet silicates (most likely, Fe-rich clay minerals), carbonates (most likely calcite), and apatite. Small quantities of molecular CO_2 are also present in most diamonds. Water is probably associated with the silicates but the possibility of its presence as a fluid phase cannot be excluded. Characteristic lines of olivine, pyroxene and garnet were not detected and these phases cannot be significant components of the inclusions. Preliminary quantification of the IR data suggests that water and carbonate account for, on average, 20-40 wt% of the micro-inclusions.
The composition and mineralogy of the micro-inclusions are completely different from those of the more common, larger inclusions of the peridotitic or eclogitic assemblages. Their bulk composition resembles that of potassic magmas, such as kimberlites and lamproites, but is enriched in H_2O, CO_3, K_2O, and incompatible elements, and depleted in MgO.
It is suggested that the composition of the micro-inclusions represents a volatile-rich fluid or a melt trapped by the diamond during its growth. The high content of K, Na, P, and incompatible elements suggests that the trapped material found in the micro-inclusions may represent an effective metasomatizing agent. It may also be possible that fluids of similar composition are responsible for the extreme enrichment of incompatible elements documented in garnet and pyroxene inclusions in diamonds.
The origin of the fluid trapped in the micro-inclusions is still uncertain. It may have formed by incipient melting of highly metasomatized mantle rocks. More likely, it is the result of fractional crystallization of a potassic parental magma at depth. In either case, the micro-inclusions document the presence of highly potassic fluids or melts at depths corresponding to the diamond stability field in the upper mantle. The phases presently identified in the inclusions are believed to be the result of closed-system reactions at lower pressures.
Abstract:
For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.
To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.
I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.
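The deterministic "method of bounds" that underlies ecological inference can be sketched as follows (a simplified illustration with hypothetical numbers; this is the classic Duncan-Davis step, not the full generalized-bounds procedure used in the thesis):

```python
def latino_rate_bounds(X, T):
    """Duncan-Davis bounds on the Latino registration rate in one county.

    X: Latino share of the county's voting-age population (0 < X < 1)
    T: the county's overall registration rate (0 <= T <= 1)

    The accounting identity T = X*bL + (1 - X)*bW, with both group rates
    bL and bW constrained to [0, 1], confines the Latino rate bL to an
    interval even though no individual-level data are observed.
    """
    lo = max(0.0, (T - (1.0 - X)) / X)  # attained when the white rate bW = 1
    hi = min(1.0, T / X)                # attained when bW = 0
    return lo, hi

# Hypothetical county: Latinos are 60% of the voting-age population and
# overall registration is 50%; the Latino rate must then lie in [1/6, 5/6].
lo, hi = latino_rate_bounds(0.60, 0.50)
assert abs(lo - 1/6) < 1e-12 and abs(hi - 5/6) < 1e-12
```

The generalized-bounds approach sharpens and combines such intervals across all counties; the point of the sketch is only that aggregate data alone already constrain group-level rates.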
I find that California Latinos register and turn out at substantially lower rates than do whites and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.
With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.
Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.
Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.
While these findings suggest that there may not be any short-term or quick fixes to Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge when network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and it empirically converges to optimal deferrable load schedules within 15 iterations.
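To make the idea concrete, here is a toy projected-gradient sketch of distributed deferrable-load scheduling. This is our own simplification, not Algorithm 1 itself: it enforces only each load's total-energy constraint (schedules may go negative), flattens net demand by minimizing its sum of squares, and requires each load to see only the broadcast aggregate demand.

```python
import numpy as np

def schedule_deferrable_loads(base, energies, iters=50, step=None):
    """Jacobi-style distributed projected gradient descent.

    base:     non-deferrable demand profile over T time slots
    energies: total energy E_i each deferrable load must consume
    Minimizes sum_t d(t)^2 with d = base + sum_i x_i, subject to
    sum_t x_i(t) = E_i for each load i (no nonnegativity, for simplicity).
    """
    T, N = len(base), len(energies)
    step = step if step is not None else 1.0 / (2.0 * N)  # 1/L for this quadratic
    x = np.array([[e / T] * T for e in energies])  # flat feasible start
    for _ in range(iters):
        d = base + x.sum(axis=0)                    # aggregate demand, broadcast to all
        for i in range(N):                          # each load updates independently
            g = x[i] - step * 2.0 * d               # gradient step on sum_t d(t)^2
            x[i] = g - (g.sum() - energies[i]) / T  # re-project onto its energy budget
    return x

base = np.array([1.0, 3.0, 2.0, 4.0])
x = schedule_deferrable_loads(base, energies=[2.0, 2.0])
net = base + x.sum(axis=0)
# total demand (10 + 4) spread over 4 slots: the net profile flattens to 3.5
assert np.allclose(net, 3.5)
```

The actual algorithm additionally handles load arrival times, deadlines, and power limits; the sketch only captures the valley-filling objective and the broadcast-gradient structure.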
We then extend Algorithm 1 to a real-time setting where deferrable loads arrive over time and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: it uses updated predictions of renewable generation as the true values and computes a pseudo load to simulate future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other seeking a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
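For context, the single-phase branch flow (DistFlow) equations and the conic relaxation step take the following standard form, with $v_i = |V_i|^2$ and $\ell_{ij} = |I_{ij}|^2$ (a sketch in generic notation, not necessarily the thesis's exact formulation):

```latex
\[
\begin{aligned}
&P_{ij} - r_{ij}\ell_{ij} - p_j = \sum_{k:\,j\to k} P_{jk}, \qquad
 Q_{ij} - x_{ij}\ell_{ij} - q_j = \sum_{k:\,j\to k} Q_{jk},\\
&v_j = v_i - 2\,(r_{ij}P_{ij} + x_{ij}Q_{ij}) + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij},\\
&\ell_{ij} = \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
 \quad\longrightarrow\quad
 \ell_{ij} \ge \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
 \quad\text{(a second-order cone constraint after relaxation).}
\end{aligned}
\]
```

Relaxing the quadratic equality to an inequality makes the feasible set convex; the relaxation is called exact when an optimal solution attains the equality, which is what the conditions on voltage bounds guarantee.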
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 achieves a more than 70-fold speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
Abstract:
Trace volatile organic compounds emitted into the atmosphere by biogenic and anthropogenic sources can undergo extensive photooxidation to form species with lower volatility. By equilibrium partitioning or reactive uptake, these compounds can nucleate into new aerosol particles or deposit onto already-existing particles to form secondary organic aerosol (SOA). SOA and other atmospheric particulate matter have measurable effects on global climate and public health, making the understanding of SOA formation a needed field of scientific inquiry. SOA formation can be studied in a laboratory setting using an environmental chamber; under these controlled conditions it is possible to generate SOA from a single parent compound and study the chemical composition of the gas and particle phases. By studying the SOA composition, it is possible to gain understanding of the chemical reactions that occur in the gas and particle phases and to identify potential heterogeneous processes that occur at the surface of SOA particles. In this thesis, mass spectrometric methods are used to identify qualitatively and quantitatively the chemical components of SOA derived from the photooxidation of important anthropogenic volatile organic compounds associated with gasoline and diesel fuels and industrial activity (C12 alkanes, toluene, and o-, m-, and p-cresols). The conditions under which SOA was generated in each system were varied to explore the effect of NOx and inorganic seed composition on SOA chemical composition. The structure of the parent alkane was varied to investigate its effect on the functionalization and fragmentation of the resulting oxidation products. Relative humidity was also varied in the alkane system to measure the effect of increased particle-phase water on condensed-phase reactions. In all systems, oligomeric species, potentially resulting from particle-phase and heterogeneous processes, were identified.
Imines produced by reactions between (NH4)2SO4 seed and carbonyl compounds were identified in all systems. Multigenerational photochemistry producing low- and extremely low-volatility organic compounds (LVOC and ELVOC) was reflected strongly in the particle-phase composition as well.
Resumo:
Planets are assembled from the gas, dust, and ice in the accretion disks that encircle young stars. Ices of chemical compounds with low condensation temperatures (<200 K), the so-called volatiles, dominate the solid mass reservoir from which planetesimals are formed and are thus available to build the protoplanetary cores of gas/ice giant planets. It has long been thought that the regions near the condensation fronts of volatiles are preferential birth sites of planets. Moreover, the main volatiles in disks are also the main C- and O-containing species in (exo)planetary atmospheres. Understanding the distribution of volatiles in disks and their role in planet-formation processes is therefore of great interest.
This thesis addresses two fundamental questions concerning the nature of volatiles in planet-forming disks: (1) how are volatiles distributed throughout a disk, and (2) how can we use volatiles to probe planet-forming processes in disks? We tackle the first question in two complementary ways. We have developed a novel super-resolution method to constrain the radial distribution of volatiles throughout a disk by combining multi-wavelength spectra. Thanks to the ordered velocity and temperature profiles in disks, we find that detailed constraints can be derived even with spatially and spectrally unresolved data -- provided a wide range of energy levels are sampled. We also employ high-spatial resolution interferometric images at (sub)mm frequencies using the Atacama Large Millimeter Array (ALMA) to directly measure the radial distribution of volatiles.
For the second question, we combine volatile gas emission measurements with those of the dust continuum emission or extinction to understand dust growth mechanisms in disks and disk instabilities at planet-forming distances from the central star. Our observations and models support the idea that water vapor can become concentrated in regions near its condensation front at certain evolutionary stages in the lifetime of protoplanetary disks, and that fast pebble growth is likely to occur near the condensation fronts of various volatile species.
Resumo:
I report the solubility and diffusivity of water in lunar basalt and an iron-free basaltic analogue at 1 atm and 1350 °C. Such parameters are critical for understanding the degassing histories of lunar pyroclastic glasses. Solubility experiments have been conducted over a range of fO2 conditions from three log units below to five log units above the iron-wüstite buffer (IW) and over a range of pH2/pH2O from 0.03 to 24. Quenched experimental glasses were analyzed by Fourier transform infrared spectroscopy (FTIR) and secondary ion mass spectrometry (SIMS) and were found to contain up to ~420 ppm water. Results demonstrate that, under the conditions of our experiments: (1) hydroxyl is the only H-bearing species detected by FTIR; (2) the solubility of water is proportional to the square root of pH2O in the furnace atmosphere and is independent of fO2 and pH2/pH2O; (3) the solubility of water is very similar in both melt compositions; (4) the concentration of H2 in our iron-free experiments is <3 ppm, even at oxygen fugacities as low as IW-2.3 and pH2/pH2O as high as 24; and (5) SIMS analyses of water in iron-rich glasses equilibrated under variable fO2 conditions can be strongly influenced by matrix effects, even when the concentrations of water in the glasses are low. Our results can be used to constrain the entrapment pressure of the lunar melt inclusions of Hauri et al. (2011).
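The square-root law in point (2) can be illustrated with a quick numerical check; the constant k and the pH2O values below are made-up numbers for illustration, not values fitted to these experiments.

```python
import numpy as np

# Illustrative check of the square-root solubility law: C = k * sqrt(pH2O).
k = 420.0                                      # ppm per atm^0.5 (hypothetical)
p_h2o = np.array([0.0025, 0.01, 0.04, 0.16])   # each step quadruples pH2O
c_water = k * np.sqrt(p_h2o)                   # dissolved water, ppm

# Quadrupling pH2O doubles the dissolved-water concentration.
ratios = c_water[1:] / c_water[:-1]
```

In practice this is why a log-log plot of dissolved water against pH2O with slope 1/2 is the usual diagnostic for hydroxyl-dominated dissolution.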
Diffusion experiments were conducted over a range of fO2 conditions from IW-2.2 to IW+6.7 and over a range of pH2/pH2O from nominally zero to ~10. The water concentrations measured in our quenched experimental glasses by SIMS and FTIR vary from a few ppm to ~430 ppm. Water concentration gradients are well described by models in which the diffusivity of water (D*water) is assumed to be constant. The relationship between D*water and water concentration is well described by a modified speciation model (Ni et al. 2012) in which both molecular water and hydroxyl are allowed to diffuse. The success of this modified speciation model for describing our results suggests that we have resolved the diffusivity of hydroxyl in basaltic melt for the first time. Best-fit values of D*water for our experiments on lunar basalt vary within a factor of ~2 over a range of pH2/pH2O from 0.007 to 9.7, a range of fO2 from IW-2.2 to IW+4.9, and a water concentration range from ~80 ppm to ~280 ppm. The relative insensitivity of our best-fit values of D*water to variations in pH2 suggests that H2 diffusion was not significant during degassing of the lunar glasses of Saal et al. (2008). Values of D*water during dehydration and hydration in H2/CO2 gas mixtures are approximately the same, which supports an equilibrium boundary condition for these experiments. However, dehydration experiments into CO2 and CO/CO2 gas mixtures leave some scope for the importance of kinetics during dehydration into H-free environments. The value of D*water chosen by Saal et al. (2008) for modeling the diffusive degassing of the lunar volcanic glasses is within a factor of three of our measured value in our lunar basaltic melt at 1350 °C.
In Chapter 4 of this thesis, I document significant zonation in major, minor, trace, and volatile elements in naturally glassy olivine-hosted melt inclusions from the Siqueiros Fracture Zone and the Galapagos Islands. Components with a higher concentration in the host olivine than in the melt (MgO, FeO, Cr2O3, and MnO) are depleted at the edges of the zoned melt inclusions relative to their centers, whereas except for CaO, H2O, and F, components with a lower concentration in the host olivine than in the melt (Al2O3, SiO2, Na2O, K2O, TiO2, S, and Cl) are enriched near the melt inclusion edges. This zonation is due to formation of an olivine-depleted boundary layer in the adjacent melt in response to cooling and crystallization of olivine on the walls of the melt inclusions concurrent with diffusive propagation of the boundary layer toward the inclusion center.
Concentration profiles of some components in the melt inclusions exhibit multicomponent diffusion effects such as uphill diffusion (CaO, FeO) or slowing of the diffusion of typically rapidly diffusing components (Na2O, K2O) by coupling to slow diffusing components such as SiO2 and Al2O3. Concentrations of H2O and F decrease towards the edges of some of the Siqueiros melt inclusions, suggesting either that these components have been lost from the inclusions into the host olivine late in their cooling histories and/or that these components are exhibiting multicomponent diffusion effects.
A model has been developed of the time-dependent evolution of MgO concentration profiles in melt inclusions due to simultaneous depletion of MgO at the inclusion walls by olivine growth and diffusion of MgO in the melt inclusions in response to this depletion. Observed concentration profiles were fit to this model to constrain their thermal histories. Cooling rates determined by a single-stage linear cooling model are 150–13,000 °C hr⁻¹ from the liquidus down to ~1000 °C, consistent with previously determined cooling rates for basaltic glasses; compositional trends with melt inclusion size observed in the Siqueiros melt inclusions are described well by this simple single-stage linear cooling model. Despite the overall success of the modeling of MgO concentration profiles using a single-stage cooling history, MgO concentration profiles in some melt inclusions are better fit by a two-stage cooling history with a slower-cooling first stage followed by a faster-cooling second stage; the inferred total duration of cooling from the liquidus down to ~1000 °C is 40 s to just over one hour.
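A stripped-down version of this kind of model can be sketched as follows; the grid, the Arrhenius constants in D(T), the boundary value at the olivine wall, and the cooling window are invented placeholders, not the calibrated parameters of the chapter.

```python
import numpy as np

n, dx = 50, 1e-6                      # 50 grid points, 1 µm spacing
c = np.full(n, 9.0)                   # wt% MgO, initially uniform
T0, cool_rate = 1500.0, 1000.0        # start temperature (K), cooling (K/hr)

def D_mgo(T):
    # Arrhenius-style MgO diffusivity; constants are illustrative only
    return 1e-7 * np.exp(-2e4 / T)    # m^2/s

t, dt = 0.0, 0.02                     # time (s), explicit time step (s)
while T0 - cool_rate * t / 3600.0 > 1273.0:
    T = T0 - cool_rate * t / 3600.0
    # explicit finite-difference diffusion step (stable: D*dt/dx^2 << 0.5)
    lap = np.zeros(n)
    lap[1:-1] = c[2:] - 2.0 * c[1:-1] + c[:-2]
    c += D_mgo(T) * dt / dx**2 * lap
    c[-1] = 7.0                       # wall pinned at olivine-depleted value
    c[0] = c[1]                       # symmetry at the inclusion center
    t += dt
```

The run produces the qualitative behavior described above: an MgO-depleted boundary layer that propagates inward from the wall while the inclusion center remains near its initial composition.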
Based on our observations and models, compositions of zoned melt inclusions (even if measured at the centers of the inclusions) will typically have been diffusively fractionated relative to the initially trapped melt; for such inclusions, the initial composition cannot be simply reconstructed based on olivine-addition calculations, so caution should be exercised in application of such reconstructions to correct for post-entrapment crystallization of olivine on inclusion walls. Off-center analyses of a melt inclusion can also give results significantly fractionated relative to simple olivine crystallization.
All melt inclusions from the Siqueiros and Galapagos sample suites exhibit zoning profiles, and this feature may be nearly universal in glassy, olivine-hosted inclusions. If so, zoning profiles in melt inclusions could be widely useful to constrain late-stage syneruptive processes and as natural diffusion experiments.
Resumo:
We are at the cusp of a historic transformation of both communication system and electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems of these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
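The coupling idea underlying such fluid models can be sketched with a toy round-based simulation; the coupled increase rule (one over the total window per ACK) and the per-path loss rates below are illustrative stand-ins, not the Balia update rule.

```python
import random

random.seed(1)
w = [10.0, 10.0]          # congestion windows on two subpaths
loss_p = [0.02, 0.001]    # subpath 0 is far lossier than subpath 1

for _ in range(20000):                    # simulated rounds
    for i in (0, 1):
        for _ in range(int(w[i])):        # send w[i] packets this round
            if random.random() < loss_p[i]:
                w[i] = max(w[i] / 2.0, 1.0)   # multiplicative decrease
            else:
                w[i] += 1.0 / (w[0] + w[1])   # coupled additive increase
```

Coupling the increase to the total window shifts traffic toward the less lossy subpath while keeping the aggregate roughly as aggressive as a single TCP flow, which is exactly the TCP-friendliness versus responsiveness tension discussed above.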
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even faster algorithm that incurs an optimality loss of less than 3% on the test networks.
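To make the problem concrete, here is a toy brute-force baseline: choose which switches to close so that the network remains a spanning tree rooted at the substation and a surrogate "loss" is minimized. The 4-bus network, line resistances, and loads are invented for illustration, and enumeration is only a baseline; the thesis's heuristic uses convex relaxation instead.

```python
from itertools import combinations

nodes = [0, 1, 2, 3]                 # bus 0 is the substation
lines = {(0, 1): 0.5, (0, 2): 0.3, (1, 2): 0.4, (1, 3): 0.9, (2, 3): 0.2}
load = {1: 1.0, 2: 1.5, 3: 0.5}      # per-bus demand (arbitrary units)

def spanning_tree_loss(edges):
    # BFS from the substation; infeasible if edges are not a spanning tree
    parent, order = {0: None}, [0]
    for u in order:
        for a, b in edges:
            for v, w in ((a, b), (b, a)):
                if v == u and w not in parent:
                    parent[w] = u
                    order.append(w)
    if len(parent) != len(nodes):
        return float("inf")
    # crude loss surrogate: resistance * power^2 along each load's path to bus 0
    loss = 0.0
    for bus, p in load.items():
        v = bus
        while parent[v] is not None:
            u = parent[v]
            r = lines.get((u, v)) or lines[(v, u)]
            loss += r * p ** 2
            v = u
    return loss

best = min(combinations(lines, len(nodes) - 1), key=spanning_tree_loss)
```

Enumeration is exponential in the number of switches, which is why the mixed-integer formulation above calls for a relaxation-based heuristic on realistic feeders.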
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's law is global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce computation time. Specifically, for balanced networks, our decomposition yields closed-form solutions for these subproblems, speeding up convergence by 1000× in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100× compared with iterative methods.
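The closed-form-subproblem idea can be shown with a generic ADMM sketch: minimize f(x) + g(z) subject to x = z, where the x-step (a quadratic "cost") and the z-step (projection onto operating limits) are both solved analytically. The toy objective and box constraint are illustrative, not the thesis's OPF decomposition.

```python
a, b = 2.0, -3.0        # f(x) = a*x^2 + b*x, a toy generation cost
lo, hi = -1.0, 0.5      # g(z): indicator of the feasible box [lo, hi]
rho = 1.0               # ADMM penalty parameter

x = z = u = 0.0         # primal variables and scaled dual
for _ in range(100):
    # x-update: argmin a*x^2 + b*x + (rho/2)*(x - z + u)^2, closed form
    x = (rho * (z - u) - b) / (2.0 * a + rho)
    # z-update: Euclidean projection onto [lo, hi], closed form
    z = min(max(x + u, lo), hi)
    u += x - z          # scaled dual ascent on the consensus constraint
```

The unconstrained minimizer of f is 0.75; ADMM drives x and z to agree on the box edge 0.5 within a few dozen iterations. When every subproblem admits such an analytic step, no inner iterative solver is needed, which is the source of the speedups reported above.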