14 results for Air sampling apparatus.

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

The speciation of water in a variety of hydrous silicate glasses, including simple and rhyolitic compositions, synthesized over a range of experimental conditions with up to 11 weight percent water has been determined using infrared spectroscopy. This technique has been calibrated with a series of standard glasses and provides a precise and accurate method for determining the concentrations of molecular water and hydroxyl groups in these glasses.
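The calibration described above rests on the Beer-Lambert law, which converts the absorbance of a species-specific IR band into a concentration. A minimal sketch of that conversion follows; every numerical value in it (absorbance, molar absorptivity, glass density, wafer thickness) is assumed for illustration and is not taken from the thesis.

```python
# Beer-Lambert conversion of an IR band absorbance to the weight percent
# of an H-bearing species in a glass wafer. All numbers are illustrative.

def species_wt_percent(absorbance, molar_absorptivity, density_g_per_l,
                       thickness_cm, molar_mass_g=18.02):
    """c (mol/L) = A / (eps * d); wt% = 100 * c * M / rho."""
    mol_per_l = absorbance / (molar_absorptivity * thickness_cm)
    return 100.0 * mol_per_l * molar_mass_g / density_g_per_l

# Hypothetical measurement: molecular-water band with A = 0.35,
# eps = 1.61 L/(mol cm), a 0.02 cm thick wafer, glass density 2350 g/L.
print(round(species_wt_percent(0.35, 1.61, 2350.0, 0.02), 2))
```

In practice the molar absorptivity itself is what the standard glasses calibrate, separately for the molecular-water and hydroxyl bands.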

For all the compositions studied, most of the water is dissolved as hydroxyl groups at total water contents less than 3-4 weight percent; at higher total water contents, molecular water becomes the dominant species. For total water contents above 3-4 weight percent, the amount of water dissolved as hydroxyl groups is approximately constant at about 2 weight percent and additional water is incorporated as molecular water. Although there are small but measurable differences in the ratio of molecular water to hydroxyl groups at a given total water content among these silicate glasses, the speciation of water is similar over this range of composition. The trends in the concentrations of the H-bearing species in the hydrous glasses included in this study are similar to those observed in other silicate glasses using either infrared or NMR spectroscopy.

The effects of pressure and temperature on the speciation of water in albitic glasses have been investigated. The ratio of molecular water to hydroxyl groups at a given total water content is independent of the pressure and temperature of equilibration for albitic glasses synthesized in a rapidly quenching piston-cylinder apparatus at temperatures greater than 1000°C and pressures greater than 8 kbar. For hydrous glasses quenched from melts cooled at slower rates (i.e., in internally heated or in air-quench cold seal pressure vessels), there is an increase in the ratio of molecular water to hydroxyl group content that probably reflects reequilibration of the melt to lower temperatures during slow cooling.

Molecular water and hydroxyl group concentrations in glasses provide information on the dissolution mechanisms of water in silicate liquids. Several mixing models involving homogeneous equilibria of the form H_2O + O = 2OH among melt species have been explored for albitic melts. These models can account for the measured species concentrations if the effects of non-ideal behavior or mixing of polymerized units are included, or by allowing for the presence of several different types of anhydrous species.
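The simplest (ideal-mixing) version of such a speciation model can be solved numerically: with K = X_OH² / (X_H2O · X_O) on a single-oxygen basis, the hydroxyl fraction follows from the bulk water content by bisection. The equilibrium constant K = 0.2 below is an assumed illustrative value, not a constant fitted in the thesis.

```python
# Ideal-mixing speciation for the equilibrium H2O + O = 2OH:
# K = X_OH^2 / (X_H2O,mol * X_O), with X_H2O,mol = X_B - X_OH/2 and
# X_O = 1 - X_B - X_OH/2 (single-oxygen basis). K = 0.2 is assumed.

def speciate(x_bulk, K=0.2, tol=1e-12):
    """Return (X_H2O_molecular, X_OH) for bulk water mole fraction x_bulk."""
    lo, hi = 0.0, 2.0 * x_bulk        # X_OH/2 cannot exceed the bulk water
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        h2o = x_bulk - 0.5 * mid
        o = 1.0 - x_bulk - 0.5 * mid
        if mid * mid < K * h2o * o:   # too little OH relative to equilibrium
            lo = mid
        else:
            hi = mid
    x_oh = 0.5 * (lo + hi)
    return x_bulk - 0.5 * x_oh, x_oh

x_h2o, x_oh = speciate(0.10)
print(f"molecular water: {x_h2o:.4f}, hydroxyl: {x_oh:.4f}")
```

The qualitative behavior matches the trend described above: at low bulk water, OH dominates; as bulk water rises, molecular water takes over while the OH fraction saturates.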

A thermodynamic model for hydrous albitic melts has been developed based on the assumption that the activity of water in the melt is equal to the mole fraction of molecular water determined by infrared spectroscopy. This model can account for the position of the water-saturated solidus of crystalline albite, the pressure and temperature dependence of the solubility of water in albitic melt, and the volumes of hydrous albitic melts. To the extent that it is successful, this approach provides a direct link between measured species concentrations in hydrous albitic glasses and the macroscopic thermodynamic properties of the albite-water system.

The approach taken in modelling the thermodynamics of hydrous albitic melts has been generalized to other silicate compositions. Spectroscopic measurements of species concentrations in rhyolitic and simple silicate glasses quenched from melts equilibrated with water vapor provide important constraints on the thermodynamic properties of these melt-water systems. In particular, the assumption that the activity of water is equal to the mole fraction of molecular water has been tested in detail and shown to be a valid approximation for a range of hydrous silicate melts, and the partial molar volume of water in these systems has been constrained. Thus, the results of this study provide a useful thermodynamic description of hydrous melts that can be readily applied to other melt-water systems for which spectroscopic measurements of the H-bearing species are available.

Relevance:

20.00%

Publisher:

Abstract:

Proton transfer reactions at the interface of water with hydrophobic media, such as air or lipids, are ubiquitous on our planet. These reactions orchestrate a host of vital phenomena in the environment including, for example, acidification of clouds, enzymatic catalysis, chemistries of aerosol and atmospheric gases, and bioenergetic transduction. Despite their importance, however, quantitative details underlying these interactions have remained unclear. Deeper insight into these interfacial reactions is also required in addressing challenges in green chemistry, improved water quality, self-assembly of materials, the next generation of micro-nanofluidics, adhesives, coatings, catalysts, and electrodes. This thesis describes an experimental and theoretical investigation of proton transfer reactions at the air-water interface as a function of hydration gradients, electrochemical potential, and electrostatics. Since emerging insights hold at the lipid-water interface as well, this work is also expected to aid understanding of complex biological phenomena associated with proton migration across membranes.

Based on our current understanding, the physicochemical properties of gas-phase water are drastically different from those of bulk water. For example, the gas-phase hydronium ion, H3O+(g), can protonate most (non-alkane) organic species, whereas H3O+(aq) can neutralize only relatively strong bases. Thus, to be able to understand and engineer water-hydrophobe interfaces, it is imperative to investigate this fluctuating region of molecular thickness wherein the ‘function’ of chemical species transitions from one phase to another via steep gradients in hydration, dielectric constant, and density. Aqueous interfaces are difficult to approach with current experimental techniques because designing experiments that specifically sample interfacial layers (< 1 nm thick) is an arduous task. While recent advances in surface-specific spectroscopies have provided valuable information regarding the structure of aqueous interfaces, structure alone is inadequate to decipher function. Similarly, theoretical predictions based on classical molecular dynamics have remained limited in their scope.

Recently, we have adapted an analytical electrospray ionization mass spectrometer (ESIMS) for probing reactions at the gas-liquid interface in real time. This technique is direct, surface-specific, and provides unambiguous mass-to-charge ratios of interfacial species. With this innovation, we have been able to investigate the following:

1. How do anions mediate proton transfers at the air-water interface?

2. What is the basis for the negative surface potential at the air-water interface?

3. What is the mechanism for catalysis ‘on-water’?

In addition to our experiments with the ESIMS, we applied quantum mechanics and molecular dynamics to simulate our experiments toward gaining insight at the molecular scale. Our results unambiguously demonstrated the role of electrostatic reorganization of interfacial water during proton transfer events. With our experimental and theoretical results on the ‘superacidity’ of the surface of mildly acidic water, we also explored implications for atmospheric chemistry and green chemistry. Our most recent results explained the basis for the negative charge of the air-water interface and showed that the water-hydrophobe interface could serve as a site for enhanced autodissociation of water compared to the condensed phase.

Relevance:

20.00%

Publisher:

Abstract:

Humans are capable of distinguishing more than 5000 visual categories even in complex environments, using a variety of different visual systems all working in tandem. We seem to be capable of distinguishing thousands of different odors as well. In the machine learning community, many commonly used multi-class classifiers do not scale well to such large numbers of categories. This thesis demonstrates a method of automatically creating application-specific taxonomies to aid in scaling classification algorithms to more than 100 categories using both visual and olfactory data. The visual data consists of images collected online and pollen slides scanned under a microscope. The olfactory data was acquired by constructing a small portable sniffing apparatus which draws air over 10 carbon black polymer composite sensors. We investigate performance when classifying 256 visual categories, 8 or more species of pollen, and 130 olfactory categories sampled from common household items and a standardized scratch-and-sniff test. Taxonomies are employed in a divide-and-conquer classification framework which improves classification time while allowing the end user to trade performance for specificity as needed. Before classification can even take place, the pollen counter and electronic nose must filter out a high volume of background "clutter" to detect the categories of interest. In the case of pollen this is done with an efficient cascade of classifiers that rule out most non-pollen before invoking slower multi-class classifiers. In the case of the electronic nose, much of the extraneous noise encountered in outdoor environments can be filtered using a sniffing strategy which preferentially samples the sensor response at frequencies that are relatively immune to background contributions from ambient water vapor.
This combination of efficient background rejection with scalable classification algorithms is tested in detail for three separate projects: 1) the Caltech-256 Image Dataset, 2) the Caltech Automated Pollen Identification and Counting System (CAPICS) and 3) a portable electronic nose specially constructed for outdoor use.
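The divide-and-conquer idea above can be sketched in a few lines: each internal node of the taxonomy routes a sample to one child, so only O(depth) node classifiers run instead of one flat classifier over all categories. The per-node "routers" below are stand-in threshold functions and the category names are invented; they are not the trained classifiers or classes of the thesis.

```python
# Minimal sketch of taxonomy-based divide-and-conquer classification.
# Internal nodes hold a router (stand-in for a trained classifier) that
# picks a child; leaves hold the final category label.

class TaxonomyNode:
    def __init__(self, label=None, router=None, children=None):
        self.label = label            # set on leaves only
        self.router = router          # callable: sample -> child index
        self.children = children or []

    def classify(self, sample):
        node = self
        while node.children:          # walk down: O(depth) router calls
            node = node.children[node.router(sample)]
        return node.label

# Toy two-level taxonomy routed by simple feature thresholds.
leaves = [TaxonomyNode(label=name)
          for name in ("pollen", "dust", "smoke", "pollen-oak")]
tree = TaxonomyNode(
    router=lambda s: 0 if s[0] < 0.5 else 1,
    children=[
        TaxonomyNode(router=lambda s: 0 if s[1] < 0.5 else 1,
                     children=leaves[:2]),
        TaxonomyNode(router=lambda s: 0 if s[1] < 0.5 else 1,
                     children=leaves[2:]),
    ])

print(tree.classify((0.2, 0.9)))
```

The performance-versus-specificity trade-off mentioned above corresponds to stopping this walk early and reporting an internal node's subtree instead of a leaf.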

Relevance:

20.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns is less than the number of equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (the number of unknowns exceeds the number of equations). In recent times, however, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that despite the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
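One way to see why such sparse geometries help is through the difference co-array: two interleaved uniform subarrays with coprime spacings M and N yield O(MN) distinct correlation lags from only O(M+N) physical sensors, which is what lets correlation priors identify more sources than sensors. The construction below is the common textbook coprime-array form (N sensors at spacing M, 2M sensors at spacing N), offered as an illustration rather than the exact geometry of the thesis.

```python
# Difference co-array of a coprime array: O(M+N) sensors, O(M*N) lags.

def coprime_positions(M, N):
    # Subarray 1: N sensors at multiples of M; subarray 2: 2M sensors at
    # multiples of N (a standard coprime-array construction).
    return sorted(set(M * n for n in range(N)) |
                  set(N * m for m in range(2 * M)))

def difference_coarray(positions):
    # All pairwise differences p - q, i.e. the lags at which spatial
    # correlation values are available.
    return sorted({p - q for p in positions for q in positions})

pos = coprime_positions(3, 5)
lags = difference_coarray(pos)
print(len(pos), len(lags))       # far more lags than physical sensors
```

For M = 3, N = 5 the 10 physical sensors yield 43 distinct lags, including every consecutive lag from -MN to MN; it is this enlarged virtual aperture that sparse-support recovery exploits.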

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

20.00%

Publisher:

Abstract:

Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.

The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], where curve samplers with near-optimal randomness complexity were obtained.

In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree (m log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
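The low-degree restriction property mentioned above can be illustrated with a toy example: restrict a total-degree-d polynomial to a degree-t curve in F_p² and the result is a univariate polynomial of degree at most d·t, which can be checked via finite differences (the (dt+1)-th difference of a degree-≤dt polynomial vanishes). The parameters below are chosen for readability, not randomness efficiency.

```python
# A degree-t curve C(s) = (x(s), y(s)) in F_p^2 and the restriction of a
# degree-d polynomial f to it. deg f(C(s)) <= d*t, so its 7th finite
# difference (d*t = 6 here) vanishes mod p.

import random

p, t = 97, 2                      # small prime field; quadratic curve
random.seed(0)
c1 = [random.randrange(p) for _ in range(t + 1)]  # x(s) coefficients
c2 = [random.randrange(p) for _ in range(t + 1)]  # y(s) coefficients

def horner(coeffs, s):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * s + c) % p
    return acc

def curve(s):
    return horner(c1, s), horner(c2, s)

def f(x, y):                      # total degree d = 3
    return (x * x * y + 3 * y + 1) % p

vals = [f(*curve(s)) for s in range(8)]   # d*t + 2 = 8 consecutive points
diff = vals
for _ in range(7):                        # take the 7th finite difference
    diff = [(b - a) % p for a, b in zip(diff, diff[1:])]
print(diff)
```

This is the property that lets PCP and decoding applications reason locally: the global polynomial looks low-degree along every sampled curve.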

Relevance:

20.00%

Publisher:

Abstract:

The laminar to turbulent transition process in boundary layer flows in thermochemical nonequilibrium at high enthalpy is measured and characterized. Experiments are performed in the T5 Hypervelocity Reflected Shock Tunnel at Caltech, using a 1 m long, 5-degree half-angle axisymmetric cone instrumented with 80 fast-response annular thermocouples, complemented by boundary layer stability computations using the STABL software suite. A new mixing tank is added to the shock tube fill apparatus for premixed freestream gas experiments, and a new cleaning procedure results in more consistent transition measurements. Transition location is nondimensionalized using a scaling with the boundary layer thickness, which is correlated with the acoustic properties of the boundary layer, and compared with parabolized stability equation (PSE) analysis. In these nondimensionalized terms, transition delay with increasing CO2 concentration is observed: tests in 100% and 50% CO2, by mass, transition up to 25% and 15% later, respectively, than air experiments. These results are consistent with previous work indicating that CO2 molecules at elevated temperatures absorb acoustic instabilities in the MHz range, which is the expected frequency of the Mack second-mode instability at these conditions, and are also consistent with predictions from PSE analysis. A strong unit Reynolds number effect is observed, which is believed to arise from tunnel noise. Transition N factors (N_Tr) for air from 5.4 to 13.2 are computed, substantially higher than previously reported for noisy facilities. Time- and spatially-resolved heat transfer traces are used to track the propagation of turbulent spots, and convection rates at 90%, 76%, and 63% of the boundary layer edge velocity, respectively, are observed for the leading edge, centroid, and trailing edge of the spots. A model constructed with these spot propagation parameters is used to infer spot generation rates from the measured transition onset-to-completion distance.
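A purely kinematic sketch of what those convection rates imply: with the leading edge at 90% of the edge velocity and the trailing edge at 63%, a spot stretches linearly in time. The edge velocity and elapsed time below are assumed round numbers, not measurements from the experiments.

```python
# Kinematic turbulent-spot growth from the measured convection fractions
# (leading 0.90*Ue, centroid 0.76*Ue, trailing 0.63*Ue). Ue and t are
# assumed illustrative values.

Ue = 4000.0                  # m/s, representative edge velocity (assumed)
t = 50e-6                    # s, time since spot inception (assumed)

leading = 0.90 * Ue * t      # downstream position of the spot's front
centroid = 0.76 * Ue * t
trailing = 0.63 * Ue * t
length = leading - trailing  # spot length grows linearly in time

print(f"spot spans {trailing:.3f} m to {leading:.3f} m "
      f"(length {length:.3f} m)")
```

The difference between leading- and trailing-edge speeds is exactly what makes intermittency grow downstream, which the spot-generation-rate model inverts.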
Finally, a novel method to control transition location with boundary layer gas injection is investigated. An appropriate porous-metal injector section for the cone is designed and fabricated, and the efficacy of injected CO2 for delaying transition is gauged at various mass flow rates, and compared with both no injection and chemically inert argon injection cases. While CO2 injection seems to delay transition, and argon injection seems to promote it, the experimental results are inconclusive and matching computations do not predict a reduction in N factor from any CO2 injection condition computed.

Relevance:

20.00%

Publisher:

Abstract:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
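For intuition, the kind of distribution at issue can be computed classically by brute force for tiny instances; here is Fourier sampling over Z_2^n for a toy Boolean function (a simplification chosen for concreteness, not the thesis's exact construction). The point is that the classical computation touches all 2^n inputs per amplitude, while a quantum computer samples the same distribution with one function evaluation in superposition.

```python
# Brute-force classical computation of the Fourier-sampling distribution
# for f: {0,1}^n -> {+1,-1}: outcome s has probability fhat(s)^2, where
# fhat(s) = 2^-n * sum_x f(x) * (-1)^(s.x). Exponential cost in n.

from itertools import product

n = 3

def f(x):                              # a toy efficiently computable f
    return -1 if (x[0] & x[1]) ^ x[2] else 1

def fhat(s):
    total = 0
    for x in product((0, 1), repeat=n):
        sign = -1 if sum(a * b for a, b in zip(s, x)) % 2 else 1
        total += f(x) * sign
    return total / 2 ** n

dist = {s: fhat(s) ** 2 for s in product((0, 1), repeat=n)}
print(sum(dist.values()))              # Parseval: probabilities sum to 1
```

Parseval's identity guarantees the squared Fourier coefficients of a ±1-valued function form a genuine probability distribution, which is what the quantum sampler draws from.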

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.

Relevance:

20.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices," are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969] and [(670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
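A toy discrete version of this least-cost problem makes the structure concrete: choose the cheapest set of add-on controls whose combined reductions meet both emission targets. The thesis solves the continuous problem as a linear program; here only the 1975 base emission levels are taken from the text, and the measures, costs, reductions, and targets are invented for illustration.

```python
# Least-cost selection of "add-on" controls meeting RHC and NOx targets.
# Brute-force over subsets (a toy stand-in for the thesis's LP).

from itertools import combinations

# (name, annual cost $M, RHC reduction t/day, NOx reduction t/day) -
# all four rows are hypothetical.
measures = [
    ("used-car devices",  90, 200,  80),
    ("aircraft controls", 30,  60,   5),
    ("stationary RHC",    60, 150,  20),
    ("stationary NOx",    50,  10, 250),
]
base_rhc, base_nox = 670, 790        # 1975 base levels from the abstract
target_rhc, target_nox = 450, 550    # hypothetical targets

best = None
for r in range(len(measures) + 1):
    for subset in combinations(measures, r):
        rhc = base_rhc - sum(m[2] for m in subset)
        nox = base_nox - sum(m[3] for m in subset)
        if rhc <= target_rhc and nox <= target_nox:
            cost = sum(m[1] for m in subset)
            if best is None or cost < best[0]:
                best = (cost, sorted(m[0] for m in subset))
print(best)
```

With continuous control levels instead of on/off measures, this becomes exactly the linear program described above: minimize total cost subject to emission (and hence air quality) constraints.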

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance:

20.00%

Publisher:

Abstract:

Part I

Regression analyses are performed on in vivo hemodialysis data for the transfer of creatinine, urea, uric acid and inorganic phosphate to determine the effects of variations in certain parameters on the efficiency of dialysis with a Kiil dialyzer. In calculating the mass transfer rates across the membrane, the effects of cell-plasma mass transfer kinetics are considered. The concept of the effective permeability coefficient for the red cell membrane is introduced to account for these effects. A discussion of the consequences of neglecting cell-plasma kinetics, as has been done to date in the literature, is presented.

A physical model for the Kiil dialyzer is presented in order to calculate the available membrane area for mass transfer, the linear blood and dialysate velocities, and other variables. The equations used to determine the independent variables of the regression analyses are presented. The potential dependent variables in the analyses are discussed.

Regression analyses were carried out considering overall mass-transfer coefficients, dialysances, relative dialysances, and relative permeabilities for each substance as the dependent variables. The independent variables were linear blood velocity, linear dialysate velocity, the pressure difference across the membrane, the elapsed time of dialysis, the blood hematocrit, and the arterial plasma concentrations of each substance transferred. The resulting correlations are tabulated, presented graphically, and discussed. The implications of these correlations are discussed from the viewpoint of a research investigator and from the viewpoint of patient treatment.
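Dialysance, one of the dependent variables above, has the standard definition D = Q_B(C_Bi − C_Bo)/(C_Bi − C_Di): solute removal rate normalized by the blood-dialysate driving concentration difference. A small worked example with hypothetical creatinine numbers:

```python
# Dialysance: D = Q_B * (C_blood_in - C_blood_out) / (C_blood_in - C_dial_in).
# With fresh dialysate (C_dial_in = 0) this reduces to clearance.

def dialysance(q_blood_ml_min, c_blood_in, c_blood_out, c_dialysate_in):
    return (q_blood_ml_min * (c_blood_in - c_blood_out)
            / (c_blood_in - c_dialysate_in))

# Hypothetical: 200 mL/min blood flow, creatinine 10 -> 7 mg/dL across
# the dialyzer, fresh dialysate entering at 0 mg/dL.
print(dialysance(200, 10.0, 7.0, 0.0), "mL/min")
```

Note this is the whole-blood quantity; the cell-plasma kinetics discussed above matter because the solute available for transfer depends on how fast it crosses the red cell membrane, which is what the effective permeability coefficient captures.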

Recommendations for further experimental work are presented.

Part II

The interfacial structure of concurrent air-water flow in a two-inch diameter horizontal tube in the wavy flow regime has been measured using resistance wave gages. The median water depth, r.m.s. wave height, wave frequency, extrema frequency, and wave velocity have been measured as functions of air and water flow rates. Reynolds numbers, Froude numbers, Weber numbers, and bulk velocities for each phase may be calculated from these measurements. No theory for wave formation and propagation available in the literature was sufficient to describe these results.
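The dimensionless groups mentioned follow directly from the measured velocities and the tube geometry. In the sketch below only the two-inch (0.0508 m) diameter comes from the text; the fluid properties and bulk velocity are assumed textbook values for water near 20 °C.

```python
# Dimensionless groups for the water phase in a 0.0508 m tube.
# Property values and velocity are assumed, not measured data.

def reynolds(rho, v, L, mu):
    return rho * v * L / mu          # inertia / viscous forces

def froude(v, L, g=9.81):
    return v / (g * L) ** 0.5        # inertia / gravity

def weber(rho, v, L, sigma):
    return rho * v * v * L / sigma   # inertia / surface tension

rho, mu, sigma = 998.0, 1.0e-3, 0.072   # water at ~20 C, SI units
v, D = 0.5, 0.0508                      # assumed bulk velocity; tube ID

print(round(reynolds(rho, v, D, mu)),
      round(froude(v, D), 3),
      round(weber(rho, v, D, sigma), 1))
```

In the wavy regime all three groups matter, which is one reason no single-parameter wave theory collapses the data.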

The water surface level distribution generally is not adequately represented as a stationary Gaussian process. Five types of deviation from the Gaussian process function were noted in this work. The presence of the tube walls and the relatively large interfacial shear stresses precludes the use of simple statistical analyses to describe the interfacial structure. A detailed study of the behavior of individual fluid elements near the interface may be necessary to describe adequately wavy two-phase flow in systems similar to the one used in this work.

Relevance:

20.00%

Publisher:

Abstract:

Isoprene (ISO), the most abundant non-methane VOC, is the major contributor to secondary organic aerosol (SOA) formation. The mechanisms involved in such transformation, however, are not fully understood. Current mechanisms, which are based on the oxidation of ISO in the gas phase, underestimate SOA yields. The heightened awareness that ISO is only partially processed in the gas phase has turned attention to heterogeneous processes as alternative pathways toward SOA.

During my research project, I investigated the photochemical oxidation of isoprene in bulk water. Below, I will report on the λ > 305 nm photolysis of H2O2 in dilute ISO solutions. This process yields C10H15OH species as primary products, whose formation both requires and is inhibited by O2. Several isomers of C10H15OH were resolved by reverse-phase high-performance liquid chromatography and detected as MH+ (m/z = 153) and MH+-18 (m/z = 135) signals by electrospray ionization mass spectrometry. This finding is consistent with the addition of ·OH to ISO, followed by HO-ISO· reactions with ISO (in competition with O2) leading to second generation HO(ISO)2· radicals that terminate as C10H15OH via β-H abstraction by O2.
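A quick mass-bookkeeping check of those assignments, at unit resolution with average atomic masses: C10H15OH (i.e., C10H16O) gives the observed MH+ at m/z 153 and the water-loss fragment MH+ − 18 at m/z 135.

```python
# Nominal-mass check for the C10H15OH assignment: MH+ and MH+ - H2O.
# Average atomic masses; unit-resolution arithmetic only.

masses = {"C": 12.011, "H": 1.008, "O": 15.999}

def mw(formula):                 # formula as {element: count}
    return sum(masses[el] * n for el, n in formula.items())

m = mw({"C": 10, "H": 16, "O": 1})   # C10H15OH written as C10H16O
mh = m + masses["H"]                 # protonated ion (proton ~ H mass here)
print(round(mh), round(mh - mw({"H": 2, "O": 1})))
```

This matches the m/z = 153 and m/z = 135 signals reported above, consistent with OH addition to ISO followed by a second ISO addition.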

It is not generally realized that chemistry on the surface of water cannot be deduced, extrapolated, or translated from chemistry in the bulk gas and liquid phases. The water density drops a thousand-fold within a few angstroms through the gas-liquid interfacial region, and therefore hydrophobic VOCs such as ISO will likely remain in these relatively 'dry' interfacial water layers rather than proceed into bulk water. In previous experiments from our laboratory, it was found that gas-phase olefins can be protonated on the surface of pH < 4 water. This phenomenon increases the residence time of gases at the interface, an event that makes them increasingly susceptible to interaction with gaseous atmospheric oxidants such as ozone and hydroxyl radicals.

In order to test this hypothesis, I carried out experiments in which ISO(g) collides with the surface of aqueous microdroplets of various compositions. Herein I report that ISO(g) is oxidized into soluble species via Fenton chemistry on the surface of aqueous Fe(II)Cl2 solutions simultaneously exposed to H2O2(g). Monomer and oligomeric species (ISO)1-8H+ were detected via online electrospray ionization mass spectrometry (ESI-MS) on the surface of pH ~ 2 water, and were then oxidized into a suite of products whose combined yields exceed ~ 5% of (ISO)1-8H+. MS/MS analysis revealed that products mainly consisted of alcohols, ketones, epoxides, and acids. Our experiments demonstrated that olefins in ambient air may be oxidized upon impact on the surface of Fe-containing aqueous acidic media, such as those typical of tropospheric aerosols.

Related experiments involving the reaction of ISO(g) with ·OH radicals from the photolysis of dissolved H2O2 were also carried out to test the surface oxidation of ISO(g) by photolyzing H2O2(aq) at 266 nm at various pH. The products were analyzed via online electrospray ionization mass spectrometry. Similar to our Fenton experiments, we detected (ISO)1-7H+ at pH < 4, and new m/z+ = 271 and m/z- = 76 products at pH > 5.

Relevance:

20.00%

Publisher:

Abstract:

Part I

A study of the thermal reaction of water vapor and parts-per-million concentrations of nitrogen dioxide was carried out at ambient temperature and at atmospheric pressure. Nitric oxide and nitric acid vapor were the principal products. The initial rate of disappearance of nitrogen dioxide was first order with respect to water vapor and second order with respect to nitrogen dioxide. An initial third-order rate constant of 5.5 (± 0.29) × 10^4 liter^2 mole^-2 sec^-1 was found at 25°C. The rate of reaction decreased with increasing temperature. In the temperature range of 25°C to 50°C, an activation energy of -978 (± 20) calories was found.

The reaction did not go to completion. From measurements as the reaction approached equilibrium, the free energy of nitric acid vapor was calculated. This value was -18.58 (± 0.04) kilocalories at 25˚C.

The initial rate of reaction was unaffected by the presence of oxygen and was retarded by the presence of nitric oxide. There were no appreciable effects due to the surface of the reactor. Nitric oxide and nitrogen dioxide were monitored by gas chromatography during the reaction.

Part II

The air oxidation of nitric oxide, and the oxidation of nitric oxide in the presence of water vapor, were studied in a glass reactor at ambient temperatures and at atmospheric pressure. The concentration of nitric oxide was less than 100 parts-per-million. The concentration of nitrogen dioxide was monitored by gas chromatography during the reaction.

For the dry oxidation, the third-order rate constant was 1.46 (± 0.03) × 10^4 liter^2 mole^-2 sec^-1 at 25°C. The activation energy, obtained from measurements between 25°C and 50°C, was -1.197 (± 0.02) kilocalories.
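A back-of-envelope application of that rate constant: converting a 100 ppm NO level (matching the stated experimental range) and 21% O2 to molar concentrations at 1 atm and 25 °C gives the initial rate. The rate-law convention rate = k[NO]²[O2] for NO2 appearance is an assumption for illustration; stoichiometric-factor conventions vary.

```python
# Initial rate of the third-order dry oxidation 2 NO + O2 -> 2 NO2
# using k = 1.46e4 liter^2 mole^-2 sec^-1 (25 C, from above).
# Convention rate = k [NO]^2 [O2] is assumed.

R = 0.08206                  # L atm / (mol K)
T = 298.15                   # 25 C in kelvin
molar_volume = R * T         # L/mol of ideal gas at 1 atm

no_c = 100e-6 / molar_volume     # 100 ppm NO as mol/L
o2_c = 0.21 / molar_volume       # 21% O2 as mol/L
rate = 1.46e4 * no_c ** 2 * o2_c # mol / (L s)
print(f"{rate:.2e} mol/(L s)")
```

The tiny absolute rate (a few nanomolar per second) is why ppm-level NO oxidation is slow in air despite the large excess of oxygen, and why the third-order form with its negative activation energy was measurable over hours-long runs.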

The presence of water vapor during the oxidation caused the formation of nitrous acid vapor when nitric oxide, nitrogen dioxide, and water vapor combined. By measuring the difference between the concentrations of nitrogen dioxide during the wet and dry oxidations, the rate of formation of nitrous acid vapor was found. The third-order rate constant for the formation of nitrous acid vapor was 1.5 (± 0.5) × 10^5 liter^2 mole^-2 sec^-1 at 40°C. The reaction rate did not change measurably when the temperature was increased to 50°C. The formation of nitric acid vapor was prevented by keeping the concentration of nitrogen dioxide low.

Surface effects were appreciable for the wet tests. Below 35˚C, the rate of appearance of nitrogen dioxide increased with increasing surface. Above 40˚C, the effect of surface was small.

Relevance:

20.00%

Publisher:

Abstract:

I. PREAMBLE AND SCOPE

Brief introductory remarks, together with a definition of the scope of the material discussed in the thesis, are given.

II. A STUDY OF THE DYNAMICS OF TRIPLET EXCITONS IN MOLECULAR CRYSTALS

Phosphorescence spectra of pure crystalline naphthalene at room temperature and at 77 K are presented. The lifetime of the lowest triplet ^3B_1u state of the crystal is determined from measurements of the time-dependence of the phosphorescence decay after termination of the excitation light. The fact that this lifetime is considerably shorter in the pure crystal at room temperature than in isotopic mixed crystals at 4.2 K is discussed, with special importance being attached to the mobility of triplet excitons in the pure crystal.

Excitation spectra of the delayed fluorescence and phosphorescence from crystalline naphthalene and anthracene are also presented. The equation governing the time- and spatial-dependence of the triplet exciton concentration in the crystal is discussed, along with several approximate equations obtained from the general equation under certain simplifying assumptions. The influence of triplet exciton diffusion on the observed excitation spectra and the possibility of using the latter to investigate the former is also considered. Calculations of the delayed fluorescence and phosphorescence excitation spectra of crystalline naphthalene are described.

A search for absorption of additional light quanta by triplet excitons in naphthalene and anthracene crystals failed to produce any evidence for the phenomenon. This apparent absence of triplet-triplet absorption in pure crystals is attributed to a low steady-state triplet concentration, due to processes like triplet-triplet annihilation, resulting in an absorption too weak to be detected with the apparatus used in the experiments. A comparison of triplet-triplet absorption by naphthalene in a glass at 77˚ K with that by naphthalene-h₈ in naphthalene-d₈ at 4.2˚ K is given. A broad absorption in the isotopic mixed crystal triplet-triplet spectrum has been tentatively interpreted in terms of coupling between the guest ³B₁ᵤ state and the conduction band and charge-transfer states of the host crystal.

III. AN INVESTIGATION OF DELAYED LIGHT EMISSION FROM Chlorella pyrenoidosa

An apparatus capable of measuring emission lifetimes in the range 5 × 10⁻⁹ sec to 6 × 10⁻³ sec is described in detail. A cw argon ion laser beam, interrupted periodically by means of an electro-optic shutter, serves as the excitation source. Rapid sampling techniques coupled with signal averaging and digital data acquisition comprise the sensitive detection and readout portion of the apparatus. The capabilities of the equipment are adequately demonstrated by the results of a determination of the fluorescence lifetime of 5,6,11,12-tetraphenylnaphthacene in benzene solution at room temperature. Details of the numerical methods used in the final data reduction are also described.

The results of preliminary measurements of delayed light emission from Chlorella pyrenoidosa in the range 10⁻³ sec to 1 sec are presented. Effects on the emission of an inhibitor and of variations in the excitation light intensity have been investigated. Kinetic analysis of the emission decay curves obtained under these various experimental conditions indicates that in the millisecond-to-second time interval the decay is adequately described by the sum of two first-order decay processes. The values of the time constants of these processes appear to be sensitive both to added inhibitor and to excitation light intensity.
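Fitting a decay curve to the sum of two first-order processes is a standard nonlinear least-squares problem; a minimal sketch on synthetic data, assuming illustrative time constants of 5 ms and 200 ms (hypothetical values, not the thesis results):

```python
import numpy as np
from scipy.optimize import curve_fit


def two_exp(t, a1, tau1, a2, tau2):
    """Sum of two first-order decay processes."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)


# Synthetic decay over the millisecond-to-second window discussed above.
t = np.linspace(1e-3, 1.0, 500)
y = two_exp(t, 1.0, 5e-3, 0.3, 0.2)

# Initial guesses must bracket the two time scales for the fit to separate them.
popt, _ = curve_fit(two_exp, t, y, p0=[1.0, 1e-2, 0.1, 0.1])
a1, tau1, a2, tau2 = popt
tau_fast, tau_slow = sorted([tau1, tau2])
```

Tracking how tau_fast and tau_slow shift with added inhibitor or excitation intensity is exactly the kind of sensitivity analysis the abstract describes.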

Abstract:

Mass transfer from wetted surfaces on one-inch cylinders with unwetted approach sections was studied experimentally by means of the evaporation of n-octane and n-heptane into an air stream in axisymmetrical flow, for Reynolds numbers from 5,000 to 310,000. A transition from the laminar to the turbulent boundary layer was observed to occur at Reynolds numbers from 10,000 to 15,000. The results were expressed in terms of the Sherwood number as a function of the Reynolds number, the Schmidt number, and the ratio of the unwetted approach length to the total length. Empirical formulas were obtained for both laminar and turbulent regimes. The rates of mass transfer obtained were higher than theoretical and experimental results obtained by previous investigators for mass and heat transfer from flat plates.

Abstract:

Part I

The latent heat of vaporization of n-decane is measured calorimetrically at temperatures between 160° and 340°F. The internal energy change upon vaporization and the specific volume of the vapor at its dew point are calculated from these data and are included in this work. The measurements are in excellent agreement with available data at 77°F and also at 345°F, and are presented in graphical and tabular form.
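The internal energy change upon vaporization follows from the latent heat through Δu = Δh − P(v_g − v_l); a minimal sketch with arbitrary illustrative numbers (any consistent unit set, not the n-decane data):

```python
def internal_energy_of_vaporization(delta_h, pressure, v_gas, v_liq):
    """Delta u = Delta h - P * (v_gas - v_liq).

    delta_h: latent heat per unit mass; pressure: saturation pressure;
    v_gas, v_liq: specific volumes at the dew and bubble points.
    Units must be mutually consistent (the example values are hypothetical).
    """
    return delta_h - pressure * (v_gas - v_liq)


# Illustrative call, not measured n-decane values:
delta_u = internal_energy_of_vaporization(100.0, 2.0, 10.0, 0.1)
```

Run in reverse, the same relation lets the vapor specific volume at the dew point be recovered from the measured latent heat, as done in this part of the work.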

Part II

Simultaneous material and energy transport from a one-inch adiabatic porous cylinder is studied as a function of free stream Reynolds Number and turbulence level. Experimental data are presented for Reynolds Numbers between 1,600 and 15,000, based on the cylinder diameter, and for apparent turbulence levels between 1.3 and 25.0 per cent. n-Heptane and n-octane are the evaporating fluids used in this investigation.

Gross Sherwood Numbers are calculated from the data and are in substantial agreement with existing correlations of the results of other workers. The Sherwood Numbers, characterizing mass transfer rates, increase approximately as the 0.55 power of the Reynolds Number. At a free stream Reynolds Number of 3700 the Sherwood Number showed a 40% increase as the apparent turbulence level of the free stream was raised from 1.3 to 25 per cent.
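The reported Sh ∝ Re^0.55 scaling makes it easy to predict how the mass transfer rate changes between two flow conditions; a minimal sketch, using the 0.55 exponent from the text and illustrative Reynolds Numbers:

```python
def sherwood_scale(re_from, re_to, exponent=0.55):
    """Ratio Sh(re_to) / Sh(re_from) assuming Sh varies as Re**0.55."""
    return (re_to / re_from) ** exponent


# Doubling the Reynolds Number (e.g. 3,700 -> 7,400, illustrative choice):
ratio = sherwood_scale(3700.0, 7400.0)
```

Doubling Re raises Sh by a factor of 2^0.55, roughly 1.46; by comparison, the 40% increase observed at fixed Re = 3700 came purely from raising the free stream turbulence level.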

Within the uncertainties involved in the diffusion coefficients used for n-heptane and n-octane, the Sherwood Numbers are comparable for both materials. A dimensionless Frössling Number is computed which characterizes either heat or mass transfer rates for cylinders on a comparable basis. The calculated Frössling Numbers based on mass transfer measurements are in substantial agreement with Frössling Numbers calculated from the data of other workers in heat transfer.