9 results for CONTAMINATION

in CaltechTHESIS


Relevance: 10.00%

Abstract:

The uptake of Cu, Zn, and Cd by fresh water plankton was studied by analyzing samples of water and plankton from six lakes in southern California. Co, Pb, Mn, Fe, Na, K, Mg, Ca, Sr, Ba, and Al were also determined in the plankton samples. Special precautions were taken during sampling and analysis to avoid metal contamination.

The relation between aqueous metal concentrations and the concentrations of metals in plankton was studied by plotting aqueous and plankton metal concentrations vs time and comparing the plots. No plankton metal plot showed the same changes as its corresponding aqueous metal plot, though long-term trends were similar. Thus, passive sorption did not completely explain plankton metal uptake.

The fractions of Cu, Zn, and Cd in lake water associated with plankton were calculated; in every case these fractions were less than 1%.

To see whether or not plankton metal uptake could deplete aqueous metal concentrations by measurable amounts (e.g. 20%) in short periods (e.g. less than six days), three integrated rate equations were used as models of plankton metal sorption. Parameters for the equations were taken from actual field measurements. Measurable reductions in concentration within short times were predicted by all three equations when the concentration factor was greater than 10^5. All Cu concentration factors were less than 10^5.
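
The role of the concentration factor can be made concrete with a minimal sketch, not the thesis's three integrated rate equations (which the abstract does not reproduce) but a simple equilibrium-partitioning estimate; the plankton biomass value is an assumed placeholder.

```python
# A minimal sketch, not the thesis's integrated rate equations: an
# equilibrium-partitioning estimate of how much dissolved metal plankton can
# remove, given a concentration factor CF and an assumed plankton biomass B.

def fraction_sorbed(cf, biomass_kg_per_L):
    """Fraction of dissolved metal transferred to plankton at equilibrium.

    cf               -- concentration factor, (metal per kg plankton) / (metal per L water)
    biomass_kg_per_L -- plankton biomass per litre of lake water (assumed value)
    """
    x = cf * biomass_kg_per_L
    return x / (1.0 + x)

if __name__ == "__main__":
    B = 2e-6  # kg plankton per litre of water -- illustrative, not a measured value
    for cf in (1e4, 1e5, 1e6):
        print(f"CF = {cf:.0e}: fraction of aqueous metal removed ~ {fraction_sorbed(cf, B):.1%}")
    # Only for CF >~ 1e5 does the removed fraction approach the ~20% level
    # that the abstract treats as a measurable depletion.
```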

The role of plankton in regulating metal concentrations was considered in the context of a model of trace metal chemistry in lakes. The model assumes that all particles can be represented by a single solid phase and that this solid phase controls aqueous metal concentrations. A term for the rate of in situ production of particulate matter is included, and primary productivity was used for this parameter. In San Vicente Reservoir, the test case, the rate of in situ production of particulate matter was of the same order of magnitude as the rate of introduction of particulate matter by the influent stream.

Relevance: 10.00%

Abstract:

Part I.

In recent years, backscattering spectrometry has become an important tool for the analysis of thin films. An inherent limitation, though, is the loss of depth resolution due to energy straggling of the beam. To investigate this, the energy straggling of 4He has been measured in thin films of Ni, Al, Au, and Pt. Straggling is roughly proportional to the square root of thickness, appears to have a slight energy dependence, and generally decreases with decreasing atomic number of the absorber. The results are compared with predictions of theory and with previous measurements. While the Ni measurements are in fair agreement with Bohr's theory, the Al measurements are 30% above and the Au measurements 40% below the predicted values. The Au and Pt measurements give straggling values which are close to one another.
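
For reference, a minimal sketch of the Bohr estimate these measurements are compared against, using the commonly quoted numerical form of Bohr's formula; the film thicknesses are illustrative rather than those measured in the thesis.

```python
# Sketch of Bohr's energy-straggling estimate, the baseline for Part I.
# Bohr straggling (one standard deviation) is Omega_B^2 = 4*pi*Z1^2*e^4*N*Z2*t,
# which for a helium beam (Z1 = 2) reduces numerically to
# Omega_B^2 [keV^2] ~= 0.26 * Z1^2 * Z2 * (N*t / 1e18 atoms cm^-2).
# Film thicknesses below are illustrative, not the ones used in the thesis.

AVOGADRO = 6.022e23

# element: (Z2, density g/cm^3, atomic mass g/mol)
TARGETS = {"Al": (13, 2.70, 26.98), "Ni": (28, 8.91, 58.69),
           "Pt": (78, 21.45, 195.08), "Au": (79, 19.30, 196.97)}

def bohr_straggling_keV(element, thickness_nm, z1=2):
    """Bohr straggling (1 sigma, keV) of a Z1 ion through a film of given thickness."""
    z2, rho, a = TARGETS[element]
    n = rho / a * AVOGADRO            # atoms per cm^3
    nt = n * thickness_nm * 1e-7      # areal density, atoms per cm^2
    omega_sq = 0.26 * z1**2 * z2 * (nt / 1e18)   # keV^2
    return omega_sq ** 0.5

if __name__ == "__main__":
    for el in ("Al", "Ni", "Pt", "Au"):
        print(el, [round(bohr_straggling_keV(el, t), 2) for t in (50, 100, 200, 400)])
    # Note the sqrt(thickness) scaling (doubling t multiplies Omega by ~1.41)
    # and the increase with Z2; the measured Al values lie ~30% above and the
    # Au values ~40% below these Bohr estimates.
```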

Part II.

MeV backscattering spectrometry and X-ray diffraction are used to investigate the behavior of sputter-deposited Ti-W mixed films on Si substrates. During vacuum anneals at temperatures near 700°C for several hours, the metallization layer reacts with the substrate. Backscattering analysis shows that the resulting compound layer is uniform in composition and contains Ti, W, and Si. The Ti:W ratio in the compound corresponds to that of the deposited metal film. X-ray analyses with Reed and Guinier cameras reveal the presence of the ternary TixW(1-x)Si2 compound. Its composition is unaffected by oxygen contamination during annealing, but the reaction rate is affected. The rate measured on samples with about 15% oxygen contamination after annealing is linear, of the order of 0.5 Å per second at 725°C, and depends on the crystallographic orientation of the substrate and on the dc bias during sputter deposition of the Ti-W film.
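
As a rough illustration of what a linear (interface-limited) rate of this order implies, assuming the quoted ~0.5 Å/s at 725°C is sustained over the whole anneal:

```python
# Quick arithmetic, assuming the quoted linear (interface-limited) rate of
# ~0.5 angstrom/s at 725 C holds for the full anneal duration.
RATE_A_PER_S = 0.5  # from the abstract

for hours in (1, 2, 4):
    thickness_A = RATE_A_PER_S * hours * 3600
    print(f"{hours} h anneal -> ~{thickness_A:.0f} A (~{thickness_A / 1e4:.2f} um) of silicide")
```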

Au layers of about 1000 Å thickness were deposited onto unreacted Ti-W films on Si. When annealed at 400°C, these samples underwent a color change, and SEM micrographs showed that an intricate pattern of fissures, typically 3 µm wide, had evolved. Analysis by electron microprobe revealed that Au had segregated preferentially into the fissures. This result suggests that Ti-W is not a barrier to Au-Si intermixing at 400°C.

Relevance: 10.00%

Abstract:

This work addresses four topics dealing with the properties of the luminescence from Ge.

The temperature, pump-power, and time dependences of the photoluminescence spectra of Li-, As-, Ga-, and Sb-doped Ge crystals were studied. For impurity concentrations less than about 10^15 cm^-3, emissions due to electron-hole droplets can clearly be identified. For impurity concentrations on the order of 10^16 cm^-3, the broad lines in the spectra, which have previously been attributed to emission from the electron-hole-droplet, were found to possess pump-power- and time-dependent line shapes. These properties show that these broad lines cannot be due to emission of electron-hole-droplets alone. We interpret these lines as a combination of emissions from (1) electron-hole-droplets, (2) broadened multiexciton complexes, (3) broadened bound excitons, and (4) a plasma of electrons and holes. The properties of the electron-hole-droplet in As-doped Ge were shown to agree with theoretical predictions.

The time dependences of the luminescence intensities of the electron-hole-droplet in pure and doped Ge were investigated at 2 and 4.2 K. The decay of the electron-hole-droplet in pure Ge at 4.2 K was found to be pump-power dependent and too slow to be explained by the widely accepted model due to Pokrovskii and Hensel et al. Detailed studies of the decay of electron-hole-droplets in doped Ge were carried out for the first time, and we find no evidence of evaporation of excitons by electron-hole-droplets at 4.2 K. This doped-Ge result is not explained by the model of Pokrovskii and Hensel et al. It is shown that a model based on a cloud of electron-hole-droplets generated in the crystal, incorporating (1) exciton flow among electron-hole-droplets in the cloud and (2) exciton diffusion away from the cloud, is capable of explaining the observed results.

It is shown that impurities introduced during device fabrication can lead to the previously reported differences between the spectra of laser-excited high-purity Ge and electrically excited Ge double injection devices. By properly choosing the device geometry so as to minimize the Li contamination introduced in this way, the Li concentration in double injection devices may be reduced to less than about 10^15 cm^-3, and electrically excited luminescence spectra similar to the photoluminescence spectra of pure Ge may be produced. This proves conclusively that electron-hole-droplets may be created in double injection devices by electrical excitation.

The ratio of the LA- to TO-phonon-assisted luminescence intensities of the electron-hole-droplet is demonstrated to be equal to the high-temperature limit of the same ratio for the exciton in Ge. This result gives one confidence in determining similar ratios for the electron-hole-droplet from the corresponding exciton ratio in semiconductors in which the ratio for the electron-hole-droplet cannot be determined directly (e.g., Si and GaP). Knowing the value of this ratio for the electron-hole-droplet, one can obtain accurate values of many parameters of the electron-hole-droplet in these semiconductors spectroscopically.

Relevance: 10.00%

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that both characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at higher redshift in the observed near-infrared, in order to discern whether this evolution continues or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we achieve secure detections in only two of the 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that so few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.
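
The form of this Monte Carlo test can be sketched as follows; in the thesis the per-source detection probability follows from the z ~ 6 equivalent-width distribution and each object's flux limit, whereas here it is replaced by a single illustrative value chosen so that ~7-8 detections are expected among 19 targets.

```python
# A minimal sketch of this kind of Monte Carlo significance test.
import numpy as np

rng = np.random.default_rng(0)

def fraction_as_few(n_sources, p_detect, n_observed, n_trials=100_000):
    """Fraction of simulated surveys yielding <= n_observed Lyman-alpha detections."""
    detections = rng.binomial(n_sources, p_detect, size=n_trials)
    return np.mean(detections <= n_observed)

if __name__ == "__main__":
    # 19 targets; expecting ~7-8 detections implies p_detect ~ 0.4 (illustrative)
    frac = fraction_as_few(n_sources=19, p_detect=0.4, n_observed=2)
    print(f"Simulated surveys with <= 2 detections: {frac:.2%} of trials")
    # With these illustrative numbers the fraction comes out well under 1%,
    # in line with the significance quoted in the abstract.
```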

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field and in a number of additional, wider Hubble Space Telescope surveys to construct luminosity functions at both z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter than any previous survey, respectively. With this increased depth, we achieve some of the most robust constraints on the Schechter function faint-end slopes at these redshifts, finding very steep values of alpha_{z~7} = -1.87 +/- 0.18 and alpha_{z~8} = -1.94 +/- 0.23. We discuss these results in the context of cosmic reionization, and show that, given reasonable assumptions about the ionizing spectra and the escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_{UV} = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
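
A minimal sketch of this photon bookkeeping is given below: a Schechter luminosity function with the quoted faint-end slope is integrated down to different limits and the resulting luminosity densities compared. The characteristic magnitude M* and the magnitude limits are illustrative assumptions, not the thesis's fitted values.

```python
# Integrate a Schechter UV luminosity function,
# phi(x) dx = phi* x^alpha e^-x dx with x = L/L*, down to different faint limits.
from scipy.special import gamma, gammaincc

def lum_density_ratio(alpha, m_star, m_faint, m_ref=-13.0):
    """rho_UV integrated down to m_faint, divided by rho_UV integrated to m_ref."""
    def rho(m_limit):
        x_min = 10 ** (-0.4 * (m_limit - m_star))          # L_min / L*
        # integral of x^(alpha + 1) e^-x from x_min to infinity (up to phi* L*)
        return gamma(alpha + 2) * gammaincc(alpha + 2, x_min)
    return rho(m_faint) / rho(m_ref)

if __name__ == "__main__":
    alpha, m_star = -1.87, -20.0                           # alpha from the z~7 fit; M* assumed
    for m_faint in (-18.0, -17.0, -16.0):
        r = lum_density_ratio(alpha, m_star, m_faint)
        print(f"Galaxies brighter than M_UV = {m_faint:.0f} supply ~{r:.0%} "
              f"of the UV light of a luminosity function extended to M_UV = -13")
```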

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission, and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_{Ly alpha} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.
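
The photometric-excess comparison can be illustrated with a short sketch; the filter parameters and magnitudes below are placeholders rather than the survey's values.

```python
# If a strong line ([OIII]+H-beta here) falls in the K_s band, its flux can be
# estimated from how much brighter the band is than the underlying continuum.
C_ANGSTROM_PER_S = 2.998e18

def ab_to_fnu(mag_ab):
    """AB magnitude -> flux density in erg s^-1 cm^-2 Hz^-1."""
    return 10 ** (-0.4 * (mag_ab + 48.6))

def line_flux_from_excess(m_total, m_continuum, lam_eff_A=21500.0, dlam_A=3000.0):
    """Emission-line flux (erg s^-1 cm^-2) implied by a broadband excess."""
    excess_fnu = ab_to_fnu(m_total) - ab_to_fnu(m_continuum)
    dnu = C_ANGSTROM_PER_S * dlam_A / lam_eff_A**2   # filter width in Hz
    return excess_fnu * dnu

if __name__ == "__main__":
    # The continuum level would in practice be interpolated from adjacent bands.
    flux = line_flux_from_excess(m_total=24.0, m_continuum=24.5)
    print(f"Implied line flux ~ {flux:.1e} erg/s/cm^2")
```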

Finally, we utilize the recent CANDELS wide-field infrared photometry over the GOODS-N and GOODS-S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 star-forming galaxies at 3 < z < 6, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method which accurately accounts for contamination and obscuration by skylines to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent width probability distribution. We then apply these results to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevance: 10.00%

Abstract:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary, for example in a basin with a low-velocity lid.
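
A minimal sketch of the ray-summation idea follows; the ray amplitudes and arrival times are placeholders standing in for the values that would follow from reflection/transmission coefficients, geometrical spreading, and the layered velocity model.

```python
# A synthetic record built by summing ray arrivals, each a delayed and scaled
# copy of the source time function.
import numpy as np

def triangle_stf(duration_s, dt):
    """Symmetric triangular source time function of the given duration."""
    n = max(int(round(duration_s / dt)), 2)
    half = np.linspace(0.0, 1.0, n // 2 + 1)
    return np.concatenate([half, half[-2::-1]])

def ray_sum_synthetic(rays, stf, dt, t_max):
    """Sum delayed, scaled copies of the source time function.

    rays -- iterable of (arrival_time_s, amplitude) pairs
    """
    trace = np.zeros(int(t_max / dt))
    for t_arr, amp in rays:
        i0 = int(round(t_arr / dt))
        seg = stf[: len(trace) - i0]
        trace[i0 : i0 + len(seg)] += amp * seg
    return trace

if __name__ == "__main__":
    dt = 0.05
    stf = triangle_stf(duration_s=2.0, dt=dt)        # ~2 s source duration
    rays = [(30.0, 1.0), (33.5, -0.4), (37.0, 0.2)]  # direct ray plus reverberations (illustrative)
    syn = ray_sum_synthetic(rays, stf, dt, t_max=60.0)
    print(f"{len(syn)} samples, peak amplitude {syn.max():.2f}")
```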

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks of the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with events there). These earthquakes are predominantly thrust-faulting events with an average east-west strike, but with many variations. Of the six earthquakes which had sufficient short-period data to accurately determine the source time history, five were complex events; that is, they could not be modeled as a simple point source but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms implying similar source parameters. By comparing recent well studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1942 and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to those of recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations which recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected value. An aftershock of the 1942 earthquake appears to be larger than previously thought.
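
Two first-order relations behind these comparisons can be sketched as follows; the crustal velocities and the amplitude ratio are illustrative, not the values used in the study.

```python
# (1) An S-P time converts to an epicentral distance of roughly
#     t_SP * Vp*Vs / (Vp - Vs).
# (2) For events with similar paths and mechanisms recorded at the same
#     station, long-period amplitude scales roughly with seismic moment, so a
#     historic moment can be scaled from a well-studied reference event.

def sp_distance_km(t_sp_s, vp=6.0, vs=3.5):
    """Epicentral distance implied by an S-minus-P time (assumed crustal velocities)."""
    return t_sp_s * vp * vs / (vp - vs)

def moment_from_amplitude_ratio(m0_reference, a_historic, a_reference):
    """Scale a reference seismic moment by the observed amplitude ratio."""
    return m0_reference * (a_historic / a_reference)

if __name__ == "__main__":
    print(f"t_SP = 20 s -> ~{sp_distance_km(20.0):.0f} km")
    # e.g. a historic record with half the amplitude of a 1e26 dyne cm reference event
    print(f"M0 ~ {moment_from_amplitude_ratio(1e26, 0.5, 1.0):.1e} dyne cm")
```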

Relevance: 10.00%

Abstract:

While synoptic surveys in the optical and at high energies have revealed a rich discovery phase space of slow transients, a similar yield is still awaited in the radio. The majority of past blind surveys, carried out with radio interferometers, have suffered from a low yield of slow transients, ambiguous transient classifications, and contamination by false positives. The newly refurbished Karl G. Jansky Very Large Array (Jansky VLA) offers wider bandwidths for accurate RFI excision as well as substantially improved sensitivity and survey speed compared with the old VLA. The Jansky VLA thus eliminates the pitfalls of interferometric transient searches by facilitating sensitive, wide-field, and near-real-time radio surveys, enabling a systematic exploration of the dynamic radio sky. This thesis aims to carry out blind Jansky VLA surveys characterizing radio variable and transient sources at frequencies of a few GHz and on timescales between days and years. Through joint radio and optical surveys, the thesis addresses outstanding questions pertaining to the rates of slow radio transients (e.g. radio supernovae, tidal disruption events, binary neutron star mergers, stellar flares, etc.), the false-positive foreground relevant to radio and optical counterpart searches for gravitational wave sources, and the beaming factor of gamma-ray bursts. The need for rapid processing of the Jansky VLA data and near-real-time transient searching has driven the development of state-of-the-art software infrastructure. This thesis successfully demonstrates the Jansky VLA as a powerful transient search instrument and serves as a pathfinder for the transient surveys planned for the SKA-mid pathfinder facilities, viz. ASKAP, MeerKAT, and WSRT/Apertif.

Relevance: 10.00%

Abstract:

The disposal of sewage is the most important item in public sanitation. It is the most important present-day problem in every city, whether large or small. The direct cause of the majority of epidemics is the contamination of a city's water supply by the excreta of man or animal. Public health varies directly with public sanitation, and if the public sanitation be good, the liability of sickness caused by contamination of the water supply is greatly lessened. When a city outgrows its sewerage system, the public health becomes endangered. There are two causes for the increased amount of sewage: increase in population and increase in industrial and manufacturing wastes. The main problem in this connection is the ultimate disposal of the matter which reaches the sewers.

Relevance: 10.00%

Abstract:

I. It was not possible to produce anti-tetracycline antibody in laboratory animals by any of the methods tried. Tetracycline-protein conjugates were prepared and characterized. It was shown that previous reports of the detection of anti-tetracycline antibody by in vitro methods were in error. Tetracycline precipitates non-specifically with serum proteins. The anaphylactic reaction reported was the result of misinterpretation, since the observations were inconsistent with the known mechanism of anaphylaxis and the supposed antibody would not sensitize guinea pig skin. The hemagglutination reaction was not reproducible and was extremely sensitive to minute amounts of microbial contamination. Both free tetracyclines and the conjugates were found to be poor antigens.

II. Anti-aspiryl antibodies were produced in rabbits using three protein carriers. The method of inhibition of precipitation was used to determine the specificity of the antibody produced. ε-Aminocaproate was found to be the most effective inhibitor of the haptens tested, indicating that the combining hapten of the protein is ε-aspiryl-lysyl. Free aspirin and salicylates were poor inhibitors and did not combine with the antibody to a significant extent. The ortho group was found to participate in the binding to antibody. The average binding constants were measured.

Normal rabbit serum was acetylated by aspirin under in vitro conditions, which are similar to physiological conditions. The extent of acetylation was determined by immunochemical tests. The acetylated serum proteins were shown to be potent antigens in rabbits. It was also shown that aspiryl proteins were partially acetylated. The relation of these results to human aspirin intolerance is discussed.

III. Aspirin did not induce contact sensitivity in guinea pigs when they were immunized by techniques that induce sensitivity with other reactive compounds. The acetylation mechanism is not relevant to this type of hypersensitivity, since sensitivity is not produced by potent acetylating agents like acetyl chloride and acetic anhydride. Aspiryl chloride, a totally artificial system, is a good sensitizer. Its specificity was examined.

IV. Protein conjugates were prepared with p-aminosalicylic acid and various carriers using azo, carbodiimide, and mixed-anhydride coupling. These antigens were injected into rabbits and guinea pigs, and no anti-hapten IgG or IgM response was obtained. Delayed hypersensitivity was produced in guinea pigs by immunization with the conjugates, and its specificity was determined. Guinea pigs were not sensitized by either injection or topical application of p-aminosalicylic acid or p-aminosalicylate.

Relevance: 10.00%

Abstract:

The nuclear resonant reaction 19F(p,αγ)16O has been used to perform depth-sensitive analyses of fluorine in lunar samples and carbonaceous chondrites. The resonance at 0.83 MeV (center-of-mass) in this reaction is utilized to study fluorine surface films, with particular interest paid to the outer micron of Apollo 15 green glass, Apollo 17 orange glass, and lunar vesicular basalts. These results are distinguished from terrestrial contamination, and are discussed in terms of a volcanic origin for the samples of interest. Measurements of fluorine in carbonaceous chondrites are used to better define the solar system fluorine abundance. A technique for measurement of carbon on solid surfaces with applications to direct quantitative analysis of implanted solar wind carbon in lunar samples is described.
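
A minimal sketch of how such a narrow resonance gives depth sensitivity: protons entering the sample above the resonance energy slow down and reach the resonance at a depth of roughly (E_beam - E_res,lab) / (dE/dx). The lab-frame resonance energy follows from the 0.83 MeV center-of-mass value; the stopping power below is an assumed round number, not the thesis's value.

```python
# Depth at which the incident protons have slowed to the resonance energy.
M_P, M_F = 1.0078, 18.9984        # atomic masses (u) of the proton and 19F

E_RES_CM_KEV = 830.0
E_RES_LAB_KEV = E_RES_CM_KEV * (M_P + M_F) / M_F    # ~874 keV in the lab frame

def probe_depth_nm(e_beam_keV, stopping_keV_per_um=45.0):
    """Depth (nm) probed for a given beam energy; stopping power is an assumed value."""
    if e_beam_keV < E_RES_LAB_KEV:
        return 0.0                                   # resonance not reached
    return (e_beam_keV - E_RES_LAB_KEV) / stopping_keV_per_um * 1000.0

if __name__ == "__main__":
    for e in (875.0, 900.0, 920.0):                  # illustrative beam energies
        print(f"E_beam = {e:.0f} keV -> fluorine profiled at ~{probe_depth_nm(e):.0f} nm")
    # Stepping the beam energy up to ~1 MeV scans roughly the outer micron,
    # the depth range of interest in the abstract.
```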