25 results for work function measurements

in CaltechTHESIS


Relevance:

100.00%

Abstract:

Part I

The latent heat of vaporization of n-decane is measured calorimetrically at temperatures between 160° and 340°F. The internal energy change upon vaporization and the specific volume of the vapor at its dew point are calculated from these data and are included in this work. The measurements are in excellent agreement with available data at 77°F and at 345°F, and are presented in graphical and tabular form.

Part II

Simultaneous material and energy transport from a one-inch adiabatic porous cylinder is studied as a function of free stream Reynolds Number and turbulence level. Experimental data are presented for Reynolds Numbers between 1600 and 15,000, based on the cylinder diameter, and for apparent turbulence levels between 1.3 and 25.0 per cent. n-Heptane and n-octane are the evaporating fluids used in this investigation.

Gross Sherwood Numbers are calculated from the data and are in substantial agreement with existing correlations of the results of other workers. The Sherwood Numbers, characterizing mass transfer rates, increase approximately as the 0.55 power of the Reynolds Number. At a free stream Reynolds Number of 3700 the Sherwood Number showed a 40% increase as the apparent turbulence level of the free stream was raised from 1.3 to 25 per cent.
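The reported power-law dependence can be made concrete with a short numerical sketch; only the 0.55 exponent comes from the measurements above, and the prefactor is a hypothetical placeholder:

```python
# Sketch of the reported scaling Sh ~ Re**0.55. Only the exponent comes from
# the measurements above; the prefactor c is a hypothetical placeholder.
def sherwood(re, c=0.2, exponent=0.55):
    """Gross Sherwood Number as a power law in free-stream Reynolds Number."""
    return c * re ** exponent

# Doubling the Reynolds Number raises the Sherwood Number by a factor of
# 2**0.55, i.e. roughly 46%, independent of the prefactor.
ratio = sherwood(3200.0) / sherwood(1600.0)
```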

Within the uncertainties involved in the diffusion coefficients used for n-heptane and n-octane, the Sherwood Numbers are comparable for both materials. A dimensionless Frössling Number is computed which characterizes either heat or mass transfer rates for cylinders on a comparable basis. The calculated Frössling Numbers based on mass transfer measurements are in substantial agreement with Frössling Numbers calculated from the data of other workers in heat transfer.

Relevance:

80.00%

Abstract:

The organometallic chemistry of the hexagonally close-packed Ru(001) surface has been studied using electron energy loss spectroscopy and thermal desorption mass spectrometry. The molecules that have been studied are acetylene, formamide and ammonia. The chemistry of acetylene and formamide has also been investigated in the presence of coadsorbed hydrogen and oxygen adatoms.

Acetylene is adsorbed molecularly on Ru(001) below approximately 230 K, with rehybridization of the molecule to nearly sp^3 occurring. The principal decomposition products at higher temperatures are ethylidyne (CCH_3) and acetylide (CCH) between 230 and 350 K, and methylidyne (CH) and surface carbon at higher temperatures. Some methylidyne is stable to approximately 700 K. The preadsorption of hydrogen does not alter the decomposition products of acetylene, but reduces the saturation coverage and also leads to the formation of a small amount of ethylene (via an η^2-CHCH_2 species) which desorbs molecularly near 175 K. Preadsorbed oxygen also reduces the saturation coverage of acetylene but has virtually no effect on the nature of the molecularly chemisorbed acetylene. It does, however, lead to the formation of an sp^2-hybridized vinylidene (CCH_2) species in the decomposition of acetylene, in addition to the decomposition products that are formed on the clean surface. There is no molecular desorption of chemisorbed acetylene from clean Ru(001), hydrogen-presaturated Ru(001), or oxygen-presaturated Ru(001).

The adsorption and decomposition of formamide has been studied on clean Ru(001), hydrogen-presaturated Ru(001), and Ru(001)-p(1x2)-O (oxygen adatom coverage = 0.5). On clean Ru(001), the adsorption of low coverages of formamide at 80 K results in CH bond cleavage and rehybridization of the carbonyl double bond to produce an η^2(C,O)-NH_2CO species. This species is stable to approximately 250 K, at which point it decomposes to yield a mixture of coadsorbed carbon monoxide, ammonia, an NH species and hydrogen adatoms. The decomposition of NH to hydrogen and nitrogen adatoms occurs between 350 and 400 K, and the thermal desorption products are NH_3 (~315 K), H_2 (~420 K), CO (~480 K) and N_2 (~770 K). At higher formamide coverages, some formamide is adsorbed molecularly at 80 K, leading both to molecular desorption and to the formation of a new surface intermediate between 300 and 375 K that is identified tentatively as η^1(N)-NCHO. On Ru(001)-p(1x2)-O and hydrogen-presaturated Ru(001), formamide adsorbs molecularly at 80 K in an η^1(O)-NH_2CHO configuration. On the oxygen-precovered surface, the molecularly adsorbed formamide undergoes competing desorption and decomposition, resulting in the formation of an η^2(N,O)-NHCHO species (analogous to a bidentate formate) at approximately 265 K. This species decomposes near 420 K with the evolution of CO and H_2 into the gas phase. On the hydrogen-precovered surface, the η^1(O)-NH_2CHO converts below 200 K to η^2(C,O)-NH_2CHO and η^2(C,O)-NH_2CO, with some molecular desorption occurring also at high coverage. The η^2(C,O)-bonded species decompose in a manner similar to the decomposition of η^2(C,O)-NH_2CO on the clean surface, although the formation of ammonia is not detected.

Ammonia adsorbs reversibly on Ru(001) at 80 K, with negligible dissociation occurring as the surface is annealed. The EEL spectra of ammonia on Ru(001) are very similar to those of ammonia on other metal surfaces. Off-specular EEL spectra of chemisorbed ammonia allow the ν(Ru-NH_3) and ρ(NH_3) vibrational loss features to be resolved near 340 and 625 cm^(-1), respectively. The intense δ_s(NH_3) loss feature shifts downward in frequency with increasing ammonia coverage, from approximately 1160 cm^(-1) in the low coverage limit to 1070 cm^(-1) at saturation. In coordination compounds of ammonia, the frequency of this mode shifts downward with decreasing charge on the metal atom, and its downshift on Ru(001) can be correlated with the large work function decrease that the surface has previously been shown to undergo when ammonia is adsorbed. The EELS data are consistent with ammonia adsorption in on-top sites. Second-layer and multilayer ammonia on Ru(001) have also been characterized vibrationally, and the results are similar to those obtained for other metal surfaces.

Relevance:

30.00%

Abstract:

The author has constructed a synthetic gene for α-lytic protease. Since the DNA sequence of the gene is not known, it was designed by reverse translation of the α-lytic protease amino acid sequence. Unique restriction sites were carefully sought in the degenerate DNA sequence to aid in future mutagenesis studies. The unique restriction sites are spaced approximately 50 base pairs apart, and their appropriate codons are used in the DNA sequence. The codons used to construct the DNA sequence of α-lytic protease are either preferred codons in E. coli or codons used in the production of β-lactamase. Codon usage is also distributed evenly to ensure that no single codon is heavily used. The gene is essentially constructed from the outside in. It is built in a stepwise fashion using plasmids as the vehicles for the α-lytic oligomers. The use of plasmids allows the replication and isolation of large quantities of the intermediates during gene synthesis. The α-lytic DNA is a double-stranded oligomer with sufficient overhang and sticky ends to anneal correctly in the vector. After six steps of incorporating α-lytic DNA, the gene is completed and sequenced to ensure that the correct DNA sequence is present and that no mutations occurred in the structural gene.
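The reverse-translation step described above can be sketched in a few lines; the codon table here is a tiny illustrative subset with one arbitrary "preferred" codon per amino acid, not the preference table actually used for the gene:

```python
# Toy sketch of reverse translation: map each amino acid to a single
# "preferred" codon. This table is a small illustrative subset, not the
# actual E. coli codon-preference table used in the thesis.
PREFERRED_CODON = {
    "A": "GCG", "G": "GGC", "L": "CTG", "S": "AGC",
    "T": "ACC", "V": "GTG", "N": "AAC", "D": "GAT",
}

def reverse_translate(protein):
    """Return one DNA sequence encoding the given amino acid string."""
    return "".join(PREFERRED_CODON[aa] for aa in protein)

dna = reverse_translate("AGLS")  # -> "GCGGGCCTGAGC"
```

In the real design, the degeneracy of the genetic code is exploited the other way around: alternative synonymous codons are chosen where they create unique restriction sites.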

β-lactamase is the other serine hydrolase studied in this thesis. The author used the class A RTEM-1 β-lactamase encoded on the plasmid pBR322 to investigate the role of the conserved threonine residue at position 71. Cassette mutagenesis was previously used to generate all possible amino acid substitutions at position 71. The work presented here describes the purification and kinetic characterization of a T71H mutant previously constructed by S. Schultz. The mutated gene was transferred into plasmid pJN for expression and induced with IPTG. The enzyme was purified to homogeneity by column chromatography and FPLC. Kinetic studies reveal that the mutant has lower k_(cat) values on benzylpenicillin, cephalothin and 6-aminopenicillanic acid, but no changes in K_m except for cephalothin, which is approximately 4 times higher. The mutant did not change significantly in its pH profile compared to the wild-type enzyme. Also, the mutant is more sensitive to thermal denaturation than the wild-type enzyme. However, experimental evidence indicates that the probable generation of a positive charge at position 71 thermally stabilized the mutant.

Relevance:

30.00%

Abstract:

Uncovering the demographics of extrasolar planets is crucial to understanding the processes of their formation and evolution. In this thesis, we present four studies that contribute to this end, three of which relate to NASA's Kepler mission, which has revolutionized the field of exoplanets in the last few years.

In the pre-Kepler study, we investigate a sample of exoplanet spin-orbit measurements---measurements of the inclination of a planet's orbit relative to the spin axis of its host star---to determine whether a dominant planet migration channel can be identified, and at what confidence. Applying methods of Bayesian model comparison to distinguish between the predictions of several different migration models, we find that the data strongly favor a two-mode migration scenario combining planet-planet scattering and disk migration over a single-mode Kozai migration scenario. While we test only the predictions of particular Kozai and scattering migration models in this work, these methods may be used to test the predictions of any other spin-orbit misaligning mechanism.
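The model-comparison logic can be illustrated with a toy calculation; the two densities below stand in for the predicted spin-orbit angle distributions of a two-mode scenario and a Kozai scenario, and the densities, weights, and "data" are all hypothetical stand-ins rather than the models or measurements from the thesis:

```python
import math

# Toy illustration of Bayesian model comparison between two migration
# scenarios, each reduced to a fixed predictive density over the projected
# spin-orbit angle psi (degrees). All numbers here are illustrative.
def density_two_mode(psi):
    # mostly aligned systems (disk migration) plus a flat scattered tail
    aligned = math.exp(-0.5 * (psi / 10.0) ** 2) / (10.0 * math.sqrt(2 * math.pi))
    return 0.7 * 2.0 * aligned + 0.3 / 180.0

def density_kozai(psi):
    # broad distribution of misaligned systems peaked near 90 degrees
    return math.exp(-0.5 * ((psi - 90.0) / 30.0) ** 2) / (30.0 * math.sqrt(2 * math.pi))

angles = [2.0, 5.0, 12.0, 4.0, 160.0, 8.0]   # mock spin-orbit measurements
log_bayes = sum(math.log(density_two_mode(a) / density_kozai(a)) for a in angles)
# log_bayes > 0 means the mock data favour the two-mode scenario
```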

We then present two studies addressing astrophysical false positives in Kepler data. The Kepler mission has identified thousands of transiting planet candidates, and only relatively few have yet been dynamically confirmed as bona fide planets, with only a handful more even conceivably amenable to future dynamical confirmation. As a result, the ability to draw detailed conclusions about the diversity of exoplanet systems from Kepler detections relies critically on understanding the probability that any individual candidate might be a false positive. We show that a typical a priori false positive probability for a well-vetted Kepler candidate is only about 5-10%, enabling confidence in demographic studies that treat candidates as true planets. We also present a detailed procedure that can be used to securely and efficiently validate any individual transit candidate using detailed information of the signal's shape as well as follow-up observations, if available.

Finally, we calculate an empirical, non-parametric estimate of the shape of the radius distribution of small planets with periods less than 90 days orbiting cool (less than 4000 K) dwarf stars in the Kepler catalog. This effort reveals several notable features of the distribution, in particular a maximum in the radius function around 1-1.25 Earth radii and a steep drop-off for radii larger than 2 Earth radii. Even more importantly, the methods presented in this work can be applied to a broader subsample of Kepler targets to understand how the radius function of planets changes across different types of host stars.

Relevance:

30.00%

Abstract:

The SCF ubiquitin ligase complex of budding yeast triggers DNA replication by catalyzing ubiquitination of the S phase CDK inhibitor SIC1. SCF is composed of several evolutionarily conserved proteins, including ySKP1, CDC53 (Cullin), and the F-box protein CDC4. We isolated hSKP1 in a two-hybrid screen with hCUL1, the human homologue of CDC53. We showed that hCUL1 associates with hSKP1 in vivo and directly interacts with hSKP1 and the human F-box protein SKP2 in vitro, forming an SCF-like particle. Moreover, hCUL1 complements the growth defect of yeast CDC53^(ts) mutants, associates with ubiquitination-promoting activity in human cell extracts, and can assemble into functional, chimeric ubiquitin ligase complexes with yeast SCF components. These data demonstrated that hCUL1 functions as part of an SCF ubiquitin ligase complex in human cells. However, purified human SCF complexes consisting of CUL1, SKP1, and SKP2 are inactive in vitro, suggesting that additional factors are required.

Subsequently, mammalian SCF ubiquitin ligases were shown to regulate various physiological processes by targeting important cellular regulators, like IκBα, β-catenin, and p27, for ubiquitin-dependent proteolysis by the 26S proteasome. Little, however, is known about the regulation of the various SCF complexes. By using sequential immunoaffinity purification and mass spectrometry, we identified proteins that interact with the human SCF components SKP2 and CUL1 in vivo. Among them we identified two additional SCF subunits: HRT1, present in all SCF complexes, and CKS1, which binds to SKP2 and is likely to be a subunit of SCF^(SKP2) complexes. Subsequent work by others demonstrated that these proteins are essential for SCF activity. We also discovered that the COP9 Signalosome (CSN), previously described in plants as a suppressor of photomorphogenesis, associates with CUL1 and other SCF subunits in vivo. This interaction is evolutionarily conserved and is also observed with other Cullins, suggesting that all Cullin-based ubiquitin ligases are regulated by CSN. CSN regulates Cullin neddylation, presumably through CSN5/JAB1, a stoichiometric Signalosome subunit and a putative deneddylating enzyme. This work sheds light on an intricate connection that exists between signal transduction pathways and the protein degradation machinery inside the cell, and sets the stage for gaining further insights into the regulation of protein degradation.

Relevance:

30.00%

Abstract:

A Bayesian probabilistic methodology for on-line structural health monitoring, which addresses the issue of parameter uncertainty inherent in the problem, is presented. The method uses modal parameters for a limited number of modes, identified from measurements taken at a restricted number of degrees of freedom of a structure, as the measured structural data. The application presented uses a linear structural model whose stiffness matrix is parameterized to develop a class of possible models. Within the Bayesian framework, a joint probability density function (PDF) for the model stiffness parameters given the measured modal data is determined. Using this PDF, the marginal PDF of the stiffness parameter for each substructure given the data can be calculated.

Monitoring the health of a structure using these marginal PDFs involves two steps. First, the marginal PDF for each model parameter given modal data from the undamaged structure is found. The structure is then periodically monitored and updated marginal PDFs are determined. A measure of the difference between the calibrated and current marginal PDFs is used as a means to characterize the health of the structure. A procedure for interpreting the measure for use by an expert system in on-line monitoring is also introduced.
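A minimal numerical sketch of this two-step comparison, assuming Gaussian marginal PDFs for a single stiffness parameter and using the Kullback-Leibler divergence as one possible "measure of the difference" (the thesis's actual measure is not specified here):

```python
import math

# Sketch: compare a calibrated marginal PDF against a later one using the
# KL divergence. Gaussian marginals and the KL measure are illustrative
# modelling choices; all numbers are invented.
def kl_gauss(mu0, s0, mu1, s1):
    """KL divergence KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

m_healthy = kl_gauss(1.00, 0.05, 1.01, 0.05)   # later data, no damage
m_damaged = kl_gauss(1.00, 0.05, 0.80, 0.06)   # later data, stiffness loss
# The damaged case yields a much larger divergence from the calibrated PDF,
# which is the signal an expert system would act on.
```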

The probabilistic framework is developed in order to address the model parameter uncertainty issue inherent in the health monitoring problem. To illustrate this issue, consider a very simplified deterministic structural health monitoring method. In such an approach, the model parameters which minimize an error measure between the measured and model modal values would be used as the "best" model of the structure. Changes between the model parameters identified using modal data from the undamaged structure and those identified from subsequent modal data would be used to find the existence, location and degree of damage. Due to measurement noise, limited modal information, and model error, the "best" model parameters might vary from one modal dataset to the next without any damage present in the structure. Thus, difficulties would arise in separating normal variations in the identified model parameters, which stem from limitations of the identification method, from variations due to true change in the structure. The Bayesian framework described in this work provides a means to handle this parametric uncertainty.

The probabilistic health monitoring method is applied to simulated data and laboratory data. The results of these tests are presented.

Relevance:

30.00%

Abstract:

This thesis summarizes the application of conventional and modern electron paramagnetic resonance (EPR) techniques to establish proximity relationships between paramagnetic metal centers in metalloproteins, and between metal centers and magnetic ligand nuclei, in two important and timely membrane proteins: succinate:ubiquinone oxidoreductase (SQR) from Paracoccus denitrificans and particulate methane monooxygenase (pMMO) from Methylococcus capsulatus. Such proximity relationships are thought to be critical to the biological function and the associated biochemistry mediated by the metal centers in these proteins. A mechanistic understanding of biological function relies heavily on structure-function relationships and on knowledge of how the molecular structure and electronic properties of the metal centers influence reactivity in metalloenzymes. EPR spectroscopy has proven to be one of the most powerful techniques for obtaining information about interactions between metal centers as well as for defining ligand structures. SQR is an electron transport enzyme wherein the substrates and the organic and metallic cofactors are held relatively far apart. Here, the proximity relationships of the metallic cofactors were studied through their weak spin-spin interactions by means of EPR power saturation and electron spin-lattice relaxation time (T_1) measurements, with the enzyme poised at designated reduction levels. Analysis of the electron T_1 measurements for the S-3 center when the b-heme is paramagnetic led to a detailed analysis of the dipolar interactions and a distance determination between the two interacting metal centers. Studies of the ligand environment of the metal centers by electron spin echo envelope modulation (ESEEM) spectroscopy resulted in the identification of peptide nitrogens as coupled nuclei in the environment of the S-1 and S-3 centers.

Finally, an EPR model was developed to describe the ferromagnetically coupled trinuclear copper clusters in pMMO when the enzyme is oxidized. The Cu(II) ions in these clusters appear to be strongly exchange coupled, and the EPR is consistent with equilateral triangular arrangements of type 2 copper ions. These results offer the first glimpse of the magneto-structural correlations for a trinuclear copper cluster of this type, which, until the work on pMMO, had no precedent in the metalloprotein literature. Such trinuclear copper clusters are rare even in synthetic models.

Relevance:

30.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
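A stripped-down, noiseless sketch of EC2-style test selection, with one hypothesis per theory and hypothetical priors and predicted choices (the real BROAD procedure handles parameterized theories and noisy responses):

```python
from itertools import combinations

# EC2-style greedy test selection, simplified: edges connect hypotheses in
# different equivalence classes (here, each theory is its own class), weighted
# by the product of their priors. A test "cuts" an edge when the two
# hypotheses predict different outcomes; the greedy rule picks the test with
# the largest total cut weight. All numbers below are illustrative.
def expected_cut(test, prior, predict):
    cut = 0.0
    for h, g in combinations(prior, 2):
        if predict[h][test] != predict[g][test]:
            cut += prior[h] * prior[g]
    return cut

prior = {"EV": 0.5, "PT": 0.3, "CRRA": 0.2}
predict = {                        # hypothetical choice predictions per test
    "EV":   {"t1": "A", "t2": "A"},
    "PT":   {"t1": "B", "t2": "A"},
    "CRRA": {"t1": "B", "t2": "B"},
}
best = max(["t1", "t2"], key=lambda t: expected_cut(t, prior, predict))
```

Under these priors, test "t1" separates the highest-probability theory from the other two and is chosen first.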

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
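The discount functions being compared can be written compactly; the parameter values below are illustrative only:

```python
# Discount factors for the main time-preference families compared in the
# experiment. All parameter values are illustrative, not fitted estimates.
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """'Present bias': an immediate reward (t = 0) is not discounted at all."""
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta=2.0):
    return (1.0 + alpha * t) ** (-beta / alpha)

# Present bias shows up as a sharp drop between t = 0 and t = 1 for the
# quasi-hyperbolic model, after which it declines smoothly like the
# exponential model.
```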

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
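The loss-aversion story can be sketched with a reference-dependent utility inside a binary logit; all prices, reference points, and parameters below are hypothetical, not estimates from the retailer data:

```python
import math

# Sketch of reference-dependent (loss-averse) utility in a binary logit
# choice model. lam > 1 makes price increases relative to the reference
# price hurt more than equivalent decreases help. All values are invented.
def gain_loss(price, ref, lam=2.25):
    diff = ref - price                  # positive = perceived gain
    return diff if diff >= 0 else lam * diff

def choice_prob(price_a, price_b, ref_a, ref_b, beta=1.0):
    """Probability of choosing item A under logit choice."""
    ua = beta * gain_loss(price_a, ref_a)
    ub = beta * gain_loss(price_b, ref_b)
    return math.exp(ua) / (math.exp(ua) + math.exp(ub))

# After a discount ends, item A's price (10) sits above the reference the
# discount created (8), so a substitute priced at its own reference wins.
p_after_discount = choice_prob(10.0, 9.5, ref_a=8.0, ref_b=9.5)
```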

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

30.00%

Abstract:

Understanding how transcriptional regulatory sequence maps to regulatory function remains a difficult problem in regulatory biology. Given a particular DNA sequence for a bacterial promoter region, we would like to be able to say which transcription factors bind there, how strongly they bind, and whether they interact with each other and/or RNA polymerase, with the ultimate objective of integrating knowledge of these parameters into a prediction of gene expression levels. Statistical thermodynamics provides a useful theoretical framework for doing so, enabling us to predict how gene expression levels depend on transcription factor binding energies and concentrations. We used thermodynamic models, coupled with models of the sequence-dependent binding energies of transcription factors and RNAP, to construct a genotype-to-phenotype map for the level of repression exhibited by the lac promoter, and tested it experimentally using a set of promoter variants from E. coli strains isolated from different natural environments. For this work, we sought to "reverse engineer" naturally occurring promoter sequences to understand how variation in promoter sequence affects gene expression. The natural inverse of this approach is to "forward engineer" promoter sequences to obtain targeted levels of gene expression. We used a high-precision model of RNAP-DNA sequence-dependent binding energy, coupled with a thermodynamic model relating binding energy to gene expression, to predictively design and verify a suite of synthetic E. coli promoters whose expression varied over nearly three orders of magnitude.
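As one concrete instance of such a thermodynamic model, the textbook simple-repression result expresses fold-change in expression in terms of repressor copy number and binding energy; the functional form is the standard one, while the numbers below are illustrative rather than fitted values from this work:

```python
import math

# Standard thermodynamic-model result for simple repression: fold-change in
# gene expression given R repressors, N_ns non-specific genomic binding
# sites, and a specific binding energy d_eps in units of kT (negative =
# favorable). Parameter values are illustrative placeholders.
def fold_change(R, d_eps, N_ns=4.6e6):
    return 1.0 / (1.0 + (R / N_ns) * math.exp(-d_eps))

# Repression is the reciprocal of fold-change; stronger binding (more
# negative d_eps) or more repressors give larger repression.
repression = 1.0 / fold_change(R=100, d_eps=-15.0)
```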

However, although thermodynamic models enable predictions of mean levels of gene expression, it has become evident that cell-to-cell variability or "noise" in gene expression can also play a biologically important role. In order to address this aspect of gene regulation, we developed models based on the chemical master equation framework and used them to explore the noise properties of a number of common E. coli regulatory motifs; these properties included the dependence of the noise on parameters such as transcription factor binding strength and copy number. We then performed experiments in which these parameters were systematically varied and measured the level of variability using mRNA FISH. The results showed a clear dependence of the noise on these parameters, in accord with model predictions.
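For intuition: the chemical master equation for a constitutive promoter (constant production, first-order degradation) gives a Poisson steady state with Fano factor 1, while the standard two-state "telegraph" promoter model pushes the Fano factor above 1. The expression below is that standard telegraph-model result, with illustrative parameters:

```python
# Steady-state moments from the chemical master equation for mRNA copy
# number. For a constitutive promoter the distribution is Poisson
# (Fano factor = 1); for a promoter switching ON (rate k_on) and OFF
# (rate k_off), transcribing at rate k when ON, with degradation rate
# gamma, the Fano factor exceeds 1 (standard telegraph-model result).
def poisson_mean(k, gamma):
    return k / gamma

def telegraph_fano(k, gamma, k_on, k_off):
    """Fano factor (variance/mean) of mRNA copy number."""
    return 1.0 + (k * k_off) / ((k_on + k_off) * (gamma + k_on + k_off))
```

Note that setting k_off = 0 (promoter always ON) recovers the Poisson limit, Fano factor 1.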

Finally, one shortcoming of the preceding modeling frameworks is that their applicability is largely limited to systems that are already well-characterized, such as the lac promoter. Motivated by this fact, we used a high throughput promoter mutagenesis assay called Sort-Seq to explore the completely uncharacterized transcriptional regulatory DNA of the E. coli mechanosensitive channel of large conductance (MscL). We identified several candidate transcription factor binding sites, and work is continuing to identify the associated proteins.

Relevance:

30.00%

Abstract:

The spin-dependent cross sections, σ_(T1/2) and σ_(T3/2), and the asymmetries, A_∥ and A_⊥, for 3He have been measured at the Jefferson Lab Hall A facility. The inclusive scattering process 3He(e,e')X was measured for initial beam energies ranging from 0.86 to 5.1 GeV, at a scattering angle of 15.5°. The data include measurements from the quasielastic peak, the resonance region, and the deep inelastic regime. An approximation of the extended Gerasimov-Drell-Hearn integral is presented at 4-momentum transfers Q^2 of 0.2-1.0 GeV^2.

Also presented are results on the performance of the polarized 3He target. Polarization of 3He was achieved by the process of spin-exchange collisions with optically pumped rubidium vapor. The 3He polarization was monitored using the NMR technique of adiabatic fast passage (AFP). The average target polarization was approximately 35% and was determined to have a systematic uncertainty of roughly ±4% relative.

Relevance:

30.00%

Abstract:

Part I. Novel composite polyelectrolyte materials were developed that exhibit desirable charge propagation and ion-retention properties. The morphology of electrode coatings cast from these materials was shown to be more important for their electrochemical behavior than their chemical composition.

Part II. The Wilhelmy plate technique for measuring dynamic surface tension was extended to electrified liquid-liquid interphases. The dynamical response of the aqueous NaF-mercury electrified interphase was examined by concomitant measurement of surface tension, current, and applied electrostatic potential. Observations of the surface tension response to linear sweep voltammetry and to step function perturbations in the applied electrostatic potential (e.g., chronotensiometry) provided strong evidence that relaxation processes proceed for time-periods that are at least an order of magnitude longer than the time periods necessary to establish diffusion equilibrium. The dynamical response of the surface tension is analyzed within the context of non-equilibrium thermodynamics and a kinetic model that requires three simultaneous first order processes.
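The kinetic picture of three simultaneous first-order processes corresponds to a surface-tension relaxation that is a sum of three exponentials approaching the equilibrium value; the amplitudes and rate constants below are illustrative, not fitted values from the thesis:

```python
import math

# Sketch of relaxation governed by three simultaneous first-order processes:
# sigma(t) = sigma_eq + sum_i A_i * exp(-k_i * t). Equilibrium value,
# amplitudes A_i, and rate constants k_i are all invented placeholders.
def surface_tension(t, sigma_eq=400.0,
                    amps=(10.0, 5.0, 2.0), rates=(10.0, 1.0, 0.1)):
    """Surface tension (mN/m) at time t (s) after a potential step."""
    return sigma_eq + sum(A * math.exp(-k * t) for A, k in zip(amps, rates))
```

The slowest rate constant sets how long relaxation persists after diffusion equilibrium is established, which is the qualitative point made above.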

Relevance:

30.00%

Abstract:

Part I

Regression analyses are performed on in vivo hemodialysis data for the transfer of creatinine, urea, uric acid and inorganic phosphate to determine the effects of variations in certain parameters on the efficiency of dialysis with a Kiil dialyzer. In calculating the mass transfer rates across the membrane, the effects of cell-plasma mass transfer kinetics are considered. The concept of the effective permeability coefficient for the red cell membrane is introduced to account for these effects. A discussion of the consequences of neglecting cell-plasma kinetics, as has been done to date in the literature, is presented.

A physical model for the Kiil dialyzer is presented in order to calculate the available membrane area for mass transfer, the linear blood and dialysate velocities, and other variables. The equations used to determine the independent variables of the regression analyses are presented. The potential dependent variables in the analyses are discussed.

Regression analyses were carried out considering overall mass-transfer coefficients, dialysances, relative dialysances, and relative permeabilities for each substance as the dependent variables. The independent variables were linear blood velocity, linear dialysate velocity, the pressure difference across the membrane, the elapsed time of dialysis, the blood hematocrit, and the arterial plasma concentrations of each substance transferred. The resulting correlations are tabulated, presented graphically, and discussed. The implications of these correlations are discussed from the viewpoint of a research investigator and from the viewpoint of patient treatment.
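The kind of regression described above reduces, in its simplest form, to ordinary least squares; the sketch below fits a mock dialysance-versus-blood-velocity line with invented numbers:

```python
# Ordinary least squares for y = a + b*x, as a minimal stand-in for the
# multi-variable regressions described above. The velocity and dialysance
# values are mock numbers for illustration, not thesis data.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b               # intercept, slope

velocity = [5.0, 10.0, 15.0, 20.0]      # linear blood velocity (mock)
dialysance = [60.0, 80.0, 100.0, 120.0] # dialysance (mock, perfectly linear)
a, b = fit_line(velocity, dialysance)
```

The actual analyses regress each dependent variable on six independent variables simultaneously, but the fitting principle is the same.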

Recommendations for further experimental work are presented.

Part II

The interfacial structure of concurrent air-water flow in a two-inch diameter horizontal tube in the wavy flow regime has been measured using resistance wave gages. The median water depth, r.m.s. wave height, wave frequency, extrema frequency, and wave velocity have been measured as functions of air and water flow rates. Reynolds numbers, Froude numbers, Weber numbers, and bulk velocities for each phase may be calculated from these measurements. No theory for wave formation and propagation available in the literature was sufficient to describe these results.

The water surface level distribution generally is not adequately represented as a stationary Gaussian process. Five types of deviation from the Gaussian process model were noted in this work. The presence of the tube walls and the relatively large interfacial shear stresses preclude the use of simple statistical analyses to describe the interfacial structure. A detailed study of the behavior of individual fluid elements near the interface may be necessary to describe wavy two-phase flow adequately in systems similar to the one used in this work.
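The statistics named above (median depth, r.m.s. wave height, wave frequency) and a basic Gaussianity check can be computed from a wave-gage time series as sketched below. The signal here is synthetic (a sine plus noise, which is deliberately non-Gaussian); a real resistance-gage record would replace `level`:

```python
import numpy as np

# Synthetic water-level trace: 1.5 Hz wave plus sensor noise
fs = 100.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
level = (2.0 + 0.3 * np.sin(2 * np.pi * 1.5 * t)
         + 0.02 * rng.standard_normal(t.size))   # depth, arbitrary units

median_depth = np.median(level)
eta = level - level.mean()                        # surface elevation
rms_height = np.sqrt(np.mean(eta ** 2))           # r.m.s. wave height

# Dominant wave frequency from the amplitude spectrum (DC bin excluded)
freqs = np.fft.rfftfreq(eta.size, 1 / fs)
spec = np.abs(np.fft.rfft(eta))
wave_freq = freqs[np.argmax(spec[1:]) + 1]

# Skewness and excess kurtosis: both ~0 for a stationary Gaussian process,
# so nonzero values flag the kind of deviations discussed above
skew = np.mean(eta ** 3) / rms_height ** 3
exkurt = np.mean(eta ** 4) / rms_height ** 4 - 3.0
print(median_depth, rms_height, wave_freq, skew, exkurt)
```

For this sinusoid-dominated signal the excess kurtosis comes out strongly negative, illustrating one simple way a measured surface record can fail a Gaussian description.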

Relevância:

30.00%

Publicador:

Resumo:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that both characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search the observed near-infrared for Lyman alpha emission in candidates at greater redshifts, in order to discern whether this evolution continues or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å in equivalent width. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we achieve secure detections in only two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that so few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.
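The statistical argument above (two detections where 7-8 were expected happening in < 1% of trials) can be illustrated with a toy Monte Carlo. The per-galaxy detection probability below is an illustrative stand-in (expected detections divided by sample size), not the thesis's actual simulation, which folds in per-object depths and the z ~ 6 equivalent-width distribution:

```python
import numpy as np

# Toy Monte Carlo: how often do <= 2 Lyman-alpha detections occur among
# 19 targets if each has the z~6-like detection probability of ~7.5/19?
rng = np.random.default_rng(3)
p_detect = 7.5 / 19          # assumed mean per-galaxy detection probability
n_gal, n_trials = 19, 100_000

detections = rng.binomial(n_gal, p_detect, size=n_trials)
frac = np.mean(detections <= 2)
print(f"fraction of trials with <= 2 detections: {frac:.4f}")
```

Even this crude binomial version lands well under 1%, consistent with the claim that the detection deficit is statistically significant.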

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field and in a number of wider Hubble Space Telescope surveys to construct luminosity functions at both z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter than any previous survey, respectively. With this increased depth, we achieve some of the most robust constraints on the Schechter-function faint-end slopes at these redshifts, finding very steep values of alpha_{z~7} = -1.87 +/- 0.18 and alpha_{z~8} = -1.94 +/- 0.23. We discuss these results in the context of cosmic reionization, and show that, given reasonable assumptions about the ionizing spectra and the escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_{UV} = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
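The photon-budget point above rests on integrating the Schechter luminosity function down to a faint-end cut. The sketch below shows that integral and the gain from extending the cut from an observable limit to M_UV = -13; the alpha value is the z ~ 7 slope quoted above, while M*, phi*, and the limits are illustrative placeholders:

```python
import numpy as np
from scipy.integrate import quad

# UV luminosity density from a Schechter function, integrated down to a
# faint-end cut M_lim. Returned in units of (phi* x L*).
def schechter_lum_density(alpha, M_star, M_lim):
    # With x = L/L*, rho = phi* L* * Integral[ x^(alpha+1) exp(-x) dx ]
    x_lim = 10 ** (-0.4 * (M_lim - M_star))   # L_lim / L*
    integral, _ = quad(lambda x: x ** (alpha + 1) * np.exp(-x),
                       x_lim, np.inf)
    return integral

alpha = -1.87                                  # faint-end slope at z ~ 7
rho_bright = schechter_lum_density(alpha, -20.0, -17.0)  # observable only
rho_faint = schechter_lum_density(alpha, -20.0, -13.0)   # extended cut
print(rho_faint / rho_bright)  # luminosity-density gain from faint galaxies
```

With a slope this steep the integral grows substantially as the cut is pushed faintward, which is why unobserved dwarf galaxies can plausibly supply the missing ionizing photons.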

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission, and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_{Ly alpha} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.
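The velocity offset quoted above is computed from the difference between the Lyman-alpha redshift and the systemic redshift traced by the nebular lines. A minimal sketch, with illustrative redshift values rather than thesis measurements:

```python
# Velocity offset of Lyman-alpha relative to the systemic redshift from
# nebular lines: v = c * (z_Lya - z_sys) / (1 + z_sys)
C_KM_S = 2.998e5   # speed of light, km/s

def lya_velocity_offset(z_lya, z_sys):
    return C_KM_S * (z_lya - z_sys) / (1 + z_sys)

# Example: an [OIII]-derived systemic redshift and a slightly redder
# Lyman-alpha line (both values hypothetical)
dv = lya_velocity_offset(z_lya=3.6042, z_sys=3.6020)
print(f"{dv:.0f} km/s")
```

Because Lyman-alpha escapes by scattering off the red wing of the line in outflowing gas, a smaller offset like this one indicates a lower covering column of neutral hydrogen.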

Finally, we utilize the recent CANDELS wide-field infrared photometry over the GOODS-N and GOODS-S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 star-forming galaxies at 3 < z < 6, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method that accurately accounts for contamination and obscuration by skylines to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent-width probability distribution. We then apply these results to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission-line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.
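A key ingredient of such an analysis is inferring the intrinsic equivalent-width distribution when many objects yield only upper limits. As a schematic stand-in for the Bayesian machinery described above, the sketch below fits an exponential equivalent-width distribution by maximum likelihood, treating non-detections as censored data; all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic sample: true EWs drawn from p(W) = exp(-W/W0)/W0, each object
# observed with its own 5-sigma EW limit (all values illustrative)
rng = np.random.default_rng(4)
W0_true = 50.0
W = rng.exponential(W0_true, 300)          # true equivalent widths, Angstrom
limits = rng.uniform(20.0, 80.0, 300)      # per-object detection limits
detected = W > limits

def neg_log_like(W0):
    # Detections contribute the exponential pdf at the measured EW;
    # non-detections contribute P(W < limit) = 1 - exp(-limit/W0)
    ll = np.sum(-np.log(W0) - W[detected] / W0)
    ll += np.sum(np.log1p(-np.exp(-limits[~detected] / W0)))
    return -ll

W0_hat = minimize_scalar(neg_log_like, bounds=(5.0, 200.0),
                         method="bounded").x
print(W0_hat)   # recovered scale of the EW distribution
```

A full treatment would additionally condition the scale W0 on the UV slope and marginalize over skyline contamination, but the censored-likelihood core is the same.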

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevância:

30.00%

Publicador:

Resumo:

Planetary atmospheres exist in a seemingly endless variety of physical and chemical environments, and there are an equally diverse number of methods by which we can study and characterize atmospheric composition. In order to better understand the fundamental chemistry and physical processes underlying all planetary atmospheres, my research of the past four years has focused on two distinct topics. First, I focused on the data analysis and spectral retrieval of observations obtained by the Ultraviolet Imaging Spectrograph (UVIS) instrument onboard the Cassini spacecraft while in orbit around Saturn. These observations consisted of stellar occultation measurements of Titan's upper atmosphere, probing the chemical composition in the region 300 to 1500 km above Titan's surface. I examined the relative abundances of Titan's two most prevalent chemical species, nitrogen and methane. I also focused on the aerosols that are formed through chemistry involving these two major species, and determined the vertical profiles of aerosol particles as a function of time and latitude. Moving beyond our own solar system, my second topic of investigation involved analysis of infrared light curves from the Spitzer Space Telescope, obtained as it measured the light from stars hosting planets of their own. I focused on both transit and eclipse modeling during Spitzer data reduction and analysis. In my initial work, I utilized the data to search for transits of planets a few Earth masses in size. In more recent research, I analyzed secondary eclipses of three exoplanets and constrained the range of possible temperatures and compositions of their atmospheres.
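The core of a stellar-occultation retrieval is inverting the measured transmission into a line-of-sight column density via the Beer-Lambert law, T = exp(-sigma N). A minimal sketch, with an assumed cross-section and made-up transmission values rather than Cassini/UVIS data:

```python
import numpy as np

# Beer-Lambert inversion: column density N = -ln(T) / sigma along the
# line of sight, one value per occultation altitude sample.
sigma_ch4 = 1.8e-17            # assumed CH4 absorption cross-section, cm^2
transmission = np.array([0.9, 0.5, 0.1])   # illustrative measured T values

N = -np.log(transmission) / sigma_ch4      # column density, cm^-2
print(N)
```

In practice multiple species absorb at once, so the retrieval solves a system of such relations across wavelength using each species' cross-section spectrum; the single-species inversion above is the building block.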

Relevância:

30.00%

Publicador:

Resumo:

Soft hierarchical materials often present unique functional properties that are sensitive to the geometry and organization of their micro- and nano-structural features across different length scales. Carbon nanotube (CNT) foams are hierarchical materials with fibrous morphology that are known for their remarkable physical, chemical and electrical properties. Their complex microstructure has led them to exhibit intriguing mechanical responses at different length scales and in different loading regimes. Although the mechanical behavior of these materials has been studied over the past few years, their response to high-rate finite deformations, and the influence of their microstructure on bulk mechanical behavior and energy-dissipative characteristics, remain elusive.

In this dissertation, we study the response of aligned CNT foams in the high strain-rate regime of 10² to 10⁴ s⁻¹. We investigate their bulk dynamic response and the fundamental deformation mechanisms at different length scales, and correlate them with the microstructural characteristics of the foams. We develop an experimental platform with which to study the mechanics of CNT foams in high-rate deformations, one that includes direct measurements of the strain and transmitted forces and allows for full-field visualization of the sample’s deformation through high-speed microscopy.

We synthesize various CNT foams (e.g., vertically aligned CNT (VACNT) foams, helical CNT foams, micro-architectured VACNT foams and VACNT foams with microscale heterogeneities) and show that the bulk functional properties of these materials are highly tunable either by tailoring their microstructure during synthesis or by designing micro-architectures that exploit the principles of structural mechanics. We also develop numerical models to describe the bulk dynamic response using multiscale mass-spring models and identify the mechanical properties at length scales that are smaller than the sample height.
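In the spirit of the multiscale mass-spring models mentioned above, a bulk impact can be sketched as a 1-D chain of lumped masses connected by nonlinear springs and integrated explicitly. Everything below (masses, stiffnesses, densification threshold, striker speed) is an illustrative placeholder, not a fitted CNT-foam parameter set:

```python
import numpy as np

# 1-D mass-spring chain with bilinear stiffening springs, explicit
# (symplectic Euler) time integration. Node 0 is the "striker".
n = 20
m = 1.0e-6                 # kg per node
k1, k2 = 2.0e3, 2.0e4      # N/m before/after densification
gap = 1.0e-4               # m of travel before the spring stiffens
dt, steps = 1.0e-7, 20000  # dt chosen well below the stiff-spring period

x = np.zeros(n)            # node displacements, m
v = np.zeros(n)            # node velocities, m/s
v[0] = 5.0                 # impact velocity of the striker node

def spring_force(d):
    # d = compression of a spring; foams detach, so no tensile force
    d = np.maximum(d, 0.0)
    return np.where(d < gap, k1 * d, k1 * gap + k2 * (d - gap))

for _ in range(steps):
    comp = x[:-1] - x[1:]          # compression between neighboring nodes
    f = spring_force(comp)
    a = np.zeros(n)
    a[:-1] -= f / m                # reaction on the leading node
    a[1:] += f / m                 # push on the trailing node
    v += a * dt
    x += v * dt

print(x[-1], v.mean())
```

Because the chain is free and the spring forces are internal, total momentum is conserved, which makes a convenient sanity check on the integrator; a multiscale version assigns different masses and spring laws to sub-chains representing different structural levels of the foam.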

The ability to control the geometry of microstructural features, and their local interactions, allows the creation of novel hierarchical materials with desired functional properties. The fundamental understanding provided by this work on the key structure-function relations that govern the bulk response of CNT foams can be extended to other fibrous, soft and hierarchical materials. The findings can be used to design materials with tailored properties for engineering applications such as vibration damping, impact mitigation, and packaging.