17 results for extended techniques

in CaltechTHESIS


Relevance: 30.00%

Abstract:

The study of the strength of a material is relevant to a variety of applications, including automobile collisions, armor penetration, and inertial confinement fusion. Although the dynamic behavior of materials at high pressures and strain rates has been studied extensively using plate impact experiments, the results provide measurements in one direction only, and material behavior that depends on strength is unaccounted for. This study proposes two novel configurations to mitigate this problem.

The first configuration introduced is the oblique wedge experiment, which comprises a driver material, an angled target of interest, and a backing material used to measure in-situ velocities. Upon impact, a shock wave is generated in the driver material. As the shock encounters the angled target, it is reflected back into the driver and transmitted into the target. Due to the angle of obliquity of the incident wave, a transverse wave is generated that subjects the target to shear while it is compressed by the initial longitudinal shock, such that the material does not slip. Using numerical simulations, this study shows that a variety of oblique wedge configurations can be used to study the shear response of materials, and that the approach can be extended to strength measurement as well. Experiments were performed on an oblique wedge setup with a copper impactor, polymethylmethacrylate driver, aluminum 6061-T6 target, and a lithium fluoride window. Particle velocities were measured using laser interferometry, and the results agree well with the simulations.

The second novel configuration is the y-cut quartz sandwich design, which uses the anisotropic properties of y-cut quartz to generate a shear wave that is transmitted into a thin sample. By using an anvil material to back the thin sample, particle velocities measured at the rear surface of the backing plate can be implemented to calculate the shear stress in the material and subsequently the strength. Numerical simulations were conducted to show that this configuration has the ability to measure the strength for a variety of materials.
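
As background for the stress calculation described above, a minimal sketch using one-dimensional elastic-wave impedance relations (textbook relations, not the thesis's full analysis; the factor of two assumes a traction-free rear surface rather than a windowed one):

    % Stress carried by a plane elastic shear wave with particle velocity u_p:
    \tau = \rho \, c_s \, u_p
    % (\rho: density, c_s: shear wave speed). At a traction-free rear surface
    % the measured velocity is doubled, so the in-material particle velocity is
    u_p \approx \tfrac{1}{2} u_{\mathrm{fs}}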

Relevance: 20.00%

Abstract:

In Part I a class of linear boundary value problems is considered which is a simple model of boundary layer theory. The effect of zeros and singularities of the coefficients of the equations at the point where the boundary layer occurs is considered. The usual boundary layer techniques are still applicable in some cases and are used to derive uniform asymptotic expansions. In other cases it is shown that the inner and outer expansions do not overlap due to the presence of a turning point outside the boundary layer. The region near the turning point is described by a two-variable expansion. In these cases a related initial value problem is solved and then used to show formally that for the boundary value problem either a solution exists, except for a discrete set of eigenvalues, whose asymptotic behaviour is found, or the solution is non-unique. A proof is given of the validity of the two-variable expansion; in a special case this proof also demonstrates the validity of the inner and outer expansions.

Nonlinear dispersive wave equations which are governed by variational principles are considered in Part II. It is shown that the averaged Lagrangian variational principle is in fact exact. This result is used to construct perturbation schemes that enable higher-order terms in the equations for the slowly varying quantities to be calculated. A simple scheme applicable to linear or near-linear equations is first derived. The specific form of the first-order correction terms is derived for several examples. The stability of constant solutions to these equations is considered, and it is shown that the correction terms lead to the instability cut-off found by Benjamin. A general stability criterion is given which explicitly demonstrates the conditions under which this cut-off occurs. The corrected set of equations is a system of nonlinear dispersive equations, and its stationary solutions are investigated. A more sophisticated scheme is developed for fully nonlinear equations by using an extension of the Hamiltonian formalism recently introduced by Whitham. Finally, the averaged Lagrangian technique is extended to treat slowly varying multiply-periodic solutions. The adiabatic invariants for a separable mechanical system are derived by this method.
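
For readers unfamiliar with the averaged Lagrangian method on which Part II builds, Whitham's standard formulation can be summarized as follows (background only; the exactness result and higher-order corrections described above go beyond this):

    % Slowly varying wavetrain u = U(\theta; a), with \theta_x = k, \theta_t = -\omega.
    % Average the Lagrangian over one period of the phase:
    \bar{L}(\omega, k, a) = \frac{1}{2\pi} \int_0^{2\pi} L \, d\theta
    % Variation with respect to amplitude and phase gives
    \frac{\partial \bar{L}}{\partial a} = 0 \quad \text{(dispersion relation)}, \qquad
    \frac{\partial}{\partial t}\!\left(\frac{\partial \bar{L}}{\partial \omega}\right)
      - \frac{\partial}{\partial x}\!\left(\frac{\partial \bar{L}}{\partial k}\right) = 0
      \quad \text{(conservation of wave action)}
    % together with the consistency relation
    k_t + \omega_x = 0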

Relevance: 20.00%

Abstract:

Redox-active ruthenium complexes have been covalently attached to the surface of a series of natural, semisynthetic, and recombinant cytochromes c. The protein derivatives were characterized by a variety of spectroscopic techniques. Distant Fe^(2+)-Ru^(3+) electronic couplings were extracted from intramolecular electron-transfer rates in Ru(bpy)_2(im)HisX (where X = 33, 39, 62, and 72) derivatives of cyt c. The couplings increase according to 62 (0.0060) < 72 (0.057) < 33 (0.097) < 39 (0.11 cm^(-1)); however, this order is incongruent with histidine-to-heme edge-edge distances [62 (14.8) > 39 (12.3) > 33 (11.1) > 72 (8.4 Å)]. These results suggest that the chemical nature of the intervening medium needs to be considered for a more precise evaluation of couplings. The rates (and couplings) correlate with the lengths of σ-tunneling pathways comprised of covalent bonds, hydrogen bonds, and through-space jumps from the histidines to the heme group. Space jumps greatly decrease couplings: one from Pro71 to Met80 extends the σ-tunneling length of the His72 pathway by roughly 10 covalent-bond units. Experimental couplings also correlate well with those calculated using extended Hückel theory to evaluate the contribution of the intervening protein medium.

Two horse heart cyt c variants incorporating the unnatural amino acids (S)-2-amino-3-(2,2'-bipyrid-6-yl)propanoic acid (6Bpa) and (S)-2-amino-3-(2,2'-bipyrid-4-yl)propanoic acid (4Bpa) at position 72 have been prepared using semisynthetic protocols. Negligible perturbation of the protein structure results from the introduction of these unnatural amino acids. Redox-active Ru(2,2'-bipyridine)_2^(2+) binds to 4Bpa72 cyt c but not to the 6Bpa protein. Enhanced ET rates were observed in the Ru(bpy)_2^(2+)-modified 4Bpa72 cyt c relative to the analogous His72 derivative. The rapid (<60 ns) photogeneration of ferrous Ru-modified 4Bpa72 cyt c in the conformationally altered alkaline state demonstrates that laser-induced ET can be employed to study submicrosecond protein-folding events.

Relevance: 20.00%

Abstract:

A series of eight related analogs of distamycin A has been synthesized. Footprinting and affinity cleaving reveal that only two of the analogs, pyridine-2-carboxamide-netropsin (2-PyN) and 1-methylimidazole-2-carboxamide-netropsin (2-ImN), bind to DNA with a specificity different from that of the parent compound. A new class of sites, represented by a TGACT sequence, is a strong site for 2-PyN binding, and the major recognition site for 2-ImN on DNA. Both compounds recognize the G•C bp specifically, although A's and T's in the site may be interchanged without penalty. Additional A•T bp outside the binding site increase the binding affinity. The compounds bind in the minor groove of the DNA sequence, but protect both grooves from dimethylsulfate. The binding evidence suggests that 2-PyN or 2-ImN binding induces a DNA conformational change.

In order to understand this sequence-specific complexation better, the Ackers quantitative footprinting method for measuring individual-site affinity constants has been extended to small molecules. MPE•Fe(II) cleavage reactions over a 10^(5) range of free-ligand concentrations are analyzed by gel electrophoresis. The decrease in cleavage is calculated by densitometry of a gel autoradiogram. The apparent fraction of DNA bound is then calculated from the amount of cleavage protection. The data are fitted to a theoretical curve using non-linear least squares techniques. Affinity constants at four individual sites are determined simultaneously. The distamycin A analog binds solely at A•T-rich sites, with affinities ranging from 10^(6) to 10^(7) M^(-1). The data for parent compound D fit closely to a monomeric binding curve. 2-PyN binds both A•T sites and the TGTCA site with an apparent affinity constant of 10^(5) M^(-1). 2-ImN binds A•T sites with affinities less than 5 x 10^(4) M^(-1). The affinity of 2-ImN for the TGTCA site does not change significantly from the 2-PyN value. At the TGTCA site, the experimental data fit a dimeric binding curve better than a monomeric curve. Both 2-PyN and 2-ImN have substantially lower DNA affinities than closely related compounds.
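
A minimal sketch of the curve-fitting step just described, assuming standard monomeric and cooperative dimeric isotherms; the function names and sample data are illustrative, not taken from the thesis:

    # Fit fraction-bound data (from densitometry of MPE.Fe(II) cleavage
    # protection) to monomeric and dimeric binding isotherms by non-linear
    # least squares, and compare residuals as done for the TGTCA site.
    import numpy as np
    from scipy.optimize import curve_fit

    def theta_monomer(L, K):
        """Fraction bound for 1:1 binding at free-ligand concentration L (M)."""
        return K * L / (1.0 + K * L)

    def theta_dimer(L, K):
        """Fraction bound for cooperative 2:1 (dimeric) binding."""
        return (K * L) ** 2 / (1.0 + (K * L) ** 2)

    # Free-ligand concentrations spanning ~10^5, as in the text; synthetic data.
    L_free = np.logspace(-8, -3, 12)
    f_bound = theta_dimer(L_free, 1.0e5) + np.random.normal(0, 0.02, L_free.size)

    for name, model in [("monomer", theta_monomer), ("dimer", theta_dimer)]:
        (K_fit,), _ = curve_fit(model, L_free, f_bound, p0=[1e5])
        rss = np.sum((f_bound - model(L_free, K_fit)) ** 2)
        print(f"{name}: K = {K_fit:.3g} /M, residual sum of squares = {rss:.4f}")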

In order to probe the requirements of this new binding site, fourteen other derivatives have been synthesized and tested. All compounds that recognize the TGTCA site have a heterocyclic aromatic nitrogen ortho to the N or C-terminal amide of the netropsin subunit. Specificity is strongly affected by the overall length of the small molecule. Only compounds that consist of at least three aromatic rings linked by amides exhibit TGTCA site binding. Specificity is only weakly altered by substitution on the pyridine ring, which correlates best with steric factors. A model is proposed for TGTCA site binding that has as its key feature hydrogen bonding to both G's by the small molecule. The specificity is determined by the sequence dependence of the distance between G's.

One derivative of 2-PyN exhibits pH-dependent sequence specificity. At low pH, 4-dimethylaminopyridine-2-carboxamide-netropsin (4-Me_(2)NPyN) binds tightly to A•T sites. At high pH, 4-Me_(2)NPyN binds most tightly to the TGTCA site. In aqueous solution, this compound protonates at the pyridine nitrogen at pH 6. Thus the presence of the protonated form correlates with A•T specificity.
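
The stated protonation at pH 6 implies a pyridine pKa near 6, so the protonated fraction at any pH follows from the usual Henderson-Hasselbalch arithmetic (our worked numbers, assuming pKa = 6):

    f_{\mathrm{H^+}} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}}
    \quad\Rightarrow\quad
    f_{\mathrm{H^+}}(\mathrm{pH}~5) \approx 0.91, \qquad
    f_{\mathrm{H^+}}(\mathrm{pH}~7) \approx 0.09

consistent with the A•T-specific (protonated) form dominating at low pH and the TGTCA-specific form at high pH.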

The binding site of a class of eukaryotic transcriptional activators typified by the yeast protein GCN4 and the mammalian oncogene Jun contains a strong 2-ImN binding site. Specificity requirements for the protein and small molecule are similar. GCN4 and 2-ImN bind simultaneously to the same binding site. GCN4 alters the cleavage pattern of a 2-ImN-EDTA derivative at only one of its binding sites. The details of the interaction suggest that GCN4 alters the conformation of an AAAAAAA sequence adjacent to its binding site. The presence of a yeast counterpart to Jun partially blocks 2-ImN binding. The differences do not appear to be caused by direct interactions between 2-ImN and the proteins, but by induced conformational changes in the DNA-protein complex. It is likely that the observed differences in complexation are involved in the varying sequence specificity of these proteins.

Relevance: 20.00%

Abstract:

Metallic glasses have typically been treated as a “one size fits all” type of material. Every alloy is considered to have high strength, high hardness, large elastic limits, corrosion resistance, etc. However, as with traditional crystalline materials, properties depend strongly on the constituent elements, how the alloy was processed, and the conditions under which it will be used. An important distinction can be made between metallic glasses and their composites. Charpy impact toughness measurements are performed to determine the effect processing and microstructure have on bulk metallic glass matrix composites (BMGMCs). Samples are suction cast, machined from commercial plates, and semi-solidly forged (SSF). The SSF specimens are found to have the highest impact toughness due to the coarsening of the dendrites, which occurs during the semi-solid processing stages. Ductile-to-brittle transition (DTBT) temperatures are measured for a BMGMC. While at room temperature the BMGMC is highly toughened compared to a fully glassy alloy, it undergoes a DTBT by 250 K. At this point, its impact toughness mirrors that of the constituent glassy matrix. In the following chapter, BMGMCs are shown to be capable of being capacitively welded to form single, monolithic structures. Shear measurements are performed across welded samples and, at sufficient weld energies, the welds are found to retain the strength of the parent alloy. Cross-sections are inspected via SEM, and no visible crystallization of the matrix occurs.

Next, metallic glasses and BMGMCs are formed into sheets, and eggbox structures are tested in hypervelocity impacts. Metallic glasses are ideal candidates for protection against micrometeorite orbital debris due to their high hardness and relatively low density. A flat single-layer BMG is compared to a BMGMC eggbox, and the latter creates a more diffuse projectile cloud after penetration. A three-tiered eggbox structure is also tested by firing a 3.17 mm aluminum sphere at it at 2.7 km/s. The projectile penetrates the first two layers but is successfully contained by the third.

A large series of metallic glass alloys is created, and their wear loss is measured in a pin-on-disk test. Wear is found to vary dramatically among different metallic glasses, with some considerably outperforming the current state-of-the-art crystalline material (most notably Cu₄₃Zr₄₃Al₇Be₇), while others suffered extensive wear loss. Commercially available Vitreloy 1 lost nearly three times as much mass in wear as an alloy prepared in a laboratory setting. No conclusive correlations can be found between any set of mechanical properties (hardness, density, elastic, bulk, or shear modulus, Poisson’s ratio, frictional force, and run-in time) and wear loss. Heat treatments are performed on Vitreloy 1 and Cu₄₃Zr₄₃Al₇Be₇. Anneals near the glass transition temperature are found to increase hardness slightly but decrease wear loss significantly. Crystallization of both alloys leads to dramatic increases in wear resistance. Finally, wear tests under vacuum are performed on the two alloys above: Vitreloy 1 experiences a dramatic decrease in wear loss, while Cu₄₃Zr₄₃Al₇Be₇ shows a moderate increase. Meanwhile, gears are fabricated through three techniques: electrical discharge machining of 1 cm by 3 mm cylinders, semisolid forging, and copper mold suction casting. Initial testing finds the pin-on-disk test to be an accurate predictor of wear performance in gears.
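
For context on why the absence of a hardness correlation is notable: the classical Archard relation (standard tribology background, not a model the thesis asserts) ties wear volume inversely to hardness,

    V = K \, \frac{F s}{H}
    % V: wear volume, F: normal load, s: sliding distance,
    % H: hardness, K: dimensionless wear coefficient

so the reported decoupling of wear loss from hardness suggests the wear coefficient K itself varies strongly between metallic glasses.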

The final chapter explores an exciting technique in the field of additive manufacturing. Laser engineered net shaping (LENS) is a method whereby small amounts of metallic powder are melted by a laser so that shapes and designs can be built layer by layer into a final part. The technique is extended to mixing different powders during melting, so that compositional gradients can be created across a manufactured part. Two compositional gradients are fabricated and characterized. Ti-6Al-4V to pure vanadium was chosen for its combination of high strength and light weight on one end, and high melting point on the other. It was inspected by cross-sectional x-ray diffraction, and only the anticipated phases were present. 304L stainless steel to Invar 36 was created both as a pillar and as a radial gradient; it combines strength and weldability with a material having a near-zero coefficient of thermal expansion. Only the austenite phase is found to be present via x-ray diffraction. The coefficient of thermal expansion is measured for four compositions and is found to be tunable depending on composition.

Relevance: 20.00%

Abstract:

This thesis reports on a method to improve in vitro diagnostic assays that detect immune response, with specific application to HIV-1. The inherent polyclonal diversity of the humoral immune response was addressed by using sequential in situ click chemistry to develop a cocktail of peptide-based capture agents, the components of which were raised against different, representative anti-HIV antibodies that bind to a conserved epitope of the HIV-1 envelope protein gp41. The cocktail was used to detect anti-HIV-1 antibodies from a panel of sera collected from HIV-positive patients, with improved signal-to-noise ratio relative to the gold standard commercial recombinant protein antigen. The capture agents were stable when stored as a powder for two months at temperatures close to 60°C.

Relevance: 20.00%

Abstract:

To obtain accurate information from a structural tool it is necessary to have an understanding of the physical principles which govern the interaction between the probe and the sample under investigation. In this thesis a detailed study of the physical basis for Extended X-ray Absorption Fine Structure (EXAFS) spectroscopy is presented. A single-scattering formalism of EXAFS is introduced which allows a rigorous treatment of the central-atom potential. A final-state interaction formalism of EXAFS is also discussed. Multiple-scattering processes are shown to be significant for systems of certain geometries. The standard single-scattering EXAFS analysis produces erroneous results if the data contain a large multiple-scattering contribution. The effect of thermal vibrations on such multiple-scattering paths is also discussed. From symmetry considerations it is shown that only certain normal modes contribute to the Debye-Waller factor for a particular scattering path. Furthermore, changes in the scattering angles induced by thermal vibrations produce additional EXAFS components called modification factors. These factors are shown to be small for most systems.

A study of the physical basis for the determination of structural information from EXAFS data is also presented. An objective method, based on Gaussian functions, of determining the background absorption and the threshold energy is discussed. In addition, a scheme to determine the nature of the scattering atom in EXAFS experiments is introduced; it rests on the fact that the phase intercept is a measure of the type of scattering atom. A method to determine bond distances is also discussed which does not require the use of model compounds or calculated phase shifts. The physical basis for this method is the absence of a linear term in the scattering phases, which makes it possible to separate these phases from the linear term containing the distance information in the total phase.
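
As a reference point for the phase-based distance method, the conventional single-scattering EXAFS expression (the standard plane-wave form, quoted here as background rather than the thesis's refined formalism) is:

    \chi(k) = \sum_j \frac{N_j S_0^2}{k R_j^2} \, F_j(k) \,
              e^{-2\sigma_j^2 k^2} \, e^{-2R_j/\lambda(k)} \,
              \sin\!\bigl(2 k R_j + \phi_j(k)\bigr)

The total phase of shell j is \Phi_j(k) = 2 k R_j + \phi_j(k); if the scattering phase \phi_j(k) contains no term linear in k, the linear part of \Phi_j(k) is 2 k R_j and yields the bond distance directly, which is the basis of the method described above.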

Relevance: 20.00%

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic-ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy loss -- residual energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ~ 3 and ~0.2 AMU for Z ~ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ~ 3) and ~0.3 AMU (Z ~ 26).
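
Assuming the quoted contributions are independent and add in quadrature (a conventional error model used here only to illustrate the numbers, not the thesis's detailed analysis), the figures above are mutually consistent; for Z ~ 3:

    \sigma_m \approx \sqrt{\sigma_{\mathrm{Landau}}^2 + \sigma_{\mathrm{meas}}^2}
    \quad\Rightarrow\quad
    \sigma_{\mathrm{meas}} \approx \sqrt{(0.1)^2 - (0.07)^2} \approx 0.07~\mathrm{AMU}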

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance: 20.00%

Abstract:

With continuing advances in CMOS technology, feature sizes of modern Silicon chip-sets have gone down drastically over the past decade. In addition to desktops and laptop processors, a vast majority of these chips are also being deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels leading to billions of transistors co-existing on a single chip, it also makes these Silicon ICs more susceptible to variations. A part of these variations can be attributed to the manufacturing process itself, particularly due to the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF or millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high performance RF/mm-wave Silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be primarily attributed to the fact that most cutting edge processes are geared towards digital system implementation and as such there is little model-to-hardware correlation at RF frequencies.

All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error Silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique which attempts to counter the detrimental effects of these variations, thereby improving both performance and yield of chips post fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate it back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To effectively demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured against a variety of operating conditions.
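
A minimal sketch of the sense-evaluate-actuate loop behind self-healing; the interfaces and the exhaustive sweep used here are illustrative stand-ins, not the dissertation's on-chip algorithm:

    # Self-healing loop: measure a figure of merit, sweep actuator settings,
    # and lock in the setting that best restores the target performance.
    from itertools import product

    def self_heal(sense, actuate, knob_ranges, target):
        """sense() -> measured performance; actuate(codes) applies one
        actuator code vector; knob_ranges lists the codes for each knob."""
        best_codes, best_err = None, float("inf")
        for codes in product(*knob_ranges):
            actuate(codes)
            err = abs(sense() - target)
            if err < best_err:
                best_codes, best_err = codes, err
        actuate(best_codes)          # leave the chip at the best setting
        return best_codes, best_err

An on-chip implementation would replace the exhaustive sweep with a smarter optimization algorithm, but the closed sense-actuate structure is the same.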

We demonstrate a high-power mm-wave segmented power-mixer-array-based transmitter architecture that is capable of generating high-speed, non-constant-envelope modulations at higher efficiencies compared to existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements were performed across several chips, showing significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase-synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifter local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in the LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.

Relevance: 20.00%

Abstract:

Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects which employ optical channels have negligible frequency-dependent loss and provide a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on the generation and distribution of multi-phase, high-speed, and high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques have encountered several design challenges in terms of power consumption, skew, and jitter. Injection locking is a promising technique to address these design challenges for gigahertz clocking; however, its small locking range has been a major obstacle to its ubiquitous adoption.

In the first part of this dissertation we describe a wideband injection locking scheme in an LC oscillator. Phase locked loop (PLL) and injection locking elements are combined symbiotically to achieve wide locking range while retaining the simplicity of the latter. This method does not require a phase frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for new locking range is derived. A locking range of 13.4 GHz–17.2 GHz (25%) and an average jitter tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high frequency jitter filtering while retaining the low frequency correlated jitter essential for forwarded clock receivers.
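
The narrow stand-alone locking range that the PLL assist overcomes can be seen from Adler's classic weak-injection result for an LC oscillator (standard background, not the derivation for this architecture):

    \Delta\omega_L \approx \frac{\omega_0}{2Q} \cdot \frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}}

For a high-Q tank and a modest injection ratio this is a small fraction of the center frequency, which is why the 25% range reported above requires the symbiotic loop.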

To improve the locking range of an injection-locked ring oscillator, a quadrature-locked loop (QLL) is introduced. The inherent dynamics of the injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7-7.4 GHz) to 90% (4-11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter rate. The QLL drives an injection-locked oscillator (ILO) at each channel without any repeaters for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps to achieve a reliable data rate of 16-32 Gb/s, and adaptive body biasing aids in maintaining an ultra-low power consumption of 153 pJ/bit.

From the optical receiver we move on to discussing a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, to enable low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modelling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimum for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. The equalization technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.
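
A minimal sketch of the conventional FIR pre-emphasis that the text identifies as sub-optimal for the VCSEL's non-linear response; the tap value and normalization are illustrative:

    # Two-tap FIR pre-emphasis: boost symbol transitions by subtracting a
    # scaled post-cursor, the standard LTI baseline mentioned above.
    import numpy as np

    def fir_preemphasis(bits, post_tap=0.25):
        """y[n] = x[n] - post_tap * x[n-1], on NRZ levels of +/-1."""
        x = 2.0 * np.asarray(bits, dtype=float) - 1.0
        y = np.convolve(x, [1.0, -post_tap])[: len(x)]
        return y / (1.0 + post_tap)   # bound the peak swing to +/-1

    print(fir_preemphasis([0, 1, 1, 1, 0, 0, 1]))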

Relevance: 20.00%

Abstract:

1. The effect of 2,2’-bis-[α-(trimethylammonium)methyl]azobenzene (2BQ), a photoisomerizable competitive antagonist, was studied at the nicotinic acetylcholine receptor of Electrophorus electroplaques using voltage-jump and light-flash techniques.

2. 2BQ, at concentrations below 3 μM, reduced the amplitude of voltage-jump relaxations but had little effect on the voltage-jump relaxation time constants under all experimental conditions. At higher concentrations and voltages more negative than -150 mV, 2BQ caused significant open-channel blockade.

3. Dose-ratio studies showed that the cis and trans isomers of 2BQ have equilibrium binding constants (K) of 0.33 and 1.0 μM, respectively. The binding constants determined for both isomers are independent of temperature, voltage, agonist concentration, and the nature of the agonist.
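
These dose-ratio measurements presumably follow the standard Gaddum/Schild treatment of competitive antagonism (the textbook relation, restated here with K as used above):

    r = \frac{[A]'}{[A]} = 1 + \frac{[B]}{K}

where r is the factor by which the agonist concentration must be raised to restore the control response in the presence of antagonist concentration [B]; plotting (r - 1) against [B] then yields K for each isomer.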

4. In a solution of predominantly cis-2BQ, visible-light flashes led to a net cis→trans isomerization and caused an increase in the agonist-induced current. This increase had at least two exponential components; the larger amplitude component had the same time constant as a subsequent voltage-jump relaxation; the smaller amplitude component was investigated using ultraviolet light flashes.

5. In a solution of predominantly trans-2BQ, UV-light flashes led to a net trans→cis isomerization and caused a net decrease in the agonist-induced current. This effect had at least two exponential components. The smaller and faster component was an increase in agonist-induced current and had a time constant similar to that of the voltage-jump relaxation. The larger component was a slow decrease in the agonist-induced current with a rate constant approximately an order of magnitude less than that of the voltage-jump relaxation. This slow component provided a measure of the rate constant for dissociation of cis-2BQ (k₋ = 60 s^(-1) at 20°C). Simple modelling of the slope of the dose-rate curves yields an association rate constant of 1.6 x 10^(8) M^(-1)s^(-1). This agrees with the association rate constant of 1.8 x 10^(8) M^(-1)s^(-1) estimated from the binding constant (K). The Q_(10) of the dissociation rate constant of cis-2BQ was 3.3 between 6° and 20°C. The rate constants for association and dissociation of cis-2BQ at receptors are independent of voltage, agonist concentration, and the nature of the agonist.
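
The internal consistency of these numbers is easy to check: for simple bimolecular binding, K = k₋/k₊, so with the cis-isomer values quoted above,

    k_+ = \frac{k_-}{K}
        = \frac{60~\mathrm{s^{-1}}}{0.33 \times 10^{-6}~\mathrm{M}}
        \approx 1.8 \times 10^{8}~\mathrm{M^{-1} s^{-1}}

matching the estimate from the binding constant and agreeing well with the 1.6 x 10^(8) M^(-1)s^(-1) value obtained from the dose-rate slopes.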

6. We have measured the molecular rate constants of a competitive antagonist which has roughly the same K as d-tubocurarine but interacts more slowly with the receptor. This leads to the conclusion that curare itself has an association rate constant of 4 x 10^(9) M^(-1)s^(-1), or roughly as fast as possible for an encounter-limited reaction.

Relevance: 20.00%

Abstract:

Understanding the roles of microorganisms in environmental settings by linking phylogenetic identity to metabolic function is a key challenge in delineating their broad-scale impact and functional diversity throughout the biosphere. This work addresses and extends such questions in the context of marine methane seeps, which represent globally relevant conduits for an important greenhouse gas. Through the application and development of a range of culture-independent tools, novel habitats for methanotrophic microbial communities were identified, established settings were characterized in new ways, and potential past conditions amenable to methane-based metabolism were proposed. Biomass abundance and metabolic activity measures – both catabolic and anabolic – demonstrated that authigenic carbonates associated with seep environments retain methanotrophic activity, not only within high-flow seep settings but also in adjacent locations exhibiting no visual evidence of chemosynthetic communities. Across this newly extended habitat, microbial diversity surveys revealed archaeal assemblages that were shaped primarily by seepage activity level and bacterial assemblages influenced more substantially by physical substrate type. In order to reliably measure methane consumption rates in these and other methanotrophic settings, a novel method was developed that traces deuterium atoms from the methane substrate into aqueous medium and uses empirically established scaling factors linked to radiotracer rate techniques to arrive at absolute methane consumption values. Stable isotope probing metaproteomic investigations exposed an array of functional diversity both within and beyond methane oxidation- and sulfate reduction-linked metabolisms, identifying components of each proposed enzyme in both pathways. A core set of commonly occurring unannotated protein products was identified as promising targets for future biochemical investigation. Physicochemical and energetic principles governing anaerobic methane oxidation were incorporated into a reaction transport model that was applied to putative settings on ancient Mars. Many conditions enabled exergonic model reactions, marking the metabolism and its attendant biomarkers as potentially promising targets for future astrobiological investigations. This set of inter-related investigations targeting methane metabolism extends the known and potential habitat of methanotrophic microbial communities and provides a more detailed understanding of their activity and functional diversity.

Relevance: 20.00%

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
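
A minimal sketch of checking the Ingleton inequality for a four-variable entropy vector; the example variables (two fair bits with their XOR and AND) are ours, chosen only to exercise the check, not the thesis's group-characterizable constructions:

    # Compute joint entropies of four discrete random variables and test
    # Ingleton: I(X1;X2) <= I(X1;X2|X3) + I(X1;X2|X4) + I(X3;X4).
    from collections import defaultdict
    from itertools import product
    from math import log2

    def entropy(joint, idx):
        """H of the marginal on coordinate subset idx."""
        marg = defaultdict(float)
        for outcome, p in joint.items():
            marg[tuple(outcome[i] for i in idx)] += p
        return -sum(p * log2(p) for p in marg.values() if p > 0)

    # Joint distribution over (X1, X2, X3, X4): two fair bits, their XOR, AND.
    joint = defaultdict(float)
    for x1, x2 in product([0, 1], repeat=2):
        joint[(x1, x2, x1 ^ x2, x1 & x2)] += 0.25

    H = lambda *idx: entropy(joint, idx)
    I12   = H(0) + H(1) - H(0, 1)                  # I(X1;X2)
    I12_3 = H(0, 2) + H(1, 2) - H(0, 1, 2) - H(2)  # I(X1;X2|X3)
    I12_4 = H(0, 3) + H(1, 3) - H(0, 1, 3) - H(3)  # I(X1;X2|X4)
    I34   = H(2) + H(3) - H(2, 3)                  # I(X3;X4)
    print("Ingleton holds:", I12 <= I12_3 + I12_4 + I34 + 1e-12)

Variables arising from linear codes always satisfy Ingleton, which is why violating entropy vectors, such as those built from suitable small groups, point toward non-linear codes that could outperform linear coding.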

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
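
A minimal sketch of the coherence computation that low-coherence frame design tries to minimize; the random frame here is a generic stand-in, not the group-Fourier construction described above:

    # Coherence of a frame = max |<f_i, f_j>| over distinct unit-norm columns,
    # compared against the Welch lower bound sqrt((n-d)/(d(n-1))).
    import numpy as np

    def coherence(F):
        """F: d x n complex matrix whose columns are the frame vectors."""
        F = F / np.linalg.norm(F, axis=0)   # normalize columns
        G = np.abs(F.conj().T @ F)          # magnitudes of all inner products
        np.fill_diagonal(G, 0.0)            # ignore self inner products
        return G.max()

    d, n = 4, 8
    F = np.random.randn(d, n) + 1j * np.random.randn(d, n)
    print(f"coherence = {coherence(F):.3f}, "
          f"Welch bound = {np.sqrt((n - d) / (d * (n - 1))):.3f}")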

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.

Relevance: 20.00%

Abstract:

Huntington’s disease (HD) is a fatal autosomal dominant neurodegenerative disease. HD has no cure, and patients pass away 10-20 years after the onset of symptoms. The causal mutation for HD is a trinucleotide repeat expansion in exon 1 of the huntingtin gene that leads to a polyglutamine (polyQ) repeat expansion in the N-terminal region of the huntingtin protein. Interestingly, there is a threshold of 37 polyQ repeats below which little or no disease exists and above which patients invariably show symptoms of HD. The huntingtin protein is a 350 kDa protein of unclear function. As the polyQ stretch expands, its propensity to aggregate increases. Models for polyQ toxicity include formation of aggregates that recruit and sequester essential cellular proteins, or altered function producing improper interactions between mutant huntingtin and other proteins. In both models, soluble expanded polyQ may be an intermediate state that can be targeted by potential therapeutics.

In the first study described herein, the conformation of soluble, expanded polyQ was determined to be linear and extended using equilibrium gel filtration and small-angle X-ray scattering. While attempts to purify and crystallize domains of the huntingtin protein were unsuccessful, the aggregation of huntingtin exon 1 was investigated using other biochemical techniques including dynamic light scattering, turbidity analysis, Congo red staining, and thioflavin T fluorescence. Chapter 4 describes crystallization experiments sent to the International Space Station and determination of the X-ray crystal structure of the anti-polyQ Fab MW1. In the final study, multimeric fibronectin type III (FN3) domain proteins were engineered to bind with high avidity to expanded polyQ tracts in mutant huntingtin exon 1. Surface plasmon resonance was used to observe binding of monomeric and multimeric FN3 proteins with huntingtin.

Relevance: 20.00%

Abstract:

In the field of mechanics, it is a long-standing goal to measure quantum behavior in ever larger and more massive objects. It may now seem like an obvious conclusion, but until recently it was not clear whether a macroscopic mechanical resonator -- built up from nearly 10^(13) atoms -- could be fully described as an ideal quantum harmonic oscillator. With recent advances in the fields of opto- and electro-mechanics, such systems offer a unique advantage in probing the quantum noise properties of macroscopic electrical and mechanical devices, properties that ultimately stem from Heisenberg's uncertainty relations. Given the rapid progress in device capabilities, landmark results of quantum optics are now being extended into the regime of macroscopic mechanics.

The purpose of this dissertation is to describe three experiments -- motional sideband asymmetry, back-action evasion (BAE) detection, and mechanical squeezing -- that are directly related to the topic of measuring quantum noise with mechanical detection. These measurements all share three pertinent features: they explore quantum noise properties in a macroscopic electromechanical device driven by a minimum of two microwave drive tones, hence the title of this work: "Quantum electromechanics with two tone drive".

In the following, we will first introduce a quantum input-output framework that we use to model the electromechanical interaction and capture subtleties related to interpreting different microwave noise detection techniques. Next, we will discuss the fabrication and measurement details that we use to cool and probe these devices with coherent and incoherent microwave drive signals. Having developed our tools for signal modeling and detection, we explore the three-wave mixing interaction between the microwave and mechanical modes, whereby mechanical motion generates motional sidebands corresponding to up-down frequency conversions of microwave photons. Because of quantum vacuum noise, the rates of these processes are expected to be unequal. We will discuss the measurement and interpretation of this asymmetric motional noise in an electromechanical device cooled near the ground state of motion.
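
The expected size of this asymmetry follows from the standard quantum-optics result the experiment tests (background, with n̄ the mean phonon occupation): the phonon-absorbing (up-converting) sideband scales as n̄ while the phonon-emitting (down-converting) sideband scales as n̄ + 1,

    \Gamma_{\uparrow} \propto \bar{n}, \qquad
    \Gamma_{\downarrow} \propto \bar{n} + 1
    \quad\Rightarrow\quad
    \bar{n} = \frac{R}{1 - R}, \qquad R \equiv \frac{\Gamma_{\uparrow}}{\Gamma_{\downarrow}}

so the asymmetry is pronounced near the ground state and washes out in the classical limit n̄ >> 1.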

Next, we consider an overlapped two tone pump configuration that produces a time-modulated electromechanical interaction. By careful control of this drive field, we report a quantum non-demolition (QND) measurement of a single motional quadrature. Incorporating a second pair of drive tones, we directly measure the measurement back-action associated with both classical and quantum noise of the microwave cavity. Lastly, we slightly modify our drive scheme to generate quantum squeezing in a macroscopic mechanical resonator. Here, we will focus on data analysis techniques that we use to estimate the quadrature occupations. We incorporate Bayesian spectrum fitting and parameter estimation that serve as powerful tools for incorporating many known sources of measurement and fit error that are unavoidable in such work.
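
The QND quadrature measurement mentioned above rests on the canonical two-tone back-action-evasion idea (our summary of the standard scheme, not the device-specific implementation): writing the motion in a frame rotating at the mechanical frequency Ω,

    x(t) = X_1 \cos(\Omega t) + X_2 \sin(\Omega t)

pumping the cavity symmetrically at \omega_c \pm \Omega couples the measurement to X₁ alone; since X₁ commutes with itself at all times it can be monitored without a back-action limit, the disturbance being shunted onto the unmeasured X₂.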