32 results for powerful owl

in CaltechTHESIS


Relevance:

10.00%

Publisher:

Abstract:

DNA recognition is an essential biological process responsible for the regulation of cellular functions including protein synthesis and cell division and is implicated in the mechanism of action of some anticancer drugs. Studies directed towards defining the elements responsible for sequence specific DNA recognition through the study of the interactions of synthetic organic ligands with DNA are described.

DNA recognition by poly-N-methylpyrrolecarboxamides was studied by the synthesis and characterization of a series of molecules where the number of contiguous N-methylpyrrolecarboxamide units was increased from 2 to 9. The effect of this incremental change in structure on DNA recognition has been investigated at base pair resolution using affinity cleaving and MPE•Fe(II) footprinting techniques. These studies led to a quantitative relationship between the number of amides in the molecule and the DNA binding site size. This relationship is called the n + 1 rule and it states that a poly-N-methylpyrrolecarboxamide molecule with n amides will bind n + 1 base pairs of DNA. This rule is consistent with a model where the carboxamides of these compounds form three-center bridging hydrogen bonds between adjacent base pairs on opposite strands of the helix. The poly-N-methylpyrrolecarboxamide recognition element was found to preferentially bind poly dA•poly dT stretches; however, both binding site selection and orientation were found to be affected by flanking sequences. Cleavage of large DNA is also described.
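The n + 1 rule is simple enough to state as code. The helper below is purely illustrative (the function name is invented, not from the thesis):

```python
def binding_site_size(n_amides: int) -> int:
    """n + 1 rule: a poly-N-methylpyrrolecarboxamide with n amides
    binds n + 1 base pairs of DNA (hypothetical helper for illustration)."""
    return n_amides + 1

# the series studied ranged from 2 to 9 contiguous carboxamide units
sizes = {n: binding_site_size(n) for n in range(2, 10)}
```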

One approach towards the design of molecules that bind large sequences of double-helical DNA sequence-specifically is to couple DNA-binding subunits of similar or diverse base pair specificity. Bis-EDTA-distamycin-fumaramide (BEDF) is an octaamide dimer of two tri-N-methylpyrrolecarboxamide subunits linked by fumaramide. DNA recognition by BEDF was compared to P7E, an octaamide molecule containing seven consecutive pyrroles. These two compounds were found to recognize the same sites on pBR322 with approximately the same affinities, demonstrating that fumaramide is an effective linking element for N-methylpyrrolecarboxamide recognition subunits. Further studies involved the synthesis and characterization of a trimer of tetra-N-methylpyrrolecarboxamide subunits linked by β-alanine ((P4)_(3)E). This trimerization produced a molecule which is capable of recognizing 16 base pairs of A•T DNA, more than a turn and a half of the DNA helix.

DNA footprinting is a powerful direct method for determining the binding sites of proteins and small molecules on heterogeneous DNA. It was found that attachment of EDTA•Fe(II) to spermine creates a molecule, SE•Fe(II), which binds and cleaves DNA sequence neutrally. This lack of specificity provides evidence that at the nucleotide level polyamines recognize heterogeneous DNA independent of sequence and allows SE•Fe(II) to be used as a footprinting reagent. SE•Fe(II) was compared with two other small molecule footprinting reagents, EDTA•Fe(II) and MPE•Fe(II).


The determination of the energy levels and the probabilities of transition between them, by the formal analysis of observed electronic, vibrational, and rotational band structures, forms the direct goal of all investigations of molecular spectra, but the significance of such data lies in the possibility of relating them theoretically to more concrete properties of molecules and the radiation field. From the well-developed electronic spectra of diatomic molecules, it has been possible, with the aid of the non-relativistic quantum mechanics, to obtain accurate moments of inertia, molecular potential functions, electronic structures, and detailed information concerning the coupling of spin and orbital angular momenta with the angular momentum of nuclear rotation. The silicon fluoride molecule has been investigated in this laboratory, and is found to emit bands whose vibrational and rotational structures can be analyzed in this detailed fashion.
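As an illustration of how a rotational analysis yields a moment of inertia and a bond distance, the sketch below applies the standard rigid-rotor relation I = h/(8π²cB) to a diatomic molecule; the CO rotational constant used is a textbook value chosen purely for illustration, not a result from this work:

```python
import math

h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e10         # speed of light in cm/s, since B is in cm^-1
u = 1.66053906660e-27     # atomic mass unit, kg

# illustrative example: 12C16O, B ~ 1.9313 cm^-1 (standard literature value)
B = 1.9313
m1, m2 = 12.000, 15.9949
mu = m1 * m2 / (m1 + m2) * u          # reduced mass, kg

I = h / (8 * math.pi**2 * c * B)      # moment of inertia, kg m^2
r = math.sqrt(I / mu)                 # internuclear distance, m

print(f"r(CO) = {r * 1e10:.3f} Angstrom")
```

The result, about 1.13 Å, matches the accepted CO bond length, which is the sense in which "accurate moments of inertia" translate directly into nuclear configurations.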

Like silicon fluoride, however, the great majority of diatomic molecules are formed only under the unusual conditions of electrical discharge, or in high-temperature furnaces, so that although their spectra are of great theoretical interest, the chemist is eager to proceed to a study of polyatomic molecules, in the hope that their more practically interesting structures might also be determined with the accuracy and assurance which characterize the spectroscopic determinations of the constants of diatomic molecules. Some progress has been made in the determination of molecular potential functions from the vibrational term values deduced from Raman and infrared spectra, but in no case can the calculations be carried out with great generality, since the number of known term values is always small compared with the total number of potential constants in even so restricted a potential function as the simple quadratic type. For the determination of nuclear configurations and bond distances, however, a knowledge of the rotational terms is required. The spectra of about twelve of the simpler polyatomic molecules have been subjected to rotational analyses, and a number of bond distances are known with considerable accuracy, yet the number of molecules whose rotational fine structure has been resolved even with the most powerful instruments is small. Consequently, it was felt desirable to investigate the spectra of a number of other promising polyatomic molecules, with the purpose of carrying out complete rotational analyses of all resolvable bands, and ascertaining the value of the unresolved band envelopes in determining the structures of such molecules, in the cases in which resolution is no longer possible.
Although many of the compounds investigated absorbed too feebly to be photographed under high dispersion with the present infrared sensitizations, the location and relative intensities of their bands, determined by low dispersion measurements, will be reported in the hope that these compounds may be reinvestigated in the future with improved techniques.


Using neuromorphic analog VLSI techniques for modeling large neural systems has several advantages over software techniques. Because they are built from the massively parallel analog circuit arrays that are ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to the constraints faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built which simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking brainstem and cortical circuits that control eye movements. In this framework, a visually-triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, which allows non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth pursuit eye movements to maintain the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.


The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general, optimization algorithm is proposed - called Relaxation Expectation Maximization (REM) - that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal, likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
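REM itself is specific to this work, but the expectation-maximization iteration it builds on can be sketched for the simplest latent variable model, a two-component one-dimensional Gaussian mixture. All data and initial values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic data: two well-separated 1-D Gaussian clusters (illustrative)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])

# initial guesses for mixing weights, means, and variances
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

Each E-step/M-step pair is guaranteed not to decrease the likelihood, but from a poor initialization the loop can stall at a local maximum, which is exactly the failure mode REM is designed to alleviate.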

The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.


A novel spectroscopy of trapped ions is proposed which will bring single-ion detection sensitivity to the observation of magnetic resonance spectra. The approaches developed here are aimed at resolving one of the fundamental problems of molecular spectroscopy, the apparent incompatibility in existing techniques between high information content (and therefore good species discrimination) and high sensitivity. Methods for studying both electron spin resonance (ESR) and nuclear magnetic resonance (NMR) are designed. They assume established methods for trapping ions in high magnetic field and observing the trapping frequencies with high resolution (<1 Hz) and sensitivity (single ion) by electrical means. The introduction of a magnetic bottle field gradient couples the spin and spatial motions together and leads to a small spin-dependent force on the ion, which has been exploited by Dehmelt to observe directly the perturbation of the ground-state electron's axial frequency by its spin magnetic moment.
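The scale of the spin-dependent shift exploited in Dehmelt's experiment can be estimated from the standard continuous Stern-Gerlach relation: a bottle term μB₂z² added to the axial potential shifts the axial frequency by Δν_z ≈ μB₂/(4π²mν_z). The bottle strength and axial frequency below are assumed order-of-magnitude values for the electron case, not numbers from this thesis:

```python
import math

mu_B = 9.2740100783e-24   # Bohr magneton, J/T (electron spin moment ~ mu_B)
m_e = 9.1093837015e-31    # electron mass, kg

# illustrative, assumed parameters of roughly the right scale
B2 = 150.0                # magnetic bottle strength, T/m^2 (assumed)
nu_z = 60e6               # unperturbed axial frequency, Hz (assumed)

# bottle adds mu*B2*z^2 to the axial potential, so
# omega_z' = sqrt(omega_z^2 + 2*mu*B2/m) ~ omega_z + mu*B2/(m*omega_z)
delta_nu = mu_B * B2 / (4 * math.pi**2 * m_e * nu_z)

print(f"spin-dependent axial shift ~ {delta_nu:.2f} Hz")
```

The shift comes out below 1 Hz, which is why sub-hertz resolution of the trapping frequencies is a prerequisite, and why the smaller nuclear moments of molecular ions demand the amplification schemes developed in this work.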

A series of fundamental innovations is described in order to extend magnetic resonance to the higher masses of molecular ions (100 amu ≈ 2 × 10^5 electron masses) and smaller magnetic moments (nuclear moments ≈ 10^(-3) of the electron moment). First, it is demonstrated how time-domain trapping frequency observations before and after magnetic resonance can be used to make cooling of the particle to its ground state unnecessary. Second, adiabatic cycling of the magnetic bottle off between detection periods is shown to be practical and to allow high-resolution magnetic resonance to be encoded pointwise as the presence or absence of trapping frequency shifts. Third, methods of inducing spin-dependent work on the ion orbits with magnetic field gradients and Larmor frequency irradiation are proposed which greatly amplify the attainable shifts in trapping frequency.

The dissertation explores the basic concepts behind ion trapping, adopting a variety of classical, semiclassical, numerical, and quantum mechanical approaches to derive spin-dependent effects, design experimental sequences, and corroborate results from one approach with those from another. The first proposal presented builds on Dehmelt's experiment by combining a "before and after" detection sequence with novel signal processing to reveal ESR spectra. A more powerful technique for ESR is then designed which uses axially synchronized spin transitions to perform spin-dependent work in the presence of a magnetic bottle, which also converts axial amplitude changes into cyclotron frequency shifts. A third use of the magnetic bottle is to selectively trap ions with small initial kinetic energy. A dechirping algorithm corrects for undesired frequency shifts associated with damping by the measurement process.

The most general approach presented is spin-locked internally resonant ion cyclotron excitation, a true continuous Stern-Gerlach effect. A magnetic field gradient modulated at both the Larmor and cyclotron frequencies is devised which leads to cyclotron acceleration proportional to the transverse magnetic moment of a coherent state of the particle and radiation field. A preferred method of using this to observe NMR as an axial frequency shift is described in detail. In the course of this derivation, a new quantum mechanical description of ion cyclotron resonance is presented which is easily combined with spin degrees of freedom to provide a full description of the proposals.

Practical, technical, and experimental issues surrounding the feasibility of the proposals are addressed throughout the dissertation. Numerical ion trajectory simulations and analytical models are used to predict the effectiveness of the new designs as well as their sensitivity and resolution. These checks on the methods proposed provide convincing evidence of their promise in extending the wealth of magnetic resonance information to the study of collisionless ions via single-ion spectroscopy.


Nucleic acids are most commonly associated with the genetic code, transcription and gene expression. Recently, interest has grown in engineering nucleic acids for biological applications such as controlling or detecting gene expression. The natural presence and functionality of nucleic acids within living organisms coupled with their thermodynamic properties of base-pairing make them ideal for interfacing (and possibly altering) biological systems. We use engineered small conditional RNA or DNA (scRNA, scDNA, respectively) molecules to control and detect gene expression. Three novel systems are presented: two for conditional down-regulation of gene expression via RNA interference (RNAi) and a third system for simultaneous sensitive detection of multiple RNAs using labeled scRNAs.

RNAi is a powerful tool to study genetic circuits by knocking down a gene of interest. RNAi executes the logic: If gene Y is detected, silence gene Y. The fact that detection and silencing are restricted to the same gene means that RNAi is constitutively on. This poses a significant limitation when spatiotemporal control is needed. In this work, we engineered small nucleic acid molecules that execute the logic: If mRNA X is detected, form a Dicer substrate that targets independent mRNA Y for silencing. This is a step towards implementing the logic of conditional RNAi: If gene X is detected, silence gene Y. We use scRNAs and scDNAs to engineer signal transduction cascades that produce an RNAi effector molecule in response to hybridization to a nucleic acid target X. The first mechanism is solely based on hybridization cascades and uses scRNAs to produce a double-stranded RNA (dsRNA) Dicer substrate against target gene Y. The second mechanism is based on hybridization of scDNAs to detect a nucleic acid target and produce a template for transcription of a short hairpin RNA (shRNA) Dicer substrate against target gene Y. Test-tube studies for both mechanisms demonstrate that the output Dicer substrate is produced predominantly in the presence of a correct input target and is cleaved by Dicer to produce a small interfering RNA (siRNA). Both output products can lead to gene knockdown in tissue culture. To date, signal transduction is not observed in cells; possible reasons are explored.

Signal transduction cascades are composed of multiple scRNAs (or scDNAs). The need to study multiple molecules simultaneously has motivated the development of a highly sensitive method for multiplexed northern blots. The core technology of our system is the utilization of a hybridization chain reaction (HCR) of scRNAs as the detection signal for a northern blot. To achieve multiplexing (simultaneous detection of multiple genes), we use fluorescently tagged scRNAs. Moreover, by using radioactive labeling of scRNAs, the system exhibits a five-fold increase, compared to the literature, in detection sensitivity. Sensitive multiplexed northern blot detection provides an avenue for exploring the fate of scRNAs and scDNAs in tissue culture.


Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
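The benefit of matching the code to the bias can be seen with the simplest asymmetric example, a length-n phase-flip repetition code: the dominant dephasing errors are suppressed with code length, while the rare bit-flip errors pass through unprotected. The toy calculation below (illustrative rates, not the codes analyzed in the chapter) makes the trade-off concrete:

```python
from math import comb

def logical_z_error(n: int, p_z: float) -> float:
    """Probability that a majority of n qubits dephase, i.e. the logical
    dephasing error of a length-n repetition code (toy model assuming
    independent errors and perfect syndrome extraction)."""
    return sum(comb(n, k) * p_z**k * (1 - p_z)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# biased noise: dephasing 100x more likely than bit flips (illustrative)
p_z, p_x = 1e-2, 1e-4
for n in (1, 3, 5):
    # Z errors shrink rapidly with n; X errors only accumulate
    print(n, logical_z_error(n, p_z), 1 - (1 - p_x)**n)
```

With these rates, three qubits already push the logical dephasing error below the physical bit-flip floor, which is the intuition behind spending asymmetric code distance on the dominant error type.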

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates, and a second rate for errors in the distilled states which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and how quickly states converge to that limit.


Blazars are active galaxies with a jet closely oriented to our line of sight. They are powerful, variable emitters from radio to gamma-ray wavelengths. Although the general picture of synchrotron emission at low energies and inverse Compton at high energies is well established, important aspects of blazars are not well understood. In particular, the location of the gamma-ray emission region is not clearly established, with some theories favoring a location close to the central engine, while others place it at parsec scales in the radio jet.

We developed a program to locate the gamma-ray emission site in blazars, through the study of correlated variations between their gamma-ray and radio-wave emission. Correlated variations are expected when there is a relation between emission processes at both bands, while delays tell us about the relative location of their energy generation zones. Monitoring at 15 GHz using the Owens Valley Radio Observatory 40 meter telescope started in mid-2007. The program monitors 1593 blazars twice per week, including all blazars detected by the Fermi Gamma-ray Space Telescope (Fermi) north of -20 degrees declination. This program complements the continuous monitoring of gamma-rays by Fermi.

Three-year-long gamma-ray light curves for bright Fermi blazars are cross-correlated with four years of radio monitoring. The significance of cross-correlation peaks is investigated using simulations that account for the uneven sampling and noise properties of the light curves, which are modeled as red-noise processes with a simple power-law power spectral density. We found that out of 86 sources with high quality data, only three show significant correlations (AO 0235+164, B2 2308+34 and PKS 1502+106). Additionally, we find a significant correlation for Mrk 421 when including the strong gamma-ray/radio flare of late 2012. In all four cases radio variations lag gamma-ray variations, suggesting that the gamma-ray emission originates upstream of the radio emission. For PKS 1502+106 we locate the gamma-ray emission site parsecs away from the central engine, thus disfavoring the model of Blandford and Levinson (1995), while other cases are inconclusive. These findings show that continuous monitoring over long time periods is required to understand the cross-correlation between gamma-ray and radio-wave variability in most blazars.
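The significance procedure described above can be sketched in simplified form: generate uncorrelated pairs of power-law (red-noise) light curves, record the peak cross-correlation of each pair, and compare the observed peak against that null distribution. The toy below is evenly sampled, with red noise generated in the frequency domain in the spirit of Timmer & König; the actual analysis additionally models the uneven sampling and measurement noise of the real light curves:

```python
import numpy as np

rng = np.random.default_rng(1)

def red_noise(n, beta=2.0):
    """Unit-variance time series with a power-law (1/f^beta) power
    spectrum, built from random phases in the frequency domain."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    x = np.fft.irfft(amp * np.exp(1j * phases), n)
    return (x - x.mean()) / x.std()

def max_xcorr(a, b):
    """Peak of the normalized cross-correlation over all lags."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.max(np.correlate(a, b, mode="full")) / len(a)

n = 512
gamma_lc, radio_lc = red_noise(n), red_noise(n)
observed = max_xcorr(gamma_lc, radio_lc)

# null distribution: peaks from independent, uncorrelated red-noise pairs
null = [max_xcorr(red_noise(n), red_noise(n)) for _ in range(200)]
p_value = np.mean([v >= observed for v in null])
```

Because red noise produces large spurious correlation peaks by itself, the null distribution is broad, which is why naive significance estimates overstate gamma-ray/radio correlations and why only a handful of the 86 sources pass the test.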


Waking up from a dreamless sleep, I open my eyes, recognize my wife’s face and am filled with joy. In this thesis, I used functional Magnetic Resonance Imaging (fMRI) to gain insights into the mechanisms involved in this seemingly simple daily occurrence, which poses at least three great challenges to neuroscience: how does conscious experience arise from the activity of the brain? How does the brain process visual input to the point of recognizing individual faces? How does the brain store semantic knowledge about people that we know? To start tackling the first question, I studied the neural correlates of unconscious processing of invisible faces. I was unable to image significant activations related to the processing of completely invisible faces, despite existing reports in the literature. I thus moved on to the next question and studied how recognition of a familiar person was achieved in the brain; I focused on finding invariant representations of person identity – representations that would be activated any time we think of a familiar person, read their name, see their picture, hear them talk, etc. There again, I could not find significant evidence for such representations with fMRI, even in regions where they had previously been found with single unit recordings in human patients (the Jennifer Aniston neurons). Faced with these null outcomes, the scope of my investigations eventually turned back towards the technique that I had been using, fMRI, and the recently praised analytical tools that I had been trusting, Multivariate Pattern Analysis. After a mostly disappointing attempt at replicating a strong single unit finding of a categorical response to animals in the right human amygdala with fMRI, I put fMRI decoding to an ultimate test with a unique dataset acquired in the macaque monkey. 
There I showed a dissociation between the ability of fMRI to pick up face viewpoint information and its inability to pick up face identity information, which I mostly traced back to the poor clustering of identity selective units. Though fMRI decoding is a powerful new analytical tool, it does not rid fMRI of its inherent limitations as a hemodynamics-based measure.


RNA interference (RNAi) is a powerful biological pathway allowing for sequence-specific knockdown of any gene of interest. While RNAi is a proven tool for probing gene function in biological circuits, it is limited by being constitutively ON and executes the logical operation: silence gene Y. To provide greater control over post-transcriptional gene silencing, we propose engineering a biological logic gate to implement “conditional RNAi.” Such a logic gate would silence gene Y only upon the expression of gene X, a completely unrelated gene, executing the logic: if gene X is transcribed, silence independent gene Y. Silencing of gene Y could be confined to a specific time and/or tissue by appropriately selecting gene X.

To implement the logic of conditional RNAi, we present the design and experimental validation of three nucleic acid self-assembly mechanisms which detect a sub-sequence of mRNA X and produce a Dicer substrate specific to gene Y. We introduce small conditional RNAs (scRNAs) to execute the signal transduction under isothermal conditions. scRNAs are small RNAs which change conformation, leading to both shape and sequence signal transduction, in response to hybridization to an input nucleic acid target. While all three conditional RNAi mechanisms execute the same logical operation, they explore various design alternatives for nucleic acid self-assembly pathways, including the use of duplex and monomer scRNAs, stable versus metastable reactants, multiple methods of nucleation, and 3-way and 4-way branch migration.

We demonstrate the isothermal execution of the conditional RNAi mechanisms in a test tube with recombinant Dicer. These mechanisms execute the logic: if mRNA X is detected, produce a Dicer substrate targeting independent mRNA Y. Only the final Dicer substrate, not the scRNA reactants or intermediates, is efficiently processed by Dicer. Additional work in human whole-cell extracts and a model tissue-culture system delves into both the promise and challenge of implementing conditional RNAi in vivo.


Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.
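Kirchhoff (diffraction-summation) migration back-projects each trace sample onto every image point consistent with its travel time. The constant-velocity, two-dimensional sketch below is a bare-bones illustration of the principle, not the three-dimensional multi-offset implementation used in the study; all numerical values are invented:

```python
import numpy as np

def kirchhoff_migrate(data, src_x, rec_x, dt, v, xs, zs):
    """Diffraction-summation (Kirchhoff) migration, constant velocity.
    data[i, j] is trace j at time i*dt; src_x/rec_x give the source and
    receiver position of each trace; xs, zs define the image grid."""
    nt, ntr = data.shape
    image = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            for j in range(ntr):
                # two-way travel time: source -> image point -> receiver
                t = (np.hypot(x - src_x[j], z) + np.hypot(x - rec_x[j], z)) / v
                it = int(round(t / dt))
                if it < nt:
                    image[iz, ix] += data[it, j]
    return image

# synthetic test: a point scatterer at (x, z) = (500 m, 300 m) recorded
# on 11 zero-offset traces with v = 2000 m/s (all values invented)
v, dt = 2000.0, 0.001
src = np.arange(0.0, 1001.0, 100.0)
data = np.zeros((1000, len(src)))
for j in range(len(src)):
    t = 2.0 * np.hypot(src[j] - 500.0, 300.0) / v
    data[int(round(t / dt)), j] = 1.0

img = kirchhoff_migrate(data, src, src, dt, v,
                        xs=np.array([400.0, 500.0, 600.0]),
                        zs=np.array([200.0, 300.0, 400.0]))
```

Each impulse lies on the scatterer's diffraction hyperbola, so the sums stack constructively only at the true scatterer position (image cell (1, 1) on this grid), which is the back-projection behavior the synthetic migration tests in the study were designed to verify.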

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that high-quality records could be acquired in areas having a deeper, older sedimentary cover, prompting a re-definition of the geologic objectives. Long-offset stationary arrays were designed to provide reversed, wider-angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad-scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.


The use of pseudoephedrine as a practical chiral auxiliary for asymmetric synthesis is described. Both enantiomers of pseudoephedrine are inexpensive commodity chemicals and can be N-acylated in high yields to form tertiary amides. In the presence of lithium chloride, the enolates of the corresponding pseudoephedrine amides undergo highly diastereoselective alkylations with a wide range of alkyl halides to afford α-substituted products in high yields. These products can then be transformed in a single operation into highly enantiomerically enriched carboxylic acids, alcohols, and aldehydes. Lithium amidotrihydroborate (LAB) is shown to be a powerful reductant for the selective reduction of tertiary amides in general and pseudoephedrine amides in particular to form primary alcohols.


Understanding the mechanisms of enzymes is crucial for our understanding of their role in biology and for designing methods to perturb or harness their activities for medical treatments, industrial processes, or biological engineering. One aspect of enzymes that makes them difficult to fully understand is that they are in constant motion, and these motions and the conformations adopted throughout these transitions often play a role in their function.

Traditionally, it has been difficult to isolate a protein in a particular conformation to determine what role each form plays in the reaction or biology of that enzyme. A new technology, computational protein design, makes the isolation of various conformations possible, and therefore is an extremely powerful tool in enabling a fuller understanding of the role a protein conformation plays in various biological processes.

One such protein that undergoes large structural shifts between its different activities is human type II transglutaminase (TG2). TG2 is an enzyme that exists in two dramatically different conformational states: (1) an open, extended form, adopted upon the binding of calcium, and (2) a closed, compact form, adopted upon the binding of GTP or GDP. TG2 possesses two separate active sites, each with a radically different activity. The open, calcium-bound form of TG2 is believed to act as a transglutaminase, catalyzing the formation of an isopeptide bond between the sidechain of a peptide-bound glutamine and a primary amine. The closed, GTP-bound conformation is believed to act as a GTPase. TG2 is also implicated in a variety of biological and pathological processes.

To better understand the effects of TG2’s conformations on its activities and pathological roles, we set out to design variants of TG2 locked in either the closed or the open conformation. We were able to design open-locked and closed-biased TG2 variants, and used these designs to unseat the current understanding of TG2’s activities and their associated conformations and to explore each conformation’s role in celiac disease models. This work also helped explain earlier confusing results regarding this enzyme and its activities. The new model for TG2 activity has immense implications for our understanding of its functional capabilities in various environments, and for identifying which conformations must be inhibited when designing new drugs for diseases in which TG2’s activities are believed to elicit pathological effects.

Resumo:

This thesis summarizes the application of conventional and modern electron paramagnetic resonance (EPR) techniques to establish proximity relationships between paramagnetic metal centers in metalloproteins, and between metal centers and magnetic ligand nuclei, in two important and timely membrane proteins: succinate:ubiquinone oxidoreductase (SQR) from Paracoccus denitrificans and particulate methane monooxygenase (pMMO) from Methylococcus capsulatus. Such proximity relationships are thought to be critical to the biological function and the associated biochemistry mediated by the metal centers in these proteins. A mechanistic understanding of biological function relies heavily on structure-function relationships and on knowledge of how the molecular structure and electronic properties of the metal centers influence reactivity in metalloenzymes. EPR spectroscopy has proven to be one of the most powerful techniques for obtaining information about interactions between metal centers as well as for defining ligand structures. SQR is an electron transport enzyme wherein the substrates and the organic and metallic cofactors are held relatively far apart. Here, the proximity relationships of the metallic cofactors were studied through their weak spin-spin interactions by means of EPR power saturation and electron spin-lattice relaxation (T_1) measurements, with the enzyme poised at designated reduction levels. Analysis of the electron T_1 measurements for the S-3 center when the b-heme is paramagnetic led to a detailed treatment of the dipolar interactions and a distance determination between the two interacting metal centers. Studies of the ligand environment of the metal centers by electron spin echo envelope modulation (ESEEM) spectroscopy resulted in the identification of peptide nitrogens as coupled nuclei in the environment of the S-1 and S-3 centers.
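The distance determinations described above rest on the strong distance dependence of the electron-electron dipolar interaction. As a hedged illustration (a generic point-dipole sketch for two g ≈ 2 spins, not the thesis's T_1-based analysis), the commonly used relation ν_dd ≈ 52.04 MHz·nm³ / r³ can be inverted to estimate an inter-spin distance:

```python
G_E = 2.0023  # free-electron g value

def dipolar_coupling_mhz(r_nm, g1=G_E, g2=G_E):
    """Perpendicular point-dipole coupling (MHz) between two spins r_nm apart.

    52.04 MHz corresponds to two free-electron spins separated by 1 nm.
    """
    return 52.04 * (g1 / G_E) * (g2 / G_E) / r_nm**3

def interspin_distance_nm(nu_mhz):
    """Invert the point-dipole relation for two g ~ 2 spins."""
    return (52.04 / nu_mhz) ** (1.0 / 3.0)
```

The r⁻³ dependence is what makes dipolar measurements so sensitive a ruler: doubling the distance weakens the coupling eightfold.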

Finally, an EPR model was developed to describe the ferromagnetically coupled trinuclear copper clusters in pMMO when the enzyme is oxidized. The Cu(II) ions in these clusters appear to be strongly exchange coupled, and the EPR is consistent with equilateral triangular arrangements of type 2 copper ions. These results offer the first glimpse of the magneto-structural correlations for a trinuclear copper cluster of this type, which, until the work on pMMO, had no precedent in the metalloprotein literature. Such trinuclear copper clusters are rare even in synthetic models.

Resumo:

Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.

This thesis focuses on how learning at both the sensor and network levels can provide scalable techniques for data collection and event detection. First, the thesis considers the abstract problem of distributed algorithms for data collection, and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection, and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
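The sensor-selection guarantees mentioned in the final abstract typically come from the classic greedy algorithm for monotone submodular objectives such as coverage, which is within a factor (1 − 1/e) of optimal. A minimal sketch, with hypothetical sensor names and coverage sets (not from the Caltech Community Seismic Network):

```python
def greedy_select(coverage, k):
    """Greedily pick k sensors, each time maximizing marginal coverage gain.

    coverage: dict mapping sensor id -> set of regions that sensor observes.
    For a monotone submodular objective like set coverage, this greedy rule
    achieves at least (1 - 1/e) of the optimal k-sensor coverage.
    """
    selected, covered = [], set()
    for _ in range(k):
        candidates = [s for s in coverage if s not in selected]
        # Marginal gain of s is the number of not-yet-covered regions it adds.
        best = max(candidates, key=lambda s: len(coverage[s] - covered))
        selected.append(best)
        covered |= coverage[best]
    return selected, covered

# Hypothetical coverage sets for three community sensors:
cov = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}}
picked, regions = greedy_select(cov, 2)  # picks "c" (4 new), then "a" (3 new)
```

The distributed variant described in the abstract would run this marginal-gain logic online across many devices rather than centrally, but the submodularity argument behind the guarantee is the same.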