32 results for G-Functions


Relevance: 30.00%

Publisher:

Abstract:

The applicability of the white-noise method to the identification of a nonlinear system is investigated. Subsequently, the method is applied to certain vertebrate retinal neuronal systems and nonlinear, dynamic transfer functions are derived which describe quantitatively the information transformations starting with the light-pattern stimulus and culminating in the ganglion response which constitutes the visually-derived input to the brain. The retina of the catfish, Ictalurus punctatus, is used for the experiments.

The Wiener formulation of the white-noise theory is shown to be impractical and difficult to apply to a physical system. A different formulation based on cross-correlation techniques is shown to be applicable to a wide range of physical systems provided certain considerations are taken into account. These considerations include the time-invariance of the system, an optimum choice of the white-noise input bandwidth, nonlinearities that allow a representation in terms of a small number of characterizing kernels, the memory of the system, and the temporal length of the characterizing experiment. Error analysis of the kernel estimates is made taking into account various sources of error, such as noise at the input and output, the bandwidth of the white-noise input, and the truncation of the Gaussian by the apparatus.
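
The cross-correlation estimate of the first-order kernel can be sketched in a few lines. This is a toy discrete-time illustration (the exponential kernel, the mild quadratic nonlinearity, and the record length below are invented for the sketch, not taken from the experiments): for white-noise input of power P, the Lee–Schetzen formula gives h1(τ) ≈ E[y(t) x(t−τ)] / P.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: first-order kernel is a decaying exponential, followed by a
# mild static nonlinearity (a Wiener cascade).
taus = np.arange(30)
h_true = np.exp(-taus / 5.0)

N = 200_000
x = rng.normal(0.0, 1.0, N)           # discrete-time white noise, power P = 1
lin = np.convolve(x, h_true)[:N]      # linear stage
y = lin + 0.1 * lin**2                # mild nonlinearity

# Lee-Schetzen estimate: h1(tau) ~ E[y(t) x(t - tau)] / P.  The quadratic
# term drops out in expectation (odd Gaussian moment), so the estimate
# recovers the linear kernel from input-output data alone.
P = np.var(x)
h_est = np.array([np.mean(y[tau:] * x[:N - tau]) for tau in taus]) / P
```

With a 200,000-sample record the estimate tracks the true kernel to within a few percent; shorter records trade accuracy for experiment length, which is exactly the trade-off discussed above.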

Nonlinear transfer functions are obtained, as sets of kernels, for several neuronal systems: Light → Receptors, Light → Horizontal, Horizontal → Ganglion, Light → Ganglion and Light → ERG. The derived models can predict, with reasonable accuracy, the system response to any input. Comparison of model and physical system performance showed close agreement for a great number of tests, the most stringent of which is comparison of their responses to a white-noise input. Other tests include step and sine responses and power spectra.

Many functional traits are revealed by these models. Some are: (a) the receptor and horizontal cell systems are nearly linear (small signal) with certain "small" nonlinearities, and become faster (latency-wise and frequency-response-wise) at higher intensity levels, (b) all ganglion systems are nonlinear (half-wave rectification), (c) the receptive field center to ganglion system is slower (latency-wise and frequency-response-wise) than the periphery to ganglion system, (d) the lateral (eccentric) ganglion systems are just as fast (latency and frequency response) as the concentric ones, (e) (bipolar response) = (input from receptors) - (input from horizontal cell), (f) receptive field center and periphery exert an antagonistic influence on the ganglion response, (g) implications about the origin of ERG, and many others.

An analytical solution is obtained for the spatial distribution of potential in the S-space, which fits experimental data very well. Different synaptic mechanisms of excitation for the external and internal horizontal cells are implied.

Relevance: 30.00%

Publisher:

Abstract:

Immunoglobulin G (IgG) is central in mediating host defense due to its ability to target and eliminate invading pathogens. The fragment antigen binding (Fab) regions are responsible for antigen recognition; however, the effector responses are encoded on the Fc region of IgG. IgG Fc displays considerable glycan heterogeneity, accounting for its complex effector functions of inflammation, modulation and immune suppression. Intravenous immunoglobulin G (IVIG) is pooled serum IgG from multiple donors and is used to treat individuals with autoimmune and inflammatory disorders such as rheumatoid arthritis and Kawasaki's disease, respectively. It contains all the subtypes of IgG (IgG1-4) and over 120 glycovariants due to variation of an asparagine-297-linked glycan on the Fc. The species identified as the active component of IVIG is sialylated IgG Fc. Comparisons of wild type Fc and sialylated Fc X-ray crystal structures suggest that sialylation causes an increase in conformational flexibility, which may be important for its anti-inflammatory properties.

Although glycan modifications can promote the anti-inflammatory properties of the Fc, there are amino acid substitutions that cause Fcs to initiate an enhanced immune response. Mutations in the Fc can cause up to a 100-fold increase in binding affinity to activating Fc gamma receptors located on immune cells, and have been shown to enhance antibody dependent cell-mediated cytotoxicity. This is important in developing therapeutic antibodies against cancer and infectious diseases. Structural studies of mutant Fcs in complex with activating receptors gave insight into new protein-protein interactions that lead to an enhanced binding affinity.

Together these studies show how dynamic and diverse the Fc region is and how both protein and carbohydrate modifications can alter structure, leading to IgG Fc’s switch from a pro-inflammatory to an anti-inflammatory protein.

Relevance: 20.00%

Publisher:

Abstract:

Abstract to Part I

The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting the S-to-P amplitude ratios. Effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful in solving for the spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.
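
A minimal back-projection iteration can be illustrated on a toy grid (the 4×4 geometry, the row/column ray set, and the SIRT-style normalization below are assumptions made for the sketch, not the thesis implementation): each ray's data residual is distributed back evenly over the cells the ray traverses, and the model is refined iteratively.

```python
import numpy as np

# Toy attenuation tomography: a 4x4 grid of cells with attenuation values
# s = 1/Q; rays run along rows and columns with unit path length per cell,
# so the data are simply row and column sums of s.
n = 4
s_true = np.zeros((n, n))
s_true[1:3, 1:3] = 1.0 / 30.0           # a highly attenuating body (Q ~ 30)

# Path-length matrix L: 2n rays (rows, then columns) over n*n cells.
L = np.zeros((2 * n, n * n))
for i in range(n):
    L[i, i * n:(i + 1) * n] = 1.0       # ray along row i
    L[n + i, i::n] = 1.0                # ray along column i
d = L @ s_true.ravel()                  # noise-free attenuation data

# Iterative back-projection (SIRT-style): spread each ray residual evenly
# back over the cells it traverses, normalized by ray and cell hit counts.
s = np.zeros(n * n)
row_norm = L.sum(axis=1)                # cells hit per ray
col_norm = L.sum(axis=0)                # rays through each cell
for _ in range(200):
    resid = d - L @ s
    s += (L.T @ (resid / row_norm)) / col_norm
```

After a few hundred sweeps the predicted data match the observations to machine precision; with crossing rays only, the recovered image is one of many models fitting the data, which is why the resolution and error analyses described above matter.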

Back-projection attenuation tomography is applied to two cases in southern California: Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.

No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.

Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, East Rift Zone and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error that is due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.

Abstract to Part II

Long period seismograms recorded at Pasadena of earthquakes occurring along a profile to Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena that are due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both timing and waveforms of records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Those events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.

Relevance: 20.00%

Publisher:

Abstract:

Data were taken in 1979-80 by the CCFRR high-energy neutrino experiment at Fermilab. A total of 150,000 neutrino and 23,000 antineutrino charged-current events in the approximate energy range 25 < E_ν < 250 GeV are measured and analyzed. The structure functions F_2 and xF_3 are extracted for three assumptions about R = σ_L/σ_T: R = 0, R = 0.1, and R given by a QCD-based expression. Systematic errors are estimated and their significance is discussed. Comparisons of the x and Q^2 behaviour of the structure functions with results from other experiments are made.

We find that statistical errors currently dominate our knowledge of the valence quark distribution, which is studied in this thesis. xF_3 from different experiments has, within errors and apart from level differences, the same dependence on x and Q^2, except for the HPWF results. The CDHS F_2 shows a clear fall-off at low x relative to the CCFRR and EMC results, again apart from level differences, which are calculable from cross-sections.

The result for the GLS sum rule is found to be 2.83 ± 0.15 ± 0.09 ± 0.10, where the first error is statistical, the second is an overall level error, and the third covers the rest of the systematic errors. QCD studies of xF_3 to leading and second order have been done. The QCD evolution of xF_3, which is independent of R and the strange sea, does not depend on the gluon distribution, and the fits yield

Λ_(LO) = 88^(+163)_(-78) ^(+113)_(-70) MeV

The systematic errors are smaller than the statistical errors. Second-order fits give somewhat different values of Λ, although α_s (at Q^2_0 = 12.6 GeV^2) is not so different.

A fit using the better-determined F_2 in place of xF_3 for x > 0.4 (i.e., assuming q̄ = 0 in that region) gives

Λ_(LO) = 266^(+114)_(-104) ^(+85)_(-79) MeV

Again, the statistical errors are larger than the systematic errors. An attempt to measure R was made and the measurements are described. Utilizing the inequality q(x) ≥ 0, we find that in the region x > 0.4, R is less than 0.55 at the 90% confidence level.
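
As a numerical aside, the leading-order parton-model value of the GLS integral can be checked against a toy valence parameterization (the functional form and normalization below are illustrative inventions, not the experiment's fit): the sum rule states that ∫₀¹ F_3(x) dx equals 3 up to QCD corrections, which is what the measured 2.83 is compared against.

```python
import numpy as np

# Toy valence-like parameterization: xF_3(x) = A * sqrt(x) * (1 - x)^3,
# with A chosen so the parton-model GLS integral is exactly 3
# (since int_0^1 x^{-1/2} (1 - x)^3 dx = 32/35).
A = 3.0 * 35.0 / 32.0

# Integrate F_3 = xF_3 / x numerically; substituting x = u^2 removes the
# integrable x^{-1/2} endpoint singularity: F_3 dx = 2A (1 - u^2)^3 du.
u = np.linspace(0.0, 1.0, 100_001)
integrand = 2.0 * A * (1.0 - u**2) ** 3

# Composite trapezoidal rule (written out for NumPy-version portability).
gls = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(u)) / 2.0)
```

The numerical integral reproduces the parton-model value 3; the measured deficit (2.83) is then attributed to the α_s-dependent QCD correction, which is how such sum-rule measurements constrain Λ.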

Relevance: 20.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
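
The topology-from-inverse-covariance observation can be reproduced in a few lines under a simplifying assumption (the 4-node chain circuit, its conductances, the Gaussian voltage model with precision equal to the reduced conductance Laplacian, and the sample size are all inventions for this sketch): the zero pattern of the estimated precision matrix then coincides with the wiring.

```python
import numpy as np

rng = np.random.default_rng(1)

# A grounded resistive chain 0-1-2-3, described by its reduced Laplacian L
# of conductances.  Assume node voltages are Gaussian with precision L,
# i.e., v ~ N(0, L^{-1}), so conditional independence mirrors the wiring.
L = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  1.0]])
Sigma = np.linalg.inv(L)

# Draw voltage samples and estimate the precision matrix from data.
m = 100_000
v = rng.multivariate_normal(np.zeros(4), Sigma, size=m)
prec_hat = np.linalg.inv(np.cov(v, rowvar=False))

# Recover the topology: large off-diagonal precision entries mark edges.
adj_hat = np.abs(prec_hat) > 0.5
adj_true = np.abs(L) > 0.5
```

With ample samples plain inversion of the sample covariance suffices, as here; the graphical lasso discussed above becomes necessary precisely when samples are few and a sparsity-promoting penalty must stand in for the missing data.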

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
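
For background, the classical buffer-free fluid model that this work refines can be simulated directly (the two-source single-link topology, the price function, and the gains below are toy choices, not the refined queueing-aware model of the text): each source adapts its rate according to the primal dynamics dx_i/dt = k (w_i − x_i p(y)), where p is the link price at aggregate rate y.

```python
import numpy as np

# Two sources share one link of capacity c; the link charges a toy
# congestion price that grows with total rate y.
c = 2.0
w = np.array([1.0, 0.5])            # willingness-to-pay per source
k, dt = 0.5, 0.01
price = lambda y: max(y - c, 0.0) + y / (10 * c)   # toy price function

# Euler-integrate the primal dynamics dx_i/dt = k * (w_i - x_i * p(y)).
x = np.array([0.1, 0.1])
for _ in range(20_000):
    p = price(x.sum())
    x = np.maximum(x + k * dt * (w - x * p), 0.0)
```

At equilibrium each source satisfies w_i = x_i p(y), so rates split in proportion to willingness-to-pay (proportional fairness in this toy instance). The point of the refined model above is that once per-link buffering is modeled, such convergence is no longer automatic.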

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance: 20.00%

Publisher:

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first one is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution system, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will present how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We will also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent’s control on the least amount of information possible. Our work focused on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game theoretic approach is that it provides a hierarchical decomposition between the decomposition of the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition provides the system designer with tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
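
The game-design recipe can be made concrete with the simplest potential game (a toy congestion game; the agent count and load-based resource costs below are invented for the illustration): because every improving move decreases Rosenthal's potential, best-response dynamics — one admissible distributed learning rule — must terminate at a pure Nash equilibrium.

```python
# Toy congestion game: each of three agents picks one of two resources;
# the cost an agent pays is the load (number of users) on its resource.
n_agents, resources = 3, [0, 1]
cost = lambda r, profile: sum(1 for a in profile if a == r)  # load on r

profile = [0, 0, 0]                      # everyone starts on resource 0
changed = True
while changed:                           # round-robin best-response dynamics
    changed = False
    for i in range(n_agents):
        best = min(resources,
                   key=lambda r: cost(r, profile[:i] + [r] + profile[i + 1:]))
        if best != profile[i]:
            profile[i] = best
            changed = True

# On termination, no agent can lower its cost by a unilateral deviation.
for i in range(n_agents):
    for r in resources:
        alt = profile[:i] + [r] + profile[i + 1:]
        assert cost(r, alt) >= cost(profile[i], profile)
```

The design freedom discussed above lies in choosing the local cost functions so that such equilibria coincide with the system-level objective; the learning rule itself can then be swapped out freely.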

Relevance: 20.00%

Publisher:

Abstract:

The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions.

First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.

Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
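
The convex route to a minimizer can be sketched with the Lovász extension (a generic projected-subgradient version, not the accelerated smoothed method of this work; the cut-plus-modular objective below is an invented toy): the extension is convex, agrees with f on indicator vectors, and thresholding a continuous minimizer yields a minimizing set.

```python
import itertools
import numpy as np

# Submodular objective on a 4-element ground set: f(S) = cut(S) + m(S),
# where cut() is the cut function of a weighted path graph and m is modular.
n = 4
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0)]
m = np.array([-2.0, 1.0, -1.0, 1.0])

def f(S):
    S = set(S)
    cut = sum(w for (u, v, w) in edges if (u in S) != (v in S))
    return cut + sum(m[i] for i in S)

def lovasz_subgradient(x):
    # Sort coordinates decreasingly; marginal gains along the chain of
    # prefix sets give a subgradient of the Lovász extension at x.
    order = np.argsort(-x)
    g, prev, S = np.zeros(n), 0.0, []
    for i in order:
        S.append(i)
        val = f(S)
        g[i] = val - prev
        prev = val
    return g

x = np.full(n, 0.5)
best_set, best_val = set(), f([])
for t in range(1, 3001):
    for theta in x:                       # track the best thresholded set
        S = {i for i in range(n) if x[i] >= theta}
        if f(S) < best_val:
            best_val, best_set = f(S), S
    x = np.clip(x - (0.1 / np.sqrt(t)) * lovasz_subgradient(x), 0.0, 1.0)

brute = min(f(S) for r in range(n + 1)
            for S in itertools.combinations(range(n), r))
```

On this toy instance the thresholded iterates reach the brute-force optimum; the smoothing and acceleration described above are what make the same idea scale.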

Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine different function classes under which uniform reconstruction is possible.
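
The Fourier machinery is easy to make concrete (the ground-set size n = 3 and the modular weights below are invented for the sketch): a set function on n elements is a vector in R^{2^n}, its Fourier transform is the Walsh–Hadamard transform over subset parities, and a modular function has at most n + 1 nonzero coefficients — exactly the kind of sparsity that reconstruction from random evaluations exploits.

```python
import numpy as np

n = 3
w = np.array([2.0, -1.0, 3.0])

# Represent f by a bitmask-indexed vector: f[mask] = sum of w[i], i in S.
f = np.array([sum(w[i] for i in range(n) if mask >> i & 1)
              for mask in range(2 ** n)])

# Walsh-Hadamard transform over parities:
# fhat[T] = 2^{-n} * sum_S (-1)^{|S intersect T|} f[S].
H = np.array([[(-1) ** bin(s & t).count("1") for s in range(2 ** n)]
              for t in range(2 ** n)])
fhat = H @ f / 2 ** n

support = int(np.sum(np.abs(fhat) > 1e-12))
```

Here the spectrum has exactly n + 1 = 4 nonzeros (the empty set and the three singletons), so 2^n evaluations are far more than the information content — the gap that sparse recovery closes.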

Relevance: 20.00%

Publisher:

Abstract:

With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a big concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze the common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental issues in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm to make a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and helps us gain a more fundamental understanding of general online decision problems.
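
The core tension in smoothed online convex optimization shows up already in a toy instance (the quadratic hitting costs, the ℓ1 switching cost, and the halfway-damping rule below are all invented for illustration, not an algorithm from the thesis): an online decision x_t pays a hitting cost (x_t − y_t)² plus a movement penalty |x_t − x_{t−1}|.

```python
# Adversarial alternating targets: a greedy tracker pays a full switching
# cost every step; damping the moves trades hitting cost for less movement.
y = [0.0, 1.0] * 50

def total_cost(decisions):
    cost, prev = 0.0, 0.0
    for x, target in zip(decisions, y):
        cost += (x - target) ** 2 + abs(x - prev)   # hitting + switching
        prev = x
    return cost

greedy = list(y)                          # always jump to the minimizer
damped, x = [], 0.0
for target in y:
    x = x + 0.5 * (target - x)            # move only halfway (beta = 0.5)
    damped.append(x)
```

On this sequence the damped trajectory settles into a small oscillation around 1/2 and pays well under half the greedy cost, which is the "smooth solutions are preferred" trade-off in miniature; the thesis studies algorithms with worst-case guarantees for exactly this class.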

Relevance: 20.00%

Publisher:

Abstract:

In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, formalized by the concept of Nash equilibrium; well-known examples include the Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.

Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.

We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.

We also provide an alternative characterization—all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose—they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can tradeoff budget-balance with computational tractability in deciding which rule to implement.
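
The two families of rules can be compared directly at a single resource (the three-agent set and the square-root welfare below are invented; the sketch uses plain, unweighted Shapley values and marginal contributions rather than their generalized weighted forms): the Shapley shares always sum to the welfare of the grand coalition, while marginal contributions generally do not.

```python
import itertools
from math import factorial

agents = [0, 1, 2]
W = lambda S: len(set(S)) ** 0.5 if S else 0.0   # concave toy welfare

def shapley(i):
    # Average marginal contribution of agent i over all join orders,
    # computed via the standard coalition-weighted sum.
    n = len(agents)
    total = 0.0
    for r in range(n):
        for S in itertools.combinations([a for a in agents if a != i], r):
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (W(S + (i,)) - W(S))
    return total

shap = [shapley(i) for i in agents]
marg = [W(tuple(agents)) - W(tuple(a for a in agents if a != i))
        for i in agents]
```

For this concave welfare the marginal-contribution shares under-distribute the generated welfare (budget surplus), illustrating the budget-balance versus tractability trade-off named above: marginal contributions are cheap to compute, Shapley shares require the exponential sum.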

Relevance: 20.00%

Publisher:

Abstract:

The aim of this paper is to investigate to what extent the known theory of subdifferentiability and generic differentiability of convex functions defined on open sets can be carried out in the context of convex functions defined on not necessarily open sets. Among the main results obtained I would like to mention a Kenderov type theorem (the subdifferential at a generic point is contained in a sphere), a generic Gâteaux differentiability result in Banach spaces of class S and a generic Fréchet differentiability result in Asplund spaces. At least two methods can be used to prove these results: first, a direct one, and second, a more general one, based on the theory of monotone operators. Since this last theory was previously developed essentially for monotone operators defined on open sets, it was necessary to extend it to the context of monotone operators defined on a larger class of sets, our "quasi open" sets. This is done in Chapter III. As a matter of fact, most of these results have an even more general nature and have roots in the theory of minimal usco maps, as shown in Chapter II.

Relevance: 20.00%

Publisher:

Abstract:

A series of eight related analogs of distamycin A has been synthesized. Footprinting and affinity cleaving reveal that only two of the analogs, pyridine-2-carboxamide-netropsin (2-PyN) and 1-methylimidazole-2-carboxamide-netropsin (2-ImN), bind to DNA with a specificity different from that of the parent compound. A new class of sites, represented by a TGACT sequence, is a strong site for 2-PyN binding, and the major recognition site for 2-ImN on DNA. Both compounds recognize the G•C bp specifically, although A's and T's in the site may be interchanged without penalty. Additional A•T bp outside the binding site increase the binding affinity. The compounds bind in the minor groove of the DNA sequence, but protect both grooves from dimethylsulfate. The binding evidence suggests that 2-PyN or 2-ImN binding induces a DNA conformational change.

In order to understand this sequence-specific complexation better, the Ackers quantitative footprinting method for measuring individual site affinity constants has been extended to small molecules. MPE•Fe(II) cleavage reactions over a 10^5 range of free ligand concentrations are analyzed by gel electrophoresis. The decrease in cleavage is calculated by densitometry of a gel autoradiogram. The apparent fraction of DNA bound is then calculated from the amount of cleavage protection. The data are fitted to a theoretical curve using non-linear least squares techniques. Affinity constants at four individual sites are determined simultaneously. The distamycin A analog binds solely at A•T-rich sites. Affinities range from 10^(6)-10^(7) M^(-1). The data for parent compound D fit closely to a monomeric binding curve. 2-PyN binds both A•T sites and the TGTCA site with an apparent affinity constant of 10^(5) M^(-1). 2-ImN binds A•T sites with affinities less than 5 x 10^(4) M^(-1). The affinity of 2-ImN for the TGTCA site does not change significantly from the 2-PyN value. At the TGTCA site, the experimental data fit a dimeric binding curve better than a monomeric curve. Both 2-PyN and 2-ImN have substantially lower DNA affinities than closely related compounds.
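
A scaled-down version of the curve-fitting step can be sketched as follows (synthetic titration data with an assumed K ≈ 10^5 M^-1 and a simple grid search stand in for real densitometry data and non-linear least squares): the monomeric isotherm is θ = KL/(1 + KL), the dimeric one θ = (KL)²/(1 + (KL)²), and the steeper dimeric curve is distinguishable from the fit residuals.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "fraction bound" data generated from a dimeric isotherm with an
# assumed binding constant, plus densitometry-like noise.
K_true = 1.0e5
L = np.logspace(-7, -3, 25)                       # free ligand, M
theta_obs = (K_true * L) ** 2 / (1 + (K_true * L) ** 2)
theta_obs = np.clip(theta_obs + rng.normal(0, 0.01, L.size), 0, 1)

mono = lambda K, conc: K * conc / (1 + K * conc)
dimer = lambda K, conc: (K * conc) ** 2 / (1 + (K * conc) ** 2)

def best_fit(model):
    # Grid-search the binding constant; return the best K and its residual.
    Ks = np.logspace(3, 7, 400)
    resid = [float(np.sum((model(K, L) - theta_obs) ** 2)) for K in Ks]
    i = int(np.argmin(resid))
    return Ks[i], resid[i]

K_m, r_m = best_fit(mono)
K_d, r_d = best_fit(dimer)
```

The dimeric model fits with a much smaller residual and recovers the assumed constant, mirroring the reasoning used at the TGTCA site; on real gels the noise model and the free-ligand correction are, of course, the hard part.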

In order to probe the requirements of this new binding site, fourteen other derivatives have been synthesized and tested. All compounds that recognize the TGTCA site have a heterocyclic aromatic nitrogen ortho to the N or C-terminal amide of the netropsin subunit. Specificity is strongly affected by the overall length of the small molecule. Only compounds that consist of at least three aromatic rings linked by amides exhibit TGTCA site binding. Specificity is only weakly altered by substitution on the pyridine ring, which correlates best with steric factors. A model is proposed for TGTCA site binding that has as its key feature hydrogen bonding to both G's by the small molecule. The specificity is determined by the sequence dependence of the distance between G's.

One derivative of 2-PyN exhibits pH-dependent sequence specificity. At low pH, 4-dimethylaminopyridine-2-carboxamide-netropsin binds tightly to A•T sites. At high pH, 4-Me_(2)NPyN binds most tightly to the TGTCA site. In aqueous solution, this compound protonates at the pyridine nitrogen at pH 6. Thus, the presence of the protonated form correlates with A•T specificity.

The DNA binding site of a class of eukaryotic transcriptional activators, typified by the yeast protein GCN4 and the mammalian oncogene Jun, contains a strong 2-ImN binding site. Specificity requirements for the protein and the small molecule are similar. GCN4 and 2-ImN bind simultaneously to the same binding site. GCN4 alters the cleavage pattern of a 2-ImN-EDTA derivative at only one of its binding sites. The details of the interaction suggest that GCN4 alters the conformation of an AAAAAAA sequence adjacent to its binding site. The presence of a yeast counterpart to Jun partially blocks 2-ImN binding. The differences do not appear to be caused by direct interactions between 2-ImN and the proteins, but by induced conformational changes in the DNA-protein complex. It is likely that the observed differences in complexation are involved in the varying sequence specificities of these proteins.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The core-level energy shifts observed using X-ray photoelectron spectroscopy (XPS) have been used to determine the band bending at Si(111) surfaces terminated with Si-Br, Si-H, and Si-CH3 groups, respectively. The surface termination influenced the band bending, with the Si 2p3/2 binding energy affected more by the surface chemistry than by the dopant type. The highest binding energies were measured on Si(111)-Br (whose Fermi level was positioned near the conduction band at the surface), followed by Si(111)-H, followed by Si(111)-CH3 (whose Fermi level was positioned near mid-gap at the surface). Si(111)-CH3 surfaces exposed to Br2(g) yielded the lowest binding energies, with the Fermi level positioned between mid-gap and the valence band. The Fermi level position of Br2(g)-exposed Si(111)-CH3 was consistent with the presence of negatively charged bromine-containing ions on such surfaces. The binding energies of all of the species detected on the surface (C, O, Br) shifted with the band bending, illustrating the importance of isolating the effects of band bending when measuring chemical shifts on semiconductor surfaces. The influence of band bending was confirmed by surface photovoltage (SPV) measurements, which showed that the core levels shifted toward their flat-band values upon illumination. Where applicable, the contribution from the X-ray source to the SPV was isolated and quantified. Work functions were measured by ultraviolet photoelectron spectroscopy (UPS), allowing for calculation of the sign and magnitude of the surface dipole in such systems. The values of the surface dipoles were in good agreement with previous measurements as well as with electronegativity considerations. The binding energies of the adventitious carbon signals were affected by band bending as well as by the surface dipole. 
A model of band bending in which charged surface states are located exterior to the surface dipole is consistent with the XPS and UPS behavior of the chemically functionalized Si(111) surfaces investigated herein.
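The relationship between the measured core-level shifts and band bending can be made concrete with a small numerical sketch. The Si 2p3/2 values below are illustrative placeholders, not the measured binding energies; the flat-band reference would come from the SPV measurement under illumination described above.

```python
def band_bending_eV(be_measured, be_flatband):
    # Upward band bending appears as a shift of the core level to lower
    # binding energy relative to its flat-band position, so the bending
    # (in eV) is the flat-band value minus the measured value.
    return be_flatband - be_measured

# Illustrative Si 2p3/2 binding energies (eV) for a hypothetical surface.
print(round(band_bending_eV(99.2, 99.5), 2))  # 0.3 eV of upward band bending
```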

Relevância:

20.00% 20.00%

Publicador:

Resumo:

This dissertation describes studies of G protein-coupled receptors (GPCRs) and ligand-gated ion channels (LGICs) using unnatural amino acid mutagenesis to gain high precision insights into the function of these important membrane proteins.

Chapter 2 considers the functional role of highly conserved proline residues within the transmembrane helices of the D2 dopamine GPCR. Through mutagenesis employing unnatural α-hydroxy acids, proline analogs, and N-methyl amino acids, we find that lack of backbone hydrogen bond donor ability is important to proline function. At one proline site we additionally find that a substituent on the proline backbone N is important to receptor function.

In Chapter 3, side chain conformation is probed by mutagenesis of GPCRs and the muscle-type nAChR. Specific side chain rearrangements of highly conserved residues have been proposed to accompany activation of these receptors. These rearrangements were probed using conformationally-biased β-substituted analogs of Trp and Phe and unnatural stereoisomers of Thr and Ile. We also modeled the conformational bias of the unnatural Trp and Phe analogs employed.

Chapters 4 and 5 examine details of ligand binding to nAChRs. Chapter 4 describes a study investigating the importance of hydrogen bonds between ligands and the complementary face of muscle-type and α4β4 nAChRs. A hydrogen bond involving the agonist appears to be important for ligand binding in the muscle-type receptor but not the α4β4 receptor.

Chapter 5 describes a study characterizing the binding of varenicline, an actively prescribed smoking cessation therapeutic, to the α7 nAChR. Additionally, binding interactions to the complementary face of the α7 binding site were examined for a small panel of agonists. We identified side chains important for binding large agonists such as varenicline, but dispensable for binding the small agonist ACh.

Chapter 6 describes efforts to image nAChRs site-specifically modified with a fluorophore by unnatural amino acid mutagenesis. While progress was hampered by high levels of fluorescent background, improvements to sample preparation and alternative strategies for fluorophore incorporation are described.

Chapter 7 describes efforts toward a fluorescence assay for G protein association with a GPCR, with the ultimate goal of probing key protein-protein interactions along the G protein/receptor interface. A wide range of fluorescent protein fusions were generated, expressed in Xenopus oocytes, and evaluated for their ability to associate with each other.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

As evolution progresses, developmental changes occur. Genes lose and gain molecular partners, regulatory sequences, and new functions. As a consequence, tissues evolve alternative methods, more or less robust, to develop similar structures. How this occurs is a major question in biology. One way of addressing this question is by examining the developmental and genetic differences between similar species. Several studies of the nematodes Pristionchus pacificus and Oscheius CEW1 have revealed various differences in vulval development from the well-studied C. elegans (e.g. gonad induction, competence group specification, and gene function).

I approached the question of developmental change in a similar manner by using Caenorhabditis briggsae, a close relative of C. elegans. C. briggsae allows the use of transgenic approaches to determine developmental changes between species. We determined subtle changes in the competence group, in 1° cell specification, and in the vulval lineage.

We also analyzed the let-60 gene in four nematode species. We found conservation in codon identity and exon-intron boundaries, but a lack of an extended 3' untranslated region in Caenorhabditis briggsae.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The neonatal Fc receptor (FcRn) binds the Fc portion of immunoglobulin G (IgG) at the acidic pH of endosomes or the gut and releases IgG at the alkaline pH of blood. FcRn is responsible for the maternofetal transfer of IgG and for rescuing endocytosed IgG from a default degradative pathway. We investigated how FcRn interacts with IgG by constructing a heterodimeric form of the Fc (hdFc) that contains one FcRn binding site. This molecule was used to characterize the interaction between one FcRn molecule and one Fc and to determine under what conditions FcRn forms a dimer. The hdFc binds one FcRn molecule at pH 6.0 with a K_d of 80 nM. In solution, and with FcRn anchored to solid supports, the heterodimeric Fc does not induce a dimer of FcRn molecules. FcRn-hdFc complex crystals were obtained and the complex structure was solved to 2.8 Å resolution. Analysis of this structure refined the understanding of the mechanism of pH-dependent binding, shed light on the role played by carbohydrates in Fc binding, and provided insights into how to design therapeutic IgG antibodies with longer serum half-lives. The FcRn-hdFc complex in the crystal did not contain the FcRn dimer. To characterize the tendency of FcRn to form a dimer in a membrane, we analyzed the tendency of the hdFc to induce cross-phosphorylation of FcRn-tyrosine kinase chimeras. We also constructed FcRn-cyan and FcRn-yellow fluorescent protein fusions and analyzed the tendency of these molecules to exhibit fluorescence resonance energy transfer. As of now, neither of these analyses has led to conclusive results. In the process of acquiring the context to appreciate the structure of the FcRn-hdFc interface, we carried out a study of 171 other nonobligate protein-protein interfaces that includes an original principal component analysis of the quantifiable aspects of these interfaces.
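The 80 nM dissociation constant quoted above translates directly into receptor occupancy through the standard single-site binding isotherm. The sketch below simply evaluates that relation; the concentrations chosen are illustrative.

```python
def fraction_bound(conc_nM, Kd_nM=80.0):
    # Single-site binding isotherm: theta = [L] / (Kd + [L])
    return conc_nM / (Kd_nM + conc_nM)

# At a free-hdFc concentration equal to Kd, FcRn is half-saturated;
# at 10x Kd it is ~91% saturated.
print(fraction_bound(80.0))             # 0.5
print(round(fraction_bound(800.0), 3))  # 0.909
```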