963 results for INVARIANT SUBSPACES
Abstract:
The invariant chain (Ii) prevents binding of ligands to major histocompatibility complex (MHC) class II molecules in the endoplasmic reticulum and during intracellular transport. Stepwise removal of the Ii in a trans-Golgi compartment renders MHC class II molecules accessible for peptide loading, with CLIP (class II-associated Ii peptides) as the final fragment to be released. Here we show that CLIP can be subdivided into distinct functional regions. The C-terminal segment (residues 92-105) of the CLIP-(81-105) fragment mediates inhibition of self- and antigenic peptide binding to HLA-DR2 molecules. In contrast, the N-terminal segment CLIP-(81-98) binds to the Staphylococcus aureus enterotoxin B contact site outside the peptide-binding groove on the alpha 1 domain and does not interfere with peptide binding. Its functional significance appears to lie in the contribution to CLIP removal: the dissociation of CLIP-(81-105) is characterized by a fast off-rate, which is accelerated at endosomal pH, whereas in the absence of the N-terminal CLIP-(81-91), the off-rate of C-terminal CLIP-(92-105) is slow and remains unaltered at low pH. Mechanistically, the N-terminal segment of CLIP seems to prevent tight interactions of CLIP side chains with specificity pockets in the peptide-binding groove that normally occur during maturation of long-lived class II-peptide complexes.
Abstract:
CD4+ T cells recognize major histocompatibility complex (MHC) class II-bound peptides that are primarily obtained from extracellular sources. Endogenously synthesized proteins that readily enter the MHC class I presentation pathway are generally excluded from the MHC class II presentation pathway. We show here that endogenously synthesized ovalbumin or hen egg lysozyme can be efficiently presented as peptide-MHC class II complexes when they are expressed as fusion proteins with the invariant chain (Ii). Similar to the wild-type Ii, the Ii-antigen fusion proteins were associated intracellularly with MHC molecules. Most efficient expression of endogenous peptide-MHC complex was obtained with fusion proteins that contained the endosomal targeting signal within the N-terminal cytoplasmic Ii residues but did not require the luminal residues of Ii that are known to bind MHC molecules. These results suggest that signals within the Ii can allow endogenously synthesized proteins to efficiently enter the MHC class II presentation pathway. They also suggest a strategy for identifying unknown antigens presented by MHC class II molecules.
Abstract:
Includes bibliographies (p. 27-29).
Abstract:
Entanglement is defined for each vector subspace of the tensor product of two finite-dimensional Hilbert spaces, by applying the notion of operator entanglement to the projection operator onto that subspace. The operator Schmidt decomposition of the projection operator defines a string of Schmidt coefficients for each subspace, and this string is assumed to characterize its entanglement, so that a first subspace is more entangled than a second, if the Schmidt string of the second majorizes the Schmidt string of the first. The idea is applied to the antisymmetric and symmetric tensor products of a finite-dimensional Hilbert space with itself, and also to the tensor product of an angular momentum j with a spin 1/2. When adapted to the subspaces of states of the nonrelativistic hydrogen atom with definite total angular momentum (orbital plus spin), within the space of bound states with a given total energy, this leads to a complete ordering of those subspaces by their Schmidt strings.
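The ordering described above can be sketched numerically. The snippet below is a sketch only, assuming the Schmidt strings being compared are the squared operator Schmidt coefficients normalised to sum to one; the helper names `schmidt_string` and `majorizes` are ours, not the paper's. It computes the strings for the projectors onto the symmetric and antisymmetric subspaces of C^3 (x) C^3 and checks which majorizes which.

```python
import numpy as np

def schmidt_string(P, d):
    # Operator Schmidt coefficients of an operator P on C^d (x) C^d:
    # reshuffle P[(i,j),(i',j')] -> M[(i,i'),(j,j')], take singular
    # values, and return the squared coefficients normalised to sum to 1.
    M = P.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)
    s = np.linalg.svd(M, compute_uv=False) ** 2
    return np.sort(s / s.sum())[::-1]

def majorizes(x, y):
    # x majorizes y: equal totals and dominating partial sums.
    cx, cy = np.cumsum(x), np.cumsum(y)
    return np.isclose(cx[-1], cy[-1]) and bool(np.all(cx >= cy - 1e-12))

d = 3
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[i * d + j, j * d + i] = 1.0
P_sym = (np.eye(d * d) + SWAP) / 2   # projector onto symmetric subspace
P_anti = (np.eye(d * d) - SWAP) / 2  # projector onto antisymmetric subspace

# The symmetric string majorizes the antisymmetric one, so the
# antisymmetric subspace counts as the more entangled of the two.
print(majorizes(schmidt_string(P_sym, d), schmidt_string(P_anti, d)))   # True
print(majorizes(schmidt_string(P_anti, d), schmidt_string(P_sym, d)))   # False
```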
Abstract:
Let Q be a stable and conservative Q-matrix over a countable state space S consisting of an irreducible class C and a single absorbing state 0 that is accessible from C. Suppose that Q admits a finite μ-subinvariant measure m on C. We derive necessary and sufficient conditions for there to exist a Q-process for which m is μ-invariant on C, as well as a necessary condition for the uniqueness of such a process.
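A minimal finite-state sketch of what μ-invariance means: the rates below are hypothetical, and the finite setting sidesteps the existence and uniqueness questions the paper addresses (for a finite chain with an absorbing state, the minimal Q-process is the unique one). The measure m is the normalised left Perron eigenvector of the generator restricted to C.

```python
import numpy as np
from scipy.linalg import expm

# Absorbing state 0 plus irreducible class C = {1, 2}; rates are
# illustrative, chosen only so that 0 is accessible from C.
Q = np.array([[0.0, 0.0, 0.0],    # 0 is absorbing
              [1.0, -3.0, 2.0],   # 1 -> 0 at rate 1, 1 -> 2 at rate 2
              [0.0, 1.5, -1.5]])  # 2 -> 1 at rate 1.5

QC = Q[1:, 1:]                       # generator restricted to C
evals, evecs = np.linalg.eig(QC.T)   # left eigenvectors of QC
k = np.argmax(evals.real)            # Perron eigenvalue is -mu
mu = -evals[k].real
m = np.abs(evecs[:, k].real)
m /= m.sum()                         # finite mu-subinvariant measure on C

# mu-invariance for the (unique, minimal) Q-process:
#   sum_i m_i p_ij(t) = e^(-mu t) m_j  for j in C.
t = 0.7
P = expm(t * Q)                      # transition function P(t)
lhs = m @ P[1:, 1:]
rhs = np.exp(-mu * t) * m
print(np.allclose(lhs, rhs))  # True
```

Because state 0 is absorbing, the C-block of expm(t Q) equals expm(t QC), so the identity holds exactly for every t.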
Abstract:
Concepts of constant absolute risk aversion and constant relative risk aversion have proved useful in the analysis of choice under uncertainty, but are quite restrictive, particularly when they are imposed jointly. A generalization of constant risk aversion, referred to as invariant risk aversion is developed. Invariant risk aversion is closely related to the possibility of representing preferences over state-contingent income vectors in terms of two parameters, the mean and a linearly homogeneous, translation-invariant index of riskiness. The best-known index with such properties is the standard deviation. The properties of the capital asset pricing model, usually expressed in terms of the mean and standard deviation, may be extended to the case of general invariant preferences. (C) 2003 Elsevier Inc. All rights reserved.
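The two defining properties of such a riskiness index can be sketched numerically with the standard deviation; the preference functional `V` and the weight `lam` below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical "mean minus risk" preference over state-contingent income
# vectors, with the standard deviation as the riskiness index.
def V(y, lam=0.5):
    return y.mean() - lam * y.std()

y = np.array([1.0, 4.0, 2.0, 5.0])   # income in four states
c, k = 3.0, 2.0

# Translation invariance of the index: adding c in every state shifts
# the value by exactly c (the constant-absolute-risk-aversion side).
print(np.isclose(V(y + c), V(y) + c))  # True

# Linear homogeneity: scaling income by k > 0 scales the value by k
# (the constant-relative-risk-aversion side).
print(np.isclose(V(k * y), k * V(y)))  # True
```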
Abstract:
We show that quantum information can be encoded into entangled states of multiple indistinguishable particles in such a way that any inertial observer can prepare, manipulate, or measure the encoded state independent of their Lorentz reference frame. Such relativistically invariant quantum information is free of the difficulties associated with encoding into spin or other degrees of freedom in a relativistic context.
Abstract:
Three apparently distinct and different approaches have been proposed to account for the crystallographic features of diffusion-controlled precipitation. These three models are based on (a) an invariant line in the habit plane, (b) the parallelism of a pair of Δg vectors that are perpendicular to the habit plane and (c) the parallelism of a pair of Moiré fringes that are in turn parallel to the habit plane. The purpose of the present paper is to show that these approaches are in fact equivalent and that when certain conditions are satisfied they are essentially the same as the recent edge-to-edge matching model put forward by the authors. (C) 2004 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
Variable-frequency pulsed electron paramagnetic resonance studies of the molybdenum(V) center of sulfite dehydrogenase (SDH) clearly show couplings from nearby exchangeable protons that are assigned to a Mo(V)-OHn group. The hyperfine parameters for these exchangeable protons of SDH are the same at both low and high pH and similar to those for the high-pH forms of sulfite oxidases (SOs) from eukaryotes. The SDH proton parameters are distinctly different from those of the low-pH forms of chicken and human SO.
Abstract:
Let S be a countable set and let Q = (q_ij, i, j ∈ S) be a conservative q-matrix over S with a single instantaneous state b. Suppose that we are given a real number μ ≥ 0 and a strictly positive probability measure m = (m_j, j ∈ S) such that Σ_{i∈S} m_i q_ij = -μ m_j for j ≠ b. We prove that there exists a Q-process P(t) = (p_ij(t), i, j ∈ S) for which m is a μ-invariant measure, that is, Σ_{i∈S} m_i p_ij(t) = e^{-μt} m_j for j ∈ S. We illustrate our results with reference to the Kolmogorov 'K1' chain and a birth-death process with catastrophes and instantaneous resurrection.
Abstract:
We present a Lorentz invariant extension of a previous model for intrinsic decoherence (Milburn 1991 Phys. Rev. A 44 5401). The extension uses unital semigroup representations of space and time translations rather than the more usual unitary representation, and does the least violence to physically important invariance principles. Physical consequences include a modification of the uncertainty principle and a modification of field dispersion relations, similar to modifications suggested by quantum gravity and string theory, but without sacrificing Lorentz invariance. Some observational signatures are discussed.
Abstract:
Neural network learning rules can be viewed as statistical estimators. They should be studied in a Bayesian framework even if they are not Bayesian estimators. Generalisation should be measured by the divergence between the true distribution and the estimated distribution. Information divergences are invariant measurements of the divergence between two distributions. The posterior average information divergence is used to measure the generalisation ability of a network. The optimal estimators for multinomial distributions with Dirichlet priors are studied in detail. This confirms that the definition is compatible with intuition. The results also show that many commonly used methods can be put under this unified framework by assuming special priors and special divergences.
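The multinomial/Dirichlet case mentioned above can be sketched as follows. It rests on the standard fact that the estimator minimising the posterior-averaged KL divergence E[D(p || q)] is the posterior mean, i.e. additive smoothing; the counts, the uniform prior, and the helper names below are illustrative, not the report's.

```python
import numpy as np

def posterior_mean(counts, alpha):
    # Posterior mean of p under a Dirichlet(alpha) prior and observed counts.
    post = counts + alpha
    return post / post.sum()

def risk(q_hat, mean_p):
    # The q-dependent part of E[D(p || q_hat)] under the posterior:
    # -sum_j E[p_j] log q_hat_j (the E[p_j log p_j] term is constant in q_hat).
    return -(mean_p * np.log(q_hat)).sum()

counts = np.array([3.0, 0.0, 1.0])
alpha = np.ones(3)                   # uniform Dirichlet prior
q = posterior_mean(counts, alpha)    # -> [4/7, 1/7, 2/7]

# By Gibbs' inequality, no other distribution achieves a lower
# posterior-expected divergence than the posterior mean itself.
rng = np.random.default_rng(0)
others = rng.dirichlet(np.ones(3), size=200)
print(all(risk(q, q) <= risk(o, q) for o in others))  # True
```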
Abstract:
A family of measurements of generalisation is proposed for estimators of continuous distributions. In particular, they apply to neural network learning rules associated with continuous neural networks. The optimal estimators (learning rules) in this sense are Bayesian decision methods with information divergence as the loss function. The Bayesian framework guarantees internal coherence of such measurements, while the information geometric loss function guarantees invariance. The theoretical solution for the optimal estimator is derived by a variational method. It is applied to the family of Gaussian distributions and the implications are discussed. This is one in a series of technical reports on this topic; it generalises the results of [Zhu95:prob.discrete] to continuous distributions and serves as a concrete example of a larger picture [Zhu95:generalisation].
Abstract:
The problem of evaluating different learning rules and other statistical estimators is analysed. A new general theory of statistical inference is developed by combining Bayesian decision theory with information geometry. It is coherent and invariant. For each sample a unique ideal estimate exists and is given by an average over the posterior. An optimal estimate within a model is given by a projection of the ideal estimate. The ideal estimate is a sufficient statistic of the posterior, so practical learning rules are functions of the ideal estimator. If the sole purpose of learning is to extract information from the data, the learning rule must also approximate the ideal estimator. This framework is applicable to both Bayesian and non-Bayesian methods, with arbitrary statistical models, and to supervised, unsupervised and reinforcement learning schemes.