Abstract:
Neuroblastoma has successfully served as a model system for the identification of neuroectoderm-derived oncogenes. However, in spite of various efforts, only a few clinically useful prognostic markers have been found. Here, we present a framework that integrates DNA, RNA and tissue data to identify and prioritize genetic events representing clinically relevant new therapeutic targets and prognostic biomarkers for neuroblastoma.
Abstract:
Aneuploidy is among the most obvious differences between normal and cancer cells. However, the mechanisms contributing to the development and maintenance of aneuploid cell growth are diverse and incompletely understood. Functional genomics analyses have shown that aneuploidy in cancer cells is correlated with diffuse gene expression signatures and that aneuploidy can arise by a variety of mechanisms, including cytokinesis failures, DNA endoreplication and possibly through polyploid intermediate states. Here, we used a novel cell spot microarray technique to identify genes whose loss of function induces polyploidy and/or allows maintenance of polyploid growth in breast cancer cells. Integrative genomics profiling of candidate genes highlighted GINS2 as a potential oncogene frequently overexpressed in clinical breast cancers as well as in several other cancer types. Multivariate analysis indicated GINS2 to be an independent prognostic factor for breast cancer outcome (p = 0.001). Suppression of GINS2 expression effectively inhibited breast cancer cell growth and induced polyploidy. In addition, protein-level detection of nuclear GINS2 accurately distinguished actively proliferating cancer cells, suggesting potential use as an operational biomarker.
Abstract:
Adherent cells undergo remarkable changes in shape during cell division. However, the functional interplay between cell adhesion turnover and the mitotic machinery is poorly understood. The endo/exocytic trafficking of integrins is regulated by the small GTPase Rab21, which associates with several integrin alpha subunits. Here, we show that targeted trafficking of integrins to and from the cleavage furrow is required for successful cytokinesis, and that this is regulated by Rab21. Rab21 activity, integrin-Rab21 association, and integrin endocytosis are all necessary for normal cytokinesis, which becomes impaired when integrin-mediated adhesion at the cleavage furrow fails. We also describe a chromosomal deletion and loss of Rab21 gene expression in human cancer, which leads to the accumulation of multinucleate cells. Importantly, reintroduction of Rab21 rescued this phenotype. In conclusion, Rab21-regulated integrin trafficking is essential for normal cell division, and its defects may contribute to multinucleation and genomic instability, which are hallmarks of cancer.
A new look towards BAC-based array CGH through a comprehensive comparison with oligo-based array CGH
Abstract:
We present a complete solution to the problem of coherent-mode decomposition of the most general anisotropic Gaussian Schell-model (AGSM) beams, which constitute a ten-parameter family. Our approach is based on symmetry considerations. Concepts and techniques familiar from the context of quantum mechanics in the two-dimensional plane are used to exploit the Sp(4, R) dynamical symmetry underlying the AGSM problem. We take advantage of the fact that the symplectic group of first-order optical systems acts unitarily through the metaplectic operators on the Hilbert space of wave amplitudes over the transverse plane, and, using the Iwasawa decomposition for the metaplectic operator and the classic theorem of Williamson on the normal forms of positive definite symmetric matrices under linear canonical transformations, we demonstrate the unitary equivalence of the AGSM problem to a separable problem earlier studied by Li and Wolf [Opt. Lett. 7, 256 (1982)] and Gori and Guattari [Opt. Commun. 48, 7 (1983)]. This connection enables one to write down, almost by inspection, the coherent-mode decomposition of the general AGSM beam. A universal feature of the eigenvalue spectrum of the AGSM family is noted.
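For context, the separable one-dimensional Gaussian Schell-model decomposition that the AGSM problem reduces to can be sketched as follows (a standard form from that literature; the symbols λ_0, q and φ_n are not defined in this abstract and are used here only for illustration):

```latex
% 1D Gaussian Schell-model beam: cross-spectral density as a coherent-mode sum
W(x_1, x_2) \;=\; \sum_{n=0}^{\infty} \lambda_n \, \phi_n(x_1)\, \phi_n^{*}(x_2),
\qquad
\lambda_n \;=\; \lambda_0 \, q^{\,n}, \quad 0 \le q < 1,
```

with the φ_n being Hermite-Gaussian modes; the geometric fall-off of the eigenvalues λ_n is the kind of universal spectral feature the abstract alludes to.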
Abstract:
We consider the problem of compression via homomorphic encoding of a source having a group alphabet. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can at times improve upon the compression rate required by the Slepian-Wolf bound. The functions of interest are those that can be represented by the binary operation of the group. We first consider the case when the source alphabet is the cyclic Abelian group Z_{p^r}. In this scenario, we show that the set of achievable rates provided by Krithivasan and Pradhan [1] is indeed the best possible. In addition, we provide a simpler proof of their achievability result. In the case of a general Abelian group, we present an achievable rate region that improves upon the one obtained by Krithivasan and Pradhan. We then consider the case when the source alphabet is a non-Abelian group. We show that if all the source symbols have non-zero probability and the center of the group is trivial, then it is impossible to compress such a source if one employs a homomorphic encoder. Finally, we present certain non-homomorphic encoders, which are also suitable in the context of function computation over non-Abelian group sources, and provide the rate regions achieved by these encoders.
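The key algebraic property behind homomorphic encoding over Z_{p^r} can be sketched in a few lines: a linear map modulo p^r is a group homomorphism, so the receiver can add the two encodings and obtain the encoding of the sum. This is only a toy illustration (the matrix A, block length and seed are my own choices, not from the paper):

```python
# Toy sketch: a linear map over Z_{p^r} is a group homomorphism, so the
# encodings of two source blocks can be added at the receiver to obtain
# the encoding of the (component-wise) group sum.
import numpy as np

p, r = 2, 3                  # source alphabet Z_{p^r} = Z_8
M = p ** r
rng = np.random.default_rng(0)

n, k = 8, 4                  # block length n, compressed length k
A = rng.integers(0, M, size=(k, n))   # encoder matrix mod M, shared by both sources

x = rng.integers(0, M, size=n)        # source-1 block
y = rng.integers(0, M, size=n)        # source-2 block

enc_x = A @ x % M
enc_y = A @ y % M

# Homomorphism: encoding of the sum equals the sum of the encodings (mod M)
assert np.array_equal((enc_x + enc_y) % M, A @ ((x + y) % M) % M)
print("homomorphic property holds over Z_8")
```

Whether such a map also allows reliable decoding at a given rate is exactly the question the achievability results above address.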
Abstract:
Pyruvate conversion to acetyl-CoA by the pyruvate dehydrogenase (PDH) multienzyme complex is known as a key node affecting the metabolic fluxes of animal cell culture. However, its possible role in causing nonlinear dynamic behavior such as oscillations and steady-state multiplicity in animal cells has received little attention. In this work, the kinetic and dynamic behavior of PDH of eukaryotic cells has been analyzed using both in vitro and simplified in vivo models. With the in vitro model, the overall reaction rate v(1) of PDH is shown to be a nonlinear function of pyruvate concentration, leading to oscillations under certain conditions. All enzyme components significantly affect v(1) and the nonlinearity of PDH, with protein X and the core enzyme dihydrolipoamide acyltransferase (E2) being the most influential. By considering the synthesis rates of pyruvate and the PDH components, the in vitro model is expanded to emulate in vivo conditions. Analysis using the in vivo model reveals another interesting kinetic feature of the PDH system, namely, multiple steady states. Depending on the pyruvate and enzyme levels or the operation mode, either a steady state with a high pyruvate decarboxylation rate or one with a significantly lower decarboxylation rate can be reached under otherwise identical conditions. In general, the more efficient steady state is associated with a lower pyruvate concentration. A possible time delay in substrate supply and enzyme synthesis can also affect which steady state is reached and lead to oscillations under certain conditions. Overall, the predictions of multiplicity for the PDH system agree qualitatively well with recent experimental observations in animal cell cultures. The model analysis gives some hints for improving pyruvate metabolism in animal cell culture.
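How a nonlinear consumption rate produces multiple steady states can be illustrated with a deliberately simple toy balance, dp/dt = F − v(p): this is not the paper's PDH model, and the substrate-inhibition rate law and all parameter values below are my own choices for illustration only.

```python
# Toy sketch only: a substrate-inhibition rate law already yields two steady
# states (two roots of F - v(p) = 0) for a single constant influx F.
Vmax, K, Ki = 10.0, 1.0, 1.0
F = 2.0                                          # constant pyruvate influx

def v(p):
    # substrate-inhibition kinetics: rises, peaks, then falls with p
    return Vmax * p / (K + p + p**2 / Ki)

def g(p):
    return F - v(p)                              # zero at a steady state

def bisect(g, a, b, tol=1e-12):
    # simple bisection; assumes g(a) and g(b) have opposite signs
    fa = g(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = g(m)
        if fm == 0 or (b - a) < tol:
            return m
        if (fm > 0) == (fa > 0):
            a, fa = m, fm
        else:
            b = m
    return 0.5 * (a + b)

# v(p) peaks at p = 1 here, so bracket one root on each side of the peak
low = bisect(g, 1e-9, 1.0)    # low-pyruvate steady state (exactly 2 - sqrt(3))
high = bisect(g, 1.0, 50.0)   # high-pyruvate steady state (exactly 2 + sqrt(3))
print(f"steady states at p = {low:.4f} and p = {high:.4f}")
```

With these numbers the two roots solve p^2 − 4p + 1 = 0, i.e. p ≈ 0.268 and p ≈ 3.732, mirroring the abstract's point that the more efficient state sits at the lower pyruvate concentration.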
Abstract:
Results of a study of dc magnetization M(T,H), performed on a Nd0.6Pb0.4MnO3 single crystal in the temperature range around T_C (the Curie temperature) embracing the supposed critical region ε = |T − T_C|/T_C ≤ 0.05, are reported. The magnetic data analyzed in the critical region using the Kouvel-Fisher method give T_C = 156.47 ± 0.06 K and the critical exponents β = 0.374 ± 0.006 (from the temperature dependence of the magnetization) and γ = 1.329 ± 0.003 (from the temperature dependence of the initial susceptibility). The critical isotherm M(T_C,H) gives δ = 4.54 ± 0.10. Thus the scaling law γ + β = δβ is fulfilled. The critical exponents obey the single scaling equation of state M(H,ε) = ε^β f_±(H/ε^(β+γ)), where f_+ applies for T > T_C and f_− for T < T_C.
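The scaling-law consistency claimed above can be checked numerically from the quoted exponents, using simple first-order error propagation:

```python
# Numerical check of the Widom scaling relation gamma + beta = delta * beta
# using the exponent values quoted in the abstract.
beta, d_beta = 0.374, 0.006
gamma, d_gamma = 1.329, 0.003
delta, d_delta = 4.54, 0.10

lhs = gamma + beta                                        # 1.703
rhs = delta * beta                                        # ~1.698
d_lhs = (d_gamma**2 + d_beta**2) ** 0.5                   # error on gamma + beta
d_rhs = ((beta * d_delta)**2 + (delta * d_beta)**2) ** 0.5  # error on delta * beta

print(f"gamma + beta = {lhs:.3f} ± {d_lhs:.3f}")
print(f"delta * beta = {rhs:.3f} ± {d_rhs:.3f}")
# the difference (~0.005) is well inside the propagated uncertainties
assert abs(lhs - rhs) < d_lhs + d_rhs
```

The two sides agree to about 0.005 with a combined uncertainty near 0.05, so the relation is indeed satisfied within error bars.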
Abstract:
We consider the problem of compression of a non-Abelian source. This is motivated by the problem of distributed function computation, where it is known that if one is only interested in computing a function of several sources, then one can often improve upon the compression rate required by the Slepian-Wolf bound. Let G be a non-Abelian group with center Z(G). We show here that it is impossible to compress a source with symbols drawn from G when Z(G) is trivial if one employs a homomorphic encoder and a typical-set decoder. We provide achievable upper bounds on the minimum rate required to compress a non-Abelian group source with non-trivial center. Also, in a two-source setting, we provide achievable upper bounds for the compression of any non-Abelian group source, using a non-homomorphic encoder.
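The trivial-center condition is easy to check concretely for the smallest non-Abelian group, S3; the few lines below (an illustration of mine, not from the paper) verify that only the identity commutes with every element, so by the result above a full-support S3 source admits no homomorphic compression.

```python
# Sanity check: the symmetric group S3 has trivial center Z(S3) = {identity}.
from itertools import permutations

elements = list(permutations(range(3)))                   # the 6 elements of S3
def compose(f, g):
    # permutation composition: (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(3))

center = [z for z in elements
          if all(compose(z, g) == compose(g, z) for g in elements)]
print(center)   # only the identity permutation (0, 1, 2) commutes with everything
```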
Abstract:
In this paper, we explore the use of LDPC codes for nonuniform sources under the distributed source coding paradigm. Our analysis reveals that several capacity-approaching LDPC codes do indeed approach the Slepian-Wolf bound for nonuniform sources as well. Monte Carlo simulation results show that highly biased sources can be compressed to within 0.049 bits/sample of the Slepian-Wolf bound at moderate block lengths.
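The Slepian-Wolf target rate for such a setup is easy to compute for a concrete toy case. The parameters below (bias 0.1, BSC correlation 0.05) are my own illustrative choices, not the paper's simulation settings:

```python
# Slepian-Wolf bound for compressing a biased binary source X with side
# information Y at the decoder: the target rate is H(X|Y) bits/sample.
from math import log2

def h(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

px1 = 0.1                     # biased source: P(X = 1) = 0.1
flip = 0.05                   # Y is X passed through a BSC with crossover 0.05

py1 = px1 * (1 - flip) + (1 - px1) * flip
# chain rule: H(X|Y) = H(X) + H(Y|X) - H(Y)
h_x_given_y = h(px1) + h(flip) - h(py1)
print(f"H(X) = {h(px1):.3f} bits, H(X|Y) = {h_x_given_y:.3f} bits/sample")
```

Here H(X) ≈ 0.469 bits while H(X|Y) ≈ 0.171 bits/sample, which is the kind of bound the LDPC schemes above approach to within 0.049 bits/sample.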
Abstract:
The setting considered in this paper is one of distributed function computation. More specifically, there is a collection of N sources possessing correlated information and a destination that would like to acquire a specific linear combination of the N sources. We address both the case when the common alphabet of the sources is a finite field and the case when it is a finite, commutative principal ideal ring with identity. The goal is to minimize the total amount of information needed to be transmitted by the N sources while enabling reliable recovery at the destination of the linear combination sought. One means of achieving this goal is for each of the sources to compress all the information it possesses and transmit this to the receiver. The Slepian-Wolf theorem of information theory governs the minimum rate at which each source must transmit while enabling all data to be reliably recovered at the receiver. However, recovering all the data at the destination is often wasteful of resources, since the destination is only interested in computing a specific linear combination. An alternative explored here is one in which each source is compressed using a common linear mapping and then transmitted to the destination, which then proceeds to use linearity to directly recover the needed linear combination. The article is part review and presents, in part, new results. The portion of the paper that deals with finite fields is previously known material, while that dealing with rings is mostly new. Attempting to find the best linear map that will enable function computation forces us to consider the linear compression of a source. While in the finite field case it is known that a source can be linearly compressed down to its entropy, it turns out that the same does not hold in the case of rings. An explanation for this curious interplay between algebra and information theory is also provided in this paper.
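The common-linear-mapping idea can be made concrete with a tiny binary (GF(2)) example in the Korner-Marton spirit: both sources apply the same parity-check matrix of the Hamming(7,4) code, and when the two blocks differ in at most one position the receiver recovers their XOR from the two 3-bit syndromes. This is an illustrative sketch of mine, not a construction from the article:

```python
# Both sources transmit H @ block (mod 2) with the SAME matrix H; by linearity
# the receiver's XOR of the syndromes is the syndrome of z = x XOR y, and a
# weight-<=1 pattern z is uniquely decodable from its Hamming(7,4) syndrome.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # column j is the binary expansion of j+1

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=7)          # source-1 block
z = np.zeros(7, dtype=int)
z[4] = 1                                # x and y differ only at position 4
y = (x + z) % 2                         # source-2 block

s = (H @ x + H @ y) % 2                 # XOR of the two transmitted 3-bit syndromes
# the syndrome of a weight-1 pattern is the binary expansion of (position + 1)
pos = int("".join(str(int(b)) for b in s[::-1]), 2) - 1
z_hat = np.zeros(7, dtype=int)
if pos >= 0:
    z_hat[pos] = 1                      # pos == -1 would mean x == y

assert np.array_equal(z_hat, (x + y) % 2)
print("recovered X xor Y from two 3-bit syndromes instead of two 7-bit blocks")
```

Each source sends 3 bits instead of 7, yet the destination still obtains the desired linear combination exactly, which is the resource saving the abstract describes.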
Abstract:
The critical behaviour has been investigated in single crystalline Nd0.6Pb0.4MnO3 near the paramagnetic-to-ferromagnetic transition temperature (TC) by static magnetic measurements. The values of TC and the critical exponents β, γ and δ are estimated by analysing the data in the critical region. The exponent values are very close to those expected for 3D Heisenberg ferromagnets with short-range interactions. Specific heat measurements show a broad cusp at TC (i.e., exponent α < 0), consistent with Heisenberg-like behaviour.
Abstract:
In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of the set {X1,...,Xm} of m random variables whose joint probability distribution function is given.