341 results for Topological entropy
Abstract:
The structure of the hydrogen-bond network is a key element for understanding water's thermodynamic and kinetic anomalies. While ambient water is strongly believed to be a uniform, continuous hydrogen-bonded liquid, there is growing consensus that supercooled water is better described in terms of distinct domains with either a low-density ice-like structure or a high-density disordered one. We previously presented evidence for two distinct rotational mobilities of probe molecules in interstitial supercooled water of polycrystalline ice [Banerjee D, et al. (2009) ESR evidence for 2 coexisting liquid phases in deeply supercooled bulk water. Proc Natl Acad Sci USA 106:11448-11453]. Here we show that, by increasing the confinement of interstitial water, the mobility of probe molecules, surprisingly, increases. We argue that loose confinement allows the presence of ice-like regions in supercooled water, whereas tighter confinement suppresses this ordered fraction and leads to higher fluidity. Compelling evidence for the presence of ice-like regions is provided by the probe orientational entropy barrier, which is set, through hydrogen bonding, by the configuration of the surrounding water molecules and yields a direct measure of their configurational entropy. We find that, under loose confinement of supercooled water, the entropy barrier surmounted by the slower probe fraction exceeds that of equilibrium water by the melting entropy of ice, whereas no increase of the barrier is observed under stronger confinement. The lower limit of metastability of supercooled water is discussed.
Abstract:
Systematic measurements of the magnetocaloric effect and of the nature of the magnetic transition around the transition temperature are performed on 10 nm Pr0.5Ca0.5MnO3 nanoparticles (PCMO10). Maxwell's relation is employed to estimate the change in magnetic entropy. At the Curie temperature T_C ≈ 83.5 K, the magnetic entropy change (-ΔS_M) shows a typical variation with a value of 0.57 J/kg·K and is found to be magnetic-field dependent. From the area under the ΔS vs. T curve, the refrigeration capacity at T_C ≈ 83.5 K is calculated to be 7.01 J/kg. Arrott plots indicate that, due to the competition between the ferromagnetic and antiferromagnetic interactions, the magnetic phase transition in PCMO10 is broadly spread in both the temperature and magnetic-field coordinates. By tuning the particle size, size distribution, morphology, and relative fraction of magnetic phases, it may be possible to further enhance the magnetocaloric effect in PCMO10. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4759372]
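The Maxwell relation referred to above can be written in its standard form (a generic statement, not tied to this paper's units or data): the magnetic entropy change is obtained by integrating the temperature derivative of the magnetization over the applied field,

```latex
\Delta S_M(T, H_{\max}) \;=\; \int_0^{H_{\max}} \left( \frac{\partial M(T,H)}{\partial T} \right)_{H} \, dH
```

In practice the derivative is evaluated numerically from magnetization isotherms measured at closely spaced temperatures, and the refrigeration capacity follows from integrating ΔS_M over the temperature span of the peak.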
Abstract:
The fidelity of folding pathways being encoded in the amino acid sequence is challenged in instances where proteins with no sequence homology, different functions, and no apparent evolutionary linkage adopt a similar fold. Stated otherwise, only a limited fold space is available to a repertoire of diverse sequences. The key question is what factors lead to the formation of a fold from diverse sequences. Here, with the NAD(P)-binding Rossmann fold domains as a case study and using the concepts of network theory, we have unveiled the consensus structural features that drive the formation of this fold. We propose a graph-theoretic formalism to capture the structural details in terms of the conserved atomic interactions in a global milieu, and hence extract the essential topological features from diverse sequences. A unified mathematical representation of the different structures, together with a judicious combination of several network parameters, enabled us to probe the structural features driving the adoption of the NAD(P)-binding Rossmann fold. The atomic interactions at key positions seem to be better conserved in proteins than the residues participating in these interactions. We propose a "spatial motif" and several "fold-specific hot spots" that form the signature structural blueprints of the NAD(P)-binding Rossmann fold domain. Excellent agreement of our data with previous experimental and theoretical studies confirms the robustness of the approach. Additionally, comparison of our results with statistical coupling analysis (SCA) provides further support. The methodology proposed here is general and can be applied to similar problems of interest.
Abstract:
Phase equilibria in the system Tm-Rh-O at 1200 K are established by isothermal equilibration of selected compositions and phase identification after quenching to room temperature. Six intermetallic phases (Tm3Rh, Tm7Rh3, Tm5Rh3, Tm3Rh2, TmRh, TmRh2±δ) and a ternary oxide TmRhO3 are identified. Based on the experimentally determined phase relations, a solid-state electrochemical cell is devised to measure the standard Gibbs free energy of formation of orthorhombic perovskite TmRhO3 from cubic Tm2O3 and β-Rh2O3 in the temperature range from 900 to 1300 K. The results can be summarized as: ΔG°_f,ox/(J·mol⁻¹) (±104) = -46474 + 3.925(T/K). Invoking the Neumann-Kopp rule, the standard enthalpy of formation of TmRhO3 from its constituent elements at 298.15 K is estimated as -1193.89 (±2.86) kJ·mol⁻¹. The standard entropy of TmRhO3 at 298.15 K is evaluated as 103.8 (±1.6) J·mol⁻¹·K⁻¹. The oxygen potential-composition diagram and three-dimensional chemical potential diagram at 1200 K, and temperature-composition diagrams at constant partial pressures of oxygen, are computed from the thermodynamic data. The compound TmRhO3 decomposes at 1688 (±2) K in pure oxygen and at 1583 (±2) K in air at standard pressure.
Abstract:
We investigate the effect of the bilayer melting transition on the thermodynamics and dynamics of interfacial water using molecular dynamics simulation with the two-phase thermodynamic model. We show that the diffusivity of interface water exhibits a dynamic crossover at the chain melting transition, following Arrhenius behavior up to the transition temperature. The corresponding change in the diffusion coefficient from bulk to interface water is comparable with recent experimental observations for water near 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) vesicles [Phys. Chem. Chem. Phys. 13, 7732 (2011)]. The entropy and potential energy of interfacial water show distinct changes at the bilayer melting transition, indicating a strong correlation between the thermodynamic state of water and the accompanying first-order phase transition of the bilayer membrane. DOI: 10.1103/PhysRevLett.110.018303
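The Arrhenius behavior mentioned above has a simple functional form, D(T) = D0 exp(-Ea / (kB T)); a dynamic crossover shows up as a change of slope when ln D is plotted against 1/T. A minimal sketch (not the paper's simulation, and with illustrative parameter values):

```python
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius(T, D0, Ea):
    """Arrhenius diffusivity: D(T) = D0 * exp(-Ea / (kB * T)).

    T in kelvin, Ea in eV; the result has the units of D0.
    """
    return D0 * math.exp(-Ea / (KB_EV * T))

def activation_energy(T1, D1, T2, D2):
    """Recover Ea (eV) from two points on an Arrhenius plot.

    Follows from ln(D1/D2) = (Ea/kB) * (1/T2 - 1/T1).
    """
    return KB_EV * math.log(D1 / D2) / (1.0 / T2 - 1.0 / T1)
```

Fitting separate activation energies below and above a candidate crossover temperature, and checking whether they differ, is the standard way to locate such a crossover.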
Abstract:
Background: The correlation of genetic distances between pairs of protein sequence alignments has been used to infer protein-protein interactions. It has been suggested that these correlations are based on a signal of co-evolution between interacting proteins. However, although mutations in different proteins associated with maintaining an interaction clearly occur (particularly in binding interfaces and their neighbourhoods), many other factors contribute to correlated rates of sequence evolution. Proteins in the same genome are usually linked by shared evolutionary history, so topological similarities in their phylogenetic trees would be expected whether they interact or not. For this reason the underlying species tree is often corrected for. Moreover, processes such as expression level are known to affect evolutionary rates. However, it has been argued that the correlated rates of evolution used to predict protein interaction explicitly include shared evolutionary history; here we test this hypothesis. Results: In order to identify the evolutionary mechanisms giving rise to the correlations between interacting proteins, we use phylogenetic methods to distinguish similarities in tree topologies from similarities in genetic distances. We use a range of datasets of interacting and non-interacting proteins from Saccharomyces cerevisiae. We find that the signal of correlated evolution between interacting proteins is predominantly a result of shared evolutionary rates, rather than similarities in tree topology, independent of evolutionary divergence. Conclusions: Since interacting proteins do not have tree topologies that are more similar than those of the control group of non-interacting proteins, it is likely that coevolution contributes little, if anything, to the observed correlations.
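The correlation-of-genetic-distances test discussed above (often called the mirrortree approach) can be sketched as follows: flatten the pairwise distance matrices of two protein families, computed over the same set of species, and correlate them. The matrices and names here are illustrative, not from the paper.

```python
import math

def upper_triangle(d):
    """Flatten the strict upper triangle of a square distance matrix."""
    n = len(d)
    return [d[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mirror_tree_score(dist_a, dist_b):
    """Correlation of two families' pairwise genetic distances."""
    return pearson(upper_triangle(dist_a), upper_triangle(dist_b))
```

The paper's point is that a high score can reflect shared evolutionary rates rather than genuinely similar tree topologies, which is why distances and topologies must be disentangled.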
Abstract:
The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under some constraints. The importance of q-Gaussian distributions stems from the fact that they exhibit power-law behavior and also generalize Gaussian distributions. In this paper, we propose a Smoothed Functional (SF) scheme for gradient estimation using the q-Gaussian distribution, and propose an optimization algorithm based on this scheme. Convergence results for the algorithm are presented. The performance of the proposed algorithm is demonstrated by simulation results on a queuing model.
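A one-dimensional sketch of smoothed-functional gradient estimation may help fix ideas. For simplicity this uses a standard Gaussian perturbation rather than the q-Gaussian of the paper, and the two-sided estimator below is the generic SF form, not the authors' exact algorithm.

```python
import random

def sf_gradient(f, x, beta=0.1, n_samples=20000, seed=0):
    """Two-sided smoothed-functional estimate of f'(x).

    Perturbs x with Gaussian noise eta and averages
    eta * (f(x + beta*eta) - f(x - beta*eta)) / (2*beta),
    which converges to f'(x) as beta -> 0 and n_samples -> inf.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        eta = rng.gauss(0.0, 1.0)
        total += eta * (f(x + beta * eta) - f(x - beta * eta)) / (2.0 * beta)
    return total / n_samples
```

For a quadratic such as f(x) = x^2 the estimator is unbiased, so the estimate at x = 1 concentrates near the true gradient 2.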
Abstract:
Topoisomerases (topos) maintain DNA topology and influence DNA transaction processes by catalysing relaxation, supercoiling and decatenation reactions. In the cellular milieu, division of labour between different topos ensures topological homeostasis and control of central processes. In Escherichia coli, DNA gyrase is the principal enzyme that carries out negative supercoiling, while topo IV catalyses decatenation, relaxation and unknotting. DNA gyrase apparently has the daunting task of undertaking both the enzyme functions in mycobacteria, where topo IV is absent. We have shown previously that mycobacterial DNA gyrase is an efficient decatenase. Here, we demonstrate that the strong decatenation property of the enzyme is due to its ability to capture two DNA segments in trans. Topo IV, a strong dedicated decatenase of E. coli, also captures two distinct DNA molecules in a similar manner. In contrast, E. coli DNA gyrase, which is a poor decatenase, does not appear to be able to hold two different DNA molecules in a stable complex. The binding of a second DNA molecule to GyrB/ParE is inhibited by ATP and the non-hydrolysable analogue, AMPPNP, and by the substitution of a prominent positively charged residue in the GyrB N-terminal cavity, suggesting that this binding represents a potential T-segment positioned in the cavity. Thus, after the GyrA/ParC mediated initial DNA capture, GyrB/ParE would bind efficiently to a second DNA in trans to form a T-segment prior to nucleotide binding and closure of the gate during decatenation.
Abstract:
Intrinsically disordered proteins (IDPs) are proteins that lack a rigid 3D structure under physiological conditions, at least in vitro. Despite the lack of structure, IDPs play important roles in biological processes and transition from disorder to order upon binding to their targets. With multiple conformational states and rapid conformational dynamics, they engage in myriad and often "promiscuous" interactions. These stochastic interactions between IDPs and their partners, defined here as conformational noise, are an inherent characteristic of IDP interactions. The collective effect of conformational noise is an ensemble of protein network configurations, from which the most suitable can be explored in response to perturbations, conferring protein networks with remarkable flexibility and resilience. Moreover, the ubiquitous presence of IDPs as transcription factors and, more generally, as hubs in protein networks is indicative of their role in the propagation of transcriptional (genetic) noise. As effectors of transcriptional and conformational noise, IDPs rewire protein networks and unmask latent interactions in response to perturbations. Thus, noise-driven activation of latent pathways could underlie state-switching events such as cellular transformation in cancer. To test this hypothesis, we created a model of a protein network with the topological characteristics of a cancer protein network and tested its response to perturbation in the presence of IDP hubs and conformational noise. Because numerous IDPs are found to be epigenetic modifiers and chromatin remodelers, we hypothesize that they could further channel noise into stable, heritable genotypic changes.
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been only trivially exploited for efficient propagation of information, e.g., in identifying cyclic components or propagating information in topological order. We perform a careful study of its structure and propose a new kind of pointer equivalence based on the notion of dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new inclusion-based, context-sensitive, flow-insensitive points-to analysis algorithm that uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of the SPEC 2000 benchmarks and five large open-source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
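The constraint-graph propagation described above can be illustrated with a toy inclusion-based (Andersen-style) fixpoint loop. This is only the baseline that the paper improves on; the dominant-pointer equivalence itself is not reproduced here.

```python
def propagate(points_to, subset_edges):
    """Inclusion-based points-to propagation to a fixpoint.

    points_to: dict mapping variable -> set of abstract objects.
    subset_edges: list of (src, dst) pairs meaning pts(dst) must
    include pts(src), e.g. arising from an assignment dst = src.
    """
    changed = True
    while changed:
        changed = False
        for src, dst in subset_edges:
            before = len(points_to.setdefault(dst, set()))
            points_to[dst] |= points_to.get(src, set())
            if len(points_to[dst]) != before:
                changed = True
    return points_to
```

Real implementations avoid re-scanning all edges each round (worklists, cycle collapsing); equivalence classes such as the paper's dominant pointers shrink the graph before propagation even starts.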
Abstract:
Users can rarely reveal their information need in full detail to a search engine within 1-2 words, so search engines need to "hedge their bets" and present diverse results within the precious 10 response slots. Diversity in ranking has attracted much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes such that the effective conductance from the centers to the rest of the graph has maximum entropy. We give a cogent semantic justification for thus turning PageRank on its head. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently published diversity algorithms such as subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
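The resistive-graph quantity underlying the formulation above is effective conductance (the reciprocal of effective resistance) between nodes. A hedged sketch of how it is computed, by grounding one node and solving the reduced graph Laplacian; the graph and conductance values are illustrative, and this is not the paper's center-selection algorithm.

```python
def effective_resistance(n, edges, s, t):
    """Effective resistance between nodes s and t of an n-node graph.

    edges: list of (u, v, conductance). Grounds node t, injects unit
    current at s, and solves the reduced Laplacian system; the voltage
    at s equals the effective resistance.
    """
    # Build the weighted graph Laplacian.
    L = [[0.0] * n for _ in range(n)]
    for u, v, g in edges:
        L[u][u] += g
        L[v][v] += g
        L[u][v] -= g
        L[v][u] -= g
    # Drop the grounded node's row/column; unit current source at s.
    idx = [i for i in range(n) if i != t]
    A = [[L[i][j] for j in idx] for i in idx]
    b = [1.0 if i == s else 0.0 for i in idx]
    # Gaussian elimination with partial pivoting.
    m = len(A)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    return x[idx.index(s)]
```

For a three-node path with unit conductances, the two edges act as series resistors, so the end-to-end effective resistance is 2.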
Abstract:
Data prefetchers identify and exploit any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by the cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve prefetcher performance. In addition to empirical evaluation, we use the information-theoretic metric of entropy to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta-correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and the best-performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best-performing design point for a benchmark-prefetcher combination at runtime. On the SPEC2000 benchmarks, using all L2 accesses as prefetcher history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in memory traffic requirements.
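The entropy measurement above can be sketched generically: take the delta (stride) stream derived from a sequence of addresses and compute the Shannon entropy of the delta distribution. Lower entropy means a more regular, more prefetchable history. The address values here are illustrative.

```python
import math
from collections import Counter

def delta_entropy(addresses):
    """Shannon entropy (bits) of the delta stream of an address trace."""
    deltas = [b - a for a, b in zip(addresses, addresses[1:])]
    counts = Counter(deltas)
    n = len(deltas)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant-stride trace has zero delta entropy; a trace alternating between two equally frequent deltas has one bit, matching the intuition that extended, more regular histories should score lower.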
Abstract:
In this paper, we consider a distributed function computation setting where there are m distributed but correlated sources X1, ..., Xm and a receiver interested in computing an s-dimensional subspace generated by [X1, ..., Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of the m random variables {X_i, i = 1, ..., m}, whose joint probability distribution function is given.
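The Korner-Marton versus Slepian-Wolf comparison invoked above is easiest to see in the classic two-source binary example, which is only an illustration of the trade-off, not the paper's general subspace setting: to compute the mod-2 sum Z = X XOR Y, where Y = X XOR N with N ~ Bernoulli(q), a common linear encoder achieves sum rate 2h(q), while recovering both sources costs H(X, Y) = 1 + h(q).

```python
import math

def h(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def korner_marton_sum_rate(q):
    """Sum rate of the common-linear-encoder scheme for the mod-2 sum."""
    return 2 * h(q)

def slepian_wolf_sum_rate(q):
    """Sum rate needed to recover both sources of the binary pair."""
    return 1 + h(q)
```

For small q the linear scheme wins (2h(q) < 1 + h(q) whenever h(q) < 1); nested linear codes interpolate between the two regimes.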
Abstract:
This paper extends some geometric properties of a one-parameter family of relative entropies. These arise as redundancies when cumulants of compressed lengths are considered instead of expected compressed lengths. These parametric relative entropies are a generalization of the Kullback-Leibler divergence. They satisfy the Pythagorean property and behave like squared distances. This property, which was known for finite alphabet spaces, is now extended for general measure spaces. Existence of projections onto convex and certain closed sets is also established. Our results may have applications in the Rényi entropy maximization rule of statistical physics.
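One concrete member of a one-parameter family generalizing the Kullback-Leibler divergence is the Rényi divergence of order alpha; whether this matches the paper's exact parametrization is an assumption, and the sketch below covers finite alphabets only.

```python
import math

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha between finite distributions p, q.

    D_alpha(p || q) = (1 / (alpha - 1)) * log2( sum_i p_i^alpha * q_i^(1-alpha) ).
    Reduces to the Kullback-Leibler divergence as alpha -> 1.
    """
    if alpha == 1.0:  # KL divergence as the limiting case
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log2(s) / (alpha - 1)
```

It is zero exactly when p = q and nonnegative otherwise, consistent with the squared-distance behavior discussed in the abstract.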
Abstract:
We propose a set of metrics that evaluate the uniformity, sharpness, continuity, noise, stroke width variance, pulse width ratio, transient pixel density, entropy, and variance of components to quantify the quality of a document image. The measures are intended to be used in any optical character recognition (OCR) engine to a priori estimate the expected performance of the OCR. The suggested measures have been evaluated on many document images in different scripts. The quality of each document image is manually annotated by users to create a ground truth. The idea is to correlate the values of the measures with the user-annotated data: if a calculated measure matches the annotated description, the metric is accepted; otherwise it is rejected. Of the set of metrics proposed, some are accepted and the rest are rejected. We have defined metrics that are easily estimable. The metrics proposed in this paper are based on feedback from home-grown OCR engines for Indic (Tamil and Kannada) languages. The metrics are independent of the script and depend only on the quality and age of the paper and the printing. Experiments and results for each proposed metric are discussed. Actual recognition of the printed text is not performed to evaluate the proposed metrics. Sometimes a document image containing broken characters is rated as a good document image by the evaluated metrics; this remains an unsolved challenge. The proposed measures work on grayscale document images and fail to provide reliable information on binarized document images.
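One of the measure types listed above, entropy, can be sketched generically as the Shannon entropy of a grayscale image's intensity histogram; the pixel representation (a flat list of 0-255 values) and any quality interpretation are illustrative, not the paper's calibrated metric.

```python
import math
from collections import Counter

def grayscale_entropy(pixels):
    """Shannon entropy (bits) of the gray-level histogram of an image.

    pixels: flat iterable of integer intensities (e.g. 0-255). A flat,
    single-tone image scores 0; richer tonal variation scores higher.
    """
    pixels = list(pixels)
    n = len(pixels)
    hist = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())
```

This also shows why such measures fail on binarized images: with only two gray levels the histogram entropy is capped at one bit regardless of print quality.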