932 results for multiscale entropy
Abstract:
Motivated by the viscosity bound in gauge/gravity duality, we consider the ratio of shear viscosity (η) to entropy density (s) in black hole accretion flows. We use both an ideal gas equation of state and the QCD equation of state obtained from the lattice for the fluid accreting onto a Kerr black hole. The QCD equation of state is considered since the temperature of accreting matter is expected to approach 10^12 K in certain hot flows. We find that in both cases η/s is small only for primordial black holes, and is several orders of magnitude larger than in any known fluid for stellar and supermassive black holes. We show that a lower bound on the mass of primordial black holes leads to a lower bound on η/s and vice versa. Finally, we speculate that the Shakura-Sunyaev viscosity parameter should decrease with increasing density and/or temperature.
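For reference, the viscosity bound motivating this study is the Kovtun-Son-Starinets (KSS) bound from gauge/gravity duality, which in conventional units reads

```latex
\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B},
```

a value saturated by strongly coupled holographic plasmas and approached most closely among known fluids by the quark-gluon plasma.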
Abstract:
We present global multidimensional numerical simulations of the plasma that pervades the dark matter haloes of clusters, groups and massive galaxies (the intracluster medium; ICM). Observations of clusters and groups imply that such haloes are roughly in global thermal equilibrium, with heating balancing cooling when averaged over sufficiently long time- and length-scales; the ICM is, however, very likely to be locally thermally unstable. Using simple observationally motivated heating prescriptions, we show that local thermal instability (TI) can produce a multiphase medium with ~10^4 K cold filaments condensing out of the hot ICM only when the ratio of the TI time-scale in the hot plasma (t_TI) to the free-fall time-scale (t_ff) satisfies t_TI/t_ff ≲ 10. This criterion quantitatively explains why cold gas and star formation are preferentially observed in low-entropy clusters and groups. In addition, the interplay among heating, cooling and TI reduces the net cooling rate and the mass accretion rate at small radii by factors of ~100 relative to cooling-flow models. This dramatic reduction is in line with observations. The feedback efficiency required to prevent a cooling flow is ~10^-3 for clusters and decreases for lower mass haloes; supernova heating may be energetically sufficient to balance cooling in galactic haloes. We further argue that the ICM self-adjusts so that t_TI/t_ff ≳ 10 at all radii. When this criterion is not satisfied, cold filaments condense out of the hot phase and reduce the density of the ICM. These cold filaments can power the black hole and/or stellar feedback required for global thermal balance, which drives t_TI/t_ff ≳ 10. In comparison to clusters, groups have central cores with lower densities and larger radii. This can account for the deviations from self-similarity in the X-ray luminosity-temperature relation. The high-velocity clouds observed in the Galactic halo can be due to local TI producing multiphase gas close to the virial radius if the density of the hot plasma in the Galactic halo is ≳ 10^-5 cm^-3 at large radii.
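As a sketch of the quantities entering this criterion (standard definitions in the cluster cooling literature; the paper's exact prefactors may differ), the free-fall and cooling time-scales are typically taken as

```latex
t_{\rm ff} = \sqrt{\frac{2r}{g(r)}}, \qquad
t_{\rm cool} = \frac{\tfrac{3}{2}\,n\,k_B T}{n_e n_i\,\Lambda(T)},
```

where g(r) is the local gravitational acceleration, n the total particle density and Λ(T) the cooling function; t_TI equals t_cool up to an order-unity factor that depends on the logarithmic slope of Λ(T), and multiphase condensation sets in wherever t_TI/t_ff ≲ 10.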
Abstract:
Multiwavelength data indicate that the X-ray-emitting plasma in the cores of galaxy clusters is not cooling catastrophically. To a large extent, cooling is offset by heating due to active galactic nuclei (AGNs) via jets. The cool-core clusters, with cooler/denser plasmas, show multiphase gas and signs of some cooling in their cores. These observations suggest that the cool core is locally thermally unstable while maintaining global thermal equilibrium. Using high-resolution, three-dimensional simulations we study the formation of multiphase gas in cluster cores heated by collimated bipolar AGN jets. Our key conclusion is that spatially extended multiphase filaments form only when the instantaneous ratio of the thermal instability and free-fall timescales (t_TI/t_ff) falls below a critical threshold of ≈ 10. When this happens, dense cold gas decouples from the hot intracluster medium (ICM) phase and generates inhomogeneous and spatially extended Hα filaments. These cold gas clumps and filaments "rain" down onto the central regions of the core, forming a cold rotating torus and in part feeding the supermassive black hole. Consequently, the self-regulated feedback enhances AGN heating and the core returns to a higher entropy level with t_TI/t_ff > 10. Eventually, the core reaches quasi-stable global thermal equilibrium, and cold filaments condense out of the hot ICM whenever t_TI/t_ff ≲ 10. This occurs despite the fact that the energy from AGN jets is supplied to the core in a highly anisotropic fashion. The effective spatial redistribution of heat is enabled in part by the turbulent motions in the wake of freely falling cold filaments. Increased AGN activity can locally reverse the cold gas flow, launching cold filamentary gas away from the cluster center. Our criterion for the condensation of spatially extended cold gas is in agreement with observations and previous idealized simulations.
Abstract:
We consider counterterms for odd dimensional holographic conformal field theories (CFTs). These counterterms are derived by demanding cutoff independence of the CFT partition function on S^d and S^1 × S^(d-1). The same choice of counterterms leads to a cutoff independent Schwarzschild black hole entropy. When treated as independent actions, these counterterm actions resemble critical theories of gravity, i.e., higher curvature gravity theories where the additional massive spin-2 modes become massless. Equivalently, in the context of AdS/CFT, these are theories where at least one of the central charges associated with the trace anomaly vanishes. Connections between these theories and logarithmic CFTs are discussed. For a specific choice of parameters, the theories arising from counterterms are nondynamical and resemble a Dirac-Born-Infeld generalization of gravity. For even dimensional CFTs, analogous counterterms cancel log-independent cutoff dependence.
Abstract:
Urbanisation is a dynamic complex phenomenon involving large-scale changes in land uses at local levels. Analyses of changes in land uses in urban environments provide a historical perspective of land use and an opportunity to assess the spatial patterns, correlations, trends, rates and impacts of the change, which would help in better regional planning and good governance of the region. The main objective of this research is to quantify the urban dynamics using temporal remote sensing data with the help of well-established landscape metrics. Bangalore, being one of the most rapidly urbanising landscapes in India, has been chosen for this investigation. The complex process of urban sprawl was modelled using spatio-temporal analysis. Land use analyses show 584% growth in built-up area during the last four decades, with a decline of vegetation by 66% and of water bodies by 74%. Analyses of the temporal data reveal an increase in urban built-up area of 342.83% (during 1973-1992), 129.56% (during 1992-1999), 106.7% (1999-2002), 114.51% (2002-2006) and 126.19% from 2006 to 2010. The study area was divided into four zones, and each zone was further divided into 17 concentric circles of radii increasing in 1 km increments, to understand the patterns and extent of the urbanisation at local levels. The urban density gradient illustrates a radial pattern of urbanisation for the period 1973-2010. Bangalore grew radially from 1973 to 2010, indicating that the urbanisation is intensifying from the central core and has reached the periphery of Greater Bangalore. Shannon's entropy, alpha and beta population densities were computed to understand the level of urbanisation at local levels. Shannon's entropy values of recent times confirm dispersed haphazard urban growth in the city, particularly in the outskirts. This also illustrates the extent of influence of the drivers of urbanisation in various directions. Landscape metrics provided in-depth knowledge about the sprawl. Principal component analysis helped in prioritising the metrics for detailed analyses. The results clearly indicate that the whole landscape is aggregating into a large patch in 2010 as compared to earlier years, which were dominated by several small patches. The large-scale conversion of small patches to a large single patch can be seen from 2006 to 2010. In the year 2010 patches are maximally aggregated, indicating that the city is becoming more compact and more urbanised in recent years. Bangalore was the most sought-after destination for its climatic conditions and the availability of various facilities (land availability, economy, political factors) compared to other cities. The growth into a single urban patch can be attributed to rapid urbanisation coupled with industrialisation. Monitoring of growth through landscape metrics helps to maintain and manage the natural resources.
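A minimal sketch of the Shannon's entropy computation used in such sprawl studies, H = -Σ p_i log(p_i) over n spatial zones, where p_i is the proportion of built-up area in zone i (the zone areas below are illustrative, not the paper's data): values approaching log(n) indicate dispersed, sprawling growth, while values near 0 indicate compact growth.

```python
import math

def shannon_entropy(builtup_by_zone):
    """Shannon's entropy of built-up area spread across spatial zones.

    builtup_by_zone: built-up area (e.g., hectares) per concentric
    circle/zone. Returns (H, H_max); H close to H_max = log(n)
    signals dispersed sprawl, H close to 0 signals compact growth.
    """
    total = sum(builtup_by_zone)
    ps = [a / total for a in builtup_by_zone if a > 0]
    h = -sum(p * math.log(p) for p in ps)
    return h, math.log(len(builtup_by_zone))

# Illustrative (not actual) built-up areas for 17 concentric 1 km rings:
areas = [120, 110, 95, 90, 80, 70, 65, 60, 50, 45, 40, 30, 25, 20, 15, 10, 5]
h, h_max = shannon_entropy(areas)
print(f"H = {h:.3f} vs H_max = {h_max:.3f}")
```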
Abstract:
Artificial viscosity in SPH-based computations of impact dynamics is a numerical artifice that helps stabilize spurious oscillations near the shock fronts and requires certain user-defined parameters. An improper choice of these parameters may lead to spurious entropy generation within the discretized system and make it over-dissipative. This is of particular concern in impact mechanics problems, wherein the transient structural response may depend sensitively on the transfer of momentum and kinetic energy due to impact. In order to address this difficulty, an acceleration correction algorithm was proposed in Shaw and Reid ("Heuristic acceleration correction algorithm for use in SPH computations in impact mechanics," Comput. Methods Appl. Mech. Engrg., 198, 3962-3974) and further rationalized in Shaw et al. ("An Optimally Corrected Form of Acceleration Correction Algorithm within SPH-based Simulations of Solid Mechanics," submitted to Comput. Methods Appl. Mech. Engrg.). It was shown that the acceleration correction algorithm removes spurious high-frequency oscillations in the computed response whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. In this paper, we aim at gathering further insights into the acceleration correction algorithm by exploring its application to problems related to impact dynamics. The numerical evidence in this work thus establishes that, together with the acceleration correction algorithm, SPH can be used as an accurate and efficient tool in dynamic, inelastic structural mechanics.
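For context, a widely used Monaghan-type artificial viscosity term between particles i and j (a standard form; the papers cited may use a variant) is

```latex
\Pi_{ij} =
\begin{cases}
\dfrac{-\alpha\,\bar{c}_{ij}\,\mu_{ij} + \beta\,\mu_{ij}^{2}}{\bar{\rho}_{ij}}, &
\mathbf{v}_{ij}\cdot\mathbf{r}_{ij} < 0,\\
0, & \mathbf{v}_{ij}\cdot\mathbf{r}_{ij} \ge 0,
\end{cases}
\qquad
\mu_{ij} = \frac{h\,\mathbf{v}_{ij}\cdot\mathbf{r}_{ij}}{|\mathbf{r}_{ij}|^{2} + \varepsilon h^{2}},
```

where α and β are the user-defined parameters in question, c̄_ij and ρ̄_ij are pair-averaged sound speed and density, h is the smoothing length, and the εh² term prevents divergence for nearly coincident particles.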
Abstract:
The Adam-Gibbs relation between relaxation times and the configurational entropy has been tested extensively for glass formers using experimental data and computer simulation results. Although the form of the relation contains no dependence on the spatial dimensionality in the original formulation, subsequent derivations of the Adam-Gibbs relation allow for such a possibility. We test the Adam-Gibbs relation in two, three, and four spatial dimensions using computer simulations of model glass formers. We find that the relation is valid in three and four dimensions, but in two dimensions the relation does not hold; interestingly, no single alternative relation describes the results for the different model systems we study.
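For reference, the Adam-Gibbs relation being tested is

```latex
\tau = \tau_0 \exp\!\left(\frac{A}{T S_c}\right),
```

where τ is the structural relaxation time, S_c the configurational entropy, and τ₀ and A material-dependent constants; note that no spatial dimensionality appears explicitly in this form.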
Abstract:
The structure of the hydrogen bond network is a key element for understanding water's thermodynamic and kinetic anomalies. While ambient water is strongly believed to be a uniform, continuous hydrogen-bonded liquid, there is growing consensus that supercooled water is better described in terms of distinct domains with either a low-density ice-like structure or a high-density disordered one. We evidenced two distinct rotational mobilities of probe molecules in interstitial supercooled water of polycrystalline ice [Banerjee D, et al. (2009) ESR evidence for 2 coexisting liquid phases in deeply supercooled bulk water. Proc Natl Acad Sci USA 106:11448-11453]. Here we show that, by increasing the confinement of interstitial water, the mobility of probe molecules, surprisingly, increases. We argue that loose confinement allows the presence of ice-like regions in supercooled water, whereas a tighter confinement yields the suppression of this ordered fraction and leads to higher fluidity. Compelling evidence of the presence of ice-like regions is provided by the probe orientational entropy barrier, which is set, through hydrogen bonding, by the configuration of the surrounding water molecules and yields a direct measure of the configurational entropy of the same. We find that, under loose confinement of supercooled water, the entropy barrier surmounted by the slower probe fraction exceeds that of equilibrium water by the melting entropy of ice, whereas no increase of the barrier is observed under stronger confinement. The lower limit of metastability of supercooled water is discussed.
Abstract:
Systematic measurements pertinent to the magnetocaloric effect and the nature of the magnetic transition around the transition temperature are performed on 10 nm Pr0.5Ca0.5MnO3 nanoparticles (PCMO10). Maxwell's relation is employed to estimate the change in magnetic entropy. At the Curie temperature (T_C) ~83.5 K, the change in magnetic entropy (-ΔS_M) shows the typical variation, with a value of 0.57 J/(kg·K), and is found to be magnetic field dependent. From the area under the curve (ΔS vs. T), the refrigeration capacity is calculated at T_C ~83.5 K and is found to be 7.01 J/kg. Arrott plots infer that, due to the competition between the ferromagnetic and antiferromagnetic interactions, the magnetic phase transition in PCMO10 is broadly spread over both the temperature and the magnetic field coordinates. Upon tuning the particle size, size distribution, morphology, and relative fraction of magnetic phases, it may be possible to enhance the magnetocaloric effect further in PCMO10.
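The Maxwell relation used here to extract the magnetic entropy change from magnetization isotherms M(T, H), together with the refrigeration capacity (RC) quoted above, reads (SI form; field unit conventions may fold μ₀ into H):

```latex
\Delta S_M(T, H) = \mu_0 \int_{0}^{H} \left(\frac{\partial M}{\partial T}\right)_{H'} \mathrm{d}H',
\qquad
\mathrm{RC} = \int_{T_1}^{T_2} \left|\Delta S_M(T)\right| \mathrm{d}T,
```

where T₁ and T₂ bound the peak in ΔS_M(T), commonly taken at its full width at half maximum.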
Abstract:
Phase equilibria in the system Tm-Rh-O at 1200 K are established by isothermal equilibration of selected compositions and phase identification after quenching to room temperature. Six intermetallic phases (Tm3Rh, Tm7Rh3, Tm5Rh3, Tm3Rh2, TmRh, TmRh2±δ) and a ternary oxide TmRhO3 are identified. Based on the experimentally determined phase relations, a solid-state electrochemical cell is devised to measure the standard free energy of formation of orthorhombic perovskite TmRhO3 from cubic Tm2O3 and β-Rh2O3 in the temperature range from 900 to 1300 K. The results can be summarized as: ΔG°_f,ox/(J·mol^-1) = -46474 + 3.925(T/K), with an uncertainty of ±104 J·mol^-1. Invoking the Neumann-Kopp rule, the standard enthalpy of formation of TmRhO3 from its constituent elements at 298.15 K is estimated as -1193.89 (±2.86) kJ·mol^-1. The standard entropy of TmRhO3 at 298.15 K is evaluated as 103.8 (±1.6) J·mol^-1·K^-1. The oxygen potential-composition diagram and three-dimensional chemical potential diagram at 1200 K, and temperature-composition diagrams at constant partial pressures of oxygen, are computed from the thermodynamic data. The compound TmRhO3 decomposes at 1688 (±2) K in pure oxygen and at 1583 (±2) K in air at standard pressure.
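As a quick worked check of the fitted expression (arithmetic only, using the coefficients above), at a mid-range temperature of T = 1100 K:

```latex
\Delta G^{\circ}_{f,\mathrm{ox}} = \left(-46474 + 3.925 \times 1100\right)\ \mathrm{J\,mol^{-1}}
\approx -42.2\ \mathrm{kJ\,mol^{-1}},
```

i.e., TmRhO3 is stable with respect to Tm2O3 + β-Rh2O3 throughout the measured range, with the stability margin shrinking as T rises (positive temperature coefficient), consistent with the decomposition temperatures quoted.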
Abstract:
We investigate the effect of the bilayer melting transition on the thermodynamics and dynamics of interfacial water using molecular dynamics simulation with the two-phase thermodynamic model. We show that the diffusivity of interface water exhibits a dynamic crossover at the chain melting transition, following an Arrhenius behavior up to the transition temperature. The corresponding change in the diffusion coefficient from the bulk to the interface water is comparable with experimental observations found recently for water near 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) vesicles [Phys. Chem. Chem. Phys. 13, 7732 (2011)]. The entropy and potential energy of interfacial water show distinct changes at the bilayer melting transition, indicating a strong correlation between the thermodynamic state of water and the accompanying first-order phase transition of the bilayer membrane.
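The Arrhenius behavior referred to for the interface-water diffusivity has the standard form

```latex
D(T) = D_0 \exp\!\left(-\frac{E_a}{k_B T}\right),
```

so the dynamic crossover at the chain melting transition appears as a change of slope (i.e., of activation energy E_a) in a plot of ln D against 1/T.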
Abstract:
The q-Gaussian distribution results from maximizing certain generalizations of Shannon entropy under some constraints. The importance of q-Gaussian distributions stems from the fact that they exhibit power-law behavior and also generalize Gaussian distributions. In this paper, we propose a Smoothed Functional (SF) scheme for gradient estimation using the q-Gaussian distribution, and also propose an optimization algorithm based on this scheme. Convergence results for the algorithm are presented. The performance of the proposed algorithm is demonstrated through simulation results on a queuing model.
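A minimal sketch of the ingredients, under stated assumptions: the two-sided smoothed-functional estimator below and the generalized Box-Muller q-Gaussian sampler (Thistleton et al., 2007) are standard constructions, but the paper's exact estimator, step-size schedules, and queuing objective may differ; `objective` here is a stand-in test function.

```python
import math
import random

def q_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1-q); q -> 1 gives ln(x)."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_gaussian_sample(q):
    """Standard q-Gaussian sample via the generalized Box-Muller method
    (valid for q < 3); q = 1 recovers the ordinary Gaussian."""
    qp = (1.0 + q) / (3.0 - q)          # the q' parameter of the method
    u1 = 1.0 - random.random()           # in (0, 1], avoids log(0)
    u2 = random.random()
    return math.sqrt(-2.0 * q_log(u1, qp)) * math.cos(2.0 * math.pi * u2)

def sf_gradient(objective, x, q=1.5, beta=0.1, n_samples=2000):
    """Two-sided smoothed-functional gradient estimate of `objective` at x
    using q-Gaussian perturbations (a sketch, not the paper's exact form)."""
    dim = len(x)
    grad = [0.0] * dim
    for _ in range(n_samples):
        eta = [q_gaussian_sample(q) for _ in range(dim)]
        xp = [xi + beta * e for xi, e in zip(x, eta)]
        xm = [xi - beta * e for xi, e in zip(x, eta)]
        diff = (objective(xp) - objective(xm)) / (2.0 * beta)
        for i in range(dim):
            grad[i] += eta[i] * diff / n_samples
    return grad

# Illustrative use on a smooth test function (a stand-in for a queuing cost);
# the estimate is proportional to the true gradient [2, -4], with the scale
# set by the perturbation variance (exactly 1 only as q -> 1).
f = lambda x: sum(xi * xi for xi in x)
print(sf_gradient(f, [1.0, -2.0]))
```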
Abstract:
Users can rarely reveal their information need in full detail to a search engine within 1-2 words, so search engines need to "hedge their bets" and present diverse results within the precious 10 response slots. Diversity in ranking has attracted much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes so that the effective conductance from the center to the rest of the graph has maximum entropy. We give a cogent semantic justification for turning PageRank thus on its head. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently published diversity algorithms like subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
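A rough sketch of the resistive-centers idea (my reading of the formulation, not the authors' code): effective resistances come from the graph Laplacian pseudoinverse, and centers are chosen greedily to maximize the entropy of the conductance distribution from the chosen centers to the remaining nodes. Edge conductances are given here, whereas the paper learns them from training data.

```python
import numpy as np

def effective_resistances(adj):
    """Pairwise effective resistances R_ij from the Laplacian pseudoinverse."""
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2.0 * Lp

def conductance_entropy(centers, R):
    """Entropy of normalized effective conductances from each non-center
    node to its most conductive center."""
    others = [v for v in range(R.shape[0]) if v not in centers]
    cond = np.array([max(1.0 / R[v, c] for c in centers) for v in others])
    p = cond / cond.sum()
    return float(-(p * np.log(p)).sum())

def pick_centers(adj, k):
    """Greedy selection of k diverse center nodes."""
    R = effective_resistances(adj)
    centers = []
    for _ in range(k):
        best = max((v for v in range(adj.shape[0]) if v not in centers),
                   key=lambda v: conductance_entropy(centers + [v], R))
        centers.append(best)
    return centers

# Tiny illustrative graph: two triangles joined by a single bridge edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(pick_centers(A, 2))  # expect the centers spread across the two clusters
```

Maximizing the entropy prefers centers whose conductance to the rest of the graph is spread evenly, i.e., centers covering distinct regions rather than crowding one cluster, which is the diversity intuition.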
Abstract:
Data prefetchers identify and exploit any regularity present in the history/training stream to predict future references and prefetch them into the cache. The training information used is typically the primary misses seen at a particular cache level, which is a filtered version of the accesses seen by the cache. In this work we demonstrate that extending the training information to include secondary misses and hits, along with primary misses, helps improve the performance of prefetchers. In addition to empirical evaluation, we use the information-theoretic metric entropy to quantify the regularity present in extended histories. Entropy measurements indicate that extended histories are more regular than the default primary-miss-only training stream, and they corroborate our empirical findings. With extended histories, further benefits can be achieved by also triggering prefetches on secondary misses. In this paper we explore the design space of extended prefetch histories and alternative prefetch trigger points for delta-correlation prefetchers. We observe that different prefetch schemes benefit to different extents from extended histories and alternative trigger points, and the best-performing design point varies on a per-benchmark basis. To meet these requirements, we propose a simple adaptive scheme that identifies the best-performing design point for a benchmark-prefetcher combination at runtime. On SPEC2000 benchmarks, using all the L2 accesses as the prefetcher's history improves performance, in terms of both IPC and misses reduced, over techniques that use only primary misses as history. The adaptive scheme improves the performance of the CZone prefetcher over the baseline by 4.6% on average. These performance gains are accompanied by a moderate reduction in the memory traffic requirements.
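A toy sketch of the two ideas involved, under stated assumptions (the two-delta matching rule, table organization, and the entropy-over-deltas measurement below are illustrative; the paper's CZone and adaptive machinery are not reproduced): a delta-correlation prefetcher trained on an extended history, and entropy of the delta stream as a regularity measure.

```python
import math
from collections import Counter, defaultdict, deque

class DeltaCorrelationPrefetcher:
    """Toy delta-correlation prefetcher: records address deltas from the
    training stream (primary misses, optionally extended with secondary
    misses and hits) and predicts the delta that historically followed
    the last observed delta pair."""
    def __init__(self, history_len=64):
        self.history = deque(maxlen=history_len)  # recent deltas
        self.table = defaultdict(Counter)         # (d1, d2) -> next-delta counts
        self.last_addr = None

    def train(self, addr):
        if self.last_addr is not None:
            delta = addr - self.last_addr
            if len(self.history) >= 2:
                pair = (self.history[-2], self.history[-1])
                self.table[pair][delta] += 1
            self.history.append(delta)
        self.last_addr = addr

    def predict(self):
        if len(self.history) < 2:
            return None
        nxt = self.table.get((self.history[-2], self.history[-1]))
        if not nxt:
            return None
        return self.last_addr + nxt.most_common(1)[0][0]  # prefetch candidate

def delta_entropy(addrs):
    """Entropy (bits) of the delta stream; lower means a more regular history."""
    deltas = [b - a for a, b in zip(addrs, addrs[1:])]
    counts = Counter(deltas)
    n = len(deltas)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# An extended history (all L2 accesses) can be more regular than the
# filtered primary-miss stream derived from it:
stream = [x * 64 for x in range(32)]                       # strided accesses
misses = [stream[i] for i in (0, 3, 5, 6, 10, 15, 17, 22, 28, 31)]
print(delta_entropy(stream), delta_entropy(misses))        # 0.0 vs > 0

pf = DeltaCorrelationPrefetcher()
for a in stream:
    pf.train(a)
print(pf.predict())  # next expected address: last address + 64
```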
Abstract:
In this paper, we consider a distributed function computation setting, where there are m distributed but correlated sources X1,...,Xm and a receiver interested in computing an s-dimensional subspace generated by [X1,...,Xm]Γ for some (m × s) matrix Γ of rank s. We construct a scheme based on nested linear codes and characterize the achievable rates obtained using the scheme. The proposed nested-linear-code approach performs at least as well as the Slepian-Wolf scheme in terms of sum-rate performance for all subspaces and source distributions. In addition, for a large class of distributions and subspaces, the scheme improves upon the Slepian-Wolf approach. The nested-linear-code scheme may be viewed as uniting under a common framework both the Korner-Marton approach of using a common linear encoder and the Slepian-Wolf approach of employing different encoders at each source. Along the way, we prove an interesting and fundamental structural result on the nature of subspaces of an m-dimensional vector space V with respect to a normalized measure of entropy. Here, each element in V corresponds to a distinct linear combination of a set {X_i}_{i=1}^m of m random variables whose joint probability distribution function is given.
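A tiny numerical illustration of the GF(2) linearity that the Korner-Marton common-encoder approach exploits (the matrix A and the sources below are illustrative; a real scheme needs a code for which the modulo-2 sum is reliably decodable): if both sources encode with the same binary matrix A, the receiver can combine the two encodings into a valid encoding of the modulo-2 sum X1 ⊕ X2, so only that sum, not each source, needs to be recovered.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 16, 8                          # source block length, encoded length
A = rng.integers(0, 2, size=(k, n))   # common linear encoder (illustrative)

x1 = rng.integers(0, 2, size=n)       # source 1
noise = (rng.random(n) < 0.1).astype(int)
x2 = (x1 + noise) % 2                 # source 2, correlated with source 1

y1 = A @ x1 % 2                       # encoding sent by source 1
y2 = A @ x2 % 2                       # encoding sent by source 2

# Linearity over GF(2): the XOR of the encodings equals the encoding of the XOR.
lhs = (y1 + y2) % 2
rhs = A @ ((x1 + x2) % 2) % 2
print("A.x1 xor A.x2 == A.(x1 xor x2):", np.array_equal(lhs, rhs))
```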