952 results for 240503 Thermodynamics and Statistical Physics


Relevance: 100.00%

Abstract:

There are two principal chemical concepts that are important for studying the natural environment. The first is thermodynamics, which describes whether a system is at equilibrium or can change spontaneously through chemical reactions. The second is kinetics: how fast chemical reactions proceed once they start. In this work we examine a natural system in which both thermodynamic and kinetic factors are important in determining the abundance of NH4+, NO2− and NO3− in surface waters. Samples were collected in the Arno Basin (Tuscany, Italy), a system in which natural and anthropic effects both contribute to strongly modifying the chemical composition of the water. Thermodynamic modelling based on the reduction-oxidation reactions involved in the passage NH4+ → NO2− → NO3− under equilibrium conditions has made it possible to determine the redox potential (Eh) values that characterise the state of each sample and, consequently, of the fluid environment from which it was drawn. Just as pH expresses the concentration of H+ in solution, the redox potential expresses the tendency of an environment to receive or supply electrons. In this context, oxic environments, such as river systems, are said to have a high redox potential because O2 is available as an electron acceptor. Principles of thermodynamics and chemical kinetics yield a model that often does not completely describe the reality of natural systems. Chemical reactions may indeed fail to achieve equilibrium because the products escape from the site of the reaction, or because the reactions involved in the transformation are very slow, so that non-equilibrium conditions persist for long periods. Moreover, reaction rates can be sensitive to poorly understood catalytic or surface effects, while variables such as concentration (a large number of chemical species can coexist and interact concurrently), temperature and pressure can have large gradients in natural systems.
Taking this into account, data from 91 water samples were modelled using statistical methodologies for compositional data. Log-contrast analysis yielded statistical parameters that can be correlated with the calculated Eh values. In this way, natural conditions under which chemical equilibrium is hypothesised, together with underlying fast reactions, are compared with those described by a stochastic approach.
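For context, the thermodynamic link between redox speciation and Eh is conventionally expressed through the Nernst equation; a textbook form for a generic half-reaction (not the paper's specific calculation) is:

```latex
E_h \;=\; E^{0} \;-\; \frac{RT}{nF}\,\ln\frac{a_{\mathrm{red}}}{a_{\mathrm{ox}}}
```

where n is the number of electrons transferred, F the Faraday constant, and a the activities of the reduced and oxidised species.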

Relevance: 100.00%

Abstract:

rg model with A3 potential. The holographically dual field theories provide the description of the microscopic degrees of freedom which underlie all of the thermodynamics, as can be seen by examining the form of the microscopic fluctuations.

Relevance: 100.00%

Abstract:

The local thermodynamics of a system with long-range interactions in d dimensions is studied using the mean-field approximation. Long-range interactions are introduced through pair interaction potentials that decay as a power law in the interparticle distance. We compute the local entropy, Helmholtz free energy, and grand potential per particle in the microcanonical, canonical, and grand canonical ensembles, respectively. From the local entropy per particle we obtain the local equation of state of the system by using the condition of local thermodynamic equilibrium. This local equation of state has the form of the ideal gas equation of state, but with the density depending on the potential characterizing long-range interactions. By volume integration of the relation between the different thermodynamic potentials at the local level, we find the corresponding equation satisfied by the potentials at the global level. It is shown that the potential energy enters as a thermodynamic variable that modifies the global thermodynamic potentials. As a result, we find a generalized Gibbs-Duhem equation that relates the potential energy to the temperature, pressure, and chemical potential. For the marginal case where the power of the decaying interaction potential is equal to the dimension of the space, the usual Gibbs-Duhem equation is recovered. As examples of the application of this equation, we consider spatially uniform interaction potentials and the self-gravitating gas. We also point out a close relationship with the thermodynamics of small systems.
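For reference, the usual Gibbs-Duhem relation recovered in the marginal case is the textbook identity:

```latex
S\,\mathrm{d}T \;-\; V\,\mathrm{d}p \;+\; N\,\mathrm{d}\mu \;=\; 0
```

The generalized equation derived in the paper additionally involves the potential energy of the long-range interactions as a thermodynamic variable.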

Relevance: 100.00%

Publisher: Department of Physics, Cochin University of Science and Technology

Relevance: 100.00%

Abstract:

We investigate the initialization of Northern-hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates significantly reduces assimilation error both in identical-twin experiments and when assimilating sea-ice observations, reducing the concentration error by a factor of four to six, and the thickness error by a factor of two. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that the strong dependence of thermodynamic ice growth on ice concentration necessitates an adjustment of mean ice thickness in the analysis update. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that proportional mean-thickness updates are superior to the other two methods considered and enable us to assimilate sea ice in a global climate model using simple Newtonian relaxation.

Relevance: 100.00%

Abstract:

We investigate the initialisation of Northern Hemisphere sea ice in the global climate model ECHAM5/MPI-OM by assimilating sea-ice concentration data. The analysis updates for concentration are given by Newtonian relaxation, and we discuss different ways of specifying the analysis updates for mean thickness. Because the conservation of mean ice thickness or actual ice thickness in the analysis updates leads to poor assimilation performance, we introduce a proportional dependence between concentration and mean thickness analysis updates. Assimilation with these proportional mean-thickness analysis updates leads to good assimilation performance for sea-ice concentration and thickness, both in identical-twin experiments and when assimilating sea-ice observations. The simulation of other Arctic surface fields in the coupled model is, however, not significantly improved by the assimilation. To understand the physical aspects of assimilation errors, we construct a simple prognostic model of the sea-ice thermodynamics, and analyse its response to the assimilation. We find that an adjustment of mean ice thickness in the analysis update is essential to arrive at plausible state estimates. To understand the statistical aspects of assimilation errors, we study the model background error covariance between ice concentration and ice thickness. We find that the spatial structure of covariances is best represented by the proportional mean-thickness analysis updates. Both physical and statistical evidence supports the experimental finding that assimilation with proportional mean-thickness updates outperforms the other two methods considered. The method described here is very simple to implement, and gives results that are sufficiently good to be used for initialising sea ice in a global climate model for seasonal to decadal predictions.
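As a rough illustration of the scheme described above, here is a minimal sketch of a Newtonian-relaxation concentration update paired with a proportional mean-thickness update. The relaxation coefficient, the grid values, and the reading of "proportional" as holding the thickness-to-concentration ratio fixed are all assumptions for illustration, not the model's actual configuration:

```python
import numpy as np

def assimilate(conc, thick, conc_obs, k=0.5):
    """Newtonian relaxation of sea-ice concentration with a
    proportional mean-thickness analysis update (sketch).

    conc, thick : model ice concentration and mean thickness fields
    conc_obs    : observed ice concentration
    k           : relaxation coefficient (illustrative value)
    """
    d_conc = k * (conc_obs - conc)        # Newtonian relaxation increment
    # Proportional update: the thickness increment keeps the ratio H/C
    # fixed wherever ice is present (assumed interpretation).
    ratio = np.where(conc > 0, thick / np.maximum(conc, 1e-6), 0.0)
    d_thick = ratio * d_conc
    return conc + d_conc, thick + d_thick

conc = np.array([0.8, 0.4, 0.0])
thick = np.array([2.0, 1.0, 0.0])
conc_obs = np.array([0.6, 0.6, 0.0])
new_c, new_h = assimilate(conc, thick, conc_obs)
```

Note that after this update the mean thickness per unit concentration is unchanged wherever ice is present, which is what distinguishes it from conserving mean or actual thickness.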

Relevance: 100.00%

Abstract:

A nonvanishing cosmological term in Einstein's equations implies a nonvanishing spacetime curvature even in the absence of any kind of matter. It would, in consequence, affect many of the underlying kinematic tenets of physical theory. The usual commutative spacetime translations of the Poincaré group would be replaced by the mixed conformal translations of the de Sitter group, leading to obvious alterations in elementary concepts such as time, energy and momentum. Although negligible at small scales, such modifications may come to have important consequences both at large scales and for the inflationary picture of the early Universe. A qualitative discussion is presented, which suggests deep changes in Hamiltonian, Quantum and Statistical Mechanics. In the primeval universe as described by the standard cosmological model, in particular, the equations of state of the matter sources could be quite different from those usually introduced.

Relevance: 100.00%

Abstract:

We analytically study the input-output properties of a neuron whose active dendritic tree, modeled as a Cayley tree of excitable elements, is subjected to Poisson stimulus. Both single-site and two-site mean-field approximations incorrectly predict a nonequilibrium phase transition which is not allowed in the model. We propose an excitable-wave mean-field approximation which shows good agreement with previously published simulation results [Gollo et al., PLoS Comput. Biol. 5, e1000402 (2009)] and accounts for finite-size effects. We also discuss the relevance of our results to experiments in neuroscience, emphasizing the role of active dendrites in the enhancement of dynamic range and in gain control modulation.
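The dendritic-tree model lends itself to a compact simulation; below is a minimal discrete-time sketch of excitable elements on a Cayley tree driven by an external stimulus. The three-state rules, the branching ratio, the transmission probability, and the restriction of propagation to child nodes are illustrative simplifications, not the published model:

```python
import random

def cayley_tree(branching, generations):
    """Return an adjacency (children) map of a Cayley tree rooted at node 0."""
    children = {0: []}
    frontier, nxt = [0], 1
    for _ in range(generations):
        new_frontier = []
        for node in frontier:
            for _ in range(branching):
                children[node].append(nxt)
                children[nxt] = []
                new_frontier.append(nxt)
                nxt += 1
        frontier = new_frontier
    return children

def step(state, children, p_stim, p_trans, rng):
    """One synchronous update of a 3-state excitable dynamics:
    0 = quiescent, 1 = active, 2 = refractory (illustrative rules)."""
    new = {}
    for node, kids in children.items():
        if state[node] == 1:
            new[node] = 2                  # active -> refractory
        elif state[node] == 2:
            new[node] = 0                  # refractory -> quiescent
        else:
            # Poisson-like external stimulus, or excitation by an
            # active neighbour (children only, for simplicity).
            excited = rng.random() < p_stim or any(
                state[k] == 1 and rng.random() < p_trans for k in kids)
            new[node] = 1 if excited else 0
    return new

rng = random.Random(0)
tree = cayley_tree(branching=2, generations=5)
state = {n: 0 for n in tree}
activity = []
for _ in range(200):
    state = step(state, tree, p_stim=0.05, p_trans=0.6, rng=rng)
    activity.append(sum(v == 1 for v in state.values()) / len(state))
mean_activity = sum(activity) / len(activity)
```

Sweeping `p_stim` over several decades and recording `mean_activity` is the kind of stimulus-response curve whose dynamic range the paper discusses.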

Relevance: 100.00%

Abstract:

Directionality in populations of replicating organisms can be parametrized in terms of a statistical concept: evolutionary entropy. This parameter, a measure of the variability in the age of reproducing individuals in a population, is isometric with the macroscopic variable body size. Evolutionary trends in entropy due to mutation and natural selection fall into patterns modulated by ecological and demographic constraints, which are delineated as follows: (i) density-dependent conditions (a unidirectional increase in evolutionary entropy), and (ii) density-independent conditions, (a) slow exponential growth (an increase in entropy); (b) rapid exponential growth, low degree of iteroparity (a decrease in entropy); and (c) rapid exponential growth, high degree of iteroparity (random, nondirectional change in entropy). Directionality in aggregates of inanimate matter can be parametrized in terms of the statistical concept, thermodynamic entropy, a measure of disorder. Directional trends in entropy in aggregates of matter fall into patterns determined by the nature of the adiabatic constraints, which are characterized as follows: (i) irreversible processes (an increase in thermodynamic entropy) and (ii) reversible processes (a constant value for entropy). This article analyzes the relation between the concepts that underlie the directionality principles in evolutionary biology and physical systems. For models of cellular populations, an analytic relation is derived between generation time, the average length of the cell cycle, and temperature. This correspondence between generation time, an evolutionary parameter, and temperature, a thermodynamic variable, is exploited to show that the increase in evolutionary entropy that characterizes population processes under density-dependent conditions represents a nonequilibrium analogue of the second law of thermodynamics.

Relevance: 100.00%

Abstract:

Low-density parity-check codes with irregular constructions have recently been shown to outperform the most advanced error-correcting codes to date. In this paper we apply methods of statistical physics to study the typical properties of simple irregular codes. We use the replica method to find a phase transition which coincides with Shannon's coding bound when appropriate parameters are chosen. The decoding by belief propagation is also studied using statistical physics arguments; the theoretical solutions obtained are in good agreement with simulation results. We compare the performance of irregular codes with that of regular codes and discuss the factors that contribute to the improvement in performance.

Relevance: 100.00%

Abstract:

In this paper we review recent theoretical approaches for analysing the dynamics of on-line learning in multilayer neural networks using methods adopted from statistical physics. The analysis is based on monitoring a set of macroscopic variables from which the generalisation error can be calculated. A closed set of dynamical equations for the macroscopic variables is derived analytically and solved numerically. The theoretical framework is then employed for defining optimal learning parameters and for analysing the incorporation of second order information into the learning process using natural gradient descent and matrix-momentum based methods. We will also briefly explain an extension of the original framework for analysing the case where training examples are sampled with repetition.
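The flavour of this framework can be conveyed by a much simpler on-line scenario than the multilayer networks analysed in the paper: a teacher-student perceptron whose generalisation error is computed from a single macroscopic variable, the teacher-student overlap. All parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # input dimension (illustrative)
B = rng.standard_normal(N)
B /= np.linalg.norm(B)                    # teacher weight vector
J = rng.standard_normal(N)                # student weight vector
eta = 0.5                                 # learning rate (illustrative)

def gen_error(J, B):
    """Generalisation error from the macroscopic overlap rho:
    for random Gaussian inputs, eps = arccos(rho) / pi."""
    rho = J.dot(B) / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi

errs = [gen_error(J, B)]
for _ in range(5000):                     # one fresh example per step
    x = rng.standard_normal(N)
    y = np.sign(B.dot(x))                 # teacher's label
    if np.sign(J.dot(x)) != y:            # perceptron rule: learn on mistakes
        J += eta * y * x / np.sqrt(N)
    errs.append(gen_error(J, B))
```

The list `errs` is the learning curve; in the statistical-physics treatment its average follows closed deterministic equations for the macroscopic overlaps in the large-N limit.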

Relevance: 100.00%

Abstract:

We study the performance of Low-Density Parity-Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords using Boolean sums of the original message bits, employing two randomly constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with a non-binary alphabet, and on how a finite system size affects the error probability.
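The parity-check side of such constructions is easy to demonstrate; below is a toy (deliberately tiny, not sparse in any practical sense) parity-check matrix and its mod-2 syndrome check, purely for illustration:

```python
import numpy as np

# Toy parity-check matrix H (illustrative, not a code from the paper).
# A word x is a codeword iff H @ x = 0 (mod 2).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(H, x):
    """Mod-2 syndrome; the zero vector means x satisfies every check."""
    return H.dot(x) % 2

codeword = np.array([1, 0, 1, 1, 1, 0])
assert not syndrome(H, codeword).any()    # valid codeword

noisy = codeword.copy()
noisy[2] ^= 1                             # flip one bit
assert syndrome(H, noisy).any()           # parity checks now fail
```

In the statistical-physics mapping each such check becomes a multi-spin interaction, and decoding amounts to finding the ground state of the corresponding Ising system.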

Relevance: 100.00%

Abstract:

Modern digital communication systems achieve reliable transmission by employing error-correction techniques based on redundancy. Low-density parity-check codes work along the principles of the Hamming code, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability-propagation methods similar to those employed in turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
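The Hamming-code principle mentioned above can be shown concretely; here is a minimal Hamming(7,4) single-error correction via the syndrome, a textbook construction rather than anything specific to this work:

```python
import numpy as np

# Hamming(7,4): the j-th column of H is the binary representation of
# j + 1, so the syndrome of a single-bit error spells out the error's
# own position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct(word):
    """Correct at most one flipped bit in a 7-bit Hamming codeword."""
    s = H.dot(word) % 2
    pos = s[0] * 4 + s[1] * 2 + s[2]      # syndrome read as a position
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1                # flip the offending bit back
    return word

codeword = np.array([1, 0, 1, 1, 0, 1, 0])  # satisfies all three checks
received = codeword.copy()
received[4] ^= 1                            # corrupt the bit at position 5
decoded = correct(received)
```

The same syndrome idea underlies LDPC decoding, except that the sparse checks are solved iteratively by probability propagation instead of by direct lookup.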

Relevance: 100.00%

Abstract:

We propose a method based on the magnetization enumerator to determine the critical noise level for Gallager-type low-density parity-check (LDPC) error-correcting codes. Our method provides an appealingly simple interpretation of the relation between different decoding schemes, and yields more optimistic critical noise levels than those reported in the information theory literature.