962 results for Average Entropy


Relevance:

20.00%

Publisher:

Abstract:

The objective of this paper is to reconsider the Maximum Entropy Production conjecture (MEP) in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due simultaneously to both horizontal and vertical heat fluxes. MEP is applied first to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under conditions of fixed insolation, a MEP solution is found with reasonably realistic temperature and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and the vertical entropy production terms are independently involved in the maximisation and thus MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution, and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large-scale organisation of the climate is concerned, whereas the vertical structure appears unrealistic and presents seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the determination of longwave absorption by water vapour and clouds as a function of the state of the climate. Furthermore, an order-of-magnitude estimate of contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided by using two different methods. In both cases we found that approximately 40 mW m−2 K−1 of material entropy production is due to vertical heat transport and 5–7 mW m−2 K−1 to horizontal heat transport.
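
As an illustration of the kind of maximisation involved, the sketch below applies MEP to the classic two-box horizontal model that the paper generalises. The insolation values and linearised outgoing-longwave coefficients are made-up numbers for the example, not values from the paper.

```python
import numpy as np

# Minimal two-box MEP sketch (illustrative parameter values, not from the paper).
# Box 1: low latitudes, box 2: high latitudes. Outgoing longwave radiation is
# linearised as A + B*T (T in degrees C); F is the meridional heat flux (W m^-2).
S1, S2 = 300.0, 170.0          # absorbed solar radiation per box (assumed)
A, B = 210.0, 2.0              # linearised OLR coefficients (assumed)

def temperatures(F):
    # Steady-state energy balance of each box for a given meridional flux F
    T1 = (S1 - F - A) / B + 273.15
    T2 = (S2 + F - A) / B + 273.15
    return T1, T2

def entropy_production(F):
    # Material entropy production of the horizontal transport: F * (1/T2 - 1/T1)
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)

# Scan the admissible range of F and pick the MEP state
fluxes = np.linspace(0.0, 60.0, 601)
sigma = np.array([entropy_production(F) for F in fluxes])
F_mep = fluxes[np.argmax(sigma)]
print(f"MEP flux: {F_mep:.1f} W m^-2, "
      f"entropy production: {sigma.max() * 1e3:.1f} mW m^-2 K^-1")
```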

Relevance:

20.00%

Publisher:

Abstract:

We present an outlook on the climate system thermodynamics. First, we construct an equivalent Carnot engine with efficiency η and frame the Lorenz energy cycle in a macroscale thermodynamic context. Then, by exploiting the second law, we prove that the lower bound to the entropy production is η times the integrated absolute value of the internal entropy fluctuations. An exergetic interpretation is also proposed. Finally, the controversial maximum entropy production principle is reinterpreted as requiring the joint optimization of heat transport and mechanical work production. These results provide tools for climate change analysis and for the validation of climate models.
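
Rendering the stated bound symbolically may help; the notation below is assumed for illustration rather than quoted from the paper, with η the efficiency of the equivalent Carnot engine.

```latex
% Schematic restatement of the stated lower bound (assumed notation):
% \eta is the efficiency of the equivalent Carnot engine and
% \dot{s}_{\mathrm{fluct}} denotes the internal entropy fluctuations.
\dot{S}_{\mathrm{mat}} \;\ge\; \eta \int_{\Omega} \bigl|\dot{s}_{\mathrm{fluct}}\bigr| \, \mathrm{d}V
```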

Relevance:

20.00%

Publisher:

Abstract:

In this paper the authors exploit two equivalent formulations of the average rate of material entropy production in the climate system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. This approach is based only on 2D radiative fields at the surface and at the top of the atmosphere. Using 2D fields at the top of the atmosphere alone, lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle are derived. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamic processes, it is possible to gain insight into a previous intuition on the possibility of defining a baroclinic heat engine extracting work from the meridional heat flux. The approximate formula of the material entropy production is verified and used for studying the global thermodynamic properties of climate models (CMs) included in the Program for Climate Model Diagnosis and Intercomparison (PCMDI)/phase 3 of the Coupled Model Intercomparison Project (CMIP3) dataset in preindustrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large-scale meridional heat transport contributes only about 10% of the total. This suggests that the traditional two-box models used for providing a minimal representation of entropy production in planetary systems are not appropriate, whereas a basic, but conceptually correct, description can be framed in terms of a four-box model. The total material entropy production is typically 55 mW m−2 K−1, with discrepancies on the order of 5%, and CMs’ baroclinic efficiencies are clustered around 0.055. The lower bounds on the intensity of the Lorenz energy cycle featured by CMs are found to be around 1.0–1.5 W m−2, which implies that the derived inequality is rather stringent. When looking at the variability and covariability of the considered thermodynamic quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain. The contributions to material entropy production from vertical and horizontal processes are positively correlated, so that no compensation mechanism seems to be in place. Quite consistently among CMs, the variability of the efficiency of the system is a better proxy for the variability of the entropy production due to horizontal processes than that of the large-scale heat flux. The possibility of providing constraints on the 3D dynamics of the fluid envelope based only on 2D observations of radiative fluxes seems promising for the observational study of planets and for testing numerical models.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a critical history of the concept of ‘structured deposition’. It examines the long-term development of this idea in archaeology, from its origins in the early 1980s through to the present day, looking at how it has been moulded and transformed. On the basis of this historical account, a number of problems are identified with the way that ‘structured deposition’ has generally been conceptualized and applied. It is suggested that the range of deposits described under a single banner as being ‘structured’ is unhelpfully broad, and that archaeologists have been too willing to view material culture patterning as intentionally produced – the result of symbolic or ritual action. It is also argued that the material signatures of ‘everyday’ practice have been undertheorized and all too often ignored. Ultimately, it is suggested that if we are ever to understand fully the archaeological signatures of past practice, it is vital to consider the ‘everyday’ as well as the ‘ritual’ processes which lie behind the patterns we uncover in the ground.

Relevance:

20.00%

Publisher:

Abstract:

Neutron diffraction at 11.4 and 295 K and solid-state 67Zn NMR are used to determine both the local and average structures in the disordered, negative thermal expansion (NTE) material, Zn(CN)2. Solid-state NMR not only confirms that there is head-to-tail disorder of the C≡N groups present in the solid, but yields information about the relative abundances of the different Zn(CN)4-n(NC)n tetrahedral species, which do not follow a simple binomial distribution. The Zn(CN)4 and Zn(NC)4 species occur with much lower probabilities than are predicted by binomial theory, supporting the conclusion that they are of higher energy than the other local arrangements. The lowest energy arrangement is Zn(CN)2(NC)2. The use of total neutron diffraction at 11.4 K, with analysis of both the Bragg diffraction and the derived total correlation function, yields the first experimental determination of the individual Zn−N and Zn−C bond lengths as 1.969(2) and 2.030(2) Å, respectively. The very small difference in bond lengths, of ~0.06 Å, means that it is impossible to obtain these bond lengths using Bragg diffraction in isolation. Total neutron diffraction also provides information on both the average and local atomic displacements responsible for NTE in Zn(CN)2. The principal motions giving rise to NTE are shown to be those in which the carbon and nitrogen atoms within individual Zn−C≡N−Zn linkages are displaced to the same side of the Zn···Zn axis. Displacements of the carbon and nitrogen atoms to opposite sides of the Zn···Zn axis, suggested previously in X-ray studies as being responsible for NTE behavior, in fact make a negligible contribution at temperatures up to 295 K.

Relevance:

20.00%

Publisher:

Abstract:

Embedded computer systems equipped with wireless communication transceivers are nowadays used in a vast number of application scenarios. Energy consumption is important in many of these scenarios, as systems are battery operated and long maintenance-free operation is required. To achieve this goal, embedded systems employ low-power communication transceivers and protocols. However, currently used protocols cannot operate efficiently when communication channels are highly erroneous. In this study, we show how average diversity combining (ADC) can be used in state-of-the-art low-power communication protocols. This novel approach improves transmission reliability and, in consequence, energy consumption and transmission latency in the presence of erroneous channels. Using a testbed, we show that highly erroneous channels are indeed a common occurrence in situations where low-power systems are used, and we demonstrate that ADC improves low-power communication dramatically.
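
To illustrate the underlying idea of diversity combining by averaging, the sketch below averages the per-bit soft values of repeated transmissions of the same frame before deciding each bit. This is a generic BPSK/AWGN toy model with assumed noise parameters, not the authors' protocol implementation or testbed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative averaging-diversity-combining sketch (assumed BPSK + AWGN channel).
n_bits, n_copies, noise_std = 10_000, 3, 1.2

bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0                      # BPSK mapping: 0 -> -1, 1 -> +1

# Receive n_copies independent noisy copies of the same frame
received = symbols + noise_std * rng.standard_normal((n_copies, n_bits))

# Decision from a single copy vs. decision on the per-bit average of all copies
ber_single = np.mean((received[0] > 0).astype(int) != bits)
ber_combined = np.mean((received.mean(axis=0) > 0).astype(int) != bits)

print(f"BER, single transmission:      {ber_single:.3f}")
print(f"BER, average of {n_copies} transmissions: {ber_combined:.3f}")
```

Averaging the soft values reduces the effective noise standard deviation by roughly the square root of the number of copies, which is why retransmissions that would each fail individually can still yield a correctly decoded frame.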

Relevance:

20.00%

Publisher:

Abstract:

The local speeds of object contours vary systematically with the cosine of the angle between the normal component of the local velocity and the global object motion direction. An array of Gabor elements whose speed changes with local spatial orientation in accordance with this pattern can appear to move as a single surface. The apparent direction of motion of plaids and Gabor arrays has variously been proposed to result from feature tracking, vector addition and vector averaging, in addition to the geometrically correct global velocity as indicated by the intersection of constraints (IOC) solution. Here a new combination rule, the harmonic vector average (HVA), is introduced, as well as a new algorithm for computing the IOC solution. The vector sum can be discounted as an integration strategy as it increases with the number of elements. The vector average over local vectors that vary in direction always provides an underestimate of the true global speed. The HVA, however, provides the correct global speed and direction for an unbiased sample of local velocities with respect to the global motion direction, as is the case for a simple closed contour. The HVA over biased samples provides an aggregate velocity estimate that can still be combined through an IOC computation to give an accurate estimate of the global velocity, which is not true of the vector average. Psychophysical results for type II Gabor arrays show that perceived direction and speed fall close to the IOC direction for Gabor arrays having a wide range of orientations, but the IOC prediction fails as the mean orientation shifts away from the global motion direction and the orientation range narrows. In this case perceived velocity generally defaults to the HVA.
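
A minimal sketch of the comparison described here, under the assumption that the HVA is formed by inverting each local velocity vector (v -> v / |v|^2), averaging, and inverting back; the global velocity and the sample of local orientations are made-up example values.

```python
import numpy as np

def invert(v):
    """Vector inversion v -> v / |v|^2 (assumed form of the HVA construction)."""
    return v / np.sum(v**2, axis=-1, keepdims=True)

def harmonic_vector_average(velocities):
    # Invert each local velocity, take the arithmetic mean, invert back
    return invert(np.mean(invert(velocities), axis=0))

# Global object motion: speed 2 deg/s along +x (assumed example values)
global_v = np.array([2.0, 0.0])

# Local normal velocities of contour elements at orientations theta:
# speed = |V| * cos(theta), direction = theta relative to the global direction
theta = np.deg2rad(np.linspace(-60.0, 60.0, 13))      # unbiased sample
speeds = np.linalg.norm(global_v) * np.cos(theta)
local_v = speeds[:, None] * np.stack([np.cos(theta), np.sin(theta)], axis=1)

vector_average = local_v.mean(axis=0)                  # underestimates the speed
hva = harmonic_vector_average(local_v)                 # recovers the global velocity

print("vector average:         ", np.round(vector_average, 3))
print("harmonic vector average:", np.round(hva, 3))
```

For this unbiased, symmetric sample the inverted vectors all share the same x-component 1/|V|, so their mean inverts back to the true global velocity, whereas the plain vector average returns a shorter vector.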

Relevance:

20.00%

Publisher:

Abstract:

The implications of polar cap expansions, contractions and movements for empirical models of high-latitude plasma convection are examined. Some of these models have been generated by directly averaging flow measurements from large numbers of satellite passes or radar scans; others have employed more complex means to combine data taken at different times into large-scale patterns of flow. In all cases, the models have implicitly adopted the assumption that the polar cap is in steady state: they have all characterized the ionospheric flow in terms of the prevailing conditions (e.g. the interplanetary magnetic field and/or some index of terrestrial magnetic activity) without allowance for their history. On long enough time scales, the polar cap is indeed in steady state but on time scales shorter than a few hours it is not and can oscillate in size and position. As a result, the method used to combine the data can influence the nature of the convection reversal boundary and the transpolar voltage in the derived model. This paper discusses a variety of effects due to time-dependence in relation to some ionospheric convection models which are widely applied. The effects are shown to be varied and to depend upon the procedure adopted to compile the model.

Relevance:

20.00%

Publisher:

Abstract:

We give an a posteriori analysis of a semidiscrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics, which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (2014, SIAM J. Math. Anal., 46, 3518–3539). This framework allows energy-type arguments to be applied to continuous functions. Since we advocate the use of discontinuous Galerkin methods, we make use of two families of reconstructions, one set of discrete reconstructions and one set of elliptic reconstructions, to apply the reduced relative entropy framework in this setting.
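
For orientation, the classical relative-entropy (relative energy) functional for elastodynamics, on which reduced variants of this framework build, has the schematic form below; the notation (deformation gradient F, stored energy W, velocity v) is assumed for illustration and is not quoted from the paper.

```latex
% Schematic relative-entropy (relative energy) functional for elastodynamics,
% comparing a state (F_u, v_u) with a reference state (F_w, v_w);
% notation assumed for illustration, not quoted from the paper.
\eta(u \mid w) \;=\; \int_\Omega \Bigl[\, W(F_u) - W(F_w) - DW(F_w) : (F_u - F_w)
   \;+\; \tfrac{1}{2}\,\lvert v_u - v_w \rvert^{2} \Bigr] \,\mathrm{d}x
```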

Relevance:

20.00%

Publisher:

Abstract:

We give an a priori analysis of a semi-discrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (SIAM J Math Anal 46(5):3518–3539, 2014). The estimate we derive is optimal in the L∞(0,T;dG) norm for the strain and the L2(0,T;dG) norm for the velocity, where dG is an appropriate mesh-dependent H1-like space.

Relevance:

20.00%

Publisher:

Abstract:

Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depends on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may influence the follow-up of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results have shown that even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within 1 per cent. Even for a non-precessing jet, our optimization method could successfully point out the lack of precession.
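
A minimal sketch of the cross-entropy method for continuous optimization in the form commonly used for this kind of parameter search (Gaussian sampling, elite selection, smoothed distribution update). The toy test function, its parameters and the tuning constants are assumptions for the example, not the precession model or settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def cross_entropy_minimize(loss, dim, n_samples=200, n_elite=20,
                           n_iter=60, smoothing=0.7):
    """Generic cross-entropy method for continuous multi-extremal minimisation."""
    mean = np.zeros(dim)
    std = np.full(dim, 5.0)                     # broad initial search distribution
    for _ in range(n_iter):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        losses = np.array([loss(x) for x in samples])
        elite = samples[np.argsort(losses)[:n_elite]]
        # Smoothed update of the sampling distribution towards the elite set
        mean = smoothing * elite.mean(axis=0) + (1 - smoothing) * mean
        std = smoothing * elite.std(axis=0) + (1 - smoothing) * std
    return mean

# Toy multi-extremal test problem (an assumption for the example): recover the
# parameters of a noisy sinusoidal track (amplitude, frequency, phase).
p_true = np.array([1.5, 0.8, -2.0])
t = np.linspace(0.0, 20.0, 200)
obs = p_true[0] * np.sin(p_true[1] * t + p_true[2]) + 0.05 * rng.standard_normal(t.size)

def loss(p):
    model = p[0] * np.sin(p[1] * t + p[2])
    return np.mean((model - obs) ** 2)

# Note: sign/phase ambiguities of the sinusoid mean an equivalent parameter
# set (same fitted curve) may be returned instead of p_true itself.
best = cross_entropy_minimize(loss, dim=3)
print("recovered parameters:", np.round(best, 3), "loss:", round(float(loss(best)), 4))
```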