95 results for Equilibrium measure


Relevance: 30.00%

Abstract:

We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations to the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obeys K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results place previous findings obtained for optical systems and simple mechanical models within a more general theoretical framework, and shed light on the very general importance of the principle of causality as a test of self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and limitations. In order to connect the equilibrium and the non-equilibrium steady-state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase-space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
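
For reference, and not as formulas quoted from the paper itself: a causal linear susceptibility χ(ω) that is analytic in the upper half of the complex frequency plane and decays at infinity satisfies the familiar first-order Kramers-Kronig relations

    \mathrm{Re}\,\chi(\omega) = \frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^{+\infty} \frac{\mathrm{Im}\,\chi(\omega')}{\omega' - \omega}\,\mathrm{d}\omega', \qquad
    \mathrm{Im}\,\chi(\omega) = -\frac{1}{\pi}\,\mathcal{P}\int_{-\infty}^{+\infty} \frac{\mathrm{Re}\,\chi(\omega')}{\omega' - \omega}\,\mathrm{d}\omega',

where P denotes the Cauchy principal value; the results described above extend constraints of this kind to the higher-order harmonic susceptibilities of the Ruelle response theory.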

Relevance: 30.00%

Abstract:

The “cotton issue” has been a topic of much academic discussion among trade policy analysts. However, the design of trade and agricultural policy in the EU and the USA has become a politically sensitive matter over the last five years. This study, utilizing the Agricultural Trade Policy Simulation Model (ATPSM), aims to gain insights into the global cotton market, to explain why domestic support for cotton has become an issue, to quantify the impact of the new EU agricultural policy on the cotton sector, and to measure the effect of eliminating support policies on production and trade. Results indicate that full trade liberalization would give the four West African countries better terms of trade with the EU. If tariff reduction follows the so-called Swiss formula, world prices would increase by 3.5%.
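
For background, the Swiss formula mentioned above is a standard harmonising tariff-cut rule; the expression below is general background rather than something reproduced from the study. A single coefficient a sets both the depth of the cut and a ceiling on the resulting tariff:

    t_{1} = \frac{a\, t_{0}}{a + t_{0}}, \qquad t_{1} < a,

so a higher initial tariff t_0 is cut proportionally more than a low one.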

Relevance: 20.00%

Abstract:

A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (pdf) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the pdf is given by the ensemble-mean mass flux, which is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain-Fritsch parameterization. Initial tests in the single-column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
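
A minimal sketch of the sampling step described above, assuming, as in related stochastic convection schemes, an exponential pdf of cloud-base mass flux per plume; the inputs M_ens (ensemble-mean mass flux from the closure), m_bar (mean per-plume mass flux) and tau (plume lifetime) are placeholders for quantities the CAPE closure and plume model would supply in practice:

    import numpy as np

    def draw_plumes(M_ens, m_bar, dt, tau, rng=None):
        """Draw a random set of cloud-base mass fluxes for one grid box and time step.

        M_ens : ensemble-mean mass flux from the CAPE closure (kg m-2 s-1)
        m_bar : mean mass flux of an individual plume (kg m-2 s-1)
        dt    : model time step (s)
        tau   : assumed plume lifetime (s)
        """
        rng = rng or np.random.default_rng()
        # Expected number of plumes initiated this time step, chosen so that the
        # grid-box mean mass flux equals M_ens on average.
        n_expected = (M_ens / m_bar) * dt / tau
        n = rng.poisson(n_expected)
        # Assumed exponential pdf of per-plume cloud-base mass flux.
        return rng.exponential(m_bar, size=n)

    # Illustrative numbers only: 5-minute time step, 45-minute plume lifetime.
    fluxes = draw_plumes(M_ens=0.05, m_bar=0.01, dt=300.0, tau=2700.0)
    print(len(fluxes), fluxes.sum())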

Relevance: 20.00%

Abstract:

Laboratory-determined mineral weathering rates need to be normalised to allow their extrapolation to natural systems. The principal normalisation terms used in the literature are mass, and geometric and BET specific surface area (SSA). The purpose of this study was to determine how dissolution rates normalised to these terms vary with grain size. Different size fractions of anorthite and biotite ranging from 180-150 to 20-10 μm were dissolved in pH 3 HCl at 25 °C in flow-through reactors under far-from-equilibrium conditions. Steady-state dissolution rates after 5376 h (anorthite) and 4992 h (biotite) were calculated from Si concentrations and were normalised to initial and final mass and geometric, geometric edge (biotite) and BET SSA. For anorthite, rates normalised to initial and final BET SSA ranged from 0.33 to 2.77 × 10^-10 mol(feldspar) m^-2 s^-1, rates normalised to initial and final geometric SSA ranged from 5.74 to 8.88 × 10^-10 mol(feldspar) m^-2 s^-1, and rates normalised to initial and final mass ranged from 0.11 to 1.65 mol(feldspar) g^-1 s^-1. For biotite, rates normalised to initial and final BET SSA ranged from 1.02 to 2.03 × 10^-12 mol(biotite) m^-2 s^-1, rates normalised to initial and final geometric SSA ranged from 3.26 to 16.21 × 10^-12 mol(biotite) m^-2 s^-1, rates normalised to initial and final geometric edge SSA ranged from 59.46 to 111.32 × 10^-12 mol(biotite) m^-2 s^-1, and rates normalised to initial and final mass ranged from 0.81 to 6.93 × 10^-12 mol(biotite) g^-1 s^-1. For all normalising terms, rates varied significantly (p ≤ 0.05) with grain size. The normalising terms which gave the least variation in dissolution rate between grain sizes for anorthite were initial BET SSA and initial and final geometric SSA. This is consistent with: (1) dissolution being dominated by the slower-dissolving but area-dominant non-etched surfaces of the grains and (2) the walls of etch pits and other dissolution features being relatively unreactive. These steady-state normalised dissolution rates are likely to be constant with time. Normalisation to final BET SSA did not give constant rates across grain sizes due to a non-uniform distribution of dissolution features. After dissolution, coarser grains had a greater density of dissolution features with BET-measurable but unreactive wall surface area than the finer grains. The normalising term which gave the least variation in dissolution rates between grain sizes for biotite was initial BET SSA. Initial and final geometric edge SSA and final BET SSA gave the next least varied rates. The basal surfaces dissolved sufficiently rapidly to influence the bulk dissolution rate and prevent geometric edge SSA-normalised dissolution rates from showing the least variation. Simple modelling indicated that biotite grain edges dissolved 71-132 times faster than basal surfaces. In this experiment, initial BET SSA best integrated the different areas and reactivities of the edge and basal surfaces of biotite. Steady-state dissolution rates are likely to vary with time as dissolution alters the ratio of edge to basal surface area. Therefore, they would be more properly termed pseudo-steady-state rates, only appearing constant because the time period over which they were measured (1512 h) was less than the time period over which they would change significantly. (c) 2006 Elsevier Inc. All rights reserved.
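
A minimal sketch of how a normalised steady-state dissolution rate is obtained from a flow-through experiment of the kind described above; the function and the example numbers are illustrative rather than taken from the study, and nu_si is the assumed number of moles of Si released per mole of mineral dissolved (2 per anorthite formula unit):

    def dissolution_rate(c_si, q, nu_si, norm):
        """Steady-state dissolution rate from a flow-through reactor.

        c_si  : outlet Si concentration (mol m-3)
        q     : flow rate through the reactor (m3 s-1)
        nu_si : moles of Si released per mole of mineral dissolved
        norm  : normalising term, e.g. BET or geometric surface area of the
                sample (m2), or simply the sample mass (g)
        """
        return c_si * q / (nu_si * norm)

    # Illustrative numbers: 10 micromol/L outlet Si, 1 mL/min flow,
    # 0.05 m2 of BET surface area, anorthite (2 Si per formula unit).
    rate = dissolution_rate(c_si=10e-3, q=1e-6 / 60.0, nu_si=2, norm=0.05)
    print(f"{rate:.2e} mol m-2 s-1")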

Relevance: 20.00%

Abstract:

Although numerous field studies have evaluated flow and transport processes in salt marsh channels, the overall role of channels in delivering and removing material from salt marsh platforms is still poorly characterised. In this paper, we consider this issue based on a numerical hydrodynamic model for a prototype marsh system and on a field survey of the cross-sectional geometry of a marsh channel network. Results of the numerical simulations indicate that the channel transfers approximately three times the volume of water that would be estimated from mass balance considerations alone. Marsh platform roughness exerts a significant influence on the partitioning of discharge between the channel and the marsh platform edge, alters flow patterns on the marsh platform due to its effects on channel-to-platform transfer and also controls the timing of peak discharge relative to marsh-edge overtopping. Although peak channel discharges and velocities are associated with the flood tide and marsh inundation, a larger volume of water is transferred by the channel during ebb flows, part of which takes place after the tidal height has dropped below the marsh platform. Detailed surveys of the marsh channels crossing a series of transects at Upper Stiffkey Marsh, north Norfolk, England, show that the total channel cross-sectional area increases linearly with catchment area in the inner part of the marsh, which is consistent with the increase in shoreward tidal prism removed by the channels. Toward the marsh edge, however, a deficit in the total cross-sectional area develops, suggesting that discharge partitioning between the marsh channels and the marsh platform edge may also be expressed in the morphology of marsh channel systems.
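
A minimal sketch of the kind of mass-balance estimate that the simulated channel discharge is compared against; the assumption here, which is mine rather than the paper's stated method, is that the channel alone supplies the water needed to raise the level over its catchment area, and the numbers are illustrative:

    def prism_discharge(catchment_area, dh_dt):
        """Mass-balance estimate of channel discharge (m3 s-1).

        catchment_area : marsh platform area drained by the channel (m2)
        dh_dt          : rate of tidal water-level rise over the platform (m s-1)
        """
        return catchment_area * dh_dt

    # Illustrative numbers: 2e4 m2 catchment, water level rising 0.5 m per hour.
    q_estimate = prism_discharge(2e4, 0.5 / 3600.0)
    # The simulations described above suggest the channel carries roughly three
    # times such an estimate.
    print(f"{q_estimate:.2f} m3/s")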

Relevance: 20.00%

Abstract:

The validity of convective parametrization breaks down at the resolution of mesoscale models, and the success of parametrized versus explicit treatments of convection is likely to depend on the large-scale environment. In this paper we examine the hypothesis that a key feature determining the sensitivity to the environment is whether the forcing of convection is sufficiently homogeneous and slowly varying that the convection can be considered to be in equilibrium. Two case studies of mesoscale convective systems over the UK, one where equilibrium conditions are expected and one where equilibrium is unlikely, are simulated using a mesoscale forecasting model. The time evolution of area-average convective available potential energy and the time evolution and magnitude of the timescale of convective adjustment are consistent with the hypothesis of equilibrium for case 1 and non-equilibrium for case 2. For each case, three experiments are performed with different partitionings between parametrized and explicit convection: fully parametrized convection, fully explicit convection and a simulation with significant amounts of both. In the equilibrium case, bulk properties of the convection such as area-integrated rain rates are insensitive to the treatment of convection. However, the detailed structure of the precipitation field changes; the simulation with parametrized convection behaves well and produces a smooth field that follows the forcing region, whereas the simulation with explicit convection has a small number of localized intense regions of precipitation that track with the mid-level flow. For the non-equilibrium case, bulk properties of the convection such as area-integrated rain rates are sensitive to the treatment of convection. The simulation with explicit convection behaves similarly to the equilibrium case, with a few localized precipitation regions. In contrast, the cumulus parametrization fails dramatically and develops intense propagating bows of precipitation that were not observed. The simulations with both parametrized and explicit convection follow the pattern seen in the other experiments, with a transition over the duration of the run from parametrized to explicit precipitation. The impact of convection on the large-scale flow, as measured by upper-level wind and potential-vorticity perturbations, is very sensitive to the partitioning of convection for both cases. © Royal Meteorological Society, 2006. Contributions by P. A. Clark and M. E. B. Gray are Crown Copyright.
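
As a rough guide to the equilibrium test described above (the paper's exact definition may differ), the convective adjustment timescale can be estimated as the time convection would need to consume the available CAPE at its current rate of removal, and compared with the timescale over which the large-scale forcing varies:

    \tau_{\mathrm{adj}} \approx \frac{\mathrm{CAPE}}{\left| \mathrm{d}\,\mathrm{CAPE}/\mathrm{d}t \right|_{\mathrm{conv}}}, \qquad \tau_{\mathrm{adj}} \ll \tau_{\mathrm{forcing}} \;\Rightarrow\; \text{equilibrium is a reasonable assumption.}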

Relevance: 20.00%

Abstract:

The equilibrium structure of HCN has been determined from the previously published ground-state rotational constants of eight isotopomers, using (B0 − Be) values obtained from a variational calculation of the vibration–rotation spectrum. The results are re(CH) = 1.06501(8) Å and re(CN) = 1.15324(2) Å.
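
For orientation, these are the standard relations behind such a determination rather than formulas quoted from the paper: the computed (B0 − Be) corrections convert each observed ground-state constant into an equilibrium constant, whose moment of inertia for a linear molecule is fixed by the two bond lengths, so a fit over several isotopomers determines re(CH) and re(CN):

    B_{e} = B_{0} - \left( B_{0} - B_{e} \right)_{\mathrm{calc}}, \qquad
    B_{e} = \frac{h}{8\pi^{2} c\, I_{e}}, \qquad
    I_{e} = \sum_{i} m_{i} z_{i}^{2},

where z_i are the equilibrium positions of the atoms along the molecular axis relative to the centre of mass, determined by re(CH) and re(CN).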

Relevance: 20.00%

Abstract:

Infrared spectra of the two stretching fundamentals of both HBS and DBS have been observed, using a continuous-flow system through a multiple-reflection long-path cell at a pressure of around 1 Torr and a Nicolet Fourier transform spectrometer with a resolution of about 0.1 cm^-1. The ν3 BS stretching fundamental of DBS, near 1140 cm^-1, is observed in strong Fermi resonance with the overtone of the bend, 2ν2. The bending fundamental ν2 has not been observed and must be a very weak band. The analysis of the results, in conjunction with earlier work, gives the equilibrium structure (re(BH) = 1.1698(12) Å, re(BS) = 1.5978(3) Å) and the harmonic and anharmonic force field.
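
For reference, the standard two-level treatment of a Fermi resonance like the one noted above (general background, not the paper's fitted constants) deperturbs the interacting ν3 and 2ν2 levels by diagonalising

    \begin{pmatrix} E^{0}(\nu_{3}) & W \\ W & E^{0}(2\nu_{2}) \end{pmatrix}, \qquad
    E_{\pm} = \bar{E} \pm \tfrac{1}{2}\sqrt{\delta^{2} + 4W^{2}},

where W is the Fermi coupling constant, Ē the mean and δ the separation of the unperturbed levels.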

Relevance: 20.00%

Abstract:

The equilibrium rotational constants Be of HCCF and DCCF have been determined from the ground-state rotational constants B0 by determining the α_r constants for all five fundamentals from the high-resolution vibration–rotation spectrum, making appropriate corrections for the effects of Fermi resonance. By combination with results from the 13C isotopomers and the recent ab initio calculations by Botschwina (Chem. Phys. Lett., 209 (1993) 117), the equilibrium structure is deduced to be: re(CH) = 1.0555(15) Å, re(CC) = 1.1955(8) Å and re(CF) = 1.2781(8) Å.
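
For orientation (the standard relation, not a result specific to this paper), the ground-state and equilibrium rotational constants are connected through the vibration–rotation interaction constants α_r as

    B_{e} = B_{0} + \tfrac{1}{2} \sum_{r} \alpha_{r} d_{r},

where d_r is the degeneracy of mode r (1 for the stretches and 2 for the bends of a linear molecule), which is why all five fundamentals are needed.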

Relevance: 20.00%

Abstract:

The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analysis of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon, it is suggested to use instead an overall estimate of the misclassification error, previously suggested and known as Youden’s index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel–Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden’s index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the reference standard, for stroke prevention.
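
For reference, the standard definition (not a formula reproduced from the paper): Youden’s index combines a study's sensitivity and specificity, and its complement is the overall misclassification error referred to above,

    J = \mathrm{Se} + \mathrm{Sp} - 1, \qquad 1 - J = (1 - \mathrm{Se}) + (1 - \mathrm{Sp}),

i.e. the sum of the false-negative and false-positive rates, which is the quantity argued to be less sensitive to where each study places its cut-off.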

Relevance: 20.00%

Abstract:

The impacts of climate change on crop productivity are often assessed using simulations from a numerical climate model as an input to a crop simulation model. The precision of these predictions reflects the uncertainty in both models. We examined how uncertainty in a climate model (HadAM3) and a crop model (the General Large-Area Model for annual crops, GLAM) affects the mean and standard deviation of crop yield simulations in present and doubled carbon dioxide (CO2) climates by perturbation of parameters in each model. The climate sensitivity parameter (λ, the equilibrium response of global mean surface temperature to doubled CO2) was used to define the control climate. Observed 1966–1989 mean yields of groundnut (Arachis hypogaea L.) in India were simulated well by the crop model using the control climate and climates with values of λ near the control value. The simulations were used to measure the contribution to uncertainty of key crop and climate model parameters. The standard deviation of yield was more affected by perturbation of climate parameters than of crop model parameters in both the present-day and doubled CO2 climates. Climate uncertainty was higher in the doubled CO2 climate than in the present-day climate. Crop transpiration efficiency was key to crop model uncertainty in both present-day and doubled CO2 climates. The response of crop development to mean temperature contributed little uncertainty in the present-day simulations but was among the largest contributors under doubled CO2. The ensemble methods used here to quantify physical and biological uncertainty offer a method to improve model estimates of the impacts of climate change.
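
A minimal sketch of the perturbed-parameter ensemble logic described above; run_climate and run_crop_model are placeholder stand-ins (the study used HadAM3 and GLAM), and the parameter ranges and response functions are purely illustrative:

    import numpy as np

    def run_climate(lam, co2_doubled):
        """Stand-in for a climate simulation with climate sensitivity lam (K)."""
        return {"temperature_offset": lam if co2_doubled else 0.0}

    def run_crop_model(forcing, transpiration_efficiency):
        """Stand-in for a crop simulation returning a yield (t/ha)."""
        return max(0.0, 1.0 + 0.3 * transpiration_efficiency
                        - 0.05 * forcing["temperature_offset"])

    # Perturb one climate parameter and one crop parameter.
    lams = np.linspace(2.0, 5.0, 7)    # climate sensitivity values (K)
    tes = np.linspace(0.8, 1.2, 5)     # normalised transpiration efficiency
    yields = np.array([[run_crop_model(run_climate(l, co2_doubled=True), te)
                        for te in tes] for l in lams])

    print("ensemble mean yield:", yields.mean())
    print("yield spread from climate parameter:", yields.mean(axis=1).std())
    print("yield spread from crop parameter:   ", yields.mean(axis=0).std())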
