831 results for "Equilibrium measure"


Relevance: 20.00%

Abstract:

We consider an equilibrium birth-and-death type process for a particle system in infinite volume, described by the space of all locally finite point configurations on Rd. These Glauber-type dynamics are Markov processes constructed for pre-given reversible measures. Representations for the ``carré du champ'' and ``second carré du champ'' of the associated infinitesimal generator L are calculated in infinite volume, for a large class of functions, in a generalized sense. The corresponding coercivity identity is derived, and explicit sufficient conditions for the existence of a spectral gap of L, together with bounds on its size, are given. These techniques are applied to Glauber dynamics associated with Gibbs measures, and conditions are derived that extend all previously known results; in particular, potentials with negative parts can now be treated. The high-temperature regime is extended substantially, and potentials with a non-trivial negative part can be included. Furthermore, a special class of potentials is defined for which the size of the spectral gap is at least as large as for the free system and, surprisingly, independent of the activity. Potentials of this type should not show any phase transition at a given temperature for any activity.
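For reference, the carré du champ operators attached to a generator L are conventionally defined (in the standard Bakry–Émery notation, which may differ from the paper's) by:

```latex
\Gamma(f,g)   = \tfrac{1}{2}\bigl( L(fg) - f\,Lg - g\,Lf \bigr), \qquad
\Gamma_2(f,g) = \tfrac{1}{2}\bigl( L\,\Gamma(f,g) - \Gamma(f, Lg) - \Gamma(Lf, g) \bigr).
```

A coercivity estimate of the form \(\Gamma_2(f,f) \ge c\,\Gamma(f,f)\) with \(c > 0\) then yields a spectral gap of size at least \(c\).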

Relevance: 20.00%

Abstract:

As low-carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in demand on low-voltage networks through smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak-demand reduction brought about through such devices by helping to plan appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect incurred by forecasts whose features are displaced in space or time, compared with traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and discuss the effect of the permutation restriction.
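A minimal sketch of the idea, assuming the restriction takes the form of a displacement window of w time steps (the paper's exact restriction may differ); brute force over permutations, so only suitable for very short series:

```python
from itertools import permutations

def adjusted_mae(forecast, actual, w=1):
    """Minimum MAE over 'restricted permutations' of the forecast:
    each forecast value may be displaced by at most w time steps
    (|i - pi(i)| <= w). Illustrative brute-force implementation."""
    n = len(forecast)
    best = float("inf")
    for pi in permutations(range(n)):
        if all(abs(i - pi[i]) <= w for i in range(n)):
            err = sum(abs(forecast[pi[i]] - actual[i]) for i in range(n)) / n
            best = min(best, err)
    return best

# A demand peak forecast one half-hour early: the point-wise MAE is
# penalised twice (once for the miss, once for the false alarm); the
# permuted error is not.
forecast = [0.0, 5.0, 0.0, 0.0]
actual = [0.0, 0.0, 5.0, 0.0]
```

Here `adjusted_mae(forecast, actual, w=1)` is 0.0, since the peak can be shifted one step to align with the observation, while `w=0` allows only the identity permutation and reduces to the ordinary MAE of 2.5.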

Relevance: 20.00%

Abstract:

The external environment is characterized by periods of relative stability interspersed with periods of extreme change, implying that high-performing firms must practice both exploration and exploitation in order to survive and thrive. In this paper, we posit that R&D expenditure volatility indicates the presence of proactive R&D management and is evidence of a firm moving from exploitation to exploration over time. This is consistent with a punctuated equilibrium model of R&D investment in which shocks are induced by reactions to external turbulence. Using an unbalanced panel of almost 11,000 firm-years from 1997 to 2006, we show that greater fluctuations in a firm's R&D expenditure over time are associated with higher firm growth. Developing a contextual view of the relationship between R&D expenditure volatility and firm growth, we find that this relationship is weaker among firms with higher levels of corporate diversification, and negative among smaller firms and those in slow-clockspeed industries.

Relevance: 20.00%

Abstract:

We present N-body simulations of accretion discs around young stellar objects (YSOs). The simulation includes the presence of a magnetic loop structure on the central star which interacts with the particles by means of a magnetic drag force. We find that an equilibrium spin rate is achieved when the corotation radius coincides with the edge of the loop. This spin rate is consistent with observed values for T Tauri stars, being an order of magnitude less than the breakup value. The material ejected from the system by the rotating loop has properties consistent with observed molecular outflows, given the presence of a suitable containing cavity.
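The equilibrium condition can be checked on the back of an envelope: at equilibrium the star corotates with the Keplerian orbit at the loop edge, so the spin period is the Keplerian period there. The mass and loop radius below are typical T Tauri values assumed for illustration, not taken from the paper:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M = 0.5 * 1.989e30       # stellar mass: 0.5 solar masses (assumed)
R_loop = 1.0e10          # loop-edge radius, ~14 solar radii (assumed)

# Keplerian period at R_loop = equilibrium spin period
P = 2 * math.pi * math.sqrt(R_loop**3 / (G * M))
P_days = P / 86400.0     # roughly 9 days, in line with observed T Tauri spin rates
```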

Relevance: 20.00%

Abstract:

Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, which J-pruning does not actually achieve and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure; it reduces overfitting to a similar level as the other two algorithms but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively against the J-pruning and Jmax-pruning algorithms.
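The J-measure underlying all three pruning algorithms has a standard closed form (Smyth and Goodman): rule coverage P(y) times the cross-entropy between the posterior P(x|y) and the prior P(x) of the rule's consequent. A minimal sketch:

```python
from math import log2

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of a rule 'IF y THEN x'.
    p_y: coverage P(y); p_x: prior P(x); p_x_given_y: accuracy P(x|y)."""
    def term(post, prior):
        # 0 * log(0/prior) is taken as 0 by convention
        return 0.0 if post == 0.0 else post * log2(post / prior)
    return p_y * (term(p_x_given_y, p_x) + term(1.0 - p_x_given_y, 1.0 - p_x))

# An uninformative rule (posterior equals prior) scores zero;
# a rule that is always right on half the data scores 0.5 bits.
```

Roughly speaking, J-pruning stops specialising a rule once adding a term no longer increases its J-value, while Jmax-pruning also considers an upper bound on the J-value attainable by further specialisation.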

Relevance: 20.00%

Abstract:

Purpose – The development of marketing strategies optimally adjusted to export markets has been a vitally important topic for both managers and academics for about five decades. However, there is no agreement in the literature about which elements make up a marketing strategy and which components of domestic strategies should be adapted to export markets. The purpose of this paper is to develop a new scale – STRATADAPT. Design/methodology/approach – Results from a sample of small and medium-sized industrial exporting firms support a four-dimensional scale – product, promotion, price, and distribution strategies – of 30 items. The scale presents evidence of composite reliability as well as discriminant and nomological validity. Findings – Findings reveal that all four dimensions of marketing strategy adaptation are positively associated with the amount of the firm's financial resources allocated to export activity. Practical implications – The STRATADAPT scale may assist managers in developing better international marketing strategies as well as in planning more accurate and efficient marketing programs across markets. Originality/value – This study develops a new scale, the STRATADAPT scale, which is a broad measure of export marketing strategy adaptation.

Relevance: 20.00%

Abstract:

To examine the long-term stability of Arctic and Antarctic sea ice, idealized simulations are carried out with the climate model ECHAM5/MPIOM. Atmospheric CO2 concentration is increased over 2000 years from pre-industrial levels to quadrupling, is then kept constant for 5940 years, is afterwards decreased over 2000 years to pre-industrial levels, and is finally kept constant for 3940 years. Despite these very slow changes, the sea-ice response significantly lags behind the CO2 concentration change. This lag, which is caused by the ocean’s thermal inertia, implies that the sea-ice equilibrium response to increasing CO2 concentration is substantially underestimated by transient simulations. The sea-ice response to CO2 concentration change is not truly hysteretic and is in principle reversible. We find no lag in the evolution of Arctic sea ice relative to changes in annual-mean northern-hemisphere surface temperature. The summer sea-ice cover changes linearly with respect to both CO2 concentration and temperature.

Relevance: 20.00%

Abstract:

Changes of the equilibrium-line altitude (ELA) since the end of the Little Ice Age (LIA) in eastern Nepal have been studied using glacier inventory data. The toe-to-headwall altitude ratios (THARs) for individual glaciers were calculated for 1992, and used to estimate the ELA in 1959 and at the end of the LIA. THAR for debris-free glaciers is found to be smaller than for debris-covered glaciers. The ELAs for debris-covered glaciers are higher than those for debris-free glaciers in eastern Nepal. There is considerable variation in the reconstructed change in ELA (ΔELA) between glaciers within specific regions and between regions. This is not related to climate gradients, but results from differences in glacier aspect: southeast- and south-facing glaciers show larger ΔELAs in eastern Nepal than north- or west-facing glaciers. The data suggest that the rate of ELA rise may have accelerated in the last few decades. The limited number of climate records from Nepal, and analyses using a simple ELA–climate model, suggest that the higher rate of the ΔELA between 1959 and 1992 is a result of increased warming that occurred after the 1970s at higher altitudes in Nepal.
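The THAR method reduces to a one-line interpolation: the ELA is assumed to sit a fixed fraction of the way up the glacier's elevation range. The altitudes below are illustrative, not values from the inventory:

```python
def ela_from_thar(z_toe, z_head, thar):
    """ELA estimate from the toe-to-headwall altitude ratio (THAR).
    z_toe, z_head: glacier toe and headwall altitudes in metres;
    thar: fraction of the elevation range at which the ELA lies."""
    return z_toe + thar * (z_head - z_toe)

# A hypothetical glacier with toe at 4900 m and headwall at 6100 m and
# THAR = 0.5 gives an ELA of 5500 m. A debris-covered glacier with a
# lower toe but a larger THAR can yield a similar or higher ELA.
```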

Relevance: 20.00%

Abstract:

Climate sensitivity is defined as the change in global mean equilibrium temperature after a doubling of atmospheric CO2 concentration and provides a simple measure of global warming. An early estimate of climate sensitivity, 1.5–4.5°C, has changed little subsequently, including in the latest assessment by the Intergovernmental Panel on Climate Change. The persistence of such large uncertainties in this simple measure casts doubt on our understanding of the mechanisms of climate change and our ability to predict the response of the climate system to future perturbations. This has motivated continued attempts to constrain the range with climate data, alone or in conjunction with models. The majority of studies use data from the instrumental period (post-1850), but recent work has made use of information about the large climate changes experienced in the geological past. In this review, we first outline approaches that estimate climate sensitivity using instrumental climate observations and then summarize attempts to use the record of climate change on geological timescales. We examine the limitations of these studies and suggest ways in which the power of the palaeoclimate record could be better used to reduce uncertainties in our predictions of climate sensitivity.
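Many instrumental-period approaches reduce to a simple energy-balance identity: S = F_2x · ΔT / (ΔF − ΔN), where ΔN is the heat taken up by the planet (mainly the ocean). A sketch with illustrative, assumed numbers, not values from the review:

```python
F_2x = 3.7    # W m^-2, radiative forcing for doubled CO2 (assumed)
dT = 1.0      # K, observed warming over the instrumental period (assumed)
dF = 2.2      # W m^-2, total forcing change over the same period (assumed)
dN = 0.6      # W m^-2, planetary heat uptake (assumed)

# Energy-balance estimate of equilibrium climate sensitivity
S = F_2x * dT / (dF - dN)   # ~2.3 K, inside the canonical 1.5-4.5 range
```

The wide uncertainty range arises because the denominator (ΔF − ΔN) is a small difference of uncertain quantities, so modest errors in the forcing or heat uptake translate into large errors in S.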

Relevance: 20.00%

Abstract:

An alternative procedure to that of Lo is proposed for assessing whether there is significant evidence of persistence in time series. The technique estimates the Hurst exponent itself, and significance testing is based on an application of bootstrapping using surrogate data. The method is applied to a set of 10 daily pound exchange rates. A general lack of long-term memory is found to characterize all the series tested, in sympathy with the findings of a number of other recent papers which have used Lo's techniques.
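A minimal sketch of the classical rescaled-range (R/S) estimate of the Hurst exponent that such procedures build on; the window sizes and the least-squares fit are illustrative choices, not the paper's exact specification:

```python
import math
import random
from statistics import mean, pstdev

def rescaled_range(x):
    """R/S statistic of one window: range of the cumulative deviations
    from the mean, divided by the standard deviation."""
    m = mean(x)
    cum, z = [], 0.0
    for v in x:
        z += v - m
        cum.append(z)
    return (max(cum) - min(cum)) / pstdev(x)

def hurst_rs(series, sizes=(8, 16, 32, 64)):
    """Estimate H as the least-squares slope of log(mean R/S)
    against log(window size)."""
    xs, ys = [], []
    for n in sizes:
        windows = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(mean(rescaled_range(w) for w in windows)))
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(512)]
h = hurst_rs(noise)   # near 0.5 for memoryless data (small-sample bias pushes it slightly higher)
```

Significance can then be assessed in the spirit of the surrogate-data bootstrap described above: re-estimate H on many shuffled copies of the series and locate the original estimate within that distribution.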

Relevance: 20.00%

Abstract:

This article tests whether workers are indifferent between risky and safe jobs, given that, in labour market equilibrium, wages should serve as a utility-equalizing device. Workers’ preferences are elicited through a partial measure of overall job satisfaction: satisfaction with job-related risk. Since selectivity turns out to be important, we use selectivity-corrected models. Results show that wage differentials do not exclusively compensate workers for being in dangerous jobs. However, as job characteristics are substitutable in workers’ utility, workers could feel satisfied even if they were not fully compensated financially for working in dangerous jobs.

Relevance: 20.00%

Abstract:

Aim – To develop a brief, parent-completed instrument (‘ERIC’) for detection of cognitive delay in 10-24-month-olds born preterm, with low birth weight, or with perinatal complications, and to establish its diagnostic properties. Method – Scores were collected from parents of 317 children meeting ≥1 inclusion criteria (birth weight <1500g; gestational age <34 completed weeks; 5-minute Apgar <7; presence of hypoxic-ischemic encephalopathy) and meeting no exclusion criteria. Children were assessed for cognitive delay using a criterion score (<80) on the Cognitive Scale of the Bayley Scales of Infant and Toddler Development, Third Edition. Items were retained according to their individual associations with delay. Sensitivity, specificity, and positive and negative predictive values were estimated, and a truncated ERIC was developed for use below 14 months. Results – ERIC detected 17 out of 18 delayed children in the sample, with 94.4% sensitivity (95% CI [confidence interval] 83.9-100%), 76.9% specificity (72.1-81.7%), 19.8% positive predictive value (11.4-28.2%), and 99.6% negative predictive value (98.7-100%); the positive likelihood ratio was 4.09 and the negative likelihood ratio 0.07; the associated area under the curve was 0.909 (0.829-0.960). Interpretation – ERIC has potential value as a quickly administered diagnostic instrument for the absence of early cognitive delay in preterm infants of 10-24 months, and as a screen for cognitive delay. Further research may be needed before ERIC can be recommended for wide-scale use.
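The reported figures follow from a standard 2×2 confusion table. The counts below are reconstructed from the reported percentages (17 of 18 delayed children detected; 317 children in total), an assumption, since the abstract does not give the full table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from confusion counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "lr_pos": sens / (1 - spec),    # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,    # negative likelihood ratio
    }

# Reconstructed counts (assumed): 17 true positives, 1 false negative,
# 69 false positives, 230 true negatives (17 + 1 + 69 + 230 = 317)
m = diagnostic_metrics(tp=17, fp=69, fn=1, tn=230)
```

These counts reproduce the reported 94.4% sensitivity, 76.9% specificity, 19.8% PPV, 99.6% NPV, and likelihood ratios of 4.09 and 0.07.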

Relevance: 20.00%

Abstract:

Recent gravity missions have produced a dramatic improvement in our ability to measure the ocean’s mean dynamic topography (MDT) from space. To fully exploit this oceanic observation, however, we must quantify its error. To establish a baseline, we first assess the error budget for an MDT calculated using a 3rd-generation GOCE geoid and the CLS01 mean sea surface (MSS). With these products, we can resolve MDT spatial scales down to 250 km with an accuracy of 1.7 cm, with the MSS and geoid making similar contributions to the total error. For spatial scales within the range 133–250 km the error is 3.0 cm, with the geoid making the greatest contribution. For the smallest resolvable spatial scales (80–133 km) the total error is 16.4 cm, with geoid error accounting for almost all of this. Relative to this baseline, the most recent versions of the geoid and MSS fields reduce the long- and short-wavelength errors by 0.9 and 3.2 cm, respectively, but they have little impact in the medium-wavelength band. The newer MSS is responsible for most of the long-wavelength improvement, while for the short-wavelength component it is the geoid. We find that while the formal geoid errors have reasonable global mean values, they fail to capture the regional variations in error magnitude, which depend on the steepness of the sea-floor topography.
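As a sanity check on such an error budget: if the MSS and geoid contributions are independent, they combine in quadrature (an assumption for illustration; the paper's budget may be constructed differently):

```python
import math

def combined_error(e_mss, e_geoid):
    """Total error from independent MSS and geoid error contributions,
    added in quadrature. Units: centimetres."""
    return math.sqrt(e_mss**2 + e_geoid**2)

# "Similar contributions" to a 1.7 cm total at scales above 250 km
# would imply roughly 1.2 cm from each component.
```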

Relevance: 20.00%

Abstract:

In this paper we investigate the equilibrium properties of magnetic dipolar (ferro-) fluids and discuss finite-size effects originating from the use of different boundary conditions in computer simulations. Both periodic boundary conditions and a finite spherical box are studied. We demonstrate that periodic boundary conditions, with the Ewald sum used to account for the long-range dipolar interactions, lead to a much faster convergence (in terms of the number of simulated dipolar particles) of the magnetization curve and the initial susceptibility to their thermodynamic limits. Another unwanted effect of simulations in a finite spherical box geometry is a considerable sensitivity to the container size. We further investigate the influence of the surface term in the Ewald sum, that is, the term due to the surrounding continuum with magnetic permeability mu(BC), on the convergence properties of our observables and on the final results. The two different ways of evaluating the initial susceptibility, i.e., (1) by the magnetization response of the system to an applied field and (2) by the zero-field fluctuation of the mean-square dipole moment of the system, are compared in terms of speed and accuracy.
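The zero-field fluctuation route (2) is commonly written as chi = (⟨M·M⟩ − ⟨M⟩·⟨M⟩)/(3 V kT); the prefactor depends on the unit system, and reduced units with mu_0 = 1 are assumed here. A minimal sketch of the estimator:

```python
def initial_susceptibility(m_samples, volume, kT):
    """Zero-field fluctuation estimate of the initial susceptibility,
    chi = (<M.M> - <M>.<M>) / (3 V kT), in reduced units (assumption).
    m_samples: list of sampled instantaneous total dipole moments,
    each a 3-vector (mx, my, mz)."""
    n = len(m_samples)
    mean_m = [sum(m[i] for m in m_samples) / n for i in range(3)]
    mean_m2 = sum(mx * mx + my * my + mz * mz for mx, my, mz in m_samples) / n
    return (mean_m2 - sum(c * c for c in mean_m)) / (3.0 * volume * kT)
```

The response route (1) instead applies a small field H and measures chi ≈ M/H; the fluctuation route needs no applied field but requires long sampling of the dipole-moment fluctuations.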