955 results for Conditional entropy
Abstract:
This letter argues that the current controversy about whether Wbuoyancy, the power input due to the surface buoyancy fluxes, is large or small in the oceans stems from two distinct and incompatible views of how Wbuoyancy relates to the volume-integrated work of expansion/contraction B. The prevailing view is that Wbuoyancy should be identified with the net value of B, which current theories estimate to be small. The alternative view, defended here, is that only the positive part of B, i.e., the part converting internal energy into mechanical energy, should enter the definition of Wbuoyancy, since the negative part of B is associated with the non-viscous dissipation of mechanical energy. By contrast, two indirect methods suggest that the positive part of B is potentially large.
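In symbols (the notation below, with υ the specific volume and [·]± denoting positive and negative parts, is an illustrative assumption rather than the letter's own), the disagreement reduces to which part of B is identified with Wbuoyancy:

    B = \int_V \rho\, p\, \frac{D\upsilon}{Dt}\, \mathrm{d}V = B^{+} + B^{-}, \qquad B^{\pm} = \int_V \left[ \rho\, p\, \frac{D\upsilon}{Dt} \right]^{\pm} \mathrm{d}V,

so the prevailing view sets Wbuoyancy = B (the net value, estimated to be small), while the alternative view sets Wbuoyancy = B+ (the gross conversion of internal into mechanical energy, potentially large).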
Abstract:
In financial decision-making, the weights assigned to the objective functions have a significant impact on the final decision outcome. However, conventional rating and weighting methods have difficulty deriving appropriate weights for complex decision problems with imprecise information. Entropy is a quantitative measure of uncertainty and has proved useful for deriving attribute weights in decision making. Here, a fuzzy, entropy-based mathematical approach is employed to solve the weighting problem for the objective functions in an overall cash-flow model, and its application to multiproject cash-flow situations is demonstrated using, as a real case study, the multiproject portfolio of a medium-size construction firm in Hong Kong. The results indicate that the overall before-tax profit was HK$0.11 million lower after the introduction of appropriate weights. In addition, the best time to invest in new projects arising from positive cash flow was identified as two working months earlier than under the unweighted system.
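As a rough illustration of the entropy-weighting step, here is a minimal sketch of the standard Shannon entropy weight method (the fuzzy component of the paper's approach, and its actual data, are omitted; the scores below are made up):

```python
import numpy as np

def entropy_weights(X):
    """Shannon entropy weight method.

    X: (n alternatives x m attributes) matrix of non-negative scores.
    Attributes with near-uniform scores (high entropy) discriminate
    little between alternatives and so receive low weight.
    """
    P = X / X.sum(axis=0)                                  # normalise each attribute column
    n = X.shape[0]
    logP = np.log(P, where=(P > 0), out=np.zeros_like(P))  # convention: 0 * log 0 = 0
    E = -(P * logP).sum(axis=0) / np.log(n)                # entropy per attribute, in [0, 1]
    d = 1.0 - E                                            # degree of diversification
    return d / d.sum()                                     # normalised weights

# e.g. three cash-flow objectives scored over four projects (made-up numbers)
X = np.array([[0.8, 0.2, 0.5],
              [0.6, 0.3, 0.5],
              [0.9, 0.9, 0.5],
              [0.7, 0.1, 0.5]])
print(entropy_weights(X))   # the uninformative constant third column gets zero weight
```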
Abstract:
Genetic algorithms (GAs) have been introduced into site layout planning, as reported in a number of studies. In these studies, the objective functions were defined so as to employ the GAs in searching for the optimal site layout. However, few studies have investigated the actual closeness of the relationships between site facilities, and it is these relationships that ultimately govern the site layout. This study determined that the underlying factors of site layout planning for medium-size projects include work flow, personnel flow, safety and environment, and personal preferences. By finding the weightings of these factors and the corresponding closeness indices between each pair of facilities, a closeness relationship was deduced. Two contemporary mathematical approaches, fuzzy logic theory and an entropy measure, were adopted in deriving these results in order to minimize the uncertainty and vagueness of the collected data and improve the quality of the information. GAs were then applied to search for the optimal site layout in a medium-size government project using the GeneHunter software, with an objective function that minimized the total travel distance. An optimal layout was obtained within a short time, which suggests that the application of GAs to site layout planning is highly promising and efficient.
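The kind of objective such a GA minimises can be sketched as a closeness-weighted travel distance; the arrays below are hypothetical, and the study itself used the GeneHunter software with its own closeness indices:

```python
import numpy as np

def total_travel(assign, closeness, dist):
    """Site-layout objective for one candidate solution.

    assign[i] : location index assigned to facility i (a permutation)
    closeness : (n x n) symmetric closeness indices between facilities
    dist      : (m x m) travel distances between candidate locations
    """
    n = len(assign)
    return sum(closeness[i, j] * dist[assign[i], assign[j]]
               for i in range(n) for j in range(i + 1, n))
```

A GA then evolves a population of permutations, using (minus) this total as the fitness, until the layout with the smallest closeness-weighted travel distance emerges.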
Abstract:
This article proposes a new model for autoregressive conditional heteroscedasticity and kurtosis. Via a time-varying degrees of freedom parameter, the conditional variance and conditional kurtosis are permitted to evolve separately. The model uses only the standard Student’s t-density and consequently can be estimated simply using maximum likelihood. The method is applied to a set of four daily financial asset return series comprising U.S. and U.K. stocks and bonds, and significant evidence in favor of the presence of autoregressive conditional kurtosis is observed. Various extensions to the basic model are proposed, and we show that the response of kurtosis to good and bad news is not significantly asymmetric.
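A sketch of the likelihood such a model implies, assuming a GARCH(1,1) conditional variance and one plausible linear recursion for the degrees-of-freedom parameter (the article's exact specification may differ):

```python
import numpy as np
from scipy.special import gammaln

def std_t_logpdf(r, h, nu):
    """Log-density of r under a Student-t scaled to variance h, with df nu > 2."""
    return (gammaln((nu + 1) / 2) - gammaln(nu / 2)
            - 0.5 * np.log(np.pi * (nu - 2) * h)
            - (nu + 1) / 2 * np.log1p(r ** 2 / (h * (nu - 2))))

def neg_loglik(params, r):
    """GARCH(1,1) variance plus a separate recursion for nu, hence for kurtosis."""
    w, a, b, c0, c1, c2 = params                 # all assumed non-negative
    h, nu = np.var(r), 10.0                      # start-up values
    ll = 0.0
    for rt in r:
        ll += std_t_logpdf(rt, h, nu)
        h = w + a * rt ** 2 + b * h              # conditional variance recursion
        nu = 2.05 + c0 + c1 * rt ** 2 + c2 * (nu - 2.05)  # df recursion, keeps nu > 2
    return -ll
```

Because only the standard Student-t density is involved, the parameter vector can be estimated by passing neg_loglik to scipy.optimize.minimize, in line with the abstract's remark that maximum likelihood suffices.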
Abstract:
Sting jets are transient mesoscale jets of air that descend from the tip of the cloud head towards the top of the boundary layer in severe extratropical cyclones and can lead to damaging surface wind gusts. This recently identified jet is distinct from the well-documented jets associated with the cold and warm conveyor belts. One mechanism proposed for their development is the release of conditional symmetric instability (CSI). Here the spatial distribution and temporal evolution of several CSI diagnostics in four severe storms are analysed. A sting jet has been identified in three of these storms; for comparison, we also analysed one storm that did not have a sting jet, even though it had many of the apparent features of sting-jet storms. The sting-jet storms are distinct from the non-sting-jet storms by having much greater and more extensive conditional instability (CI) and CSI. CSI is released by ascending air parcels in the cloud head in two of the sting-jet storms and by descending air parcels in the other sting-jet storm. By contrast, only weak CI to ascending air parcels is present at the cloud-head tip in the non-sting-jet storm. CSI released by descending air parcels, as diagnosed by decaying downdraught slantwise convective available potential energy (DSCAPE), is collocated with the sting jets in all three sting-jet storms and has a localised maximum in two of them. Consistent evolutions of saturated moist potential vorticity are found. We conclude that CSI release has a role in the generation of the sting jet, that the sting jet may be driven by the release of instability to both ascending and descending parcels, and that DSCAPE could be used as a discriminating diagnostic for the sting jet based on these four case studies.
Abstract:
We consider the finite sample properties of model selection by information criteria in conditionally heteroscedastic models. Recent theoretical results show that certain popular criteria are consistent in that they will select the true model asymptotically with probability 1. To examine the empirical relevance of this property, Monte Carlo simulations are conducted for a set of non-nested data generating processes (DGPs), with the set of candidate models consisting of all the model types used as DGPs. In addition, not only is the best model considered but also those with similar values of the information criterion, called close competitors, thus forming a portfolio of eligible models. To supplement the simulations, the criteria are applied to a set of economic and financial series. In the simulations, the criteria are largely ineffective at identifying the correct model, either as best or as a close competitor, the parsimonious GARCH(1, 1) model being preferred for most DGPs. In contrast, asymmetric models are generally selected to represent actual data. This leads to the conjecture that the parameterizations of processes commonly used to model heteroscedastic data behave more similarly than may be imagined, and that more attention needs to be paid to the behaviour of the standardized disturbances of such models, both in simulation exercises and in empirical modelling.
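The selection step exercised by the simulations can be sketched as follows (the close-competitor tolerance of 2 is an assumption, as the abstract does not state the threshold used):

```python
import numpy as np

def information_criteria(loglik, k, T):
    """Return (AIC, BIC) for a fitted model with log-likelihood `loglik`,
    k estimated parameters, and T observations."""
    return -2 * loglik + 2 * k, -2 * loglik + k * np.log(T)

def close_competitors(ic_values, tol=2.0):
    """Indices of candidate models within `tol` of the best (smallest)
    criterion value, forming the 'portfolio of eligible models'."""
    ic = np.asarray(ic_values, dtype=float)
    return np.flatnonzero(ic - ic.min() <= tol)
```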
Abstract:
The objective of this paper is to reconsider the Maximum Entropy Production conjecture (MEP) in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due simultaneously to both horizontal and vertical heat fluxes. MEP is applied first to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under the condition of fixed insolation, a MEP solution is found with reasonably realistic temperatures and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and the vertical entropy production terms are independently involved in the maximisation, and thus MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution, and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large-scale organisation of the climate is concerned, whereas the vertical structure appears unrealistic and exhibits seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but that the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the dependence of longwave absorption by water vapour and clouds on the state of the climate. Furthermore, an order-of-magnitude estimate of the contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided using two different methods. In both cases we found that approximately 40 mW m−2 K−1 of material entropy production is due to vertical heat transport and 5–7 mW m−2 K−1 to horizontal heat transport.
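The flavour of such a MEP calculation can be conveyed with a two-box version (all parameter values below are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize_scalar

A, B = 204.0, 2.17       # linearised longwave emission OLR = A + B*T (W m^-2, T in deg C)
S1, S2 = 300.0, 170.0    # absorbed shortwave in the warm and cold boxes (W m^-2)

def entropy_production(F):
    """Material entropy production of a meridional heat flux F (W m^-2)."""
    T1 = (S1 - F - A) / B + 273.15   # warm-box temperature (K) from S1 = A + B*T1 + F
    T2 = (S2 + F - A) / B + 273.15   # cold-box temperature (K) from S2 = A + B*T2 - F
    return F * (1.0 / T2 - 1.0 / T1)

res = minimize_scalar(lambda F: -entropy_production(F), bounds=(0.0, 60.0), method="bounded")
print(f"MEP flux ~ {res.x:.1f} W m^-2, "
      f"entropy production ~ {-res.fun * 1000:.1f} mW m^-2 K^-1")
```

The MEP hypothesis selects the flux F at which this entropy production is maximal; the paper's four-box and higher-resolution models apply the same idea with both meridional and vertical fluxes.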
Abstract:
We present an outlook on the climate system thermodynamics. First, we construct an equivalent Carnot engine with efficiency η and frame the Lorenz energy cycle in a macroscale thermodynamic context. Then, by exploiting the second law, we prove that the lower bound to the entropy production is η times the integrated absolute value of the internal entropy fluctuations. An exergetic interpretation is also proposed. Finally, the controversial maximum entropy production principle is reinterpreted as requiring the joint optimization of heat transport and mechanical work production. These results provide tools for climate change analysis and for the validation of climate models.
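For reference, the standard Carnot relations on which the equivalent-engine construction rests (the symbols Θ+ and Θ− for the effective warm and cold temperatures, and Q+ for the heat absorbed, are this sketch's assumptions rather than necessarily the article's notation):

    \eta = \frac{W}{Q^{+}} = 1 - \frac{\Theta^{-}}{\Theta^{+}}, \qquad W = \eta\, Q^{+}.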
Abstract:
The evaluation of investment fund performance has been one of the main developments of modern portfolio theory. Most studies employ the technique developed by Jensen (1968) that compares a particular fund's returns to a benchmark portfolio of equal risk. However, the standard measures of fund manager performance are known to suffer from a number of problems in practice. In particular, previous studies implicitly assume that the risk level of the portfolio is stationary through the evaluation period. That is, unconditional measures of performance do not account for the fact that risk and expected returns may vary with the state of the economy. Therefore, many of the problems encountered in previous performance studies reflect the inability of traditional measures to handle the dynamic behaviour of returns. As a consequence, Ferson and Schadt (1996) suggest an approach to performance evaluation, called conditional performance evaluation, which is designed to address this problem. This paper applies such a conditional measure of performance to a sample of 27 UK property funds over the period 1987-1998. The results suggest that once the time-varying nature of the funds' betas is corrected for, by the addition of the market indicators, average fund performance shows an improvement over that found by traditional methods of analysis.
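A sketch of the conditional regression behind Ferson and Schadt's measure (variable names are illustrative; the instruments must be lagged, publicly available information variables such as a dividend yield or a Treasury-bill rate):

```python
import numpy as np

def ferson_schadt(rp, rm, Z):
    """Conditional performance regression:
        rp_t = alpha + b0 * rm_t + B'(z_{t-1} * rm_t) + e_t

    rp, rm : (T,) excess fund and excess market returns
    Z      : (T, k) lagged information variables, aligned so that row t
             holds the instruments known at the start of period t
    Returns (alpha, average beta, coefficients of the beta dynamics)."""
    Zd = Z - Z.mean(axis=0)                        # demean the instruments
    X = np.column_stack([np.ones_like(rm), rm, Zd * rm[:, None]])
    coef, *_ = np.linalg.lstsq(X, rp, rcond=None)
    return coef[0], coef[1], coef[2:]
```

A positive alpha from this regression indicates abnormal performance after allowing the fund's beta to move with the state of the economy.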
Abstract:
In a recent paper, Mason et al. propose a reliability test of ensemble forecasts for a continuous, scalar verification. As noted in the paper, the test relies on a very specific interpretation of ensembles, namely, that the ensemble members represent quantiles of some underlying distribution. This quantile interpretation is not the only interpretation of ensembles, another popular one being the Monte Carlo interpretation. Mason et al. suggest estimating the quantiles in this situation; however, this approach is fundamentally flawed. Errors in the quantile estimates are not independent of the exceedance events, and consequently the conditional exceedance probabilities (CEP) curves are not constant, which is a fundamental assumption of the test. The test would reject reliable forecasts with probability much higher than the test size.
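The distinction matters even in the idealised reliable case: under the Monte Carlo interpretation, the k-th smallest of m exchangeable members is exceeded by the verification with probability (m − k + 1)/(m + 1), as a toy simulation (not Mason et al.'s test itself) confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
m, T = 9, 200_000
ens = np.sort(rng.normal(size=(T, m)), axis=1)  # perfectly reliable Monte Carlo ensembles
obs = rng.normal(size=T)                        # verification from the same distribution
cep = (obs[:, None] > ens).mean(axis=0)         # empirical exceedance frequency per member
print(np.round(cep, 3))                         # approx. 0.9, 0.8, ..., 0.1 = (m-k+1)/(m+1)
```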
Abstract:
BACKGROUND: Fibroblast growth factor 9 (FGF9) is secreted from bone marrow cells, which have been shown to improve systolic function after myocardial infarction (MI) in a clinical trial. FGF9 promotes cardiac vascularization during embryonic development but is only weakly expressed in the adult heart. METHODS AND RESULTS: We used a tetracycline-responsive binary transgene system based on the α-myosin heavy chain promoter to test whether conditional expression of FGF9 in the adult myocardium supports adaptation after MI. In sham-operated mice, transgenic FGF9 stimulated left ventricular hypertrophy with microvessel expansion and preserved systolic and diastolic function. After coronary artery ligation, transgenic FGF9 enhanced hypertrophy of the noninfarcted left ventricular myocardium with increased microvessel density, reduced interstitial fibrosis, attenuated fetal gene expression, and improved systolic function. Heart failure mortality after MI was markedly reduced by transgenic FGF9, whereas rupture rates were not affected. Adenoviral FGF9 gene transfer after MI similarly promoted left ventricular hypertrophy with improved systolic function and reduced heart failure mortality. Mechanistically, FGF9 stimulated proliferation and network formation of endothelial cells but induced no direct hypertrophic effects in neonatal or adult rat cardiomyocytes in vitro. FGF9-stimulated endothelial cell supernatants, however, induced cardiomyocyte hypertrophy via paracrine release of bone morphogenetic protein 6. In accord with this observation, expression of bone morphogenetic protein 6 and phosphorylation of its downstream targets SMAD1/5 were increased in the myocardium of FGF9 transgenic mice. CONCLUSIONS: Conditional expression of FGF9 promotes myocardial vascularization and hypertrophy with enhanced systolic function and reduced heart failure mortality after MI. These observations suggest a previously unrecognized therapeutic potential for FGF9 after MI.
Abstract:
We compare a number of models of post-war US output growth in terms of the degree and pattern of non-linearity they impart to the conditional mean, where we condition on either the previous period's growth rate or the previous two periods' growth rates. The conditional means are estimated non-parametrically using a nearest-neighbour technique on data simulated from the models. In this way, we condense the complex, dynamic responses that may be present into graphical displays of the implied conditional mean.
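A minimal sketch of the nearest-neighbour step (function and variable names are illustrative):

```python
import numpy as np

def nn_conditional_mean(x, y, grid, k=25):
    """Nearest-neighbour estimate of E[y | x = g] at each g in `grid`:
    average y over the k observations whose x lies closest to g."""
    x, y = np.asarray(x), np.asarray(y)
    return np.array([y[np.argsort(np.abs(x - g))[:k]].mean() for g in grid])

# e.g. condition this period's simulated growth on the previous period's:
# g = simulate_model(...)   # hypothetical simulator for one candidate model
# m = nn_conditional_mean(g[:-1], g[1:], np.linspace(-2.0, 2.0, 41))
```

Conditioning on two lags works the same way, with the neighbour search done by Euclidean distance in the plane of the two lagged growth rates.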