971 results for Pareto analyysi


Relevance: 10.00%

Abstract:

A statistical methodology is proposed and tested for the analysis of extreme values of atmospheric wave activity at mid-latitudes. The adopted methods are the classical block-maximum and peak-over-threshold approaches, respectively based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD). Time series of the ‘Wave Activity Index’ (WAI) and the ‘Baroclinic Activity Index’ (BAI) are computed from simulations of the General Circulation Model ECHAM4.6, which is run under perpetual January conditions. Both the GEV and the GPD analyses indicate that the extremes of WAI and BAI are Weibull distributed, which corresponds to distributions with an upper bound. However, a remarkably large variability is found in the tails of these distributions; distinct simulations carried out under the same experimental setup provide noticeably different estimates of the 200-yr WAI return level. The consequences of this phenomenon for applications of the methodology to climate change studies are discussed. The atmospheric configurations characteristic of the maxima and minima of WAI and BAI are also examined.
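To make the two methods concrete, here is a minimal sketch assuming a synthetic stand-in for the daily index; the series, the block length, and the threshold quantile are illustrative assumptions, not the paper's choices:

```python
# Block-maximum (GEV) and peak-over-threshold (GPD) fits on a toy series.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(0)
wai = rng.gumbel(loc=10.0, scale=2.0, size=30 * 360)   # synthetic daily index

# Block-maximum method: fit a GEV to the maxima of 90-day blocks.
block_maxima = wai.reshape(-1, 90).max(axis=1)
c, loc, scale = genextreme.fit(block_maxima)
# scipy's sign convention: c = -xi, so c > 0 is the bounded (Weibull) case.

# Peak-over-threshold method: fit a GPD to exceedances of a high threshold.
u = np.quantile(wai, 0.98)
xi, _, sigma = genpareto.fit(wai[wai > u] - u, floc=0.0)
print(f"GEV shape (scipy c): {c:.3f}   GPD shape xi: {xi:.3f}")
```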

Relevance: 10.00%

Abstract:

Let θ denote the level of quality inherent in a food product that is delivered to some terminal market. In this paper, I characterize allocations over θ and provide an economic rationale for regulating safety and quality standards in the food system. Zusman and Bockstael investigate the theoretical foundations for imposing standards and stress the importance of providing a tractable conceptual foundation. Despite a wealth of contributions that are mainly empirical (for reviews of these works see, respectively, Caswell and Antle), there have been relatively few attempts to model formally the linkages between farm and food markets when food quality and consumer safety are at issue. Here, I attempt to provide such a framework, building on key contributions in the theoretical literature and linking them in a simple model of quality determination in a vertically related marketing channel. The food-marketing model is due to Gardner. Spence provides a foundation for Pareto-improving intervention in a deterministic model of quality provision, and Leland, building on the classic paper by Akerlof, investigates licensing and minimum standards when the information structure is incomplete. Linking these ideas in a satisfactory model of the food markets is the main objective of the paper.

Relevance: 10.00%

Abstract:

We present projections of winter storm-induced insured losses in the German residential building sector for the 21st century. To this end, two structurally independent downscaling methods and one hybrid downscaling method are applied to a 3-member ensemble of ECHAM5/MPI-OM1 A1B scenario simulations. The first method uses dynamical downscaling of intense winter storm events in the global model and a transfer function to relate regional wind speeds to losses. The second method is based on a reshuffling of present-day weather situations and sequences, taking into account the change of their frequencies according to the linear temperature trends of the global runs. The third method uses statistical-dynamical downscaling, considering frequency changes in the occurrence of storm-prone weather patterns, and translates these into losses using empirical statistical distributions. The A1B scenario ensemble was downscaled by all three methods until 2070, and by the (statistical-)dynamical methods until 2100. All methods assume a constant statistical relationship between meteorology and insured losses and no developments other than climate change, such as changes in construction or claims management. The study utilizes data provided by the German Insurance Association encompassing 24 years at district-scale resolution. Compared to 1971–2000, the downscaling methods indicate an increase of 10-year return values (i.e. loss ratios per return period) of 6–35 % for 2011–2040, of 20–30 % for 2041–2070, and of 40–55 % for 2071–2100. When the various sources of uncertainty (data, loss model, storm realization, and Pareto fit) are convolved into one confidence statement, the return-level confidence interval for a return period of 15 years widens by more than a factor of two. Finally, we suggest how practitioners can deal with alternative scenarios or possible natural excursions of observed losses.
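For orientation, return levels of this kind follow from a GPD fit via the standard POT formula (e.g. Coles, 2001); the sketch below uses placeholder parameter values, not the study's fitted ones:

```python
# T-year return level from a fitted GPD over threshold u.
import numpy as np

def gpd_return_level(T, u, sigma, xi, rate):
    """u: threshold; sigma, xi: GPD scale and shape;
    rate: mean number of threshold exceedances per year."""
    if abs(xi) < 1e-9:                        # xi -> 0 limit (exponential tail)
        return u + sigma * np.log(rate * T)
    return u + (sigma / xi) * ((rate * T) ** xi - 1.0)

# e.g. relative change of the 10-yr level between two hypothetical climates:
present = gpd_return_level(10, u=1.0, sigma=0.5, xi=-0.1, rate=3.0)
future  = gpd_return_level(10, u=1.0, sigma=0.6, xi=-0.1, rate=3.5)
print(f"change: {100 * (future / present - 1):+.1f} %")
```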

Relevance: 10.00%

Abstract:

In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalized Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observable, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables with the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan–Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not hold for all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
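As a rough illustration of the distance-observable side of this picture (a cousin of the result above, not the paper's own derivation), one can fit a GPD to exceedances of an inverse-distance observable on the Lorenz '63 attractor; for phi = 1/dist the tail is regularly varying with index D, so the GPD shape is expected near 1/D with D ≈ 2.06. All numerical settings below are assumptions:

```python
# Estimating an attractor dimension from the GPD shape of 1/distance extremes.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import genpareto

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Long trajectory on the attractor; the first 100 time units are a transient.
sol = solve_ivp(lorenz, (0.0, 1000.0), [1.0, 1.0, 1.0],
                t_eval=np.arange(100.0, 1000.0, 0.01), rtol=1e-8, atol=1e-8)
traj = sol.y.T

# Inverse-distance observable around a reference point on the attractor.
x0 = traj[len(traj) // 2]
d = np.linalg.norm(traj - x0, axis=1)
phi = 1.0 / d[d > 0.0]                      # drop the reference point itself

# POT fit: exceedances over a high quantile are approximately GPD.  The
# estimate is noisy (correlated, finite data), so expect only roughly 1/D.
u = np.quantile(phi, 0.995)
xi, _, _ = genpareto.fit(phi[phi > u] - u, floc=0.0)
print(f"GPD shape xi = {xi:.2f}, implied dimension 1/xi = {1.0 / xi:.2f}")
```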

Relevance: 10.00%

Abstract:

We propose a bargaining process supergame over the strategies to play in a non-cooperative game. The agreement reached by the players at the end of the bargaining process is the strategy profile that they will play in the original non-cooperative game. We analyze the subgame perfect equilibria of this supergame and their implications for the original game. We discuss existence, uniqueness, and efficiency of the agreement reachable through this bargaining process. We illustrate the consequences of applying such a process to several common two-player non-cooperative games: the Prisoner’s Dilemma, the Hawk-Dove Game, the Trust Game, and the Ultimatum Game. In each of them, the proposed bargaining process gives rise to Pareto-efficient agreements that are typically different from the Nash equilibrium of the original games.
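A small worked example of the gap such a process closes: in the one-shot Prisoner's Dilemma the unique Nash equilibrium is Pareto-dominated. The payoff numbers below are a standard textbook choice, not taken from the paper:

```python
# Enumerate Nash and Pareto-efficient profiles of a 2x2 Prisoner's Dilemma.
from itertools import product

payoff = {  # (row, col) -> (row payoff, col payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(p):
    r, c = p
    return (payoff[p][0] >= max(payoff[(a, c)][0] for a in actions)
            and payoff[p][1] >= max(payoff[(r, a)][1] for a in actions))

def is_pareto(p):
    return not any(payoff[q][0] >= payoff[p][0] and payoff[q][1] >= payoff[p][1]
                   and payoff[q] != payoff[p] for q in product(actions, repeat=2))

for p in product(actions, repeat=2):
    print(p, payoff[p], "Nash" * is_nash(p), "Pareto-efficient" * is_pareto(p))
# (D, D) is the only Nash equilibrium; (C, C) Pareto-dominates it.
```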

Relevance: 10.00%

Abstract:

This work assesses the frequency of extreme values (EVs) of daily rainfall in the city of Sao Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and generalized Pareto distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p quantiles (for example, p = 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. GPD-1, with constant scale and shape parameters, is the simplest model. To identify the best of the four models we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant against the null hypothesis of no trend at about the 98% confidence level. The non-parametric Mann-Kendall test also showed a positive trend in the annual frequency of excesses over high thresholds, with a p-value that is virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of Sao Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall amount has increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
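A minimal sketch of a GPD-3-style fit, i.e. constant shape and a trend in the scale, on synthetic excesses; the log-link on the scale, the synthetic data, and the optimizer settings are assumptions for illustration:

```python
# Maximum-likelihood GPD fit with a linear trend in the (log) scale parameter.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1.0, 500))            # exceedance times in [0, 1]
y = rng.pareto(5.0, size=500) * (1.0 + 0.5 * t)    # excesses with growing scale

def nll(params):
    a, b, xi = params
    if abs(xi) < 1e-6:                             # avoid the xi = 0 singularity
        return np.inf
    sigma = np.exp(a + b * t)                      # log-link keeps scale > 0
    z = 1.0 + xi * y / sigma
    if np.any(z <= 0.0):                           # outside the GPD support
        return np.inf
    return np.sum(np.log(sigma) + (1.0 + 1.0 / xi) * np.log(z))

fit = minimize(nll, x0=[0.0, 0.0, 0.1], method="Nelder-Mead")
a, b, xi = fit.x
print(f"scale trend b = {b:.2f} (~0.5 under the log-link), shape xi = {xi:.2f}")
```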

Relevance: 10.00%

Abstract:

Clustering is a difficult task: there is no single cluster definition, and the data can have more than one underlying structure. Pareto-based multi-objective genetic algorithms (e.g., MOCK, Multi-Objective Clustering with automatic K-determination, and MOCLE, Multi-Objective Clustering Ensemble) were proposed to tackle these problems. However, the output of such algorithms often contains a large number of partitions, making it difficult for an expert to analyze all of them manually. To deal with this problem, we present two selection strategies, based on the corrected Rand index, for choosing a subset of solutions. To test them, we apply them to the sets of solutions produced by MOCK and MOCLE on several datasets. The study was also extended to select a reduced set of partitions from the initial population of MOCLE. These analyses show that both versions of the proposed selection strategy are very effective: they can significantly reduce the number of solutions while keeping the quality and the diversity of the partitions in the original set of solutions. (C) 2010 Elsevier B.V. All rights reserved.
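One way a corrected-Rand-based selection could look (a hedged sketch; the strategies proposed in the paper may differ in detail) is to greedily pick partitions that are least similar to those already chosen:

```python
# Greedy diversity-based selection of partitions using the corrected Rand index.
import numpy as np
from sklearn.metrics import adjusted_rand_score

def select_diverse(partitions, k):
    """Greedily select k mutually dissimilar partitions (label arrays)."""
    chosen = [0]                                    # seed with the first one
    while len(chosen) < k:
        best, best_score = None, np.inf
        for i, p in enumerate(partitions):
            if i in chosen:
                continue
            score = max(adjusted_rand_score(p, partitions[j]) for j in chosen)
            if score < best_score:                  # most dissimilar goes next
                best, best_score = i, score
        chosen.append(best)
    return chosen

# e.g. three partitions of six objects; the first two are identical up to labels:
parts = [np.array([0, 0, 0, 1, 1, 1]),
         np.array([1, 1, 1, 0, 0, 0]),
         np.array([0, 0, 1, 1, 2, 2])]
print(select_diverse(parts, 2))                     # -> [0, 2]
```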

Relevance: 10.00%

Abstract:

In this paper, we present an algorithm for cluster analysis that integrates aspects of cluster ensembles and multi-objective clustering. The algorithm is based on a Pareto-based multi-objective genetic algorithm with a special crossover operator, which uses clustering validation measures as objective functions. The proposed algorithm can deal with data sets presenting different types of clusters, without requiring expertise in cluster analysis. Its result is a concise set of partitions representing alternative trade-offs among the objective functions. We compare the results obtained with our algorithm, in the context of gene expression data sets, to those achieved with Multi-Objective Clustering with automatic K-determination (MOCK), the algorithm most closely related to ours. (C) 2009 Elsevier B.V. All rights reserved.
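For context, the validation measures MOCK-style algorithms commonly optimize are overall deviation (compactness) and connectivity; the sketch below implements these common definitions, which may differ from the exact objective functions used here:

```python
# Two common multi-objective clustering criteria: deviation and connectivity.
import numpy as np
from scipy.spatial.distance import cdist

def deviation(X, labels):
    """Sum of distances from each point to its cluster centroid (minimize)."""
    return sum(np.linalg.norm(X[labels == k] - X[labels == k].mean(axis=0),
                              axis=1).sum() for k in np.unique(labels))

def connectivity(X, labels, L=5):
    """Penalty 1/j when a point's j-th nearest neighbor is in another cluster."""
    order = np.argsort(cdist(X, X), axis=1)[:, 1:L + 1]   # skip self (col 0)
    pen = 0.0
    for i, neighbors in enumerate(order):
        for j, n in enumerate(neighbors, start=1):
            if labels[n] != labels[i]:
                pen += 1.0 / j
    return pen

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(20, 2)), rng.normal(size=(20, 2)) + 5.0])
labels = np.array([0] * 20 + [1] * 20)
print(deviation(X, labels), connectivity(X, labels))
```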

Relevance: 10.00%

Abstract:

In this paper, a simple relation between the Leimkuhler curve and the mean residual life is established. The result is illustrated with several models commonly used in informetrics, such as the exponential, Pareto, and lognormal distributions. Finally, relationships with some other reliability concepts are also presented. (C) 2010 Elsevier Ltd. All rights reserved.
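For reference, the two objects being related have standard definitions (the paper's exact form of the relation is in the full text): for a non-negative random variable X with mean μ, quantile function Q, and survival function S,

$$K(p) = \frac{1}{\mu}\int_0^p Q(1-s)\,ds, \qquad m(t) = \mathbb{E}[X - t \mid X > t] = \frac{\int_t^{\infty} S(x)\,dx}{S(t)},$$

and for the Pareto law with $S(x) = (\sigma/x)^{\alpha}$, $x \ge \sigma$, $\alpha > 1$, the mean residual life is linear: $m(t) = t/(\alpha - 1)$.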

Relevance: 10.00%

Abstract:

This thesis concerns the performance evaluation of peer-to-peer networks, in which the peer distribution is modeled with three different processes: Weibull, lognormal, and Pareto. A network simulator is then used to evaluate the performance of these three distribution techniques. During the last decade the Internet has expanded into a world-wide network connecting millions of hosts and users and providing services for everyone. Many emerging applications are bandwidth-intensive in nature; the size of downloaded files, including music and videos, can be huge, from ten megabits to many gigabits. The efficient use of network resources is thus crucial for the survivability of the Internet. Traffic engineering (TE) covers a range of mechanisms for optimizing operational networks from the traffic perspective; its time scale varies from short-term network control to network planning over longer periods. In this thesis we model the peer arrival and service process with the three distributions above and calculate congestion parameters: the blocking time each peer experiences before entering the service process, the waiting time of a peer while another peer is being served in the service block, and the delay time for each peer. We then average each of these measures and plot the results with Matlab for analysis.
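The quantities measured here can be illustrated with a toy single-server FIFO model (not the thesis's simulator; the service-time law and the unit-mean scalings are assumptions), where waiting times follow the Lindley recursion W_{n+1} = max(0, W_n + S_n - A_{n+1}):

```python
# Toy FIFO queue with Weibull, lognormal, and Pareto inter-arrival times.
import numpy as np

rng = np.random.default_rng(42)
n, service_mean = 100_000, 0.8
service = rng.exponential(service_mean, n)          # assumed service times

arrivals = {                                        # all scaled to mean ~1
    "weibull":   rng.weibull(1.5, n) / 0.9027,      # Gamma(1 + 1/1.5) ~ 0.9027
    "lognormal": rng.lognormal(-0.125, 0.5, n),     # exp(mu + s^2/2) = 1
    "pareto":    rng.pareto(3.0, n) * 2.0,          # Lomax(3) has mean 0.5
}

for name, inter in arrivals.items():
    w = np.zeros(n)
    for i in range(1, n):                           # Lindley recursion
        w[i] = max(0.0, w[i - 1] + service[i - 1] - inter[i])
    delay = w + service                             # sojourn = waiting + service
    print(f"{name:9s} mean wait {w.mean():.2f}  mean delay {delay.mean():.2f}")
```

With identical arrival means, the heavier Pareto tail typically produces the longest waits, which is the kind of contrast the three-way comparison is after.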