49 results for Chebyshev And Binomial Distributions


Relevance: 100.00%

Publisher:

Abstract:

Boreal winter wind storm situations over Central Europe are investigated by means of an objective cluster analysis. Surface data from the NCEP-Reanalysis and the ECHAM4/OPYC3 greenhouse-gas (GHG) climate change simulation (IS92a) are considered. To achieve an optimum separation of clusters of extreme storm conditions, 55 clusters of weather patterns are differentiated. To reduce the computational effort, a PCA is performed first, leading to a data reduction of about 98%. The clustering itself is computed on 3-day periods constructed from the first six PCs using the k-means clustering algorithm. The applied method enables an evaluation of the time evolution of the synoptic developments. The climate change signal is constructed by projecting the GCM simulation onto the EOFs obtained from the NCEP-Reanalysis. Consequently, the same clusters are obtained and their frequency distributions can be compared. For Central Europe, four primary storm clusters are identified. These clusters contain almost 72% of the historical extreme storm events but account for only 5% of the total relative frequency. Moreover, they show a statistically significant signature in the associated wind fields over Europe. An increased frequency of the Central European storm clusters is detected under enhanced GHG conditions, associated with a strengthening of the pressure gradient over Central Europe. Consequently, more intense wind events over Central Europe are expected. The presented algorithm will be highly valuable for the analysis of very large data sets, as required for e.g. multi-model ensemble analysis, particularly because of the enormous data reduction.
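
As a rough illustration of the pipeline described above (PCA for data reduction, then k-means on 3-day sequences of the leading PCs), the following sketch uses synthetic data in place of the NCEP surface fields; the array sizes and variable names are illustrative assumptions, and only the six retained PCs and the 55 clusters follow the abstract.

```python
# Sketch of the PCA + k-means weather-pattern clustering described above.
# Synthetic data stand in for the reanalysis surface fields; shapes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_days, n_gridpoints = 5400, 2000            # ~30 winters of daily fields (illustrative)
fields = rng.standard_normal((n_days, n_gridpoints))

# 1) PCA on the daily fields: keep the first six PCs (large data reduction).
pca = PCA(n_components=6)
pcs = pca.fit_transform(fields)              # shape (n_days, 6)

# 2) Build 3-day sequences of PCs so the clusters capture synoptic evolution.
windows = np.stack([pcs[i:i + 3].ravel() for i in range(n_days - 2)])

# 3) k-means with 55 clusters, as in the study.
km = KMeans(n_clusters=55, n_init=10, random_state=0).fit(windows)

# 4) Cluster frequency distribution (to be compared between reanalysis and the
#    GHG simulation projected onto the same EOFs).
freq = np.bincount(km.labels_, minlength=55) / len(km.labels_)
print(freq.round(3))
```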

Relevance: 100.00%

Publisher:

Abstract:

During the last decades, several windstorm series hit Europe, leading to large aggregated losses. Such storm series are examples of serial clustering of extreme cyclones and present a considerable risk for the insurance industry. Clustering of events and return periods of storm series for Germany are quantified based on potential losses using empirical models. Two reanalysis data sets and observations from German weather stations are considered for 30 winters. Histograms of events exceeding selected return levels (1-, 2- and 5-year) are derived. Return periods of historical storm series are estimated based on the Poisson and the negative binomial distributions. Over 4000 years of general circulation model (GCM) simulations forced with current climate conditions are analysed to provide a better assessment of historical return periods. Estimates differ between the distributions, for example 40 to 65 years for the 1990 series. For such infrequent series, estimates obtained with the Poisson distribution clearly deviate from the empirical data. The negative binomial distribution provides better estimates, even though a sensitivity to the return level and data set is identified. The consideration of GCM data permits a strong reduction of the uncertainties. The present results support the importance of explicitly considering the clustering of losses for an adequate risk assessment in economic applications.
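
The return-period comparison can be illustrated by fitting a Poisson and a negative binomial distribution to the number of events per winter exceeding a return level; a minimal sketch, with a synthetic count series standing in for the loss-based event counts of the study.

```python
# Sketch: return period of a winter with at least k loss events exceeding a
# given return level, from Poisson and negative binomial fits.
# The count series below is synthetic and purely illustrative.
import numpy as np
from scipy import stats

counts = np.array([0, 1, 0, 2, 0, 0, 3, 1, 0, 0, 5, 0, 1, 0, 0,
                   2, 0, 0, 1, 0, 4, 0, 0, 1, 0, 0, 2, 0, 0, 1])  # events per winter

mu, var = counts.mean(), counts.var(ddof=1)

# Poisson fit: a single parameter, variance forced to equal the mean.
pois = stats.poisson(mu)

# Negative binomial fit via method of moments (requires var > mu, i.e. overdispersion).
r = mu**2 / (var - mu)
p = r / (r + mu)
nbin = stats.nbinom(r, p)

k = 3  # a winter with at least 3 events
for name, dist in [("Poisson", pois), ("Neg. binomial", nbin)]:
    prob = dist.sf(k - 1)               # P(N >= k)
    print(f"{name}: return period ~ {1.0 / prob:.0f} winters")
```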

Relevance: 100.00%

Publisher:

Abstract:

Spatially dense observations of gust speeds are necessary for various applications, but their availability is limited in space and time. This work presents an approach to help overcome this problem. The main objective is the generation of synthetic wind gust velocities. With this aim, theoretical wind and gust distributions are estimated from 10 yr of hourly observations collected at 123 synoptic weather stations provided by the German Weather Service. As pre-processing, an exposure correction is applied to measurements of the mean wind velocity to reduce the influence of local urban and topographic effects. The wind gust model is built as a transfer function between the distribution parameters of wind and gust velocities. The aim of this procedure is to estimate the gust parameters at stations where only wind speed data are available. These parameters can be used to generate synthetic gusts, which can improve the accuracy of return periods at test sites with a lack of observations. The second objective is to determine return periods much longer than the nominal length of the original time series by considering extreme value statistics. Estimates for both local maximum return periods and average return periods for single historical events are provided. The comparison of maximum and average return periods shows that even storms with short average return periods may lead to local wind gusts with return periods of several decades. Despite uncertainties caused by the short length of the observational records, the method leads to consistent results, enabling a wide range of possible applications.
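
For the extreme-value step, a minimal sketch that fits a generalized extreme value distribution to annual gust maxima and converts the fit into return levels; the synthetic gust sample and the use of scipy's genextreme are assumptions, not the exact procedure of the study.

```python
# Sketch: return levels of wind gusts from a GEV fit to annual maxima.
# The synthetic gust maxima below are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_max_gust = 20 + 8 * rng.gumbel(size=10)   # 10 years of annual maxima (m/s)

# Fit a generalized extreme value distribution (Gumbel is the zero-shape case).
shape, loc, scale = stats.genextreme.fit(annual_max_gust)

# Return level for a T-year return period: the (1 - 1/T) quantile.
for T in (10, 50, 100):
    level = stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-yr return level: {level:.1f} m/s")
```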

Relevance: 100.00%

Publisher:

Abstract:

The iso-score curves graph (iSCG) and the mathematical relationships between scoring parameters (SPs) and forecasting parameters (FPs) can be used in the economic scoring formulas (ESFs) applied in tendering to distribute the score among bidders in the economic part of a proposal. Each contracting authority must set an ESF when publishing the tender specifications, and the strategy of each bidder will differ depending on the ESF selected and on the weight of the economic part in the overall proposal scoring. The various mathematical relationships and density distributions that describe the main SPs and FPs, together with the representation of tendering data by means of iSCGs, enable the generation of two new types of graphs that can be very useful for bidders who want to be more competitive: the scoring and position probability graphs.

Relevance: 100.00%

Publisher:

Abstract:

We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration-time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should lie within the interval (-0.2231, 0.2231). We compare the gold standard method for calculating the sample size, based on the non-central t distribution, with methods based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared with the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method.
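
A minimal sketch of the power calculation behind such sample-size comparisons, contrasting the non-central t distribution with a normal approximation for the two one-sided tests (TOST); the within-subject SD, the sample sizes and the approximate power formula (which ignores the joint constraint of the two tests) are assumptions for illustration only.

```python
# Sketch: approximate power of the two one-sided tests (TOST) for average
# bioequivalence in a 2x2 cross-over, with the non-central t distribution
# versus a normal approximation. sigma_w is the within-subject SD on the
# natural log scale; all numbers are illustrative.
import numpy as np
from scipy import stats

def power_tost(n_per_seq, delta=0.0, sigma_w=0.3, limit=np.log(1.25),
               alpha=0.05, use_nct=True):
    se = sigma_w / np.sqrt(n_per_seq)      # SE of the treatment difference (log scale)
    df = 2 * n_per_seq - 2
    if use_nct:
        tcrit = stats.t.ppf(1 - alpha, df)
        upper = stats.nct.cdf(-tcrit, df, (delta - limit) / se)
        lower = stats.nct.cdf(tcrit, df, (delta + limit) / se)
        return max(upper - lower, 0.0)
    zcrit = stats.norm.ppf(1 - alpha)      # normal approximation
    return max(stats.norm.cdf((limit - delta) / se - zcrit)
               + stats.norm.cdf((limit + delta) / se - zcrit) - 1, 0.0)

for n in (12, 16, 20):
    print(n, round(power_tost(n, use_nct=True), 3),
          round(power_tost(n, use_nct=False), 3))
```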

Relevance: 100.00%

Publisher:

Abstract:

Assimilation of physical variables into coupled physical/biogeochemical models poses considerable difficulties. One problem is that data assimilation can break relationships between physical and biological variables. As a consequence, biological tracers, especially nutrients, are incorrectly displaced in the vertical, resulting in unrealistic biogeochemical fields. To prevent this, we present the idea of applying an increment to the nutrient field within a data-assimilating model to ensure that nutrient-potential density relationships are maintained within a water column during assimilation. After correcting the nutrients, it is assumed that the other biological variables rapidly adjust to the corrected nutrient fields. We applied this method to a 17-year run of the 2° NEMO ocean-ice model coupled to the PlankTOM5 ecosystem model. Results were compared with a control with no assimilation, and with a model with physical assimilation but no nutrient increment. In the nutrient-incrementing experiment, phosphate distributions were improved both at high latitudes and at the equator. At midlatitudes, assimilation generated unrealistic advective upwelling of nutrients within the boundary currents, which spread into the subtropical gyres, resulting in more biased nutrient fields. This result was largely unaffected by the nutrient increment and is probably due to boundary currents being poorly resolved in a 2° model. Changes to nutrient distributions fed through into other biological parameters, altering primary production, air-sea CO2 flux, and chlorophyll distributions. These secondary changes were most pronounced in the subtropical gyres and at the equator, which are more nutrient limited than high latitudes.
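
A schematic, single-column sketch of the nutrient-increment idea: after the physics has been updated, nutrients are reassigned so that their background relationship to potential density is preserved. The profiles and the simple interpolation-by-density below are illustrative assumptions, not the NEMO/PlankTOM5 implementation.

```python
# Sketch: adjust a nutrient profile after physical assimilation so that the
# background nutrient-potential density relationship is preserved in a column.
# Synthetic single-column profiles; not the actual model code.
import numpy as np

depth = np.linspace(0, 500, 51)                       # m
rho_bkg = 1025 + 0.004 * depth                        # background potential density (kg/m3)
no3_bkg = 2.0 + 28.0 * (1 - np.exp(-depth / 150.0))   # background nitrate (mmol/m3)

# Physical assimilation shifts the density profile (e.g. heaves isopycnals).
rho_ana = 1025 + 0.004 * (depth - 30.0).clip(min=0)   # analysed density after assimilation

# Nutrient increment: look up the background nutrient value associated with each
# analysed density, so the nutrient stays tied to potential density.
# (Assumes a monotonically increasing background density profile.)
no3_ana = np.interp(rho_ana, rho_bkg, no3_bkg)
increment = no3_ana - no3_bkg

print(np.column_stack([depth, increment])[:5])
```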

Relevance: 100.00%

Publisher:

Abstract:

The spatial distribution of CO2 levels in a classroom, measured in previous fieldwork, demonstrated that there is some evidence of variations in CO2 concentration within a classroom space. Significant fluctuations in CO2 concentration were found at different sampling points depending on the ventilation strategies and environmental conditions prevailing in individual classrooms. However, how these variations are affected by the emitting sources and the room air movement remained unknown. Hence, it was concluded that a detailed investigation of the CO2 distribution needed to be performed on a smaller scale. As a result, it was decided to use an environmental chamber with various methods and rates of ventilation, for the same internal temperature and heat loads, to study the effect of ventilation strategy and air movement on the distribution of CO2 concentration in a room. The role of human exhalation and its interaction with the plume induced by the body's convective flow and with the room air movement due to different ventilation strategies was studied in a chamber at the University of Reading. These phenomena are considered important for understanding and predicting the flow patterns in a space and how they affect the distribution of contaminants. This paper studies the CO2 dispersion and distribution in the exhalation zone of two people sitting in a chamber as well as throughout the occupied zone of the chamber. The horizontal and vertical distributions of CO2 were sampled at locations where the CO2 variation was expected to be high. Although the room size, source location, ventilation rate, and location of the air supply and extract devices can all influence the CO2 distribution, this article gives general guidelines on the optimum positioning of a CO2 sensor in a room.

Relevance: 100.00%

Publisher:

Abstract:

A new numerical model of inhaled charged aerosols has been developed based on a modified Weibel model. Both velocity profiles (slug and parabolic flows) and particle distributions (uniform and parabolic) have been considered. Inhaled particles are modeled as a dilute dispersed-phase flow in which the particle motion is controlled by the fluid force and the external forces acting on the particles. This numerical study extends previous numerical studies by considering both space- and image-charge forces. Because of the complex computation of the interacting forces due to the space-charge effect, the particle-mesh (PM) method is selected to calculate these forces. In the PM technique, the charges of all particles are assigned to the space-charge field mesh in order to calculate the charge density. Poisson's equation for the electrostatic potential is then solved, and the electrostatic force acting on each particle is interpolated. It is assumed that humidity has no effect on the charged particles. The results show that several other factors significantly affect the deposition, such as the volume of the particle cloud, the velocity profile and the particle distribution. This study allows a better understanding of the electrostatic mechanisms of aerosol transport and deposition in human airways.
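
The particle-mesh step (charge assignment to a mesh, solution of Poisson's equation for the potential, interpolation of the field back to the particles) can be sketched in one dimension as follows; the periodic FFT-based solver, grid size and particle charges are illustrative assumptions.

```python
# Sketch of a 1-D particle-mesh (PM) step for the space-charge force:
# charge assignment to a mesh, FFT-based Poisson solve, field interpolation
# back to the particles. Periodic domain and parameter values are illustrative.
import numpy as np

eps0 = 8.854e-12
L, n_cells, n_part = 1.0, 64, 1000
dx = L / n_cells
rng = np.random.default_rng(0)
x = rng.uniform(0, L, n_part)                 # particle positions (m)
q = np.full(n_part, 1.0e-15)                  # particle charges (C), illustrative

# 1) Cloud-in-cell charge assignment -> charge density on the mesh.
rho = np.zeros(n_cells)
cell = np.floor(x / dx).astype(int)
frac = x / dx - cell
np.add.at(rho, cell % n_cells, q * (1 - frac))
np.add.at(rho, (cell + 1) % n_cells, q * frac)
rho /= dx

# 2) Solve Poisson's equation d2(phi)/dx2 = -rho/eps0 with FFTs (periodic).
k = 2 * np.pi * np.fft.fftfreq(n_cells, d=dx)
rho_k = np.fft.fft(rho)
phi_k = np.zeros_like(rho_k)
phi_k[1:] = rho_k[1:] / (eps0 * k[1:] ** 2)   # drop the k=0 (mean) mode
phi = np.real(np.fft.ifft(phi_k))

# 3) Electric field on the mesh and interpolation back to the particles.
E_mesh = -np.gradient(phi, dx)
E_part = (1 - frac) * E_mesh[cell % n_cells] + frac * E_mesh[(cell + 1) % n_cells]
force = q * E_part                            # space-charge force on each particle
print(force[:5])
```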

Relevance: 100.00%

Publisher:

Abstract:

Multi-factor approaches to the analysis of real estate returns have, since the pioneering work of Chan, Hendershott and Sanders (1990), emphasised macro-variables in preference to the latent factor approach that formed the original basis of the arbitrage pricing theory. With the increasing use of high-frequency data and trading strategies, and with a growing emphasis on the risks of extreme events, the macro-variable procedure has some deficiencies. This paper explores a third way, using an alternative to the standard principal components approach: independent components analysis (ICA). ICA seeks higher-moment independence and maximises in relation to a chosen risk parameter. We apply a kurtosis-maximising ICA algorithm to weekly US REIT data. The results show that ICA is successful in capturing the kurtosis characteristics of REIT returns, offering possibilities for the development of risk management strategies that are sensitive to extreme events and tail distributions.
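
A minimal sketch of a kurtosis-oriented ICA on a panel of returns, using scikit-learn's FastICA with the 'cube' nonlinearity (a kurtosis-based contrast); the synthetic heavy-tailed returns stand in for the weekly US REIT data, and the comparison with PCA is only illustrative.

```python
# Sketch: independent components analysis of a return panel with a kurtosis-based
# contrast (FastICA, 'cube' nonlinearity), compared against principal components.
# The synthetic heavy-tailed returns are illustrative only.
import numpy as np
from scipy import stats
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)
n_weeks, n_series = 520, 8
sources = rng.standard_t(df=4, size=(n_weeks, n_series))   # heavy-tailed sources
mixing = rng.standard_normal((n_series, n_series))
returns = sources @ mixing.T                                # observed return panel

ica = FastICA(n_components=n_series, fun="cube", random_state=0)
ic = ica.fit_transform(returns)                             # independent components
pc = PCA(n_components=n_series).fit_transform(returns)

# Compare the excess kurtosis captured by ICA vs. standard principal components.
print("ICA component kurtosis:", np.round(stats.kurtosis(ic, axis=0), 2))
print("PCA component kurtosis:", np.round(stats.kurtosis(pc, axis=0), 2))
```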

Relevance: 100.00%

Publisher:

Abstract:

Tax policies that constrain net transfers between the farm sector and the fisc are modeled under price uncertainty. Increasing the level of tax on profits causes the firm to expand output. Implications are derived for supply control and the distributions of profits and net receipts at the fisc.

Relevance: 100.00%

Publisher:

Abstract:

The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption that is not viable for most, if any, real-world ensembles. An alternative is to consider an ensemble merely as a source of information rather than as the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called 'affine kernel dressing' (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically acting not on individual ensemble members but on the ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
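
As context for AKD, the following sketch shows plain Gaussian kernel dressing of an ensemble blended with a climatological distribution; the Silverman bandwidth, the climatology weight and the Gaussian climatology are assumptions, and the affine map and fitted parameters that define AKD are not reproduced here.

```python
# Sketch: turning an ensemble into a predictive density by Gaussian kernel
# dressing, blended with a climatological distribution. Standard kernel dressing
# only; bandwidth and weights are illustrative assumptions.
import numpy as np
from scipy import stats

ensemble = np.array([12.1, 13.4, 11.8, 14.0, 12.9, 13.1, 12.5, 13.8])  # e.g. forecasts
clim_mean, clim_std = 11.0, 3.5       # climatological distribution (assumed Gaussian)
w_clim = 0.1                          # weight on the climatology

# Kernel bandwidth from Silverman's rule of thumb.
m = len(ensemble)
h = 0.9 * min(ensemble.std(ddof=1), stats.iqr(ensemble) / 1.34) * m ** (-0.2)

def predictive_pdf(y):
    """Mixture of Gaussian kernels around the members plus a climatological term."""
    kernels = stats.norm.pdf(y, loc=ensemble[:, None], scale=h).mean(axis=0)
    clim = stats.norm.pdf(y, loc=clim_mean, scale=clim_std)
    return (1 - w_clim) * kernels + w_clim * clim

y = np.linspace(5, 20, 7)
print(np.round(predictive_pdf(y), 3))
```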

Relevance: 100.00%

Publisher:

Abstract:

Many applications, such as intermittent data assimilation, lead to a recursive application of Bayesian inference within a Monte Carlo context. Popular data assimilation algorithms include sequential Monte Carlo methods and ensemble Kalman filters (EnKFs). These methods differ in the way Bayesian inference is implemented. Sequential Monte Carlo methods rely on importance sampling combined with a resampling step, while EnKFs utilize a linear transformation of Monte Carlo samples based on the classic Kalman filter. While EnKFs have proven to be quite robust even for small ensemble sizes, they are not consistent, since their derivation relies on a linear regression ansatz. In this paper, we propose another transform method, which does not rely on any a priori assumptions about the underlying prior and posterior distributions. The new method is based on solving an optimal transportation problem for discrete random variables.
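
A toy, scalar-state sketch of an ensemble transform built on discrete optimal transportation: importance weights are computed from the observation likelihood, a transport plan between those weights and the uniform prior weights is obtained by linear programming, and the ensemble members are mapped through the scaled plan. The Gaussian likelihood, ensemble size and use of scipy's linprog are illustrative assumptions, not the authors' implementation.

```python
# Sketch: an ensemble transform step based on discrete optimal transportation.
# Toy scalar example with a Gaussian observation likelihood.
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

rng = np.random.default_rng(0)
M = 10
x = rng.normal(0.0, 1.0, M)                   # prior ensemble (scalar state)
y_obs, obs_err = 1.0, 0.5

# Importance weights from the likelihood p(y | x_i), normalised.
w = norm.pdf(y_obs, loc=x, scale=obs_err)
w /= w.sum()

# Discrete optimal transport between the importance weights (rows) and the
# uniform weights 1/M (columns), with quadratic cost c_ij = (x_i - x_j)^2.
cost = (x[:, None] - x[None, :]) ** 2
A_eq, b_eq = [], []
for i in range(M):                            # row sums = importance weights
    row = np.zeros((M, M))
    row[i, :] = 1
    A_eq.append(row.ravel())
    b_eq.append(w[i])
for j in range(M):                            # column sums = uniform weights 1/M
    col = np.zeros((M, M))
    col[:, j] = 1
    A_eq.append(col.ravel())
    b_eq.append(1.0 / M)
res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0, None), method="highs")
T = res.x.reshape(M, M)                       # optimal coupling

# Transformed (analysis) ensemble: x_a_j = M * sum_i x_i * T_ij,
# so its mean equals the importance-weighted posterior mean.
x_analysis = M * (x @ T)
print(np.round(x_analysis, 3))
```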

Relevance: 100.00%

Publisher:

Abstract:

We develop the essential ingredients of a new, continuum and anisotropic model of sea-ice dynamics designed for eventual use in climate simulation. These ingredients are a constitutive law for sea-ice stress, relating stress to the material properties of sea ice and to internal variables describing the sea-ice state, and equations describing the evolution of these variables. The sea-ice cover is treated as a densely flawed two-dimensional continuum consisting of a uniform field of thick ice that is uniformly permeated with narrow linear regions of thinner ice called leads. Lead orientation, thickness and width distributions are described by second-rank tensor internal variables: the structure, thickness and width tensors, whose dynamics are governed by corresponding evolution equations accounting for processes such as new lead generation and rotation as the ice cover deforms. These evolution equations contain contractions of higher-order tensor expressions that require closures. We develop a sea-ice stress constitutive law that relates sea-ice stress to the structure tensor, thickness tensor and strain rate. For the special case of empty leads (containing no ice), linear closures are adopted and we present calculations for simple shear, convergence and divergence.

Relevance: 100.00%

Publisher:

Abstract:

This study examines the evolution of prices in markets with Internet price-comparison search engines. The empirical study analyzes laboratory data on the prices available to informed consumers, for two industry sizes and two conditions on the sample (complete and incomplete). The price distributions are typically bimodal. One of the two modes, corresponding to monopoly pricing, attracts an increasing share of pricing strategies over time; the second, corresponding to interior pricing, follows a decreasing trend. Monopoly pricing can serve as a means of insurance against more competitive (but riskier) behavior. In fact, experimental subjects who initially earn low profits due to interior pricing are more likely to switch to monopoly pricing than subjects who experience good returns from the start.

Relevance: 100.00%

Publisher:

Abstract:

A new frontier in weather forecasting is emerging as operational forecast models are now run at convection-permitting resolutions at many national weather services. However, this is not a panacea; significant systematic errors remain in the character of convective storms and rainfall distributions. The DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) is taking a fundamentally new approach to evaluating and improving such models: rather than relying on a limited number of cases, which may not be representative, we have gathered a large database of 3D storm structures on 40 convective days using the Chilbolton radar in southern England. We have related these structures to storm life cycles derived by tracking features in the rainfall from the UK radar network, and compared them statistically to storm structures in the Met Office model, which we ran at horizontal grid lengths between 1.5 km and 100 m, including simulations with different subgrid mixing lengths. We also evaluated the scale and intensity of convective updrafts using a new radar technique. We find that the horizontal size of simulated convective storms and of the updrafts within them is much too large at 1.5-km resolution, such that the convective mass flux of individual updrafts can be too large by an order of magnitude. The scale of precipitation cores and updrafts decreases steadily with decreasing grid length, as does the typical storm lifetime. The 200-m grid-length simulation with the standard mixing length performs best across all diagnostics, although a greater mixing length improves the representation of deep convective storms.