970 results for "variance ratio method"


Relevance: 30.00%

Abstract:

An existing hybrid finite element (FE)/statistical energy analysis (SEA) approach to the analysis of the mid- and high-frequency vibrations of a complex built-up system is extended here to a wider class of uncertainty modeling. In the original approach, the constituent parts of the system are considered to be either deterministic, and modeled using FE, or highly random, and modeled using SEA. A non-parametric model of randomness is employed in the SEA components, based on diffuse wave theory and the Gaussian Orthogonal Ensemble (GOE), and this enables the mean and variance of second-order quantities such as vibrational energy and response cross-spectra to be predicted. In the present work the assumption that the FE components are deterministic is relaxed by the introduction of a parametric model of uncertainty in these components. The parametric uncertainty may be modeled either probabilistically or by using a non-probabilistic approach such as interval analysis, and it is shown how these descriptions can be combined with the non-parametric uncertainty in the SEA subsystems to yield an overall assessment of the performance of the system. The method is illustrated by application to an example built-up plate system which has random properties, and benchmark comparisons are made with full Monte Carlo simulations.

Relevance: 30.00%

Abstract:

The frequency and severity of extreme events are tightly associated with the variance of precipitation. As climate warms, the acceleration of the hydrological cycle is likely to enhance the variance of precipitation across the globe. However, owing to the lack of an effective analysis method, the mechanisms responsible for changes in precipitation variance are poorly understood, especially on regional scales. Our study fills this gap by formulating a variance partition algorithm, which explicitly quantifies the contributions of atmospheric thermodynamics (specific humidity) and dynamics (wind) to the changes in regional-scale precipitation variance. Taking Southeastern (SE) United States (US) summer precipitation as an example, the algorithm is applied to simulations of current and future climate from phase 5 of the Coupled Model Intercomparison Project (CMIP5). The analysis suggests that, compared to observations, most CMIP5 models (~60%) tend to underestimate the summer precipitation variance over the SE US during 1950–1999, primarily due to errors in the modeled dynamic processes (i.e. large-scale circulation). Among the 18 CMIP5 models analyzed in this study, six reasonably simulate SE US summer precipitation variance in the twentieth century and the underlying physical processes; these models are thus applied to a mechanistic study of future changes in SE US summer precipitation variance. In the future, the six models collectively project an intensification of SE US summer precipitation variance, resulting from the combined effects of atmospheric thermodynamics and dynamics, with the latter playing the more important role. Specifically, thermodynamics results in more frequent and intensified wet summers, but does not contribute to the projected increase in the frequency and intensity of dry summers. In contrast, atmospheric dynamics explains the projected enhancement in both wet and dry summers, indicating its importance in understanding future climate change over the SE US. The results suggest that the intensified SE US summer precipitation variance is not a purely thermodynamic response to greenhouse-gas forcing and cannot be explained without the contribution of atmospheric dynamics. Our analysis provides important insight into the mechanisms of SE US summer precipitation variance change. The algorithm formulated in this study can easily be applied to other regions and seasons to systematically explore the mechanisms responsible for changes in precipitation extremes in a warming climate.
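The thermodynamic/dynamic partition described above can be sketched on toy data. This is not the authors' algorithm: the synthetic fields, the simple precipitation proxy P ~ q·w, and all variable names are illustrative assumptions. It only shows how a product of a humidity-like and a circulation-like field can have its variance split into a term where only humidity fluctuates and a term where only circulation fluctuates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic summer-mean specific humidity q and vertical-motion proxy w
# (hypothetical units); precipitation proxy P ~ q * w stands in for the
# moisture-convergence scaling used in such partitions.
n = 500
q = 10.0 + 1.0 * rng.standard_normal(n)   # thermodynamic field
w = 2.0 + 0.8 * rng.standard_normal(n)    # dynamic field
P = q * w

qbar, wbar = q.mean(), w.mean()
qp, wp = q - qbar, w - wbar

thermo = np.var(wbar * qp)        # humidity fluctuates, circulation fixed
dynamic = np.var(qbar * wp)       # circulation fluctuates, humidity fixed
residual = np.var(P) - thermo - dynamic   # cross and nonlinear terms

print(f"total var(P): {np.var(P):.2f}")
print(f"thermodynamic: {thermo:.2f}, dynamic: {dynamic:.2f}, residual: {residual:.2f}")
```

By construction the three components sum exactly to the total variance, so the relative sizes of `thermo` and `dynamic` give the kind of attribution the abstract describes.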

Relevance: 30.00%

Abstract:

The effects of four process factors (pH, emulsifier (gelatin) concentration, mixing, and batch) on the % w/w entrapment of propranolol hydrochloride in ethylcellulose microcapsules prepared by the solvent evaporation process were examined using a factorial design. In this design the minimum % w/w entrapments of propranolol hydrochloride were observed whenever the external aqueous phase contained 1.5% w/v gelatin at pH 6.0 (0.71–0.91% w/w), whereas maximum entrapments occurred whenever the external aqueous phase was composed of 0.5% w/v gelatin at pH 9.0 (8.9–9.1% w/w). The theoretical maximum loading was 50% w/w. Statistical evaluation of the results by analysis of variance showed that emulsifier (gelatin) concentration and pH, but not mixing and batch, significantly affected entrapment. An interaction between pH and gelatin concentration was observed in the factorial design, which was attributed to the greater effect of gelatin concentration on % w/w entrapment at pH 9.0 than at pH 6.0. Maximum theoretical entrapment was achieved by increasing the pH of the external phase to 12.0. Marked increases in drug entrapment were observed whenever the pH of the external phase exceeded the pK2 of propranolol hydrochloride. It was concluded that pH, and hence ionisation, was the greatest determinant of entrapment of propranolol hydrochloride into microcapsules prepared by the solvent evaporation process.
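Main effects and interactions in a two-level factorial design like this can be estimated directly from cell means. The numbers below are hypothetical (only the two quoted entrapment ranges echo the abstract), so this is a sketch of the technique, not the study's data:

```python
import numpy as np

# Hypothetical % w/w entrapment for a 2x2 factorial in pH (6.0, 9.0) and
# gelatin concentration (0.5, 1.5 % w/v), two replicates per cell; the
# (6.0, 1.5) and (9.0, 0.5) cells echo the ranges quoted in the abstract.
data = {
    (6.0, 0.5): [2.1, 2.3],
    (6.0, 1.5): [0.71, 0.91],
    (9.0, 0.5): [8.9, 9.1],
    (9.0, 1.5): [4.0, 4.4],
}

def mean_at(cond):
    """Mean response over all cells satisfying a condition on (pH, gelatin)."""
    return np.mean([v for k, vals in data.items() for v in vals if cond(k)])

# Main effects: high-level mean minus low-level mean
effect_pH = mean_at(lambda k: k[0] == 9.0) - mean_at(lambda k: k[0] == 6.0)
effect_gel = mean_at(lambda k: k[1] == 1.5) - mean_at(lambda k: k[1] == 0.5)

# Interaction: half the difference between the gelatin effect at high pH
# and the gelatin effect at low pH
gel_at_9 = np.mean(data[(9.0, 1.5)]) - np.mean(data[(9.0, 0.5)])
gel_at_6 = np.mean(data[(6.0, 1.5)]) - np.mean(data[(6.0, 0.5)])
interaction = (gel_at_9 - gel_at_6) / 2

print(effect_pH, effect_gel, interaction)
```

A nonzero interaction term is exactly the pH-by-gelatin effect the abstract reports: here the (negative) gelatin effect is larger at pH 9.0 than at pH 6.0.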

Relevance: 30.00%

Abstract:

We developed the concept of split-t to deal with large molecules (in terms of the number of electrons and nuclear charge Z). This naturally leads to partitioning the local energy into components due to each electron shell. Minimization of the variance of the valence-shell local energy is used to optimize a simple two-parameter CuH wave function. Molecular properties (spectroscopic constants and the dipole moment) are calculated for the optimized and nearly optimized wave functions using the variational quantum Monte Carlo method. Our best results are comparable to those from the singles-and-doubles configuration interaction (SDCI) method.

Relevance: 30.00%

Abstract:

Time series models with conditionally heteroskedastic variances have become almost indispensable for modeling time series in the context of financial data. In many applications, testing for the existence of a relationship between two time series is an important issue. In this thesis, we generalize in several directions, and in a multivariate framework, the procedure developed by Cheung and Ng (1996) for examining causality in variance between two univariate series. Building on the work of El Himdi and Roy (1997) and Duchesne (2004), we propose a test based on the cross-correlation matrices of the squared standardized residuals and of the cross-products of these residuals. Under the null hypothesis of no causality in variance, we establish that the test statistics converge in distribution to chi-square random variables. In a second approach, we define, as in Ling and Li (1997), a transformation of the residuals for each vector residual series. The test statistics are built from the cross-correlations of these transformed residuals. In both approaches, test statistics for individual lags are proposed, as well as portmanteau-type tests. This methodology is also used to determine the direction of causality in variance. Simulation results show that the proposed tests have satisfactory empirical properties. An application to real data is also presented to illustrate the methods.
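The univariate Cheung–Ng building block can be sketched as follows: compute cross-correlations of centered squared standardized residuals and form a portmanteau statistic that is asymptotically chi-square under the null of no causality in variance. The data are synthetic and the implementation is a simplified illustration, not the thesis's multivariate test:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, M = 2000, 10  # sample size, maximum lag

# Two independent standardized residual series (null: no causality in variance)
e1 = rng.standard_normal(n)
e2 = rng.standard_normal(n)
u = e1**2 - 1.0   # centered squared standardized residuals
v = e2**2 - 1.0

def cross_corr(u, v, k):
    """Sample cross-correlation between u_t and v_{t-k}, for lag k >= 0."""
    m = len(u)
    num = np.sum(u[k:] * v[:m - k]) / m
    den = np.sqrt(np.mean(u**2) * np.mean(v**2))
    return num / den

# Portmanteau-type statistic over lags 1..M; chi-square with M df under the null
r = np.array([cross_corr(u, v, k) for k in range(1, M + 1)])
Q = n * np.sum(r**2)
p_value = chi2.sf(Q, df=M)
print(Q, p_value)
```

Testing lags of `v` behind `u` versus lags of `u` behind `v` is what lets the procedure speak to the direction of causality in variance.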

Relevance: 30.00%

Abstract:

Wavenumber-frequency spectral analysis and linear wave theory are combined in a novel method to quantitatively estimate equatorial wave activity in the tropical lower stratosphere. The method requires temperature and velocity observations that are regularly spaced in latitude, longitude and time; it is therefore applied to the ECMWF 15-year re-analysis dataset (ERA-15). Signals consistent with idealized Kelvin and Rossby-gravity waves are found at wavenumbers and frequencies in agreement with previous studies. When averaged over 1981–93, the Kelvin wave explains approximately 1 K² of temperature variance on the equator at 100 hPa, while the Rossby-gravity wave explains approximately 1 m² s⁻² of meridional wind variance. Some inertio-gravity wave and equatorial Rossby wave signals are also found; however, the resolution of ERA-15 is not sufficient for the method to provide an accurate climatology of waves with high meridional structure.
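The core of wavenumber-frequency analysis is a two-dimensional FFT in longitude and time, with the quadrant of the spectrum distinguishing eastward- from westward-propagating signals. A minimal sketch on a synthetic eastward-propagating wave (all grid sizes and wave parameters are hypothetical):

```python
import numpy as np

# Synthetic equatorial field: an eastward-propagating wave of zonal
# wavenumber k0 = 3 and frequency f0 = 5 cycles per record.
nx, nt = 128, 256
x = np.arange(nx) / nx          # longitude, fraction of a latitude circle
t = np.arange(nt) / nt          # time, fraction of record length
X, T = np.meshgrid(x, t)        # shape (nt, nx)
k0, f0 = 3, 5
field = np.cos(2 * np.pi * (k0 * X - f0 * T))  # eastward phase propagation

# Space-time power spectrum; for a real field, the relative signs of the
# wavenumber and frequency indices separate eastward from westward motion.
spec = np.abs(np.fft.fft2(field))**2 / (nx * nt)

# Locate the spectral peak in the positive-frequency half
f_idx, k_idx = np.unravel_index(np.argmax(spec[: nt // 2, :]), (nt // 2, nx))
print(f_idx, k_idx)
```

With NumPy's sign convention, the eastward wave appears at frequency index `f0` and wavenumber index `nx - k0`; a Kelvin-wave analysis would then sum spectral power in the region of (k, f) space predicted by linear wave theory.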

Relevance: 30.00%

Abstract:

Four multiparous cows with cannulas in the rumen and proximal duodenum were used in early lactation in a 4 x 4 Latin square experiment to investigate the effect of method of application of a fibrolytic enzyme product on digestive processes and milk production. The cows were given ad libitum a total mixed ration (TMR) composed of 57% (dry matter basis) forage (3:1 corn silage:grass silage) and 43% concentrates. The TMR contained (g/kg dry matter): 274 neutral detergent fiber, 295 starch, 180 crude protein. Treatments were TMR alone or TMR with the enzyme product added (2 kg/1000 kg TMR dry matter) either sprayed on the TMR 1 h before the morning feed (TMR-E), sprayed only on the concentrate the day before feeding (Concs-E), or infused into the rumen for 14 h/d (Rumen-E). There was no significant effect on either feed intake or milk yield, but both were highest on TMR-E. Rumen digestibility of dry matter, organic matter, and starch was unaffected by the enzyme. Digestibility of neutral detergent fiber was lowest on TMR-E in the rumen but highest postruminally. Total tract digestibility was highest on TMR-E for dry matter, organic matter, and starch, but treatment differences were nonsignificant for neutral detergent fiber. Corn silage stover retention time in the rumen was reduced by all enzyme treatments, but postruminal transit time was increased, so the decline in total tract retention time with enzymes was not significant. It is suggested that the tendency for enzymes to reduce particle retention time in the rumen may, by reducing the time available for fibrolysis to occur, at least partly explain the variability in the reported responses to enzyme treatment.

Relevance: 30.00%

Abstract:

The jackknife method is often used for variance estimation in sample surveys but has only been developed for a limited class of sampling designs. We propose a jackknife variance estimator which is defined for any without-replacement unequal probability sampling design. We demonstrate design consistency of this estimator for a broad class of point estimators. A Monte Carlo study shows how the proposed estimator may improve on existing estimators.
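The basic delete-one jackknife that this work generalizes can be sketched for a ratio estimator: recompute the estimator with each unit removed, then take a scaled spread of the replicates. The data and the equal-probability setting are illustrative assumptions, not the paper's unequal-probability estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sample: estimate a ratio R = sum(y)/sum(x) and jackknife
# its variance under simple random sampling
n = 50
x = rng.uniform(1, 3, n)
y = 2.0 * x + rng.standard_normal(n) * 0.2
R_hat = y.sum() / x.sum()

# Delete-one jackknife replicates: the estimator with unit i removed
R_jack = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])

# Jackknife variance: scaled spread of the replicates about their mean
v_jack = (n - 1) / n * np.sum((R_jack - R_jack.mean())**2)
se_jack = np.sqrt(v_jack)
print(R_hat, se_jack)
```

The paper's contribution is, in effect, the analogue of the replicate-weighting step for arbitrary without-replacement unequal-probability designs, where the naive delete-one recipe above is not design consistent.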

Relevance: 30.00%

Abstract:

It is common practice to design a survey with a large number of strata. However, in this case the usual techniques for variance estimation can be inaccurate. This paper proposes a variance estimator for estimators of totals. The method proposed can be implemented with standard statistical packages without any specific programming, as it involves simple techniques of estimation, such as regression fitting.

Relevance: 30.00%

Abstract:

We show that the Hájek (Ann. Math. Statist. (1964) 1491) variance estimator can be used to estimate the variance of the Horvitz–Thompson estimator when the Chao sampling scheme (Chao, Biometrika 69 (1982) 653) is implemented. This estimator is simple and can be implemented with any statistical package. We consider a numerical and an analytic method to show that this estimator can be used. A series of simulations supports our findings.
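One standard form of the Hájek (1964) approximation needs only first-order inclusion probabilities, which is what makes it so simple to implement. The sketch below uses synthetic data and assumes a fixed sample size; it illustrates the estimator's form, not the paper's analytic argument:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical unequal-probability sample: values y and their first-order
# inclusion probabilities pi (fixed sample size n assumed)
n = 40
pi = rng.uniform(0.05, 0.5, n)
y = 10.0 * pi + rng.standard_normal(n)   # y roughly proportional to pi

# Horvitz-Thompson estimator of the population total
Y_ht = np.sum(y / pi)

# Hajek-type variance approximation: a weighted spread of the expanded
# values y/pi about a weighted mean, with weights a_i = 1 - pi_i
a = 1.0 - pi
A_hat = np.sum(a * y / pi) / np.sum(a)
v_hajek = n / (n - 1) * np.sum(a * (y / pi - A_hat)**2)
print(Y_ht, v_hajek)
```

Because only the `pi` vector is needed (no joint inclusion probabilities), the estimator drops straight into any statistical package, which is the practical point the abstract makes.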

Relevance: 30.00%

Abstract:

A method of estimating dissipation rates from a vertically pointing Doppler lidar with high temporal and spatial resolution has been evaluated by comparison with independent measurements derived from a balloon-borne sonic anemometer. This method utilizes the variance of the mean Doppler velocity from a number of sequential samples and requires an estimate of the horizontal wind speed. The noise contribution to the variance can be estimated from the observed signal-to-noise ratio and removed where appropriate. The relative size of the noise variance to the observed variance provides a measure of the confidence in the retrieval. Comparison with in situ dissipation rates derived from the balloon-borne sonic anemometer reveals that this particular Doppler lidar is capable of retrieving dissipation rates over a range of at least three orders of magnitude. This method is most suitable for retrieval of dissipation rates within the convective well-mixed boundary layer where the scales of motion that the Doppler lidar probes remain well within the inertial subrange. Caution must be applied when estimating dissipation rates in more quiescent conditions. For the particular Doppler lidar described here, the selection of suitably short integration times will permit this method to be applicable in such situations but at the expense of accuracy in the Doppler velocity estimates. The two case studies presented here suggest that, with profiles every 4 s, reliable estimates of ϵ can be derived to within at least an order of magnitude throughout almost all of the lowest 2 km and, in the convective boundary layer, to within 50%. Increasing the integration time for individual profiles to 30 s can improve the accuracy substantially but potentially confines retrievals to within the convective boundary layer. Therefore, optimization of certain instrument parameters may be required for specific implementations.
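A simplified form of this variance method (after O'Connor et al. 2010) links the variance of N consecutive mean Doppler velocities to the dissipation rate through inertial-subrange scaling, using the horizontal wind to convert the sampling time to a length scale. All numbers below are hypothetical, the noise variance is assumed already removed, and the length scale neglects the beam-width term:

```python
import numpy as np

a = 0.52            # one-dimensional Kolmogorov constant
U = 8.0             # horizontal wind speed advecting eddies past the beam, m/s
dt = 4.0            # time per profile, s
N = 10              # number of consecutive velocity samples
sigma2 = 0.15       # variance of the mean Doppler velocity, m^2/s^2
                    # (noise contribution assumed already subtracted)

# Length scale swept past the lidar during the N samples (beam width neglected)
L = U * N * dt

# Inertial-subrange scaling: epsilon ~ 2*pi * (2/(3a))^(3/2) * sigma^3 / L
epsilon = 2.0 * np.pi * (2.0 / (3.0 * a))**1.5 * sigma2**1.5 / L
print(f"epsilon ~ {epsilon:.2e} m^2 s^-3")
```

With these illustrative numbers the retrieval gives a dissipation rate of order 10⁻³ m² s⁻³, a plausible convective-boundary-layer value; the requirement for an estimate of `U` is exactly the horizontal wind speed dependence noted in the abstract.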

Relevance: 30.00%

Abstract:

A new technique for objective classification of boundary layers is applied to ground-based vertically pointing Doppler lidar and sonic anemometer data. The observed boundary layer has been classified into nine different types based on those in the Met Office ‘Lock’ scheme, using vertical velocity variance and skewness, along with attenuated backscatter coefficient and surface sensible heat flux. This new probabilistic method has been applied to three years of data from Chilbolton Observatory in southern England and a climatology of boundary-layer type has been created. A clear diurnal cycle is present in all seasons. The most common boundary-layer type is stable with no cloud (30.0% of the dataset). The most common unstable type is well mixed with no cloud (15.4%). Decoupled stratocumulus is the third most common boundary-layer type (10.3%) and cumulus under stratocumulus occurs 1.0% of the time. The occurrence of stable boundary-layer types is much higher in the winter than the summer and boundary-layer types capped with cumulus cloud are more prevalent in the warm seasons. The most common diurnal evolution of boundary-layer types, occurring on 52 days of our three-year dataset, is that of no cloud with the stability changing from stable to unstable during daylight hours. These results are based on 16393 hours, 62.4% of the three-year dataset, of diagnosed boundary-layer type. This new method is ideally suited to long-term evaluation of boundary-layer type parametrisations in weather forecast and climate models.

Relevance: 30.00%

Abstract:

Inverse methods are widely used in various fields of atmospheric science. However, such methods are not commonly used within the boundary-layer community, where robust observations of surface fluxes are a particular concern. We present a new technique for deriving surface sensible heat fluxes from boundary-layer turbulence observations using an inverse method. Doppler lidar observations of vertical velocity variance are combined with two well-known mixed-layer scaling forward models for a convective boundary layer (CBL). The inverse method is validated using large-eddy simulations of a CBL with increasing wind speed. The majority of the estimated heat fluxes agree within error with the prescribed heat flux, across all wind speeds tested. The method is then applied to Doppler lidar data from the Chilbolton Observatory, UK. Heat fluxes are compared with those from a mast-mounted sonic anemometer. Errors in estimated heat fluxes are on average 18%, an improvement on previous techniques. However, a significant negative bias is observed (on average −63%) that is more pronounced in the morning. Results are improved for the fully-developed CBL later in the day, which suggests that the bias is largely related to the choice of forward model, which is kept deliberately simple for this study. Overall, the inverse method provided reasonable flux estimates for the simple case of a CBL. Results shown here demonstrate that this method has promise in utilizing ground-based remote sensing to derive surface fluxes. Extension of the method is relatively straightforward, and could include more complex forward models, or other measurements.
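The idea of inverting a mixed-layer scaling forward model can be sketched with one common choice of variance profile (Lenschow et al. 1980): σ_w²/w*² = 1.8 (z/zi)^(2/3) (1 − 0.8 z/zi)², with the convective velocity scale w*³ = (g/θ₀)(H/ρcₚ)zi. Whether this is the forward model the paper uses is an assumption; the numbers below are synthetic and noise-free, so the inversion recovers the flux exactly:

```python
import numpy as np

# Physical constants and hypothetical boundary-layer parameters
g, theta0, rho_cp = 9.81, 300.0, 1.2 * 1004.0
zi = 1000.0                      # boundary-layer depth, m
z = np.array([200.0, 400.0, 600.0])   # observation heights, m
H_true = 150.0                   # W m^-2, used only to synthesize "observations"

# Forward model: sigma_w^2 = f(z) * w*^2, Lenschow et al. (1980) profile
w_star_true = (g / theta0 * (H_true / rho_cp) * zi) ** (1.0 / 3.0)
f = 1.8 * (z / zi) ** (2.0 / 3.0) * (1.0 - 0.8 * z / zi) ** 2
sigma_w2_obs = f * w_star_true**2      # synthetic lidar variance observations

# Inverse step: least-squares fit for w*^2 (linear in this forward model),
# then convert w* back to a surface sensible heat flux
w_star2 = np.sum(f * sigma_w2_obs) / np.sum(f**2)
H_est = w_star2**1.5 * rho_cp * theta0 / (g * zi)
print(H_est)
```

With real observations the residual between observed and modeled variance would carry the retrieval error, and a biased forward model would bias `H_est`, which is the explanation the abstract offers for its morning bias.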

Relevance: 30.00%

Abstract:

A truly variance-minimizing filter is introduced and its performance is demonstrated with the Korteweg–de Vries (KdV) equation and with a multilayer quasigeostrophic model of the ocean area around South Africa. It is recalled that Kalman-like filters are not variance minimizing for nonlinear model dynamics and that four-dimensional variational data assimilation (4DVAR)-like methods relying on perfect model dynamics have difficulty with providing error estimates. The new method does not have these drawbacks. In fact, it combines advantages from both methods in that it does provide error estimates while automatically having balanced states after analysis, without extra computations. It is based on ensemble or Monte Carlo integrations to simulate the probability density of the model evolution. When observations are available, the so-called importance resampling algorithm is applied. From Bayes's theorem it follows that each ensemble member receives a new weight dependent on its "distance" to the observations. Because the weights vary strongly, a resampling of the ensemble is necessary. This resampling is done such that members with high weights are duplicated according to their weights, while low-weight members are largely ignored. In passing, it is noted that data assimilation is not an inverse problem by nature, although it can be formulated that way. Also, it is shown that the posterior variance can be larger than the prior if the usual Gaussian framework is set aside. However, in the examples presented here, the entropy of the probability densities is decreasing. The application to the ocean area around South Africa, governed by strongly nonlinear dynamics, shows that the method is working satisfactorily. The strong and weak points of the method are discussed and possible improvements are proposed.
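The importance-resampling step described above can be sketched for a scalar state: weight each ensemble member by its likelihood given the observation (Bayes), then resample so that high-weight members are duplicated and low-weight members dropped. The Gaussian observation error and all numbers are illustrative assumptions, not the paper's ocean application:

```python
import numpy as np

rng = np.random.default_rng(4)

# Ensemble (Monte Carlo) representation of a scalar state
n_ens = 1000
ensemble = rng.normal(0.0, 2.0, n_ens)   # prior ensemble
y_obs, obs_err = 1.5, 0.5                # observation and its error std

# Bayes: each member's weight is its likelihood given the observation
# (its "distance" to the observation, through a Gaussian kernel)
log_w = -0.5 * ((y_obs - ensemble) / obs_err) ** 2
w = np.exp(log_w - log_w.max())          # subtract max for numerical stability
w /= w.sum()

# Importance resampling: duplicate high-weight members, drop low-weight ones
idx = rng.choice(n_ens, size=n_ens, replace=True, p=w)
posterior = ensemble[idx]

print(posterior.mean(), posterior.std())
```

Because no Gaussian assumption is imposed on the prior ensemble itself, the posterior ensemble can in principle have a larger variance than the prior, which is the point the abstract makes about leaving the Gaussian framework.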