62 results for Maximum-entropy probability density
Abstract:
The disadvantage of the majority of data assimilation schemes is the assumption that the conditional probability density function of the state of the system given the observations [posterior probability density function (PDF)] is distributed either locally or globally as a Gaussian. The advantage, however, is that through various different mechanisms they ensure initial conditions that are predominantly in linear balance, and therefore spurious gravity wave generation is suppressed. The equivalent-weights particle filter is a data assimilation scheme that allows for a representation of a potentially multimodal posterior PDF. It does this via proposal densities that lead to extra terms being added to the model equations, which means the advantage of the traditional data assimilation schemes, in generating predominantly balanced initial conditions, is no longer guaranteed. This paper looks in detail at the impact the equivalent-weights particle filter has on dynamical balance and gravity wave generation in a primitive equation model. The primary conclusions are that (i) provided the model error covariance matrix imposes geostrophic balance, then each additional term required by the equivalent-weights particle filter is also geostrophically balanced; (ii) the relaxation term required to ensure the particles are in the locality of the observations has little effect on gravity waves and actually induces a reduction in gravity wave energy if sufficiently large; and (iii) the equivalent-weights term, which leads to the particles having equivalent significance in the posterior PDF, produces a change in gravity wave energy comparable to the stochastic model error. Thus, the scheme does not produce significant spurious gravity wave energy and so has potential for application in real high-dimensional geophysical applications.
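For orientation, a generic importance-weighting particle-filter analysis step (not the equivalent-weights proposal-density scheme itself) might be sketched as follows; the Gaussian observation-error covariance R, the linear observation operator H, and the ensemble size are illustrative assumptions.

```python
import numpy as np

def particle_filter_update(particles, y_obs, H, R, rng):
    """Generic importance-weighting particle-filter analysis step.

    particles : (N, n) array of state particles
    y_obs     : (m,) observation vector
    H         : (m, n) linear observation operator (assumed for simplicity)
    R         : (m, m) observation-error covariance (assumed Gaussian)
    """
    N = particles.shape[0]
    R_inv = np.linalg.inv(R)
    # Log-likelihood of each particle under the Gaussian observation error.
    innov = y_obs - particles @ H.T                       # (N, m)
    log_w = -0.5 * np.einsum('ij,jk,ik->i', innov, R_inv, innov)
    log_w -= log_w.max()                                  # avoid underflow
    w = np.exp(log_w)
    w /= w.sum()
    # Systematic resampling so all particles carry equal weight afterwards.
    positions = (rng.random() + np.arange(N)) / N
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), N - 1)
    return particles[idx]

# Tiny usage example: 100 particles, 2 state variables, one observation.
rng = np.random.default_rng(0)
parts = rng.normal(size=(100, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
updated = particle_filter_update(parts, np.array([0.5]), H, R, rng)
```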
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Its advantage over a strong-constraint method is also highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or the representer method when error estimates are taken into account.
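Under the Gaussian-prior assumption, the variance-minimizing update is linear in the innovation. Below is a minimal sketch of such an ensemble-based linear update, assuming a linear observation operator H and Gaussian observation-error covariance R as placeholders rather than the paper's exact smoother formulation.

```python
import numpy as np

def ensemble_linear_update(X_f, y_obs, H, R):
    """Variance-minimizing linear update of an ensemble under a Gaussian prior.

    X_f   : (N, n) forecast ensemble (N members, n state variables)
    y_obs : (m,) observations
    H     : (m, n) linear observation operator
    R     : (m, m) observation-error covariance
    """
    N = X_f.shape[0]
    x_mean = X_f.mean(axis=0)
    A = X_f - x_mean                       # ensemble anomalies, (N, n)
    P_f = A.T @ A / (N - 1)                # sample forecast covariance, (n, n)
    # Kalman-type gain built from the ensemble covariance.
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    # Update every member with the same gain (a simplification; perturbed-
    # observation variants exist that preserve the posterior spread exactly).
    return X_f + (y_obs - X_f @ H.T) @ K.T

# Usage: 50 members, 3 state variables, 1 observation of the first variable.
rng = np.random.default_rng(1)
X_f = rng.normal(size=(50, 3))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.2]])
X_a = ensemble_linear_update(X_f, np.array([0.8]), H, R)
```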
Abstract:
The present work describes a new tool that helps bidders improve their competitive bidding strategies. It is an easy-to-use graphical tool that allows more complex decision analysis techniques to be applied in the field of competitive bidding. The graphical tool described here moves away from previous bidding models, which attempt to describe the result of an auction or tender process by studying each possible bidder with probability density functions. As an illustration, the tool is applied to three practical cases. Theoretical and practical conclusions on the great potential breadth of application of the tool are also presented.
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys, one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
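Accumulating the estimated components of variance from the shortest lag upwards gives the rough variogram directly. A minimal sketch, with hypothetical component values and lags purely for illustration:

```python
import numpy as np

# Hypothetical variance components from a hierarchical ANOVA of a
# four-stage nested survey, ordered from the shortest lag upwards.
lags = np.array([10.0, 100.0, 1000.0, 10000.0])    # separating distances (m)
components = np.array([0.8, 1.5, 2.1, 0.6])        # estimated variance components

# Rough variogram: accumulate the components starting from the shortest lag,
# so gamma(lag_k) approximates the semivariance at that separation.
gamma = np.cumsum(components)

for h, g in zip(lags, gamma):
    print(f"lag {h:8.0f} m : semivariance ~ {g:.2f}")
```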
Abstract:
We use geomagnetic activity data to study the rise and fall over the past century of the solar wind flow speed VSW, the interplanetary magnetic field strength B, and the open solar flux FS. Our estimates include allowance for the kinematic effect of longitudinal structure in the solar wind flow speed. As well as solar cycle variations, all three parameters show a long-term rise during the first half of the 20th century followed by peaks around 1955 and 1986 and then a recent decline. Cosmogenic isotope data reveal that this constitutes a grand maximum of solar activity which began in 1920, using the definition that such grand maxima are when 25-year averages of the heliospheric modulation potential exceed 600 MV. Extrapolating the linear declines seen in all three parameters since 1985 yields predictions that the grand maximum will end in the years 2013, 2014, or 2027 using VSW, FS, or B, respectively. These estimates are consistent with predictions based on the probability distribution of the durations of past grand solar maxima seen in cosmogenic isotope data. The data contradict any suggestions of a floor to the open solar flux: we show that the solar minimum open solar flux, kinematically corrected to allow for the excess flux effect, has halved over the past two solar cycles.
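The end dates are obtained by extrapolating the fitted post-1985 linear declines to a chosen threshold. A minimal sketch of that kind of extrapolation, using synthetic yearly values and an arbitrary threshold in place of the real solar-wind data:

```python
import numpy as np

# Synthetic stand-in for a solar-wind parameter declining linearly after 1985.
years = np.arange(1985, 2010)
rng = np.random.default_rng(2)
values = 7.0 - 0.08 * (years - 1985) + rng.normal(0.0, 0.1, years.size)

# Fit a straight line to the post-1985 decline and extrapolate to a threshold.
slope, intercept = np.polyfit(years, values, 1)
threshold = 5.0                                  # arbitrary "end of maximum" level
year_crossing = (threshold - intercept) / slope
print(f"fitted decline: {slope:.3f} per year; crosses {threshold} near {year_crossing:.0f}")
```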
Abstract:
Consumption of oily fish and fish oils is associated with protection against cardiovascular disease. Paradoxically, long-chain polyunsaturated fatty acids present in low-density lipoprotein (LDL) are suggested to be susceptible to oxidation. It is not clear whether eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) have similar effects on the susceptibility of LDL to oxidation or whether they affect the thrombogenicity of oxidized LDL. This study examined the influence of highly purified preparations of EPA and DHA on LDL oxidizability and LDL-supported thrombin generation in healthy human volunteers. Forty-two healthy volunteers were randomly assigned to receive olive oil (placebo), an EPA-rich oil or a DHA-rich oil for 4 weeks at a dose of 9 g oil/day. EPA and DHA were incorporated into LDL phospholipids and cholesteryl esters during the supplementation period, but were progressively lost during ex vivo copper-mediated oxidation. Following supplementation, the EPA treatment significantly increased the formation of conjugated dienes during LDL oxidation compared with baseline, whereas the DHA treatment had no effect. Neither treatment significantly affected the lag time for oxidation, oxidation rate during the propagation phase or maximum diene production. Neither EPA nor DHA significantly affected the thrombotic tendency of oxidized LDL compared with the placebo, although DHA tended to decrease it. In conclusion, there are subtle differences in the effects of EPA and DHA on the oxidizability and thrombogenicity of LDL. DHA does not appear to increase the susceptibility of LDL to oxidation to the same degree as EPA and has a tendency to decrease LDL-supported thrombin generation. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Recent research has suggested that relatively cold UK winters are more common when solar activity is low (Lockwood et al 2010 Environ. Res. Lett. 5 024001). Solar activity during the current sunspot minimum has fallen to levels unknown since the start of the 20th century (Lockwood 2010 Proc. R. Soc. A 466 303–29) and records of past solar variations inferred from cosmogenic isotopes (Abreu et al 2008 Geophys. Res. Lett. 35 L20109) and geomagnetic activity data (Lockwood et al 2009 Astrophys. J. 700 937–44) suggest that the current grand solar maximum is coming to an end and hence that solar activity can be expected to continue to decline. Combining cosmogenic isotope data with the long record of temperatures measured in central England, we estimate how solar change could influence the probability in the future of further UK winters that are cold, relative to the hemispheric mean temperature, if all other factors remain constant. Global warming is taken into account only through the detrending using mean hemispheric temperatures. We show that some predictive skill may be obtained by including the solar effect.
Abstract:
A standard CDMA system is considered, and an extension of Pearson's results is used to determine the density function of the interference. The method is shown to work well in some cases, but not in others. However, this approach can be useful in further determining the probability of error of the system with minimal computational requirements.
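The Pearson-system extension is not reproduced here, but a crude Monte Carlo baseline for the multiple-access interference density and the resulting error probability (against which a moment-based Pearson fit could be compared) might look like the following sketch; the number of users, spreading gain, noise level, and the simplified interference model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

K = 15          # interfering users (arbitrary)
N = 31          # spreading gain (arbitrary)
sigma = 0.3     # thermal-noise standard deviation (arbitrary)
trials = 200_000

# Crude model: each interferer contributes a random cross-correlation term,
# uniform in [-1/N, 1/N], scaled by a random data bit.  This is a
# simplification, not the exact CDMA interference model of the paper.
bits = rng.choice([-1.0, 1.0], size=(trials, K))
cross = rng.uniform(-1.0 / N, 1.0 / N, size=(trials, K))
interference = (bits * cross).sum(axis=1)

# Decision statistic for a transmitted +1 bit with Gaussian thermal noise.
decision = 1.0 + interference + rng.normal(0.0, sigma, trials)
p_error = np.mean(decision < 0.0)
print(f"Monte Carlo bit-error probability: {p_error:.4e}")

# Moments of the interference; a Pearson-system fit would match these.
m, v = interference.mean(), interference.var()
skew = ((interference - m) ** 3).mean() / v ** 1.5
kurt = ((interference - m) ** 4).mean() / v ** 2
print("interference moments (mean, var, skew, kurtosis):", m, v, skew, kurt)
```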
Abstract:
Airborne high resolution in situ measurements of a large set of trace gases including ozone (O3) and total water (H2O) in the upper troposphere and the lowermost stratosphere (UT/LMS) have been performed above Europe within the SPURT project. SPURT provides extensive data coverage of the UT/LMS in each season within the time period between November 2001 and July 2003. In the LMS a distinct spring maximum and autumn minimum are observed in O3, whereas its annual cycle in the UT is shifted 2–3 months later, towards the end of the year. The more variable H2O measurements reveal a maximum during summer and a minimum during autumn/winter with no phase shift between the two atmospheric compartments. For a comprehensive insight into trace gas composition and variability in the UT/LMS several statistical methods are applied using chemical, thermal and dynamical vertical coordinates. In particular, 2-dimensional probability distribution functions serve as a tool to transform localised aircraft data to a more comprehensive view of the probed atmospheric region. It appears that both trace gases, O3 and H2O, reveal the most compact arrangement and are best correlated in the view of potential vorticity (PV) and distance to the local tropopause, indicating an advanced mixing state on these surfaces. Thus, strong gradients of PV seem to act as a transport barrier both in the vertical and the horizontal direction. The alignment of trace gas isopleths reflects the existence of a year-round extra-tropical tropopause transition layer. The SPURT measurements reveal that this layer is mainly affected by stratospheric air during winter/spring and by tropospheric air during autumn/summer. Normalised mixing entropy values for O3 and H2O in the LMS appear to be maximal during spring and summer, respectively, indicating highest variability of these trace gases during the respective seasons.
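A minimal sketch of a 2-dimensional probability distribution function of the kind used here, binning one tracer against a tropopause-relative vertical coordinate; the synthetic values and bin choices are placeholders, not SPURT data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins: ozone mixing ratio (ppbv) and distance to the local
# tropopause (km); in the study the coordinates would be PV- or theta-based.
n = 50_000
dz = rng.normal(0.0, 2.0, n)                       # km relative to tropopause
o3 = 60.0 + 120.0 * np.clip(dz, 0, None) + rng.normal(0.0, 20.0, n)

# 2-D probability distribution function: joint histogram normalised to a density.
pdf, dz_edges, o3_edges = np.histogram2d(dz, o3, bins=[40, 60], density=True)

# Probability mass in each bin is density * bin area; check it sums to ~1.
cell_area = np.outer(np.diff(dz_edges), np.diff(o3_edges))
print("total probability:", (pdf * cell_area).sum())
```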
Abstract:
In this paper, the concept of available potential energy (APE) density is extended to a multicomponent Boussinesq fluid with a nonlinear equation of state. As shown by previous studies, the APE density is naturally interpreted as the work against buoyancy forces that a parcel needs to perform to move from a notional reference position at which its buoyancy vanishes to its actual position; because buoyancy can be defined relative to an arbitrary reference state, so can APE density. The concept of APE density is therefore best viewed as defining a class of locally defined energy quantities, each tied to a different reference state, rather than as a single energy variable. An important result, for which a new proof is given, is that the volume-integrated APE density always exceeds Lorenz's globally defined APE, except when the reference state coincides with Lorenz's adiabatically re-arranged reference state of minimum potential energy. A parcel reference position is systematically defined as a level of neutral buoyancy (LNB): depending on the nature of the fluid and on how the reference state is defined, a parcel may have one, none, or multiple LNBs within the fluid. Multiple LNBs are only possible for a multicomponent fluid whose density depends on pressure. When no LNB exists within the fluid, a parcel reference position is assigned at the minimum or maximum geopotential height. The class of APE densities thus defined admits local and global balance equations, which all exhibit a conversion with kinetic energy, a production term by boundary buoyancy fluxes, and a dissipation term by internal diffusive effects. Different reference states alter the partition between APE production and dissipation, but affect neither the net conversion between kinetic energy and APE nor the difference between APE production and dissipation. We argue that the possibility of constructing APE-like budgets based on reference states other than Lorenz's reference state is more important than has been previously assumed, and we illustrate the feasibility of doing so in the context of an idealised and a realistic oceanic example, using as reference states one with constant density and another one defined as the horizontal mean density field; in the latter case, the resulting APE density is found to be a reasonable approximation of the APE density constructed from Lorenz's reference state, while being computationally cheaper.
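For a single-component Boussinesq fluid with a fixed reference profile, the APE density of a parcel is simply the work done against buoyancy from its level of neutral buoyancy to its actual position. A minimal numerical sketch, assuming a linear reference stratification (an illustration of the general idea, not the multicomponent nonlinear-equation-of-state construction of the paper):

```python
import numpy as np

g, rho0, N2 = 9.81, 1025.0, 1.0e-5   # gravity, reference density, buoyancy freq^2

def rho_ref(z):
    """Illustrative reference density profile: linear stratification."""
    return rho0 - rho0 * N2 / g * z

def ape_density(rho_p, z_p, z_grid):
    """Work (per unit mass) against buoyancy from the parcel's level of
    neutral buoyancy (LNB) to its actual position z_p."""
    # LNB: height at which the reference density equals the parcel density.
    z_r = np.interp(rho_p, rho_ref(z_grid)[::-1], z_grid[::-1])
    # Buoyancy of the parcel evaluated along the path from z_r to z_p.
    z_path = np.linspace(z_r, z_p, 200)
    b = -g * (rho_p - rho_ref(z_path)) / rho0
    # Trapezoidal integral of -b along the path gives the APE density (J/kg).
    return -np.sum(0.5 * (b[1:] + b[:-1]) * np.diff(z_path))

# Parcel 50 m above its LNB: analytic value is N2 * dz**2 / 2.
z_grid = np.linspace(-500.0, 0.0, 1001)
z_p, dz = -100.0, 50.0
rho_p = rho_ref(z_p - dz)
print("numerical APE density:", ape_density(rho_p, z_p, z_grid))
print("analytic  N2*dz^2/2  :", N2 * dz ** 2 / 2)
```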
Abstract:
We consider evaluating the UK Monetary Policy Committee's inflation density forecasts using probability integral transform goodness-of-fit tests. These tests evaluate the whole forecast density. We also consider whether the probabilities assigned to inflation being in certain ranges are well calibrated, where the ranges are chosen to be those of particular relevance to the MPC, given its remit of maintaining inflation rates in a band around its target rate per annum. Finally, we discuss the decision-based approach to forecast evaluation in relation to the MPC forecasts.
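A minimal sketch of a probability-integral-transform evaluation of a sequence of density forecasts, with Gaussian forecast densities and a Kolmogorov–Smirnov uniformity test standing in for the MPC fan-chart densities and the specific goodness-of-fit tests considered in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic stand-in: T quarterly inflation outcomes and Gaussian forecast
# densities with placeholder forecast means and standard deviations.
T = 80
fc_mean = 2.0 + 0.3 * rng.normal(size=T)
fc_sd = np.full(T, 0.8)
outcomes = fc_mean + fc_sd * rng.normal(size=T)   # well calibrated by construction

# Probability integral transform: u_t = F_t(y_t) should be i.i.d. U(0,1)
# if the forecast densities are correct.
u = stats.norm.cdf(outcomes, loc=fc_mean, scale=fc_sd)

# Simple whole-density check: KS test of the PITs against uniformity.
ks_stat, p_value = stats.kstest(u, "uniform")
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")

# Calibration of an event probability, e.g. inflation falling in a band.
band_lo, band_hi = 1.5, 2.5                       # illustrative band
p_band = stats.norm.cdf(band_hi, fc_mean, fc_sd) - stats.norm.cdf(band_lo, fc_mean, fc_sd)
freq = np.mean((outcomes > band_lo) & (outcomes < band_hi))
print(f"mean forecast band probability {p_band.mean():.2f}, observed frequency {freq:.2f}")
```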
Abstract:
Techniques are proposed for evaluating forecast probabilities of events. The tools are especially useful when, as in the case of the Survey of Professional Forecasters (SPF) expected probability distributions of inflation, recourse cannot be made to the method of construction in the evaluation of the forecasts. The tests of efficiency and conditional efficiency are applied to the forecast probabilities of events of interest derived from the SPF distributions, and supplement a whole-density evaluation of the SPF distributions based on the probability integral transform approach.
Abstract:
The National Center for Atmospheric Research-Community Climate System Model (NCAR-CCSM) is used in a coupled atmosphere–ocean–sea-ice simulation of the Last Glacial Maximum (LGM, around 21,000 years ago) climate. In the tropics, the simulation shows a moderate cooling of 3 °C over land and 2 °C in the ocean in zonal average. This cooling is about 1 °C cooler than the CLIMAP sea surface temperatures (SSTs) but consistent with recent estimates of both land and sea surface temperature changes. Subtropical waters are cooled by 2–2.5 °C, also in agreement with recent estimates. The simulated oceanic thermohaline circulation at the LGM is not only shallower but also weaker than the modern with a migration of the deep-water formation site in the North Atlantic, as suggested by the paleoceanographic evidence. The simulated northward flow of Antarctic Bottom Water (AABW) is enhanced. These deep circulation changes are attributable to the increased surface density flux in the Southern Ocean caused by sea-ice expansion at the LGM. Both the Gulf Stream and the Kuroshio are intensified due to the overall increase of wind stress over the subtropical oceans. The intensified zonal wind stress and southward shift of its maximum in the Southern Ocean effectively enhances the transport of the Antarctic Circumpolar Current (ACC) by more than 50%. Simulated SSTs are lowered by up to 8 °C in the midlatitudes. Simulated conditions in the North Atlantic are warmer, with less sea-ice, than indicated by CLIMAP, again in agreement with more recent estimates. The increased meridional SST gradient at the LGM results in an enhanced Hadley Circulation and increased midlatitude storm track precipitation. The increased baroclinic storm activity also intensifies the meridional atmospheric heat transport. A sensitivity experiment shows that about half of the simulated tropical cooling at the LGM originates from reduced atmospheric concentrations of greenhouse gases.
Abstract:
We give an a posteriori analysis of a semidiscrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics, which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (2014, SIAM J. Math. Anal., 46, 3518–3539). This framework allows energy-type arguments to be applied to continuous functions. Since we advocate the use of discontinuous Galerkin methods, we make use of two families of reconstructions, one set of discrete reconstructions and one set of elliptic reconstructions, to apply the reduced relative entropy framework in this setting.