880 results for Forecasting and replenishment (CPFR)
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, makes the noise more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random-overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that improve on the plane-parallel, maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is sufficiently small.
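The conditional random error McICA introduces can be illustrated with a toy calculation (all quantities hypothetical, not the MetUM scheme): a full independent-column average is compared with an estimate that pairs each spectral interval with a single randomly sampled cloudy subcolumn.

```python
import numpy as np

# Toy illustration of the McICA sampling idea (all numbers hypothetical).
# ICA: average a radiative quantity over every cloudy subcolumn for every
# spectral (g) point. McICA: pair each g-point with ONE randomly sampled
# subcolumn -- unbiased, but carrying conditional random noise.
rng = np.random.default_rng(3)
n_sub, n_g = 100, 140                    # subcolumns, spectral intervals
tau = rng.lognormal(0.0, 1.0, n_sub)     # toy per-subcolumn optical depths
k = rng.uniform(0.5, 1.5, n_g)           # toy per-g-point absorption scaling

# Full ICA reference: mean transmittance over all (g-point, subcolumn) pairs.
ica = np.exp(-np.outer(k, tau)).mean()

# McICA estimate: one random subcolumn per g-point.
sample = rng.integers(0, n_sub, n_g)
mcica = np.mean(np.exp(-k * tau[sample]))
```

Averaged over many realizations the McICA estimate converges to the ICA value; any single realization carries the random error whose impact on NWP forecasts the abstract discusses.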
Abstract:
The statistical properties and skill in predictions of objectively identified and tracked cyclonic features (frontal waves and cyclones) are examined in MOGREPS-15, the global 15-day version of the Met Office Global and Regional Ensemble Prediction System (MOGREPS). The number density of cyclonic features is found to decline with increasing lead time, with analysis fields containing weak features that are not sustained past the first day of the forecast. This loss of cyclonic features is associated with a decline in area-averaged enstrophy with increasing lead time. Both feature number density and area-averaged enstrophy saturate by around 7 days into the forecast. It is found that the feature number density and area-averaged enstrophy of forecasts produced using model versions that include stochastic energy backscatter saturate at higher values than forecasts produced without stochastic physics. The ability of MOGREPS-15 to predict the locations of cyclonic features of different strengths is evaluated at different spatial scales by examining the Brier Skill (relative to the analysis climatology) of strike probability forecasts: the probability that a cyclonic feature center is located within a specified radius. The radius at which skill is maximised increases with lead time, from 650 km at 12 h to 950 km at 7 days. The skill is greatest for the most intense features. Forecast skill remains above zero at these scales out to 14 days for the most intense cyclonic features, but only out to 8 days when all features are included irrespective of intensity.
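The Brier skill measure referred to above can be sketched in a few lines (illustrative probabilities and outcomes only; the climatological base rate is a stand-in):

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between probability forecasts and binary outcomes."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

def brier_skill_score(probs, outcomes, base_rate):
    """Skill relative to a constant climatological forecast; > 0 beats climatology."""
    bs = brier_score(probs, outcomes)
    bs_ref = brier_score(np.full(len(outcomes), base_rate), outcomes)
    return 1.0 - bs / bs_ref

# Hypothetical strike-probability forecasts: probability that a cyclonic
# feature center lies within a chosen radius; outcomes are 1 if it did.
probs = [0.9, 0.7, 0.2, 0.1, 0.8]
outcomes = [1, 1, 0, 0, 1]
skill = brier_skill_score(probs, outcomes, base_rate=0.5)   # 0.848
```

Evaluating the skill while varying the radius threshold is what produces the scale dependence (650 km to 950 km) described in the abstract.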
Abstract:
The effects of data uncertainty on real-time decision-making can be reduced by predicting early revisions to US GDP growth. We show that survey forecasts efficiently anticipate the first-revised estimate of GDP, but that forecasting models incorporating monthly economic indicators and daily equity returns provide superior forecasts of the second-revised estimate. We consider the implications of these findings for analyses of the impact of surprises in GDP revision announcements on equity markets, and for analyses of the impact of anticipated future revisions on announcement-day returns.
Abstract:
We consider the extent to which long-horizon survey forecasts of consumption, investment and output growth are consistent with theory-based steady-state values, and whether imposing these restrictions on long-horizon forecasts will enhance their accuracy. The restrictions we impose are consistent with a two-sector model in which the variables grow at different rates in steady state. The restrictions are imposed by exponential-tilting of simple auxiliary forecast densities. We show that imposing the consumption-output restriction yields modest improvements in the long-horizon output growth forecasts, and larger improvements in the forecasts of the cointegrating combination of consumption and output: the transformation of the data on which accuracy is assessed plays an important role.
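A minimal sketch of exponential tilting, assuming a simple one-moment restriction (the 2% steady-state growth target and the draws are hypothetical): draws from an auxiliary forecast density are reweighted by exp(λx), with λ solved so the weighted mean hits the target.

```python
import numpy as np
from scipy.optimize import brentq

def tilt_weights(draws, target_mean):
    """Reweight draws by exp(lam * x), choosing lam so the weighted mean
    equals target_mean (a one-moment exponential tilt)."""
    def moment_gap(lam):
        w = np.exp(lam * draws)
        w /= w.sum()
        return w @ draws - target_mean
    lam = brentq(moment_gap, -5.0, 5.0)   # bracket chosen for this toy scale
    w = np.exp(lam * draws)
    return w / w.sum()

# Toy auxiliary density for long-horizon output growth, tilted toward a
# hypothetical 2% steady-state growth restriction.
rng = np.random.default_rng(2)
draws = rng.normal(2.5, 1.0, 5000)
w = tilt_weights(draws, target_mean=2.0)
```

Among all reweightings satisfying the moment restriction, the exponential tilt is the one closest to the original equal weights in Kullback-Leibler divergence, which is why it is the standard way to impose such restrictions on forecast densities.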
Abstract:
Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.
Investigating the relationship between Eurasian snow and the Arctic Oscillation with data and models
Abstract:
Recent research suggests Eurasian snow-covered area (SCA) influences the Arctic Oscillation (AO) via the polar vortex. This could be important for Northern Hemisphere winter season forecasting. A fairly strong negative correlation between October SCA and the AO, based on both monthly and daily observational data, has been noted in the literature. While reproducing these previous links when using the same data, we find no further evidence of the link when using an independent satellite data source, or when using a climate model.
Abstract:
In this paper we assess opinion polls, prediction markets, expert opinion and statistical modelling over a large number of US elections in order to determine which perform better in terms of forecasting outcomes. In line with existing literature, we bias-correct opinion polls. We consider accuracy, bias and precision over different time horizons before an election, and we conclude that prediction markets appear to provide the most precise forecasts and are similar in terms of bias to opinion polls. We find that our statistical model struggles to provide competitive forecasts, while expert opinion appears to be of value. Finally we note that the forecast horizon matters; whereas prediction market forecasts tend to improve the nearer an election is, opinion polls appear to perform worse, while expert opinion performs consistently throughout. We thus contribute to the growing literature comparing election forecasts of polls and prediction markets.
Abstract:
Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be taken care of by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform equally or better than non-linear models with or without the frequently used wind-to-power transformation.
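A minimal sketch of the power-to-wind idea under stated assumptions (toy power curve, synthetic data; not the authors' implementation): observed power is mapped back to an equivalent wind speed through the inverse manufacturer curve, and a linear model is then fitted with a Tobit-type likelihood that treats values at cut-in and nominal power as censored.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize
from scipy.stats import norm

# Toy manufacturer power curve (hypothetical turbine): wind speed (m/s) -> kW.
curve_ws = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])
curve_pw = np.array([0.0, 100.0, 400.0, 900.0, 1400.0, 1500.0])

# Inverse power curve: observed power -> equivalent wind speed, clamped
# to the curve's range so censored values sit exactly on the bounds.
inv_curve = interp1d(curve_pw, curve_ws, bounds_error=False,
                     fill_value=(curve_ws[0], curve_ws[-1]))

def tobit_nll(theta, X, y, lower, upper):
    """Negative log-likelihood of a linear model censored at both ends."""
    beta, log_sigma = theta[:-1], theta[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    ll = np.where(y <= lower, norm.logcdf((lower - mu) / sigma),
         np.where(y >= upper, norm.logsf((upper - mu) / sigma),
                  norm.logpdf((y - mu) / sigma) - log_sigma))
    return -ll.sum()

# Synthetic data: an NWP wind forecast as predictor; observed power is the
# true wind passed through the curve (so it saturates at zero and nominal).
rng = np.random.default_rng(0)
nwp_wind = rng.uniform(2.0, 14.0, 200)
true_wind = 0.9 * nwp_wind + 0.5 + rng.normal(0.0, 0.8, 200)
obs_power = np.interp(true_wind, curve_ws, curve_pw)
y = inv_curve(obs_power)                       # power-to-wind transform
X = np.column_stack([np.ones_like(nwp_wind), nwp_wind])

res = minimize(tobit_nll, x0=[0.0, 1.0, 0.0], method="Nelder-Mead",
               args=(X, y, curve_ws[0], curve_ws[-1]))
intercept, slope = res.x[0], res.x[1]          # slope should be near 0.9
```

Because the transform linearizes the wind-power relationship, a censored linear fit like this can stand in for the more complex non-linear non-parametric models mentioned above.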
Abstract:
There are a number of factors that lead to non-linearity between precipitation anomalies and flood hazard; this non-linearity is a pertinent issue for applications that use a precipitation forecast as a proxy for imminent flood hazard. We assessed the degree of this non-linearity for the first time using a recently developed global-scale hydrological model driven by the ERA-Interim Land precipitation reanalysis (1980–2010). We introduced new indices to assess large-scale flood hazard, or floodiness, and quantified the link between monthly precipitation, river discharge and floodiness anomalies at the global and regional scales. The results show that monthly floodiness is not well correlated with precipitation, therefore demonstrating the value of hydrometeorological systems for providing floodiness forecasts for decision-makers. A method is described for forecasting floodiness using the Global Flood Awareness System, building a climatology of regional floodiness from which to forecast floodiness anomalies out to two weeks.
Abstract:
This paper characterizes the dynamics of jumps and analyzes their importance for volatility forecasting. Using high-frequency data on four prominent energy markets, we perform a model-free decomposition of realized variance into its continuous and discontinuous components. We find strong evidence of jumps in energy markets between 2007 and 2012. We then investigate the importance of jumps for volatility forecasting. To this end, we estimate and analyze the predictive ability of several Heterogeneous Autoregressive (HAR) models that explicitly capture the dynamics of jumps. Conducting extensive in-sample and out-of-sample analyses, we establish that explicitly modeling jumps does not significantly improve forecast accuracy. Our results are broadly consistent across our four energy markets, forecasting horizons, and loss functions.
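The model-free decomposition can be sketched as follows, using realized variance and jump-robust bipower variation (toy returns, not the paper's data):

```python
import numpy as np

def realized_variance(r):
    """Sum of squared intraday returns: total (continuous + jump) variation."""
    return float(np.sum(r ** 2))

def bipower_variation(r):
    """(pi/2) * sum |r_i||r_{i-1}|: jump-robust estimate of the continuous part."""
    return float((np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1])))

def jump_component(r):
    """Non-negative difference RV - BV: the model-free jump estimate."""
    rv, bv = realized_variance(r), bipower_variation(r)
    return max(rv - bv, 0.0), rv, bv

# Toy 1-minute returns for one trading day, with one injected price jump.
rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.001, 390)
r[200] += 0.02
jump, rv, bv = jump_component(r)
```

Feeding the continuous and jump series separately into daily, weekly, and monthly lags is what turns a plain HAR model into the jump-augmented variants the paper evaluates.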
Abstract:
Iso-score curve graphs (iSCGs) and the mathematical relationships between Scoring Parameters (SPs) and Forecasting Parameters (FPs) can be applied to the Economic Scoring Formulas (ESFs) used in tendering to distribute the score among bidders in the economic part of a proposal. Each contracting authority must set an ESF when publishing tender specifications, and the strategy of each bidder will differ depending on the ESF selected and its weight in the overall proposal scoring. The various mathematical relationships and density distributions that describe the main SPs and FPs, together with the representation of tendering data by means of iSCGs, enable the generation of two new types of graphs that can be very useful for bidders who want to be more competitive: the scoring and position probability graphs.
Abstract:
Recent developments in the area of Bid Tender Forecasting have enabled bidders to implement new types of easy-to-use tools for increasing their chances of winning contracts. Although these new tools (such as iso-Score Curve Graphs, Scoring Probability Graphs, and Position Probability Graphs) are designed for bidders in capped tendering (tenders with an upper price limit), some of their principles can also be applied by a Contracting Authority to detect which bidders do not follow a standard pattern, that is, their bids are extremely high or low. Since a collusive bid generally needs to be sufficiently high or low to make an impact on the bid distribution, any person in charge of supervising capped tenders can be alerted to any bidder that might be involved in a cartel after identifying the same abnormal behavior in a series of tenders through simple calculations and a new type of graph.
Abstract:
We assess Indian summer monsoon seasonal forecasts in GloSea5-GC2, the Met Office fully coupled subseasonal to seasonal ensemble forecasting system. Using several metrics, GloSea5-GC2 shows similar skill to other state-of-the-art forecast systems. The prediction skill of the large-scale South Asian monsoon circulation is higher than that of Indian monsoon rainfall. Using multiple linear regression analysis we evaluate relationships between Indian monsoon rainfall and five possible drivers of monsoon interannual variability. Over the time period studied (1992-2011), the El Nino-Southern Oscillation (ENSO) and the Indian Ocean dipole (IOD) are the most important of these drivers in both observations and GloSea5-GC2. Our analysis indicates that ENSO and its teleconnection with the Indian rainfall are well represented in GloSea5-GC2. However, the relationship between the IOD and Indian rainfall anomalies is too weak in GloSea5-GC2, which may be limiting the prediction skill of the local monsoon circulation and Indian rainfall. We show that this weak relationship likely results from a coupled mean state bias that limits the impact of anomalous wind forcing on SST variability, resulting in erroneous IOD SST anomalies. Known difficulties in representing convective precipitation over India may also play a role. Since Indian rainfall responds weakly to the IOD, it responds more consistently to ENSO than in observations. Our assessment identifies specific coupled biases that are likely limiting GloSea5-GC2 prediction skill, providing targets for model improvement.
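Generically, the driver analysis is a multiple linear regression of seasonal rainfall anomalies on candidate driver indices; a sketch with synthetic indices standing in for ENSO, the IOD, and three other drivers:

```python
import numpy as np

rng = np.random.default_rng(4)
n_years = 20                                   # a 1992-2011-style sample
# Synthetic standardized indices standing in for ENSO, IOD, and three others.
drivers = rng.normal(size=(n_years, 5))
# Synthetic rainfall anomaly: strong ENSO signal, weaker IOD signal, noise.
rain = (-0.8 * drivers[:, 0] + 0.4 * drivers[:, 1]
        + rng.normal(0.0, 0.5, n_years))

X = np.column_stack([np.ones(n_years), drivers])   # intercept + five drivers
coefs, *_ = np.linalg.lstsq(X, rain, rcond=None)
enso_coef, iod_coef = coefs[1], coefs[2]
```

Comparing the fitted coefficients between observations and hindcasts is how a too-weak IOD-rainfall relationship, like the one reported for GloSea5-GC2, would show up.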
Abstract:
TIGGE was a major component of the THORPEX (The Observing System Research and Predictability Experiment) research program, whose aim is to accelerate improvements in forecasting high-impact weather. By providing ensemble prediction data from leading operational forecast centers, TIGGE has enhanced collaboration between the research and operational meteorological communities and enabled research studies on a wide range of topics. The paper covers the objective evaluation of the TIGGE data. For a range of forecast parameters, it is shown to be beneficial to combine ensembles from several data providers in a Multi-model Grand Ensemble. Alternative methods to correct systematic errors, including the use of reforecast data, are also discussed. TIGGE data have been used for a range of research studies on predictability and dynamical processes. Tropical cyclones are the most destructive weather systems in the world and are a focus of multi-model ensemble research. Their extra-tropical transition also has a major impact on the skill of mid-latitude forecasts. We also review how TIGGE has added to our understanding of the dynamics of extra-tropical cyclones and storm tracks. Although TIGGE is a research project, it has proved invaluable for the development of products for future operational forecasting. Examples include the forecasting of tropical cyclone tracks, heavy rainfall, and strong winds, and flood prediction through coupling hydrological models to ensembles. Finally, the paper considers the legacy of TIGGE. We discuss the priorities and key issues in predictability and ensemble forecasting, including the new opportunities of convective-scale ensembles, links with ensemble data assimilation methods, and extension of the range of useful forecast skill.
Abstract:
A simple polynya flux model driven by standard atmospheric forcing is used to investigate the ice formation that took place during an exceptionally strong and consistent western New Siberian (WNS) polynya event in 2004 in the Laptev Sea. Whether formation rates are high enough to erode the stratification of the water column beneath is examined by adding the brine released during the 2004 polynya event to the average winter density stratification of the water body, preconditioned by summers with cyclonic atmospheric forcing (a comparatively weakly stratified water column). Beforehand, the model performance is tested through a simulation of a well-documented event in April 2008. Neglecting the replenishment of water masses by advection into the polynya area, we find the probability of density-driven convection reaching the bottom to be low. Our findings can be explained by the distinct vertical density gradient that characterizes the area of the WNS polynya and the apparent lack of extreme events in the eastern Laptev Sea. The simple approach is expected to be sufficiently rigorous, since the simulated event is exceptionally strong and consistent, the ice production and salt rejection rates are likely to be overestimated, and the amount of salt rejected is distributed over a comparatively weakly stratified water column. We conclude that the observed erosion of the halocline and formation of vertically mixed water layers during a WNS polynya event is predominantly related to wind- and tidally driven turbulent mixing processes.