367 results for streamflow forecasts


Relevance:

10.00%

Publisher:

Abstract:

The variability of results from different automated methods of detection and tracking of extratropical cyclones is assessed in order to identify uncertainties related to the choice of method. Fifteen international teams applied their own algorithms to the same dataset—the Interim ECMWF (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) data for the period 1989–2009. This experiment is part of the community project Intercomparison of Mid-Latitude Storm Diagnostics (IMILAST; see www.proclim.ch/imilast/index.html). The spread of results for cyclone frequency, intensity, life cycle, and track location is presented to illustrate the impact of using different methods. Globally, methods agree well for geographical distribution in large oceanic regions, interannual variability of cyclone numbers, geographical patterns of strong trends, and distribution shape for many life cycle characteristics. In contrast, the largest disparities exist for the total numbers of cyclones, the detection of weak cyclones, and distribution in some densely populated regions. Consistency between methods is better for strong cyclones than for shallow ones. Two case studies of relatively large, intense cyclones reveal that the identification of the most intense part of the life cycle of these events is robust between methods, but considerable differences exist during the development and the dissolution phases.

Relevance:

10.00%

Publisher:

Abstract:

It is widely accepted that some of the most accurate Value-at-Risk (VaR) estimates are based on an appropriately specified GARCH process. But when the forecast horizon is greater than the frequency of the GARCH model, such predictions have typically required time-consuming simulations of the aggregated returns distributions. This paper shows that fast, quasi-analytic GARCH VaR calculations can be based on new formulae for the first four moments of aggregated GARCH returns. Our extensive empirical study compares the Cornish–Fisher expansion with the Johnson SU distribution for fitting distributions to analytic moments of normal and Student t, symmetric and asymmetric (GJR) GARCH processes to returns data on different financial assets, for the purpose of deriving accurate GARCH VaR forecasts over multiple horizons and significance levels.
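The quasi-analytic step described above can be illustrated with a minimal Cornish-Fisher sketch: given the first four moments of the aggregated return distribution, the expansion adjusts a normal quantile for skewness and excess kurtosis. This is a generic illustration of the expansion only, not the paper's formulae for the moments of aggregated GARCH returns.

```python
from statistics import NormalDist

def cornish_fisher_var(mu, sigma, skew, exkurt, alpha=0.01):
    """Approximate alpha-quantile VaR from the first four moments
    via the standard Cornish-Fisher expansion (illustrative sketch)."""
    z = NormalDist().inv_cdf(alpha)          # e.g. -2.326 for alpha = 1%
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * exkurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
    return -(mu + sigma * z_cf)              # loss reported as a positive number

# With zero skewness and zero excess kurtosis this collapses to normal VaR:
print(round(cornish_fisher_var(0.0, 0.02, 0.0, 0.0, alpha=0.05), 4))
```

Negative skewness or fat tails push the adjusted quantile further into the loss tail, which is precisely why a four-moment fit can outperform a normal approximation for multi-step GARCH returns.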

Relevance:

10.00%

Publisher:

Abstract:

This article examines the potential to improve numerical weather prediction (NWP) by estimating upper and lower bounds on predictability, revisiting the original study of Lorenz (1982) but applied to the most recent version of the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast system, for both the deterministic and ensemble prediction systems (EPS). These bounds are contrasted with those from an older version of the same NWP system to see how they have changed as the system has improved. The computations were performed for the earlier seasons of DJF 1985/1986 and JJA 1986 and the later seasons of DJF 2010/2011 and JJA 2011 using the 500-hPa geopotential height field. Results indicate that, for this field, we may be approaching the limit of deterministic forecasting, so that further improvements might only be obtained by improving the initial state. The results also show that predictability calculations with earlier versions of the model may overestimate potential forecast skill, which may be because earlier versions had insufficient internal variability and because recent versions represent the true atmospheric evolution more realistically. The same methodology is applied to the EPS to calculate upper and lower bounds on the predictability of the ensemble mean forecast, in order to explore how ensemble forecasting could extend the limits of the deterministic forecast. The results show that there is large potential to improve the ensemble predictions, but the increased predictability of the ensemble mean comes with a trade-off in information, as the forecasts become increasingly smoothed with time. From around the 10-day forecast time, the ensemble mean begins to converge towards climatology. Until this point, the ensemble mean is able to predict the main features of the large-scale flow accurately and with high consistency from one forecast cycle to the next. By the 15-day forecast time, the ensemble mean has lost information, with the anomaly of the flow strongly smoothed out. In contrast, the control forecast is much less consistent from run to run; it provides more detailed (unsmoothed) but less useful information.

Relevance:

10.00%

Publisher:

Abstract:

We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time-shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that in cases of practical interest our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
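To make "similarity that allows for small time-shifts" concrete, here is a naive brute-force distance that tries every shift up to a bound and keeps the best match. This sketch is only an illustrative stand-in; the paper's contribution is a graph-based algorithm that computes such a measure far faster.

```python
def shift_tolerant_distance(p, q, max_shift=2):
    """Best sum-of-squares match of profile q against profile p over all
    shifts up to max_shift (brute force; illustrative baseline only)."""
    n = len(p)
    best = float("inf")
    for s in range(-max_shift, max_shift + 1):
        # compare the overlapping portion of the two profiles at shift s
        d = sum((p[i] - q[i + s]) ** 2
                for i in range(max(0, -s), min(n, n - s)))
        best = min(best, d)
    return best

# a one-slot shift of the same load spike counts as a perfect match:
print(shift_tolerant_distance([0, 0, 5, 0], [0, 5, 0, 0], max_shift=1))
```

Allowing small shifts avoids penalising two households that run the same appliance half an hour apart, which a plain pointwise distance would treat as completely different profiles.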

Relevance:

10.00%

Publisher:

Abstract:

The aim of this article is to improve the communication of the probabilistic flood forecasts generated by hydrological ensemble prediction systems (HEPS) by understanding perceptions of different methods of visualizing probabilistic forecast information. This study focuses on interexpert communication and accounts for differences in visualization requirements based on the information content necessary for individual users. The perceptions of the expert group addressed in this study are important because they are the designers and primary users of existing HEPS. Nevertheless, they have sometimes resisted the release of uncertainty information to the general public because of doubts about whether it can be successfully communicated in ways that would be readily understood by nonexperts. In this article, we explore the strengths and weaknesses of existing HEPS visualization methods and thereby formulate some wider recommendations about best practice for HEPS visualization and communication. We suggest that specific training on probabilistic forecasting would foster the use of probabilistic forecasts in a wider range of applications. The results of a case study exercise showed that there is no overarching agreement between experts on how to display probabilistic forecasts and on what they consider the essential information that should accompany plots and diagrams. In this article, we propose a list of minimum properties that, if consistently displayed with probabilistic forecasts, would make the products more easily understandable. Copyright © 2012 John Wiley & Sons, Ltd.

Relevance:

10.00%

Publisher:

Abstract:

Real-time estimates of output gaps and inflation gaps differ from the values that are obtained using data available long after the event. Part of the problem is that the data on which the real-time estimates are based is subsequently revised. We show that vector-autoregressive models of data vintages provide forecasts of post-revision values of future observations and of already-released observations capable of improving estimates of output and inflation gaps in real time. Our findings indicate that annual revisions to output and inflation data are in part predictable based on their past vintages.
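The core idea, that revisions are partly predictable from earlier data vintages, can be sketched with a single regression. This is a deliberately simplified stand-in for the paper's vector autoregression over vintages: it regresses the eventual revision on the first release alone.

```python
def fit_revision_model(first_release, final):
    """Illustrative sketch: regress the eventual revision
    (final - first release) on the first release itself, then use the
    fit to predict post-revision values from real-time data."""
    rev = [f - r for r, f in zip(first_release, final)]
    n = len(rev)
    mx = sum(first_release) / n
    my = sum(rev) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(first_release, rev))
    sxx = sum((x - mx) ** 2 for x in first_release)
    b = sxy / sxx                      # least-squares slope
    a = my - b * mx                    # least-squares intercept
    return lambda x: x + a + b * x     # predicted post-revision value

# toy vintages in which final values are systematically 20% higher:
predict = fit_revision_model([1.0, 2.0, 3.0], [1.2, 2.4, 3.6])
print(round(predict(4.0), 2))
```

If revisions were pure noise, the fitted slope and intercept would be near zero and the real-time value would be left unchanged; a systematic fit is exactly what makes real-time gap estimates improvable.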

Relevance:

10.00%

Publisher:

Abstract:

We consider forecasting with factors, variables and both, modeling in-sample using Autometrics so all principal components and variables can be included jointly, while tackling multiple breaks by impulse-indicator saturation. A forecast-error taxonomy for factor models highlights the impacts of location shifts on forecast-error biases. Forecasting US GDP over 1-, 4- and 8-step horizons using the dataset from Stock and Watson (2009) updated to 2011:2 shows factor models are more useful for nowcasting or short-term forecasting, but their relative performance declines as the forecast horizon increases. Forecasts for GDP levels highlight the need for robust strategies, such as intercept corrections or differencing, when location shifts occur as in the recent financial crisis.
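A minimal diffusion-index sketch shows the factor-forecast half of the comparison: extract the first principal component of a predictor panel and regress the target on the lagged factor. The function below is a hypothetical simplification, without the Autometrics selection or impulse-indicator saturation described above.

```python
import numpy as np

def factor_forecast(X, y, h=1):
    """Sketch of a one-factor forecast: first principal component of the
    panel X as the common factor, then regress y[t+h] on the lagged
    factor f[t] with an intercept (illustrative only)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
    f = Xc @ vt[0]                                      # factor estimate
    A = np.column_stack([np.ones(len(f) - h), f[:-h]])  # intercept + f[t]
    coef, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
    return float(coef[0] + coef[1] * f[-1])             # h-step-ahead forecast

# two noiseless predictors sharing a linear trend; y follows the trend,
# so the one-step forecast should continue it (to about 10.0 here):
X = np.column_stack([np.arange(10.0), np.arange(10.0)])
y = np.arange(10.0)
print(factor_forecast(X, y, h=1))
```

Because the intercept and slope are re-estimated jointly, the sign ambiguity of the principal component does not affect the forecast.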

Relevance:

10.00%

Publisher:

Abstract:

Many macroeconomic series, such as U.S. real output growth, are sampled quarterly, although potentially useful predictors are often observed at a higher frequency. We look at whether a mixed-data sampling (MIDAS) approach can improve forecasts of output growth. The MIDAS specification used in the comparison uses a novel way of including an autoregressive term. We find that the use of monthly data on the current quarter leads to significant improvement in forecasting current and next quarter output growth, and that MIDAS is an effective way to exploit monthly data compared with alternative methods.
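MIDAS regressions typically collapse the high-frequency observations into a single regressor using a tightly parameterised lag polynomial; the exponential Almon form is a common choice. The sketch below shows that weighting step only, a generic illustration rather than the specific autoregressive MIDAS specification compared in the paper.

```python
import math

def exp_almon_weights(theta1, theta2, n):
    """Exponential Almon lag weights often used in MIDAS regressions:
    w_j proportional to exp(theta1*j + theta2*j**2), normalised to one."""
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(1, n + 1)]
    s = sum(raw)
    return [r / s for r in raw]

def midas_aggregate(monthly, theta1, theta2):
    """Collapse high-frequency (e.g. monthly) observations into one
    regressor for a low-frequency (e.g. quarterly) model,
    most recent observation first."""
    w = exp_almon_weights(theta1, theta2, len(monthly))
    return sum(wi * xi for wi, xi in zip(w, monthly))

# declining weights on older months, controlled by just two parameters:
print([round(x, 3) for x in exp_almon_weights(0.0, -0.1, 3)])
```

Using two parameters instead of one free coefficient per lag is what keeps MIDAS parsimonious even with many months of data per quarter.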

Relevance:

10.00%

Publisher:

Abstract:

The parameterisation of diabatic processes in numerical models is critical for the accuracy of weather forecasts and for climate projections. A novel approach to the evaluation of these processes in models is introduced in this contribution. The approach combines a suite of on-line tracer diagnostics with off-line trajectory calculations. Each tracer tracks cumulative changes in potential temperature associated with a particular parameterised diabatic process in the model. A comparison of tracers therefore allows the identification of the most active diabatic processes and their downstream impacts. The tracers are combined with trajectories computed using model-resolved winds, allowing the various diabatic contributions to be tracked back to their time and location of occurrence. We have used this approach to investigate diabatic processes within a simulated extratropical cyclone. We focus on the warm conveyor belt, in which the dominant diabatic contributions come from large-scale latent heating and parameterised convection. By contrasting two simulations, one with standard convection parameterisation settings and another with reduced parameterised convection, the effects of parameterised convection on the structure of the cyclone have been determined. Under reduced parameterised convection conditions, the large-scale latent heating is forced to release convective instability that would otherwise have been released by the convection parameterisation. Although the spatial distribution of precipitation depends on the details of the split between parameterised convection and large-scale latent heating, the total precipitation amount associated with the cyclone remains largely unchanged. For reduced parameterised convection, a more rapid and stronger latent heating episode takes place as air ascends within the warm conveyor belt.

Relevance:

10.00%

Publisher:

Abstract:

Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires the performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is outperformed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the "best" forecast model must be specifically tailored to its intended use.
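A 27 day persistence forecast and a point-by-point skill score are both simple enough to write down directly. The sketch below uses a synthetic, perfectly recurrent solar wind speed series (an idealisation; real series only partially recur) and measures skill as mean-square-error improvement over a climatological mean.

```python
def persistence_forecast(series, lag=27):
    """27 day persistence: today's forecast is the value `lag` days ago."""
    return series[:-lag]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def skill_score(obs, fcst, reference):
    """MSE-based skill: 1 = perfect, 0 = no better than the reference."""
    return 1.0 - mse(obs, fcst) / mse(obs, reference)

# synthetic solar wind speed with a perfect 27 day recurrence:
speed = [400 + 50 * ((d % 27) < 10) for d in range(108)]
obs = speed[27:]                      # verifying observations
fcst = persistence_forecast(speed)    # aligned 27-day-old values
clim = [sum(speed) / len(speed)] * len(obs)
print(skill_score(obs, fcst, clim))   # 1.0 for a purely recurrent series
```

At solar maximum the recurrence assumption breaks down, which is exactly when this score would drop below that of a physics-based model, as the abstract reports.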

Relevance:

10.00%

Publisher:

Abstract:

The quality control, validation and verification of the European Flood Alert System (EFAS) are described. EFAS is designed as a flood early warning system at the pan-European scale, to complement national systems and provide flood warnings more than 2 days before a flood. On average, 20–30 alerts per year are sent out to the EFAS partner network, which consists of 24 national hydrological authorities responsible for transnational river basins. Quality control of the system includes the evaluation of hits, misses and false alarms, showing that EFAS alerts are hits more than 50% of the time. Furthermore, the skill of both the meteorological and the hydrological forecasts is evaluated, and results are included here for a 10-year period. Next, end-user needs and feedback are systematically analysed. Suggested improvements, such as real-time river discharge updating, are currently being implemented.
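The hits/misses/false-alarms evaluation mentioned above reduces to a 2x2 contingency table. The sketch below computes the two most common summary scores from paired booleans (warned? / flooded?); it is a generic verification illustration, not the EFAS-specific procedure.

```python
def contingency_scores(warnings, events):
    """Hit rate (probability of detection) and false-alarm ratio from
    paired outcomes: was a warning issued, and did a flood occur?"""
    hits = sum(1 for w, e in zip(warnings, events) if w and e)
    misses = sum(1 for w, e in zip(warnings, events) if not w and e)
    false_alarms = sum(1 for w, e in zip(warnings, events) if w and not e)
    pod = hits / (hits + misses) if hits + misses else float("nan")
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan")
    return pod, far

# 2 hits, 1 miss, 1 false alarm, 1 correct rejection:
pod, far = contingency_scores([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
print(pod, far)
```

"More than 50% hits" in the abstract corresponds to a hit rate above 0.5 in this formulation.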

Relevance:

10.00%

Publisher:

Abstract:

Medium-range flood forecasting activities, driven by various meteorological forecasts ranging from high-resolution deterministic forecasts to low-spatial-resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper, possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied to circumvent the problem of autocorrelation dominating river discharge time series, and to create a benchmark model that enables decision makers to evaluate both forecast quality and model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
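A simple cost-loss evaluation of the kind referred to above can be written in a few lines: whenever the forecast probability exceeds a decision threshold we pay the protection cost, and every unprotected event incurs the loss. This is the textbook static cost-loss model, sketched here as an illustration rather than the paper's exact measure.

```python
def expected_expense(prob_forecasts, events, cost, loss, threshold):
    """Total expense under a cost-loss decision rule: pay `cost` whenever
    we act (forecast probability >= threshold), pay `loss` for any event
    we left unprotected. Lower expense = more useful forecasts."""
    expense = 0.0
    for p, e in zip(prob_forecasts, events):
        act = p >= threshold
        expense += cost if act else (loss if e else 0.0)
    return expense

probs  = [0.9, 0.1, 0.8, 0.2, 0.05]
events = [1,   0,   1,   0,   0]
# the classical decision rule is to act whenever p >= cost/loss:
print(expected_expense(probs, events, cost=1.0, loss=10.0, threshold=0.1))
```

Comparing this expense against the never-act and always-act baselines turns a verification exercise into a statement about economic value for a given user, which is the appeal of cost-loss measures for flood warning.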

Relevance:

10.00%

Publisher:

Abstract:

In this paper, the properties of a hydro-meteorological forecasting system for forecasting river flows are analysed using a probabilistic forecast convergence score (FCS). The focus on fixed-event forecasts provides a forecaster's perspective on system behaviour and adds an important dimension to the suite of forecast verification tools commonly used in this field. A low FCS indicates a more consistent forecast. The annual maximum FCS is shown to have decreased over the last 10 years. With lead time, the FCS of the ensemble forecast decreases, whereas those of the control and high-resolution forecasts increase. The FCS is influenced by lead time, threshold, and catchment size and location, and indicates that seasonality-based decision rules should be used when issuing flood warnings.
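One plausible way to score consistency for a fixed event is to count how often successive forecast cycles flip across the flood threshold. The sketch below implements that flip-flop rate; it is an illustrative stand-in, not necessarily the exact FCS definition used in the paper.

```python
def convergence_score(forecasts, threshold):
    """Fraction of consecutive forecast pairs (for the same fixed event)
    that jump across the flood threshold: 0 = fully consistent,
    1 = flip-flops every cycle. (Illustrative formulation only.)"""
    flips = sum((a >= threshold) != (b >= threshold)
                for a, b in zip(forecasts, forecasts[1:]))
    return flips / (len(forecasts) - 1)

# successive forecasts of peak discharge for one flood event:
print(convergence_score([80, 120, 95, 130, 140, 150], threshold=100))
```

Under such a score, a system whose later cycles settle on one side of the threshold (as in the tail of this example) is rewarded, matching the intuition that a low score means a forecaster can act with more confidence.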

Relevance:

10.00%

Publisher:

Abstract:

Flood prediction systems rely on good-quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data scored better than the DP data when evaluated against the forecasts for lead times of up to 4 days. However, this advantage did not carry over to the hydrological modelling, where the models gave similar scores for simulated runoff with the two datasets. The flood forecasting study showed that both datasets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.

Relevance:

10.00%

Publisher:

Abstract:

We present a case study using the TIGGE database for flood warning in the Upper Huai catchment (ca. 30 672 km²). TIGGE ensemble forecasts from six meteorological centres with 10-day lead time were extracted and disaggregated to drive the Xinanjiang model to forecast discharges for flood events in July–September 2008. The results demonstrated satisfactory flood forecasting skill, with clear signals of floods up to 10 days in advance. The forecasts occasionally showed discrepancies in both time and space. Forecast quality could potentially be improved by using temporal and spatial corrections of the forecasted precipitation.