921 results for Precipitation forecasting
Abstract:
Wider economic benefits resulting from extended geographical mobility are one argument for investment in high-speed rail. More specifically, the argument for high-speed trains in Sweden has been that they can help to further spatially extend labor market regions, which in turn has a positive effect on growth and development. The aim of this paper is to cartographically visualize the potential size of the labor markets in areas that could be affected by possible future high-speed trains. The visualization is based on forecasts of labor mobility with public transport made by the Swedish national transport forecasting tool, SAMPERS, for two alternative high-speed rail scenarios. The analysis, not surprisingly, suggests that high-speed trains have their largest impact in the area where the future high-speed rail tracks are planned to be built. This expected effect on local labor market regions could mean that regional economic development effects are also to be expected in this area. However, in general, the results from the SAMPERS forecasts indicate relatively small increases in local labor market potentials.
Abstract:
Gradual changes in world development have brought energy issues back into high profile. An ongoing challenge for countries around the world is to balance development gains against their effects on the environment. Energy management is a key factor in any sustainable development program. All aspects of development in agriculture, power generation, social welfare and industry in Iran are crucially related to energy and its revenue. Forecasting end-use natural gas consumption is an important factor for efficient system operation and a basis for planning decisions. In this thesis, particle swarm optimization (PSO) is used to forecast long-run natural gas consumption in Iran. Gas consumption data for the previous 34 years are used to predict consumption for the coming years. Four linear and nonlinear models are proposed, and six factors, namely Gross Domestic Product (GDP), population, National Income (NI), temperature, Consumer Price Index (CPI) and yearly natural gas (NG) demand, are investigated.
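As a rough illustration of the optimization step, the sketch below uses PSO to tune the coefficients of a simple linear demand model on synthetic data. The factor series, model form and PSO settings are illustrative assumptions, not those of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for explanatory factors (not real Iranian data)
X = rng.normal(size=(34, 3))                       # 34 years, 3 factors
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=34)    # "observed" gas demand

def mse(w):
    return np.mean((X @ w - y) ** 2)

# Plain PSO: particles move toward their own best and the swarm's best position
n_particles, n_iter = 30, 200
pos = rng.normal(size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(w) for w in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(w) for w in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)  # should approach [2.0, -1.0, 0.5]
```

The same loop would apply unchanged to a nonlinear model: only `mse` and the parameter vector change.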
Abstract:
Jakarta is vulnerable to flooding caused mainly by prolonged and heavy rainfall, and robust hydrological modeling is therefore called for. Good-quality spatial precipitation data are desired so that a good hydrological model can be achieved. Two types of rainfall sources are available: satellite and gauge station observations. At-site rainfall is considered a reliable and accurate source of rainfall; however, the limited number of stations makes spatial interpolation unappealing. On the other hand, gridded satellite rainfall nowadays has high spatial resolution and improved accuracy, but is still relatively less accurate than its gauge counterpart. To achieve a better precipitation data set, the study proposes the cokriging method, a blending algorithm, to yield a blended satellite-gauge gridded rainfall product at approximately 10-km resolution. The Global Satellite Mapping of Precipitation (GSMaP, 0.1°×0.1°) and daily rainfall observations from gauge stations are used. The blended product is compared with the satellite data by a cross-validation method. The newly yielded blended product is then used to re-calibrate the hydrological model. Several scenarios are simulated by hydrological models calibrated with gauge observations alone and with the blended product. The performance of the two calibrated hydrological models is then assessed and compared based on simulated and observed runoff.
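The blending idea can be illustrated with a much simpler stand-in for cokriging: correcting the satellite field with inverse-distance-weighted gauge-satellite residuals, scored by leave-one-out cross-validation. All data below are synthetic and the method is only a sketch of the study's approach:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gauges and a "satellite" field with bias (illustrative only)
n = 40
xy = rng.uniform(0, 100, size=(n, 2))               # gauge coordinates (km)
truth = 10 + 5 * np.sin(xy[:, 0] / 20)               # true daily rainfall (mm)
gauge = truth + rng.normal(scale=0.3, size=n)        # gauge obs: small error
sat = truth + 3.0 + rng.normal(scale=1.5, size=n)    # satellite: biased, noisier

def blend_at(i):
    """Correct the satellite value at gauge i using the residuals
    (gauge - satellite) at all *other* gauges, weighted by inverse
    squared distance. A crude stand-in for the cokriging step."""
    mask = np.arange(n) != i
    d = np.linalg.norm(xy[mask] - xy[i], axis=1)
    w = 1.0 / (d + 1e-6) ** 2
    w /= w.sum()
    resid = gauge[mask] - sat[mask]
    return sat[i] + w @ resid

blended = np.array([blend_at(i) for i in range(n)])
rmse_sat = np.sqrt(np.mean((sat - gauge) ** 2))
rmse_blend = np.sqrt(np.mean((blended - gauge) ** 2))
print(rmse_sat, rmse_blend)   # blended should sit closer to the gauges
```

Cokriging would additionally exploit the spatial covariance of both fields; here only the residual bias is interpolated.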
Abstract:
Hydrological loss is a vital component in many hydrological models, which are used in forecasting floods and evaluating water resources for both surface and subsurface flows. Due to the complex and random nature of the rainfall-runoff process, hydrological losses are not yet fully understood. Consequently, practitioners often use representative loss values for design applications such as rainfall-runoff modelling, which has led to inaccurate quantification of water quantities in the resulting applications. The existing hydrological loss models must be revisited, and modellers should be encouraged to utilise other available data sets. This study is based on three unregulated catchments situated in the Mt. Lofty Ranges of South Australia (SA). The paper focuses on conceptual models for initial loss (IL), continuing loss (CL) and proportional loss (PL) in relation to rainfall characteristics (total rainfall (TR) and storm duration (D)) and antecedent wetness (AW) conditions. The paper introduces two methods that can be implemented to estimate IL as a function of TR, D and AW. The IL distribution patterns and parameters for the study catchments are determined using multivariate analysis and descriptive statistics. The possibility of generalising the methods, and the limitations of doing so, are also discussed. This study will yield improvements to existing loss models and will encourage practitioners to utilise multiple data sets to estimate losses, instead of using hypothetical or representative values to generalise real situations.
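One simple way to estimate IL as a function of TR, D and AW, in the spirit of the multivariate analysis described, is an ordinary least-squares fit. The linear functional form and the synthetic storm data below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic storm events (TR, D, AW are the paper's predictors; values invented)
n = 60
TR = rng.uniform(10, 80, n)     # total rainfall (mm)
D = rng.uniform(1, 24, n)       # storm duration (h)
AW = rng.uniform(0, 1, n)       # antecedent wetness index
IL = 2 + 0.15 * TR + 0.3 * D - 8 * AW + rng.normal(scale=1.0, size=n)

# Multivariate linear fit IL ~ a + b*TR + c*D + d*AW
A = np.column_stack([np.ones(n), TR, D, AW])
coef, *_ = np.linalg.lstsq(A, IL, rcond=None)
print(coef)  # roughly [2, 0.15, 0.3, -8]: wetter antecedent conditions lower IL
```

A real application would compare such a fit against the observed IL distribution per catchment, as the paper does with descriptive statistics.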
Abstract:
As a highly urbanized and flood-prone region, Flanders has experienced multiple floods causing significant damage in the past. In response to the floods of 1998 and 2002, the Flemish Environment Agency, responsible for managing 1,400 km of unnavigable rivers, started setting up a real-time flood forecasting system in 2003. Currently the system covers almost 2,000 km of unnavigable rivers, for which flood forecasts are accessible online (www.waterinfo.be). The forecasting system comprises more than 1,000 hydrologic and 50 hydrodynamic models, which are supplied with radar rainfall, rainfall forecasts and on-site observations. Forecasts for the next 2 days are generated hourly, while 10-day forecasts are generated twice a day. Additionally, twice-daily simulations based on percentile rainfall forecasts (from EPS predictions) provide uncertainty bands for the latter. Each flood forecast thus uses the most recent rainfall predictions and observed parameters, while uncertainty on the longer term is taken into account. The flood forecasting system produces high-resolution dynamic flood maps and graphs at about 200 river gauges and more than 3,000 forecast points. A customized emergency response system generates phone calls and text messages to a team of hydrologists, initiating a pro-active response to prevent upcoming flood damage. The flood forecasting system of the Flemish Environment Agency is constantly evolving and has proven to be an indispensable tool in flood crisis management. This was clearly the case during the November 2010 floods, when the agency issued a press release 2 days in advance, allowing water managers, emergency services and civilians to take measures.
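The uncertainty bands derived from EPS percentile forecasts can be sketched as follows; the ensemble size, rainfall distribution and percentile choices are hypothetical, not the agency's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# 50 hypothetical EPS ensemble members, each a 10-day rainfall forecast (mm/day)
members = rng.gamma(shape=2.0, scale=3.0, size=(50, 10))

# Percentile forecasts drive separate model runs; together they bound the outlook
p10, p50, p90 = np.percentile(members, [10, 50, 90], axis=0)
print(p50)  # median scenario, one value per forecast day
```

Running the hydrodynamic models on, say, the 10th, 50th and 90th percentile rainfall then yields low, central and high discharge scenarios for the 10-day horizon.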
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad-hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when it is a preferred access arrangement for the researcher. By decoupling data model and data persistence, it is much easier to interchangeably use for instance relational databases to provide stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text based formats are usually inadequate. A schema derived from CF conventions has been designed to efficiently handle time series for SWIFT.
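The decoupling of the configuration data model from its on-disk persistence might look like the following sketch; the class and field names are illustrative, not SWIFT's actual API:

```python
import io
import json

class ModelConfig:
    """In-memory data model, independent of any file format."""
    def __init__(self, subareas):
        self.subareas = subareas     # e.g. {"subarea_1": {"area_km2": 12.5}}

class JsonStore:
    """Preferred research backend: standard JSON."""
    def save(self, cfg, fp):
        json.dump(cfg.subareas, fp)
    def load(self, fp):
        return ModelConfig(json.load(fp))

class TsvStore:
    """Legacy tab-separated backend, kept as an alternative."""
    def save(self, cfg, fp):
        for name, props in cfg.subareas.items():
            fp.write(f"{name}\t{props['area_km2']}\n")
    def load(self, fp):
        subareas = {}
        for line in fp:
            name, area = line.rstrip("\n").split("\t")
            subareas[name] = {"area_km2": float(area)}
        return ModelConfig(subareas)

cfg = ModelConfig({"subarea_1": {"area_km2": 12.5}})
for store in (JsonStore(), TsvStore()):
    buf = io.StringIO()
    store.save(cfg, buf)
    buf.seek(0)
    assert store.load(buf).subareas == cfg.subareas  # round-trips either way
```

A relational-database backend for operational provenance would simply be a third store implementing the same `save`/`load` pair.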
Abstract:
Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains in forecasting accuracy and estimation uncertainty from two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed in the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed in the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length. Criteria selecting lag and rank simultaneously have superior performance in this case. Second, this translates into superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
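Information-criterion lag selection for a VAR (without the SCCF and cointegration machinery of the paper) can be sketched as follows, using equation-by-equation OLS on a simulated bivariate VAR(1); coefficients, sample size and criteria definitions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a stationary bivariate VAR(1): y_t = A y_{t-1} + e_t
A = np.array([[0.5, 0.1], [0.0, 0.4]])
T = 400
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=2)

def var_ic(y, p):
    """Fit VAR(p) by OLS and return (AIC, BIC) based on log det of the
    residual covariance plus a penalty on the parameter count."""
    T, k = y.shape
    Y = y[p:]
    lags = [y[p - i:-i] for i in range(1, p + 1)]        # y_{t-1}, ..., y_{t-p}
    X = np.column_stack([np.ones(len(Y))] + lags)
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    n = len(Y)
    sigma = resid.T @ resid / n
    ll_term = np.log(np.linalg.det(sigma))
    n_par = k * (k * p + 1)
    return ll_term + 2 * n_par / n, ll_term + np.log(n) * n_par / n

aics, bics = zip(*(var_ic(y, p) for p in range(1, 5)))
print(np.argmin(aics) + 1, np.argmin(bics) + 1)  # selected lag lengths
```

On this data-generating process BIC should recover the true lag of 1; the paper's point is that such criteria behave differently once SCCF and rank restrictions enter the selection problem.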
Abstract:
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, and studying monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been mainly adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of these two classes of models to test whether no-arbitrage affects forecasting. We construct cross-section (allowing arbitrages) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of the no-arbitrage versions is due to better identification of the bond risk premium.
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., it has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
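A stylized version of the bias-corrected average forecast and the zero-mean test might look like this; the panel dimensions, bias distribution and in-sample bias estimation are illustrative assumptions, not the paper's exact feasible procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

# N forecasters, T periods: each forecast = truth + individual bias + noise
N, T = 40, 200
truth = rng.normal(size=T)
bias = rng.normal(loc=1.0, scale=0.5, size=(N, 1))     # heterogeneous biases
fcst = truth + bias + rng.normal(scale=1.0, size=(N, T))

# Bias-corrected average forecast: estimate each forecaster's bias from past
# realizations, subtract it, then average across the panel
est_bias = (fcst - truth).mean(axis=1, keepdims=True)
bcaf = (fcst - est_bias).mean(axis=0)

naive = fcst.mean(axis=0)              # plain (uncorrected) average forecast
mse_naive = np.mean((naive - truth) ** 2)
mse_bcaf = np.mean((bcaf - truth) ** 2)
print(mse_naive, mse_bcaf)             # bias correction should reduce MSE

# Zero-mean test on the average bias (simple t-statistic across time)
avg_err = (fcst - truth).mean(axis=0)
t_stat = avg_err.mean() / (avg_err.std(ddof=1) / np.sqrt(T))
print(t_stat)  # a large |t| rejects the hypothesis of zero average bias
```

With a nonzero common bias, as here, the test rejects and the correction pays off; with unbiased forecasters both averages would be asymptotically equivalent.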
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics, we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
Abstract:
This work addresses the new methodology of collaborative planning, forecasting and replenishment, known by the acronym CPFR. It covers the main gaps in traditional methodologies, the business opportunities generated, the business model proposed by CPFR and its implementation stages, the implications for the organization, the main implementation problems, and the integration methodologies and tools present in companies that use CPFR. It points out opportunities generated by CPFR and the integration characteristics present in companies that already use the concept.
Abstract:
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.
Abstract:
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and the growth rates of U.S. macroeconomic aggregates respectively, show the usefulness of the model selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.