891 results for wave forecasting
Abstract:
We study the quantum dynamics of a two-mode Bose-Einstein condensate in a time-dependent symmetric double-well potential using analytical and numerical methods. The effects of internal degrees of freedom on the visibility of interference fringes during a stage of ballistic expansion are investigated by varying the particle number, the sign and strength of the nonlinear interaction, and the tunneling coupling. Expressions for the phase resolution are derived and the possible enhancement due to squeezing is discussed. In particular, the role of the superfluid-Mott insulator crossover and its analog for attractive interactions is recognized.
Abstract:
Wider economic benefits resulting from extended geographical mobility are one argument for investments in high-speed rail. More specifically, the argument for high-speed trains in Sweden has been that they can help to further extend labor market regions spatially, which in turn has a positive effect on growth and development. The aim of this paper is to cartographically visualize the potential size of the labor markets in areas that could be affected by possible future high-speed trains. The visualization is based on forecasts of labor mobility with public transport made by the Swedish national transport forecasting tool, SAMPERS, for two alternative high-speed rail scenarios. The analysis, not surprisingly, suggests that the largest impact of high-speed trains occurs in the area where the future high-speed rail tracks are planned to be built. This expected effect on local labor market regions could mean that regional economic development effects are also to be expected in this area. However, the SAMPERS forecasts in general indicate relatively small increases in local labor market potentials.
Abstract:
Gradual changes in world development have brought energy issues back into high profile. An ongoing challenge for countries around the world is to balance development gains against their effects on the environment. Energy management is a key factor in any sustainable development program, and all aspects of development in agriculture, power generation, social welfare and industry in Iran are crucially related to energy and its revenue. Forecasting end-use natural gas consumption is an important factor for efficient system operation and a basis for planning decisions. In this thesis, particle swarm optimization (PSO) is used to forecast long-run natural gas consumption in Iran. Gas consumption data in Iran for the previous 34 years are used to predict consumption for the coming years. Four linear and nonlinear models are proposed, and six factors, namely Gross Domestic Product (GDP), population, National Income (NI), temperature, Consumer Price Index (CPI) and yearly natural gas (NG) demand, are investigated.
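As a rough illustration of the approach (not the thesis's actual models or data), the sketch below uses a standard PSO, with inertia and cognitive/social terms, to fit the coefficients of a linear demand model on synthetic data; all names and values are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the exogenous drivers (GDP, population, ...);
# the thesis's 34 years of Iranian data are not reproduced here.
years = 34
X = rng.normal(size=(years, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 10.0 + 0.1 * rng.normal(size=years)

def sse(w):
    # Fitness: sum of squared errors of the linear demand model
    pred = X @ w[:3] + w[3]
    return float(np.sum((y - pred) ** 2))

# Standard PSO velocity/position updates
n_particles, dims, iters = 30, 4, 200
pos = rng.uniform(-5, 5, (n_particles, dims))
vel = np.zeros((n_particles, dims))
pbest = pos.copy()
pbest_val = np.array([sse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w_in, c1, c2 = 0.7, 1.5, 1.5        # inertia, cognitive, social weights
for _ in range(iters):
    r1 = rng.random((n_particles, dims))
    r2 = rng.random((n_particles, dims))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

# gbest should approach the generating coefficients [2.0, -1.0, 0.5, 10.0]
```

Any swarm-based metaheuristic can drive such a fit; PSO's appeal here is that it needs only fitness evaluations, so the same loop works unchanged for the nonlinear model forms.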
Abstract:
I begin by citing at length a definition of "third wave" from the glossary in Turbo Chicks: Talking Young Feminisms because it communicates several key issues that I develop in this project. The definition introduces a tension within "third wave" feminism of building on and differentiating itself from second wave feminism, the newness of the term "third wave," its association with "young" women, the complexity of contemporary feminisms, and attention to multiple identities and oppressions. Uncovering explanations of "third wave" feminism that, like this one, go beyond generational associations is not an easy task. Authors consistently group new feminist voices together by age under the label "third wave" feminists without questioning the accuracy of the designation. Most explorations of "third wave" feminism overlook the complexities and distinctions that abound among "young" feminists: not all young feminists espouse similar ideas, tactics, and actions; and for various reasons, not all young feminists identify with a "third wave" of feminism. Less than a year after I began to learn about feminism I discovered Barbara Findlen's Listen Up: Voices From the Next Feminist Generation. Although neither the collection nor its contributors declare an association with "third wave" feminism, subsequent reviews and citations in articles identify it, along with Rebecca Walker's To Be Real: Telling the Truth and Changing the Face of Feminism, as a major text of "third wave" feminism. Re-reading Listen Up since beginning to research "third wave" feminism, I now understand its fundamental influence on my research questions as a starting point for assessing persistent exclusion in contemporary feminism, rather than as a revolutionary text (as it is claimed to be in many reviews). Findlen begins the introduction with the bold claim, "My feminism wasn't shaped by antiwar or civil rights activism ..." (xi).
Framing the collection with a disavowal of the influence of women of color's organizational efforts negates, for me, the project's proclaimed commitment to multivocality. Though several contributions examine persistent exclusion within the contemporary feminist movement, the larger project seems to rely on these essays alone to reflect this commitment, suggesting that Listen Up does not go beyond the "add and stir" approach to "diversity." Interestingly, this statement does not appear in the new edition of Listen Up published in 2001, and the content has changed with this new edition, including several more Latina contributors and other "corrective" additions.
Abstract:
As a highly urbanized and flood prone region, Flanders has experienced multiple floods causing significant damage in the past. In response to the floods of 1998 and 2002 the Flemish Environment Agency, responsible for managing 1 400 km of unnavigable rivers, started setting up a real time flood forecasting system in 2003. Currently the system covers almost 2 000 km of unnavigable rivers, for which flood forecasts are accessible online (www.waterinfo.be). The forecasting system comprises more than 1 000 hydrologic and 50 hydrodynamic models which are supplied with radar rainfall, rainfall forecasts and on-site observations. Forecasts for the next 2 days are generated hourly, while 10 day forecasts are generated twice a day. Additionally, twice daily simulations based on percentile rainfall forecasts (from EPS predictions) result in uncertainty bands for the latter. Subsequent flood forecasts use the most recent rainfall predictions and observed parameters at any time while uncertainty on the longer-term is taken into account. The flood forecasting system produces high resolution dynamic flood maps and graphs at about 200 river gauges and more than 3 000 forecast points. A customized emergency response system generates phone calls and text messages to a team of hydrologists initiating a pro-active response to prevent upcoming flood damage. The flood forecasting system of the Flemish Environment Agency is constantly evolving and has proven to be an indispensable tool in flood crisis management. This was clearly the case during the November 2010 floods, when the agency issued a press release 2 days in advance allowing water managers, emergency services and civilians to take measures.
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modeled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad-hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design where the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats when it is a preferred access arrangement for the researcher. By decoupling data model and data persistence, it is much easier to interchangeably use for instance relational databases to provide stricter provenance and audit trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text based formats are usually inadequate. A schema derived from CF conventions has been designed to efficiently handle time series for SWIFT.
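The decoupling of data model from on-disk persistence described above can be sketched as follows. This is a hypothetical illustration, not SWIFT's actual API: an abstract store round-trips the same configuration object through either a JSON backend or a legacy tab-separated one, so backends can be swapped (e.g. for a relational database in operations) without touching the data model.

```python
import json
from abc import ABC, abstractmethod
from dataclasses import dataclass, asdict

# Data model: knows nothing about how it is stored.
@dataclass
class SubareaConfig:
    name: str
    area_km2: float
    routing_node: str

# Persistence interface: any backend must round-trip the data model.
class ConfigStore(ABC):
    @abstractmethod
    def save(self, cfg: SubareaConfig) -> None: ...
    @abstractmethod
    def load(self, name: str) -> SubareaConfig: ...

class JsonStore(ConfigStore):
    # Research-oriented backend: one JSON document per subarea
    def __init__(self):
        self._docs = {}
    def save(self, cfg):
        self._docs[cfg.name] = json.dumps(asdict(cfg))
    def load(self, name):
        return SubareaConfig(**json.loads(self._docs[name]))

class TsvStore(ConfigStore):
    # Legacy tab-separated backend kept as an alternative access arrangement
    def __init__(self):
        self._rows = {}
    def save(self, cfg):
        self._rows[cfg.name] = f"{cfg.name}\t{cfg.area_km2}\t{cfg.routing_node}"
    def load(self, name):
        n, a, r = self._rows[name].split("\t")
        return SubareaConfig(n, float(a), r)

# Either backend reproduces the same data model unchanged.
cfg = SubareaConfig("upper_catchment", 42.5, "node_7")
for store in (JsonStore(), TsvStore()):
    store.save(cfg)
    assert store.load("upper_catchment") == cfg
```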
Abstract:
Using vector autoregressive (VAR) models and Monte-Carlo simulation methods we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed in the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed in the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length. Criteria selecting lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
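As a minimal sketch of how a long-run restriction enters such models (illustrative only, not the paper's setup), the following simulates a cointegrated pair and estimates the error-correction loading on the lagged equilibrium error, the channel through which cointegration restricts short-run dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two series sharing a common stochastic trend, hence cointegrated:
# the spread x - z is stationary even though x and z are not.
T = 500
w = np.cumsum(rng.normal(size=T))          # common random-walk trend
x = w + rng.normal(scale=0.3, size=T)
z = w + rng.normal(scale=0.3, size=T)

ec = (x - z)[:-1]                          # lagged equilibrium error
dx = np.diff(x)
X = np.column_stack([np.ones(T - 1), ec])
b, *_ = np.linalg.lstsq(X, dx, rcond=None)
# b[1] is the error-correction loading: a negative value pulls x back
# toward the long-run relation x ≈ z.
```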
Abstract:
Parametric term structure models have been successfully applied to numerous problems in fixed income markets, including pricing, hedging, managing risk, and studying monetary policy implications. In turn, dynamic term structure models, equipped with stronger economic structure, have been mainly adopted to price derivatives and explain empirical stylized facts. In this paper, we combine flavors of those two classes of models to test whether no-arbitrage affects forecasting. We construct cross-section (allowing arbitrages) and arbitrage-free versions of a parametric polynomial model to analyze how well they predict out-of-sample interest rates. Based on U.S. Treasury yield data, we find that no-arbitrage restrictions significantly improve forecasts. Arbitrage-free versions achieve overall smaller biases and root mean square errors for most maturities and forecasting horizons. Furthermore, a decomposition of forecasts into forward rates and holding return premia indicates that the superior performance of the no-arbitrage versions is due to a better identification of the bond risk premium.
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
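A toy sketch of the key element, using simulated forecasters rather than survey data (for brevity the bias is estimated in-sample here, whereas the paper's feasible version uses past data and sequential asymptotics): each forecaster's mean error is removed before averaging, which is what drives the gain over the naive average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical panel: N forecasters, T periods. Forecaster i reports
# y_t + bias_i + noise, so the plain average inherits the average bias.
N, T = 20, 200
y = rng.normal(size=T)                       # target series (stand-in)
bias = rng.uniform(-2, 2, size=N)            # individual forecaster biases
f = y[None, :] + bias[:, None] + 0.5 * rng.normal(size=(N, T))

err = f - y[None, :]                         # forecast errors
bias_hat = err.mean(axis=1)                  # estimated individual biases
bcaf = (f - bias_hat[:, None]).mean(axis=0)  # bias-corrected average forecast
plain = f.mean(axis=0)                       # naive average forecast

mse_bcaf = np.mean((bcaf - y) ** 2)
mse_plain = np.mean((plain - y) ** 2)
# Correcting the bias before averaging leaves only (averaged-out) noise,
# so mse_bcaf does not exceed mse_plain.
```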
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte-Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise, based upon data from a well-known survey, is also presented. Overall, these results show promise for the feasible bias-corrected average forecast.
Abstract:
This work deals with the new collaborative planning, forecasting and replenishment methodology, known by the acronym CPFR. It addresses the main gaps in traditional methodologies, the business opportunities generated, the business model proposed by CPFR and its implementation stages, the implications for the organization, the main implementation problems, and the integration methodologies and tools present in companies that use CPFR. It points out the opportunities generated by CPFR and the integration characteristics present in companies that already use the concept.
Abstract:
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.
Abstract:
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and the growth rates of U.S. macroeconomic aggregates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons.
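As a simplified stand-in for the model-selection step (lag length only, by BIC, rather than the joint lag-rank procedure of the paper), the following fits VAR(p) models by OLS on a simulated VAR(2) and picks the lag that minimizes BIC:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a stable bivariate VAR(2); coefficient values are illustrative.
T, k = 400, 2
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.3, 0.0], [0.1, 0.3]])
y = np.zeros((T, k))
for t in range(2, T):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(scale=0.5, size=k)

def var_bic(y, p):
    # Fit VAR(p) by multivariate OLS and return a BIC based on the
    # residual covariance (effective samples differ slightly across p).
    T_eff = len(y) - p
    X = np.hstack([y[p - i - 1 : len(y) - i - 1] for i in range(p)])
    X = np.hstack([np.ones((T_eff, 1)), X])
    B, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ B
    sigma = resid.T @ resid / T_eff
    return T_eff * np.log(np.linalg.det(sigma)) + B.size * np.log(T_eff)

# Choose the lag length minimizing BIC over candidates 1..6
best_p = min(range(1, 7), key=lambda p: var_bic(y, p))
```

With the criterion in hand, joint selection as in the paper amounts to minimizing over a grid of (lag, rank) pairs instead of lags alone.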
Abstract:
This paper studies the hourly electricity load demand in the area covered by a utility situated in the southeast of Brazil. We propose a stochastic model which employs generalized long memory (by means of Gegenbauer processes) to model the seasonal behavior of the load. The model is proposed for sectional data, that is, each hour's load is studied separately as a single series. This approach avoids modeling the intricate intra-day pattern (load profile) displayed by the load, which varies throughout the days of the week and the seasons. The forecasting performance of the model is compared with a SARIMA benchmark using the years 1999 and 2000 as the out-of-sample period. The model clearly outperforms the benchmark. We conclude that the series exhibit generalized long memory.
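For reference, a Gegenbauer process is driven by the filter (1 - 2uB + B²)^(-d), whose coefficients satisfy the standard Gegenbauer-polynomial recursion. The sketch below computes the coefficients and filters white noise with a truncated expansion; the parameter values are illustrative, not the paper's estimates.

```python
import numpy as np

def gegenbauer_coeffs(d, u, n):
    # Coefficients c_j of (1 - 2*u*B + B^2)^(-d) = sum_j c_j B^j,
    # via the Gegenbauer polynomial recursion in the memory parameter d.
    c = np.empty(n)
    c[0] = 1.0
    if n > 1:
        c[1] = 2.0 * d * u
    for j in range(2, n):
        c[j] = (2.0 * u * (j + d - 1) * c[j - 1]
                - (j + 2 * d - 2) * c[j - 2]) / j
    return c

# Simulate a Gegenbauer process by filtering white noise with a
# truncated expansion; u = cos(2*pi/24) places the spectral pole at a
# 24-hour cycle, mimicking a daily seasonal in hourly load data.
rng = np.random.default_rng(4)
eps = rng.normal(size=2000)
c = gegenbauer_coeffs(0.3, np.cos(2 * np.pi / 24), 200)
x = np.convolve(eps, c)[: len(eps)]
```

A quick sanity check of the recursion: for d = 1, u = 1 the filter is (1 - B)^(-2), whose coefficients are 1, 2, 3, ...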