29 results for Forecasting and replenishment (CPFR)
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for a particular type of diffuse prior, for Minnesota-type priors and for hierarchical priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
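The abstract contains no code; the following minimal Python sketch, on invented data, only conveys the flavour of a Minnesota-type prior in a single-unit Bayesian VAR(1): coefficients are shrunk toward a random walk and multistep point forecasts are iterated from the posterior mean. The function name and the shrinkage parameter `lam` are illustrative assumptions, not the paper's panel specification with cross-sectional interdependencies and time-varying parameters.

```python
import numpy as np

def bvar_minnesota_forecast(Y, lam=0.2, steps=4):
    """Posterior-mean VAR(1) coefficients under a conjugate ridge (Minnesota-like)
    prior centred on the identity matrix, then iterate multistep point forecasts."""
    X = Y[:-1]                      # lagged regressors
    Z = Y[1:]                       # targets
    n = Y.shape[1]
    prior_mean = np.eye(n)          # random-walk prior centre
    V_inv = np.eye(n) / lam**2      # prior precision (tighter as lam shrinks)
    # Posterior mean of the coefficient matrix (equation by equation, known variance)
    A_post = np.linalg.solve(X.T @ X + V_inv, X.T @ Z + V_inv @ prior_mean)
    # Iterate point forecasts h steps ahead from the last observation
    fcst, y = [], Y[-1]
    for _ in range(steps):
        y = y @ A_post
        fcst.append(y)
    return np.array(fcst)

# Example with simulated data standing in for two units' growth rates
rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 2)).cumsum(axis=0) * 0.01
print(bvar_minnesota_forecast(Y, steps=8).shape)  # (8, 2)
```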
Abstract:
This special issue of Natural Hazards and Earth System Sciences (NHESS) contains eight papers presented as oral or poster contributions in the Natural Hazards NH-1.2 session on "Extreme events induced by weather and climate change: evaluation, forecasting and proactive planning", held at the European Geosciences Union (EGU) General Assembly in Vienna, Austria, on 13-18 April 2008. The aim of the session was to provide an international forum for presenting new results and for discussing innovative ideas and concepts on extreme hydro-meteorological events, including: (i) the assessment of the risk posed by the extreme events, (ii) the expected changes in the frequency and intensity of the events driven by a changing climate and by multiple human-induced causes, (iii) new modelling approaches and original forecasting methods to predict extreme events and their consequences, and (iv) strategies for hazard mitigation and risk reduction, and for improved adaptation to extreme hydro-meteorological events ...
Abstract:
We hypothesized that the analysis of mRNA levels and activity of key enzymes in amino acid and carbohydrate metabolism in a feeding/fasting/refeeding setting could improve our understanding of how a carnivorous fish, like the European seabass (Dicentrarchus labrax), responds to changes in dietary intake at the hepatic level. To this end, cDNA fragments encoding genes for cytosolic and mitochondrial alanine aminotransferase (cALT; mALT), pyruvate kinase (PK), glucose 6-phosphate dehydrogenase (G6PDH) and 6-phosphogluconate dehydrogenase (6PGDH) were cloned and sequenced. Measurement of mRNA levels through quantitative real-time PCR performed in livers of fasted seabass revealed a significant increase in cALT (8.5-fold induction) together with a drastic 45-fold down-regulation of PK in relation to the levels found in fed seabass. These observations were corroborated by enzyme activity, meaning that during food deprivation an increase in the capacity for pyruvate generation via alanine offset the reduction in pyruvate derived from glycolysis. After a 3-day refeeding period, cALT returned to control levels while PK was not able to rebound. No alterations were detected in the expression levels of G6PDH, while 6PGDH proved more sensitive, especially to fasting, as confirmed by a significant 5.7-fold decrease in mRNA levels with no recovery after refeeding. Our results indicate that in the early stages of refeeding the liver prioritized the restoration of systemic normoglycemia and the replenishment of hepatic glycogen. In a later stage, once regular feeding is re-established, dietary fuel may then be channeled to glycolysis and de novo lipogenesis.
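The abstract reports fold changes from quantitative real-time PCR but does not state how they were computed; a common convention for such data is the 2^-ΔΔCt (Livak) method. The sketch below is purely illustrative: the Ct values and the 18S reference gene are hypothetical, not measurements from the study.

```python
# Illustrative only: a standard 2^-ddCt fold-change calculation; all Ct values are made up.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene versus a control condition (2^-ddCt)."""
    d_ct_treated = ct_target - ct_ref            # normalise to the reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values: cALT in fasted vs fed seabass liver, 18S as reference gene
print(round(fold_change(22.0, 15.0, 25.1, 15.0), 1))  # ~8.6-fold induction
```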
Abstract:
I describe the customer valuations game, a simple intuitive game that can serve as a foundation for teaching revenue management. The game requires little or no preparation, props, or software, takes around two hours (and hence can be finished in one session), and illustrates the formation of classical (airline and hotel) revenue management mechanisms such as advance purchase discounts, booking limits and fixed multiple prices. I normally use the game as a base from which to introduce RM and to develop RM forecasting and optimization concepts. The game is particularly suited for non-technical audiences.
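As a companion to the mechanisms the game illustrates (booking limits in particular), here is a small sketch of Littlewood's classical two-fare rule. It is not taken from the paper; all fares and demand parameters are made up.

```python
from scipy.stats import norm

def protection_level(fare_high, fare_low, mean_demand_high, sd_demand_high):
    """Seats protected for the high fare: y such that P(D_high > y) = fare_low / fare_high,
    assuming normally distributed high-fare demand."""
    critical_ratio = 1.0 - fare_low / fare_high
    return norm.ppf(critical_ratio, loc=mean_demand_high, scale=sd_demand_high)

capacity = 100
y = protection_level(fare_high=400.0, fare_low=150.0, mean_demand_high=30.0, sd_demand_high=10.0)
print(f"protect ~{y:.0f} seats; booking limit for the low fare ~{capacity - y:.0f}")
```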
Abstract:
The main goal of this article is to provide an answer to the question: "Does anything forecast exchange rates, and if so, which variables?". It is well known that exchange rate fluctuations are very difficult to predict using economic models, and that a random walk forecasts exchange rates better than any economic model (the Meese and Rogoff puzzle). However, the recent literature has identified a series of fundamentals/methodologies that claim to have resolved the puzzle. This article provides a critical review of the recent literature on exchange rate forecasting and illustrates the new methodologies and fundamentals that have been recently proposed in an up-to-date, thorough empirical analysis. Overall, our analysis of the literature and the data suggests that the answer to the question "Are exchange rates predictable?" is "It depends" - on the choice of predictor, forecast horizon, sample period, model, and forecast evaluation method. Predictability is most apparent when one or more of the following hold: the predictors are Taylor rule or net foreign assets, the model is linear, and a small number of parameters are estimated. The toughest benchmark is the random walk without drift.
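To make the random-walk benchmark concrete, the toy sketch below compares out-of-sample RMSEs of a driftless random walk and a simple linear model on a single simulated fundamental. The data-generating process is invented and does not reproduce the paper's empirical analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 400
fundamental = np.cumsum(rng.normal(size=T))
s = np.cumsum(rng.normal(size=T)) + 0.05 * fundamental   # simulated log exchange rate

rw_err, model_err = [], []
for t in range(200, T - 1):
    # Random walk without drift: the forecast of s[t+1] is simply s[t]
    rw_err.append(s[t + 1] - s[t])
    # Linear model: regress the change in s on the lagged fundamental, then forecast
    dy = np.diff(s[:t + 1])
    x = fundamental[:t]
    beta = np.polyfit(x, dy, 1)
    model_err.append(s[t + 1] - (s[t] + np.polyval(beta, fundamental[t])))

print("RW RMSE:   ", np.sqrt(np.mean(np.square(rw_err))))
print("Model RMSE:", np.sqrt(np.mean(np.square(model_err))))
```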
Abstract:
This technical report is a document prepared as a deliverable [D4.3 Report of the Interlinkages and forecasting prototype tool] of an EU project – DECOIN Project No. 044428 - FP6-2005-SSP-5A. The text is divided into 4 sections: (1) this short introductory section explains the purpose of the report; (2) the second section provides a general discussion of a systemic problem found in existing quantitative analyses of sustainability. It addresses the epistemological implications of complexity, which entail the need to deal with the existence of multiple scales and non-equivalent narratives (multiple dimensions/attributes) used to define sustainability issues. There is an unavoidable tension between a "steady-state view" (= the perception of what is going on now – reflecting a PAST --> PRESENT view of the reality) and an "evolutionary view" (= the unknown transformation that we have to expect in the process of becoming of the observed reality and in the observer – reflecting a PRESENT --> FUTURE view of the reality). The section ends by listing the implications of these points for the choice of integrated packages of sustainability indicators; (3) the third section illustrates the potential of the DECOIN toolkit for the study of sustainability trade-offs and linkages across indicators, using quantitative examples taken from case studies of another EU project (SMILE). In particular, this section starts by addressing the existence of internal constraints to sustainability (economic versus social aspects). The narrative chosen for this discussion focuses on the dark side of ageing and immigration for the economic viability of social systems. The section then continues by exploring external constraints to sustainability (economic development versus the environment). The narrative chosen for this discussion focuses on the dark side of the current strategy of economic development based on externalization and the "bubbles-disease"; (4) the last section presents a critical appraisal of the quality of energy data found in energy statistics. It starts with a discussion of the general goal of statistical accounting. Then it introduces the concept of multipurpose grammars. The second part uses the experience gained in the activities of the DECOIN project to answer the question: how useful are EUROSTAT energy statistics? The answer starts with an analysis of basic epistemological problems associated with the accounting of energy. This discussion leads to the acknowledgment of an important epistemological problem: the unavoidable bifurcations in the mechanism of accounting needed to generate energy statistics. Using numerical examples, the text deals with the following issues: (i) the pitfalls of the actual system of accounting in energy statistics; (ii) a critical appraisal of the actual system of accounting in BP statistics; (iii) a critical appraisal of the actual system of accounting in Eurostat statistics. The section ends by proposing an innovative method to represent energy statistics which can prove more useful for those willing to develop sustainability indicators.
Abstract:
High-energy charged particles in the Van Allen radiation belts and in solar energetic particle events can damage satellites on orbit, leading to malfunctions and loss of satellite service. Here we describe some recent results from the SPACECAST project on modelling and forecasting the radiation belts, and on modelling solar energetic particle events. We describe the SPACECAST forecasting system, which uses physical models that include wave-particle interactions to forecast the electron radiation belts up to 3 h ahead. We show that the forecasts were able to reproduce the >2 MeV electron flux at GOES 13 during the moderate storm of 7-8 October 2012, and the period following a fast solar wind stream on 25-26 October 2012, to within a factor of 5 or so. At lower energies, from 10 keV to a few 100 keV, we show that the electron flux at geostationary orbit depends sensitively on the high-energy tail of the source distribution near 10 RE on the nightside of the Earth, and that the source is best represented by a kappa distribution. We present a new model of whistler mode chorus determined from multiple satellite measurements which shows that the effects of wave-particle interactions beyond geostationary orbit are likely to be very significant. We also present radial diffusion coefficients calculated from satellite data at geostationary orbit which vary with Kp by over four orders of magnitude. We describe a new automated method, which takes entropy into account, to determine the position at the shock that is magnetically connected to the Earth for modelling solar energetic particle events, and we predict the form of the mean free path in the foreshock and the particle injection efficiency at the shock from analytical theory, which can be tested in simulations.
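The abstract notes that the nightside source population is best represented by a kappa distribution; the short sketch below, which is not SPACECAST code, contrasts the power-law tail of an (unnormalised) kappa distribution with the exponential decay of a Maxwellian. The kappa and E0 values are arbitrary.

```python
import numpy as np

def kappa_distribution(E, E0=1.0, kappa=4.0):
    """Unnormalised kappa distribution of particle energy E (same units as E0)."""
    return (1.0 + E / (kappa * E0)) ** (-(kappa + 1.0))

def maxwellian(E, E0=1.0):
    """Unnormalised Maxwell-Boltzmann energy dependence, for comparison."""
    return np.exp(-E / E0)

E = np.array([1.0, 10.0, 100.0])    # energies in units of E0
print(kappa_distribution(E))         # decays as a power law at high E
print(maxwellian(E))                 # decays exponentially, far faster
```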
Abstract:
Forecasting coal resources and reserves is critical for coal mine development. Thickness maps are commonly used for assessing coal resources and reserves; however, they are limited in capturing coal-splitting effects in thick and heterogeneous coal zones. As an alternative, three-dimensional geostatistical methods are used to populate the facies distribution within a densely drilled heterogeneous coal zone in the As Pontes Basin (NW Spain). Coal distribution in this zone is mainly characterized by coal-dominated areas in the central parts of the basin interfingering with terrigenous-dominated alluvial fan zones at the margins. The three-dimensional models obtained are applied to forecast coal resources and reserves. Predictions using subsets of the entire dataset are also generated to understand the performance of the methods under limited data constraints. Three-dimensional facies interpolation methods tend to overestimate coal resources and reserves due to interpolation smoothing. Facies simulation methods yield resource predictions similar to those of conventional thickness-map approximations. Reserves predicted by facies simulation methods are mainly influenced by: a) the specific coal proportion threshold used to determine whether a block can be recovered or not, and b) the capability of the modelling strategy to reproduce areal trends in coal proportions and splitting between coal-dominated and terrigenous-dominated areas of the basin. Reserve predictions differ between the simulation methods, even with dense conditioning datasets. The simulation methods can be ranked according to the correlation of their outputs with predictions from the directly interpolated coal proportion maps: a) with low-density datasets, sequential indicator simulation with trends yields the best correlation; b) with high-density datasets, sequential indicator simulation with post-processing yields the best correlation, because the areal trends are provided implicitly by the dense conditioning data.
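The reserve-counting step described above (a block is recoverable only if its simulated coal proportion exceeds a cut-off) can be sketched as follows; the grid dimensions, cut-off and coal density are invented numbers, not values from the As Pontes study.

```python
import numpy as np

def reserves_from_model(coal_proportion, block_volume_m3, cutoff=0.5, density_t_per_m3=1.3):
    """Tonnage summed over blocks whose simulated coal proportion passes the cut-off."""
    recoverable = coal_proportion >= cutoff
    coal_volume = np.where(recoverable, coal_proportion, 0.0) * block_volume_m3
    return coal_volume.sum() * density_t_per_m3

rng = np.random.default_rng(6)
model = rng.beta(2, 2, size=(50, 50, 20))      # stand-in for one facies simulation realization
print(f"{reserves_from_model(model, block_volume_m3=25 * 25 * 2):,.0f} t")
```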
Abstract:
This paper evaluates the forecasting performance of a continuous stochastic volatility model with two factors of volatility (SV2F) and compares it with that of GARCH and ARFIMA models. The empirical results show that the volatility forecasting ability of the SV2F model is better than that of the GARCH and ARFIMA models, especially when volatility seems to change pattern. We use ex-post volatility as a proxy of the realized volatility obtained from intraday data, and the forecasts from the SV2F are calculated using the reprojection technique proposed by Gallant and Tauchen (1998).
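As background for the comparison, the sketch below shows a one-step GARCH(1,1) variance forecast and a realized-variance proxy built from intraday returns, the kind of benchmark/target pair the evaluation relies on; it does not implement the SV2F model or the reprojection technique, and the parameter values and simulated returns are illustrative.

```python
import numpy as np

def garch11_forecast(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """Filter conditional variances and return the one-step-ahead variance forecast."""
    var = np.var(returns)                      # initialise at the sample variance
    for r in returns:
        var = omega + alpha * r**2 + beta * var
    return var

def realized_variance(intraday_returns):
    """Realized variance: sum of squared intraday returns over the day."""
    return np.sum(np.square(intraday_returns))

rng = np.random.default_rng(1)
daily = rng.normal(scale=0.01, size=500)
intraday = rng.normal(scale=0.01 / np.sqrt(78), size=78)   # e.g. 5-minute returns
print(garch11_forecast(daily), realized_variance(intraday))
```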
Abstract:
The contributions of this paper are twofold: on the one hand, the paper analyses the factors determining the growth in car ownership in Spain over the last two decades and, on the other, it provides empirical evidence on a controversial methodological issue. From a methodological point of view, the paper compares the two alternative decision mechanisms used for modelling car ownership: ordered-response versus unordered-response mechanisms. A discrete choice model is estimated at three points in time: 1980, 1990 and 2000. The study concludes that, on the basis of forecasting performance, the multinomial logit model and the ordered probit model are almost indistinguishable. As for the empirical results, it can be emphasised that income elasticity is not constant and declines as car ownership increases. In addition, households living in rural areas are less sensitive than those living in urban areas. Car ownership is also sensitive to the quality of public transport for those living in the largest cities. The results also confirm the existence of a generation effect, which will vanish around the year 2020, a weak life-cycle effect, and a positive effect of employment on the number of cars per household. Finally, the change in the estimated coefficients over time reflects an increase in mobility needs and, consequently, an increase in car ownership.
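The methodological comparison (ordered-response versus unordered-response mechanisms) can be sketched on synthetic data as below; the variables and the data-generating process are invented stand-ins for the Spanish household data, and the statsmodels classes are assumed to be available in the installed version.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic household data: car ownership level (0, 1, 2+) driven by income and urban residence
rng = np.random.default_rng(2)
n = 2000
income = rng.lognormal(mean=10, sigma=0.4, size=n)
urban = rng.integers(0, 2, size=n)
latent = 0.8 * np.log(income) - 0.3 * urban + rng.normal(size=n)
cars = np.digitize(latent, np.quantile(latent, [0.3, 0.75]))

X = np.column_stack([np.log(income), urban])

# Ordered probit: a single latent index with estimated thresholds
ordered = OrderedModel(cars, X, distr="probit").fit(method="bfgs", disp=False)

# Multinomial logit: unordered alternatives, one coefficient vector per category
mnl = sm.MNLogit(cars, sm.add_constant(X)).fit(disp=False)

print(np.round(ordered.params, 3))
print(np.round(mnl.params, 3))
```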
Abstract:
Planners in public and private institutions would like coherent forecasts of the components of age-specific mortality, such as causes of death. This has been difficult to achieve because the relative values of the forecast components often fail to behave in a way that is coherent with historical experience. In addition, when the group forecasts are combined the result is often incompatible with an all-groups forecast. It has been shown that cause-specific mortality forecasts are pessimistic when compared with all-cause forecasts (Wilmoth, 1995). This paper abandons the conventional approach of using log mortality rates and forecasts the density of deaths in the life table. Since these values obey a unit sum constraint for both conventional single-decrement life tables (only one absorbing state) and multiple-decrement tables (more than one absorbing state), they are intrinsically relative rather than absolute values across decrements as well as ages. Using the methods of Compositional Data Analysis pioneered by Aitchison (1986), death densities are transformed into the real space so that the full range of multivariate statistics can be applied, then back-transformed to positive values so that the unit sum constraint is honoured. The structure of the best-known, single-decrement mortality-rate forecasting model, devised by Lee and Carter (1992), is expressed in compositional form and the results from the two models are compared. The compositional model is extended to a multiple-decrement form and used to forecast mortality by cause of death for Japan.
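A minimal sketch of the compositional step the abstract describes: death densities, which sum to one across ages, are log-ratio transformed into real space, modelled there, and back-transformed so the unit-sum constraint is honoured. The centred log-ratio pair below and the toy four-age composition are illustrative; the paper works within a Lee-Carter-style model rather than this bare transform.

```python
import numpy as np

def clr(d, eps=1e-10):
    """Centred log-ratio transform of a composition (rows sum to one)."""
    logd = np.log(d + eps)
    return logd - logd.mean(axis=-1, keepdims=True)

def clr_inverse(z):
    """Back-transform: exponentiate and renormalise so each row sums to one."""
    expz = np.exp(z)
    return expz / expz.sum(axis=-1, keepdims=True)

# Toy death density over four age groups
d = np.array([0.05, 0.15, 0.30, 0.50])
z = clr(d)                  # unconstrained values, ready for multivariate modelling
d_back = clr_inverse(z)     # recovers a valid composition
print(d_back.round(3), d_back.sum())   # sums to 1
```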
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of one of the biggest financial markets, the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it due to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics necessary for building quantitative strategies. We also contrast these models with real market data at minutely sampling frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in the so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast into a well-defined scientific predictor if the signal generated by them passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting equation and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtesting is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration process of the strategies' parameters to adapt to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at a high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented in MATLAB from scratch as a part of this thesis. No other mathematical or statistical software was used.
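As an illustration of the market-neutral family discussed above, the sketch below generates a pairs-trading signal from the rolling z-score of a log-price spread; the window, entry/exit thresholds and synthetic prices are assumptions, not calibrated values from the thesis (and the code is Python rather than the MATLAB used there).

```python
import numpy as np

def pairs_signal(p1, p2, window=60, entry=2.0, exit=0.5):
    """Return +1 (long spread), -1 (short spread) or 0 per bar from a rolling z-score."""
    spread = np.log(p1) - np.log(p2)
    signal = np.zeros_like(spread)
    position = 0
    for t in range(window, len(spread)):
        hist = spread[t - window:t]
        z = (spread[t] - hist.mean()) / hist.std()
        if position == 0 and z > entry:
            position = -1           # spread too wide: short p1, long p2
        elif position == 0 and z < -entry:
            position = 1            # spread too narrow: long p1, short p2
        elif abs(z) < exit:
            position = 0            # mean reversion achieved: close the trade
        signal[t] = position
    return signal

# Two synthetic cointegrated-looking price series sharing a common component
rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=1000))
p1 = np.exp(0.001 * (common + rng.normal(size=1000)))
p2 = np.exp(0.001 * (common + rng.normal(size=1000)))
print(pairs_signal(p1, p2)[-5:])
```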
Abstract:
We evaluate conditional predictive densities for U.S. output growth and inflation using a number of commonly used forecasting models that rely on a large number of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can improve or deteriorate point forecasts, they might have the opposite effect on higher moments. We find that normality is rejected for most models in some dimension according to at least one of the tests we use. Interestingly, however, combinations of predictive densities appear to be correctly approximated by a normal density: the simple, equal average when predicting output growth and the Bayesian model average when predicting inflation.
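One standard way to check whether Gaussian predictive densities fit the realizations is the probability integral transform (PIT), which should be uniform under correct specification; the sketch below applies it to invented data with a naive rolling normal density, and it is not the set of tests used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
y = rng.standard_t(df=5, size=600)            # realizations with fatter tails than normal

# Rolling normal predictive density from an in-sample mean/variance (naive benchmark)
pit = []
for t in range(200, len(y)):
    mu, sigma = y[:t].mean(), y[:t].std()
    pit.append(stats.norm.cdf(y[t], loc=mu, scale=sigma))
pit = np.array(pit)

# Under a correctly specified normal density the PIT values are i.i.d. uniform(0, 1)
print(stats.kstest(pit, "uniform"))
```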