871 results for Random Walk Models


Relevance: 90.00%

Abstract:

This thesis describes and, in a novel way, applies the multivariate exponential smoothing technique to the day-ahead forecasting of hourly electricity prices, a problem studied intensively in the recent statistics and economics literature. It demonstrates several useful properties of multivariate exponential smoothing that drastically reduce the number of parameters needed to characterize the time series and that, at the same time, allow a dynamic factor analysis of the hourly electricity price series. In particular, this high-dimensional multivariate process of dimension 24 is estimated by decomposing it into a small number of independent univariate exponential smoothing processes, each characterized by a single smoothing parameter that varies between zero (a white noise process) and one (a random walk). To this end, the model is formulated and estimated in state space form, since this connects the sequence of more efficient univariate models to the multivariate model. Novelly, the relations between the two models are obtained through a simple algebraic treatment, without applying the Kalman filter. In this way, the underlying drivers of electricity price dynamics can be analyzed and exposed. The practical side of the methodology is demonstrated by applying it to several electricity spot markets, namely Omel, Powernext and Nord Pool, where the evolution of hourly prices is characterized and forecasts are produced and compared with those of other forecasting techniques.
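For illustration, a minimal Python sketch of the univariate building block described above: simple exponential smoothing, whose single parameter alpha runs from zero (a constant level, the white-noise limit) to one (a random walk that carries the last observation forward). The 24-hour layout and all numbers are invented, and the thesis's algebraic decomposition of the multivariate process into these smoothers is not reproduced.

```python
import numpy as np

def ses_forecast(y, alpha):
    """Simple exponential smoothing in state-space form.
    alpha = 0 -> constant level (white-noise limit);
    alpha = 1 -> random walk (forecast = last observation)."""
    level = y[0]
    for obs in y[1:]:
        level = level + alpha * (obs - level)   # single-state update
    return level                                # one-step-ahead forecast

# Hypothetical day-ahead setting: one univariate smoother per hour of the day,
# applied after the (omitted) decomposition of the 24-dimensional process.
rng = np.random.default_rng(0)
prices = 40.0 + np.cumsum(rng.standard_normal((200, 24)), axis=0)  # toy hourly prices
day_ahead = np.array([ses_forecast(prices[:, h], alpha=0.8) for h in range(24)])
print(day_ahead.round(2))
```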

Relevance: 90.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06


Relevance: 90.00%

Abstract:

A quantum random walk on the integers exhibits pseudo memory effects, in that its probability distribution after N steps is determined by reshuffling the first N distributions that arise in a classical random walk with the same initial distribution. In a classical walk, entropy increase can be regarded as a consequence of the majorization ordering of successive distributions. The Lorenz curves of successive distributions for a symmetric quantum walk reveal no majorization ordering in general. Nevertheless, entropy can increase, and computer experiments show that it does so on average. Varying the stages at which the quantum coin system is traced out leads to new quantum walks, including a symmetric walk for which majorization ordering is valid but the spreading rate exceeds that of the usual symmetric quantum walk.
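A short computer-experiment sketch in the spirit of the abstract (not the authors' code): simulate the symmetric Hadamard walk on the line and print the Shannon entropy of the position distribution after various numbers of steps. The symmetric initial coin state and the lattice size are the standard textbook choices.

```python
import numpy as np

def hadamard_walk(steps, n_positions):
    """Position distribution of a discrete-time Hadamard quantum walk on the
    line, with the symmetric initial coin state (|0> + i|1>)/sqrt(2)."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    amp = np.zeros((n_positions, 2), dtype=complex)   # amp[x, c]: amplitude
    mid = n_positions // 2
    amp[mid] = [1 / np.sqrt(2), 1j / np.sqrt(2)]
    for _ in range(steps):
        amp = amp @ H.T                    # coin toss at every site
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]       # coin state 0 moves right
        shifted[:-1, 1] = amp[1:, 1]       # coin state 1 moves left
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1)

def shannon_entropy(p):
    p = p[p > 1e-15]
    return -(p * np.log2(p)).sum()

for n in (10, 50, 100):
    print(n, round(shannon_entropy(hadamard_walk(n, 2 * n + 2)), 3))
```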

Relevance: 90.00%

Abstract:

This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation.
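As an illustration of the experimental design, the hedged sketch below pits a naive random-walk forecast against a nonlinear autoregressive kernel forecaster on a toy series. Batch kernel ridge regression stands in for the paper's kernel recursive least squares, and the data, lag length and hyperparameters are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
y = 2.0 + 0.1 * np.cumsum(rng.standard_normal(300))   # toy "inflation" series

def gauss_K(A, B, gamma=0.5):
    """Gaussian kernel matrix between row sets A and B."""
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

lags, ridge, split = 4, 1e-2, 200
X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])  # lagged inputs
t = y[lags:]                                                        # one-step targets

K = gauss_K(X[:split], X[:split])
alpha = np.linalg.solve(K + ridge * np.eye(split), t[:split])       # kernel ridge fit
pred = gauss_K(X[split:], X[:split]) @ alpha                        # out-of-sample preds

rmse_krr = np.sqrt(np.mean((pred - t[split:]) ** 2))
rmse_rw = np.sqrt(np.mean((y[lags + split - 1:-1] - t[split:]) ** 2))  # naive: y_hat = last y
print(f"kernel ridge RMSE: {rmse_krr:.4f}   random walk RMSE: {rmse_rw:.4f}")
```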

Relevance: 90.00%

Abstract:

We use non-parametric procedures to identify breaks in the underlying series of UK household sector money demand functions. Money demand functions are estimated using cointegration techniques and by employing both the Simple Sum and Divisia measures of money. P-star models are also estimated for out-of-sample inflation forecasting. Our findings suggest that the presence of breaks affects both the estimation of cointegrated money demand functions and the inflation forecasts. P-star forecast models based on Divisia measures appear more accurate at longer horizons and the majority of models with fundamentals perform better than a random walk model.
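For concreteness, a Divisia (Tornqvist-Theil) monetary aggregate of the kind used above grows as the expenditure-share-weighted growth of the component assets, with shares based on user costs. The sketch below assumes the standard Barnett user-cost formula pi = (R - r)/(1 + R) and uses invented inputs.

```python
import numpy as np

def divisia_growth(m, r, R):
    """Tornqvist-Theil Divisia monetary aggregate growth rates.
    m: (T, n) quantities of monetary assets; r: (T, n) own rates of return;
    R: (T,) benchmark rate. User cost: pi = (R - r) / (1 + R)."""
    pi = (R[:, None] - r) / (1.0 + R[:, None])
    s = pi * m
    s /= s.sum(axis=1, keepdims=True)            # expenditure shares
    s_bar = 0.5 * (s[1:] + s[:-1])               # two-period average shares
    return (s_bar * np.diff(np.log(m), axis=0)).sum(axis=1)

T = 5
m = np.array([[100, 50], [102, 52], [105, 53], [107, 55], [110, 56]], float)
r = np.tile([0.00, 0.02], (T, 1))     # own rates of the two toy assets
R = np.full(T, 0.05)                  # benchmark rate
print(divisia_growth(m, r, R))        # period-on-period Divisia growth
```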

Relevance: 90.00%

Abstract:

This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson-Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson-Siegel models.
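A sketch of the Nelson-Siegel machinery underlying the dynamic model above: the three factor loadings as functions of maturity, with the decay parameter fixed at the Diebold-Li value of 0.0609 (an assumption; maturities in months) and invented factor values. The dynamic version re-fits the factors each period and forecasts them with autoregressions before mapping them back through the loadings.

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel loadings (level, slope, curvature) at maturities tau."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

taus = np.array([3, 6, 12, 24, 60, 120], dtype=float)   # months
beta = np.array([4.5, -1.0, 1.5])                       # toy level/slope/curvature
yields = ns_loadings(taus) @ beta                       # fitted yield curve
# Dynamic NS: estimate beta_t by OLS each period, forecast each factor with
# an AR(1), then evaluate ns_loadings(taus) @ beta_forecast.
print(dict(zip(taus, yields.round(3))))
```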

Relevance: 90.00%

Abstract:

This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear, artificial intelligence techniques, namely recurrent neural networks, evolution strategies and kernel methods, in our forecasting experiment. In the experiment, these three methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; hence, our future work should combine the evolutionary and kernel methods to obtain the benefits of both.
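The following is a generic (mu, lambda) evolution strategy sketch, offered as a plausible shape for the evolutionary search mentioned above rather than the authors' actual algorithm; the toy loss function, population sizes and step-size decay are all invented. In the paper's setting the loss would be an out-of-sample forecast error over model or kernel hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def es_minimize(loss, x0, sigma=0.3, pop=20, elite=5, iters=50):
    """Basic (mu, lambda) evolution strategy: sample a population around the
    current mean, keep the elite, recombine, and shrink the step size."""
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        cand = mean + sigma * rng.standard_normal((pop, mean.size))
        best = cand[np.argsort([loss(c) for c in cand])[:elite]]
        mean = best.mean(axis=0)             # recombine the elite
        sigma *= 0.95                        # simple step-size decay
    return mean

# toy use: recover a known optimum, e.g. a log-bandwidth/log-ridge pair
print(es_minimize(lambda z: ((z - np.array([0.5, -2.0])) ** 2).sum(), [0.0, 0.0]))
```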

Relevance: 90.00%

Abstract:

This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies.
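The contrast drawn above between unbounded and finite memory can be made concrete with a minimal Elman recurrent network (forward pass only, untrained, invented weights): because the hidden state is fed back at every step, the output can in principle depend on the entire input history, unlike a fixed-window kernel predictor.

```python
import numpy as np

class ElmanRNN:
    """Minimal Elman recurrent network, forward pass only: the recurrent
    hidden state gives the 'potentially unbounded input memory' contrasted
    in the paper with the finite-memory kernel predictor."""
    def __init__(self, n_in, n_hidden, rng):
        self.W_in = 0.3 * rng.standard_normal((n_hidden, n_in))
        self.W_rec = 0.3 * rng.standard_normal((n_hidden, n_hidden))
        self.w_out = 0.3 * rng.standard_normal(n_hidden)

    def predict(self, xs):
        h = np.zeros(self.W_rec.shape[0])
        for x in xs:                          # h_t = tanh(W_in x_t + W_rec h_{t-1})
            h = np.tanh(self.W_in @ x + self.W_rec @ h)
        return self.w_out @ h                 # one-step-ahead output

rng = np.random.default_rng(3)
net = ElmanRNN(1, 8, rng)
series = rng.standard_normal(100)
print(net.predict(series.reshape(-1, 1)))     # untrained weights: shape demo only
```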

Relevance: 90.00%

Abstract:

Exchange rate economics has achieved substantial development in the past few decades. Despite extensive research, a large number of unresolved problems remain in the exchange rate debate. This dissertation studied three puzzling issues aiming to improve our understanding of exchange rate behavior. Chapter Two used advanced econometric techniques to model and forecast exchange rate dynamics. Chapter Three and Chapter Four studied issues related to exchange rates using the theory of New Open Economy Macroeconomics.

Chapter Two empirically examined the short-run forecastability of nominal exchange rates. It analyzed important empirical regularities in daily exchange rates. Through a series of hypothesis tests, a best-fitting fractionally integrated GARCH model with skewed Student-t error distribution was identified. The forecasting performance of the model was compared with that of a random walk model. Results supported the contention that nominal exchange rates seem to be unpredictable over the short run in the sense that the best-fitting model cannot beat the random walk model in forecasting exchange rate movements.

Chapter Three assessed the ability of dynamic general-equilibrium sticky-price monetary models to generate volatile foreign exchange risk premia. It developed a tractable two-country model where agents face a cash-in-advance constraint and set prices to the local market; the exogenous money supply process exhibits time-varying volatility. The model yielded approximate closed-form solutions for risk premia and real exchange rates. Numerical results provided quantitative evidence that volatile risk premia can endogenously arise in a new open economy macroeconomic model. Thus, the model had potential to rationalize the Uncovered Interest Parity Puzzle.

Chapter Four sought to resolve the consumption-real exchange rate anomaly, which refers to the inability of most international macro models to generate negative cross-correlations between real exchange rates and relative consumption across two countries as observed in the data. While maintaining the assumption of complete asset markets, this chapter introduced endogenously segmented asset markets into a dynamic sticky-price monetary model. Simulation results showed that such a model could replicate the stylized fact that real exchange rates tend to move in an opposite direction with respect to relative consumption.
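A simplified stand-in for the volatility model identified above: a plain GARCH(1,1) simulator with Gaussian innovations (the dissertation's model is fractionally integrated with skewed Student-t errors; both refinements are omitted here). It illustrates why the random walk benchmark is hard to beat: the volatility is predictable while the conditional mean of returns is not.

```python
import numpy as np

def simulate_garch(n, omega=0.05, a=0.08, b=0.90, rng=None):
    """Simulate a GARCH(1,1) return series; a short-memory, Gaussian stand-in
    for the FIGARCH model with skewed Student-t errors."""
    rng = rng or np.random.default_rng(0)
    r, cond_var = np.empty(n), np.empty(n)
    h = omega / (1 - a - b)                        # unconditional variance
    for t in range(n):
        cond_var[t] = h
        r[t] = np.sqrt(h) * rng.standard_normal()  # skewed-t draw would go here
        h = omega + a * r[t] ** 2 + b * h          # variance recursion
    return r, cond_var

r, h = simulate_garch(1000)
# conditional mean is zero: past returns say nothing about the next move's sign
print(f"mean return {r.mean():.4f}, mean conditional variance {h.mean():.3f}")
```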


Relevance: 90.00%

Abstract:

The St. Lawrence River valley in eastern Canada is one of the most seismically active regions in eastern North America and is characterized by numerous intraplate earthquakes. After the rigid rotation of the tectonic plate, glacial isostatic adjustment is by far the largest source of geophysical signal in eastern Canada. Crustal deformations and deformation rates in this region were studied using more than 14 years of observations (9 years on average) from 112 continuously operating GPS stations. The velocity field was obtained from daily GPS coordinate time series cleaned by applying a combination model using weighted least squares. Velocities were estimated with noise models that include the temporal correlations of the three-dimensional coordinate time series. The horizontal velocity field shows the counterclockwise rotation of the North American plate with a mean velocity of 16.8±0.7 mm/yr in a no-net-rotation model with respect to ITRF2008. The vertical velocity field confirms uplift due to glacial isostatic adjustment throughout eastern Canada, with a maximum rate of 13.7±1.2 mm/yr, and subsidence to the south, mainly in the northern United States, with a typical rate of −1 to −2 mm/yr and a minimum rate of −2.7±1.4 mm/yr. The noise behaviour of the three-dimensional GPS coordinate time series was analyzed using spectral analysis and maximum likelihood estimation to test five noise models: power law; white noise; white noise and flicker noise; white noise and random walk; white noise, flicker noise and random walk. The results show that the combination of white noise and flicker noise is the best model for describing the stochastic part of the time series. The amplitudes of all noise models are smallest in the north direction and largest in the vertical direction. White noise amplitudes are roughly equal across the study area and are therefore exceeded, in all directions, by the flicker and random walk noise. The flicker noise model increases the uncertainty of the estimated velocities by a factor of 5 to 38 compared with the white noise model. The velocities estimated with all noise models are statistically consistent. The estimated Euler rotation pole parameters for this region are slightly, but significantly, different from the overall rotation of the North American plate. This difference potentially reflects local stresses in this seismic region and stresses caused by the difference in intraplate velocities between the two shores of the St. Lawrence River. The crustal deformation of the region was studied using the least-squares collocation method. The interpolated horizontal velocities show spatially coherent motion: radially outward motion for the centres of maximum uplift in the north and radially inward motion for the centres of maximum subsidence in the south, with a typical velocity of 1 to 1.6±0.4 mm/yr. However, this pattern becomes more complex near the margins of the formerly glaciated zones.
Based on their directions, the intraplate horizontal velocities can be divided into three distinct zones. This confirms the conclusions of other researchers on the existence of three ice domes in the study region before the last glacial maximum. A spatial correlation is observed between the zones of higher-magnitude intraplate horizontal velocities and the seismic zones along the St. Lawrence River. The vertical velocities were then interpolated to model the vertical deformation. The model shows a maximum uplift rate of 15.6 mm/yr southeast of Hudson Bay and a typical subsidence rate of 1 to 2 mm/yr in the south, mainly in the northern United States. Along the St. Lawrence River, the horizontal and vertical motions are spatially coherent: a southeastward displacement of about 1.3 mm/yr and a mean uplift of 3.1 mm/yr with respect to the North American plate. The vertical deformation rate is about 2.4 times larger than the intraplate horizontal deformation rate. The results of the deformation analysis show the current state of deformation in eastern Canada as an expansion in the northern part (the uplifting zone) and a compression in the southern part (the subsiding zone). Rotation rates average 0.011°/Myr. We observed NNW-SSE compression at a rate of 3.6 to 8.1 nstrain/yr in the Lower St. Lawrence seismic zone. In the Charlevoix seismic zone, an extension at a rate of 3.0 to 7.1 nstrain/yr is oriented ENE-WSW. In the Western Quebec seismic zone, the deformation has a shear mechanism, with a compression rate of 1.0 to 5.1 nstrain/yr and an extension rate of 1.6 to 4.1 nstrain/yr. These measurements are consistent, to first order, with glacial isostatic adjustment models and with the maximum horizontal compressive stress of the World Stress Map project, obtained from focal mechanisms.
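A hedged Monte Carlo sketch of the central noise-model point above: when daily GPS coordinates contain flicker noise (simulated here by fractional differencing with d = 0.5, the Hosking recursion), the empirical scatter of ordinary least-squares velocity estimates far exceeds what the classical white-noise formula suggests. Series length, noise amplitudes and the trend are illustrative, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def fractional_noise(n, d, rng):
    """Fractionally integrated noise via Hosking coefficients:
    (1 - B)^(-d) applied to unit white noise; d = 0.5 gives flicker noise."""
    k = np.arange(1, n)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    return np.convolve(rng.standard_normal(n), psi)[:n]

n_days = 3650                                # ten years of daily positions
dt_yr = 1.0 / 365.25
t = np.arange(n_days) * dt_yr
wn_amp, fl_amp, true_vel = 1.0, 2.0, 5.0     # mm, mm, mm/yr (invented)

slopes = [np.polyfit(t, true_vel * t
                     + wn_amp * rng.standard_normal(n_days)
                     + fl_amp * fractional_noise(n_days, 0.5, rng), 1)[0]
          for _ in range(300)]

# classical OLS slope uncertainty, treating all scatter as white noise
sigma_white = np.sqrt(12 * (wn_amp**2 + fl_amp**2)
                      / (n_days * (n_days**2 - 1))) / dt_yr
print(f"empirical velocity sigma: {np.std(slopes):.3f} mm/yr")
print(f"white-noise formula:      {sigma_white:.3f} mm/yr (too optimistic)")
```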

Relevance: 90.00%

Abstract:

This thesis presents quantitative studies of T cell and dendritic cell (DC) behaviour in mouse lymph nodes (LNs) in the naive state and following immunisation. These processes are of importance and interest in basic immunology, and better understanding could improve both diagnostic capacity and therapeutic manipulations, potentially helping to produce more effective vaccines or to develop treatments for autoimmune diseases. The problem is also interesting conceptually, as it is relevant to other fields where 3D movement of objects is tracked with a discrete scanning interval. A general immunology introduction is presented in chapter 1. In chapter 2, I apply quantitative methods to multi-photon imaging data to measure how T cells and DCs are spatially arranged in LNs. This has been previously studied to describe differences between the naive and immunised state and as an indicator of the magnitude of the immune response in LNs, but previous analyses have been generally descriptive. The quantitative analysis shows that some of the previous conclusions may have been premature. In chapter 3, I use Bayesian state-space models to test some hypotheses about the mode of T cell search for DCs. A two-state mode of movement, in which T cells are classified as either interacting with a DC or freely migrating, is supported over a model in which T cells would home in on DCs at a distance, for example through the action of chemokines. In chapter 4, I study whether T cell migration is linked to the geometric structure of the fibroblastic reticular cell (FRC) network. I find support for the hypothesis that the movement is constrained to the FRC network over an alternative 'random walk with persistence time' model in which cells would move randomly, with a short-term persistence driven by a hypothetical T cell intrinsic 'clock'. I also present unexpected results on the FRC network geometry. Finally, a quantitative method is presented for addressing some measurement biases inherent to multi-photon imaging. In all three chapters, novel findings are made, and the methods developed have the potential for further use to address important problems in the field. In chapter 5, I present a summary and synthesis of the results from chapters 3-4 and a more speculative discussion of these results and potential future directions.
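As a toy version of one hypothesis tested in chapter 4, the sketch below simulates a 3D 'random walk with persistence', where each step blends the previous direction with a random one. The speed, scan interval and persistence are invented numbers, and the competing FRC-network-constrained model is not implemented.

```python
import numpy as np

def persistent_walk_3d(n_steps, speed=10.0, dt_s=20.0, persistence=0.7, rng=None):
    """Toy 3D random walk with persistence: each new direction is a
    normalized blend of the previous direction and a random unit vector.
    speed in microns/min, dt_s is the scan interval in seconds."""
    rng = rng or np.random.default_rng(0)
    pos = np.zeros((n_steps + 1, 3))
    direction = rng.standard_normal(3)
    direction /= np.linalg.norm(direction)
    for t in range(n_steps):
        rand_dir = rng.standard_normal(3)
        rand_dir /= np.linalg.norm(rand_dir)
        direction = persistence * direction + (1 - persistence) * rand_dir
        direction /= np.linalg.norm(direction)
        pos[t + 1] = pos[t] + speed * (dt_s / 60.0) * direction  # microns/frame
    return pos

track = persistent_walk_3d(90)     # ~30 min of tracking at a 20 s interval
print(f"net displacement: {np.linalg.norm(track[-1] - track[0]):.1f} microns")
```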

Relevance: 90.00%

Abstract:

This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are part of the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Overall, this work contributes to the literature on transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random effects models for free-flow speed estimation that take the survey design into account. All methods are rigorously validated on both real and simulated data.
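The multidimensional integral mentioned above is a multivariate normal rectangle probability of the kind an unordered (probit-style) likelihood requires. As a minimal sketch with an invented covariance, recent SciPy versions expose a multivariate normal cdf that is itself built on Genz's numerical integration routines, which can be sanity-checked against crude Monte Carlo.

```python
import numpy as np
from scipy.stats import multivariate_normal

# P(X1 < 0.5, X2 < 0.0, X3 < 1.0) for a trivariate normal (toy covariance)
mean = np.zeros(3)
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
upper = np.array([0.5, 0.0, 1.0])
p_genz = multivariate_normal(mean, cov).cdf(upper)   # Genz-based integration

# crude Monte Carlo check
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(mean, cov, size=200_000)
p_mc = np.mean((draws < upper).all(axis=1))
print(f"Genz-based cdf: {p_genz:.4f}   Monte Carlo: {p_mc:.4f}")
```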

Relevance: 90.00%

Abstract:

Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability.

We first consider in Chapter 2 a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals, for example in the aftermath of the Global Financial Crisis. Focusing on quarterly data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries.

In Chapter 3 we look closely at the role of time variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond one month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in the estimation of the coefficients, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability.

Chapter 4 focuses on the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for the statistical and economic evaluation of forecasting performance, we find that our approach of pre-selecting and validating fundamentals across bootstrap replications generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the one-month horizon, and outperforms alternative methods, including Bayesian methods, bagging, and standard forecast combinations.

Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supplies and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive high posterior support. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
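A generic random walk Metropolis-Hastings sketch, shown on an invented bivariate normal target rather than an actual MIDAS posterior: propose a Gaussian step around the current draw and accept with the usual ratio, which simplifies because the proposal is symmetric.

```python
import numpy as np

def rw_metropolis(log_post, x0, n_draws=5000, step=0.5, rng=None):
    """Random walk Metropolis-Hastings: propose x' = x + step * N(0, I) and
    accept with probability min(1, post(x') / post(x)); the symmetric
    proposal density cancels out of the acceptance ratio."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    draws = np.empty((n_draws, x.size))
    for i in range(n_draws):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance
            x, lp = prop, lp_prop
        draws[i] = x
    return draws

# toy target: standard bivariate normal log-posterior
draws = rw_metropolis(lambda z: -0.5 * z @ z, np.zeros(2))
print(draws[1000:].mean(axis=0), draws[1000:].std(axis=0))  # burn-in discarded
```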