27 results for Left-Continuous Random Walk
Abstract:
This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear, artificial intelligence techniques, namely recurrent neural networks, evolution strategies and kernel methods, in our forecasting experiment. In the experiment, these three methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; hence, our future work should combine the evolutionary and kernel methods to obtain the benefits of both.
Abstract:
Tests for random walk behaviour in the Italian stock market are presented, based on an investigation of the fractal properties of the log return series for the Mibtel index. The random walk hypothesis is evaluated against alternatives accommodating either unifractality or multifractality. Critical values for the test statistics are generated using Monte Carlo simulations of random Gaussian innovations. Evidence is reported of multifractality, and the departure from random walk behaviour is statistically significant on standard criteria. The observed pattern is attributed primarily to fat tails in the return probability distribution, associated with volatility clustering in returns measured over various time scales. © 2009 Elsevier Inc. All rights reserved.
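As an illustration of the Monte Carlo step described above, the sketch below simulates Gaussian-innovation return series under the random walk null and tabulates the null distribution of a scaling statistic. The statistic (a simple rescaled-range Hurst estimate), the window sizes and the sample length are illustrative assumptions rather than the paper's actual unifractal/multifractal test statistics; numpy is assumed.

    import numpy as np

    def hurst_rs(x):
        """Rescaled-range (R/S) estimate of the Hurst exponent of a return series.
        A value near 0.5 is consistent with uncorrelated (random walk) increments."""
        n = len(x)
        sizes = [n // k for k in (1, 2, 4, 8) if n // k >= 16]
        log_rs, log_n = [], []
        for size in sizes:
            rs_vals = []
            for start in range(0, n - size + 1, size):
                seg = x[start:start + size]
                dev = np.cumsum(seg - seg.mean())
                r = dev.max() - dev.min()
                s = seg.std(ddof=1)
                if s > 0:
                    rs_vals.append(r / s)
            log_rs.append(np.log(np.mean(rs_vals)))
            log_n.append(np.log(size))
        return np.polyfit(log_n, log_rs, 1)[0]  # slope ~ Hurst exponent

    def critical_values(n_obs, n_sim=2000, seed=0):
        """Simulate Gaussian-innovation series and tabulate the null distribution
        of the statistic, mimicking the Monte Carlo step of the test."""
        rng = np.random.default_rng(seed)
        stats = [hurst_rs(rng.standard_normal(n_obs)) for _ in range(n_sim)]
        return np.percentile(stats, [2.5, 97.5])

    lo, hi = critical_values(n_obs=2048)
    print(f"95% Monte Carlo band for H under the random walk null: [{lo:.3f}, {hi:.3f}]")

An observed statistic falling outside the simulated band would then be read as a statistically significant departure from random walk behaviour, in the spirit of the test above.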
Abstract:
In this paper, we investigate the use of manifold learning techniques to enhance the separation properties of standard graph kernels. The idea stems from the observation that when we perform multidimensional scaling on the distance matrices extracted from the kernels, the resulting data tends to be clustered along a curve that wraps around the embedding space, a behavior that suggests that long-range distances are not estimated accurately, resulting in an increased curvature of the embedding space. Hence, we propose to use a number of manifold learning techniques to compute a low-dimensional embedding of the graphs in an attempt to unfold the embedding manifold, and increase the class separation. We perform an extensive experimental evaluation on a number of standard graph datasets using the shortest-path (Borgwardt and Kriegel, 2005), graphlet (Shervashidze et al., 2009), random walk (Kashima et al., 2003) and Weisfeiler-Lehman (Shervashidze et al., 2011) kernels. We observe the most significant improvement in the case of the graphlet kernel, which fits with the observation that neglecting the locational information of the substructures leads to a stronger curvature of the embedding manifold. On the other hand, the Weisfeiler-Lehman kernel partially mitigates the locality problem by using the node label information, and thus does not clearly benefit from manifold learning. Interestingly, our experiments also show that the unfolding of the space seems to reduce the performance gap between the examined kernels.
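A minimal sketch of the pipeline described above, assuming scikit-learn: a random positive semidefinite Gram matrix stands in for a real graph-kernel matrix, kernel-induced distances are extracted, and a plain MDS embedding is compared with a manifold-learning (Isomap) embedding of the same distances. The data and class labels are toy placeholders, not any of the benchmark datasets named in the abstract.

    import numpy as np
    from sklearn.manifold import MDS, Isomap
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Stand-in for a precomputed graph-kernel matrix K (e.g. shortest-path or graphlet kernel).
    rng = np.random.default_rng(0)
    X = rng.standard_normal((60, 5))
    K = X @ X.T                      # any positive semidefinite Gram matrix will do here
    y = (X[:, 0] > 0).astype(int)    # toy class labels

    # Kernel-induced distances: d(i, j)^2 = K(i, i) + K(j, j) - 2 K(i, j).
    diag = np.diag(K)
    D = np.sqrt(np.maximum(diag[:, None] + diag[None, :] - 2 * K, 0.0))

    # Classical MDS embedding versus a manifold-learning (Isomap) embedding of the same distances.
    mds_emb = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
    iso_emb = Isomap(n_components=2, metric="precomputed").fit_transform(D)

    for name, emb in [("MDS", mds_emb), ("Isomap", iso_emb)]:
        acc = cross_val_score(SVC(kernel="linear"), emb, y, cv=5).mean()
        print(f"{name} embedding: 5-fold accuracy = {acc:.2f}")

On real graph kernels the claim above is that the Isomap-style unfolding tends to improve class separation most where the kernel ignores the location of substructures, as with the graphlet kernel.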
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
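A minimal sketch of the mixture proposal idea, under strong simplifying assumptions: the "path" is a short vector, the target is a toy double-well log-density, and the approximating distribution is an arbitrary Gaussian rather than the variational Gaussian process approximation used in the paper (numpy and scipy assumed).

    import numpy as np
    from scipy import stats

    def log_target(x):
        # Toy stand-in for the log posterior over a (discretised) diffusion path:
        # a double-well marginal in each coordinate.
        return np.sum(-(x**2 - 1.0)**2)

    # Gaussian approximation of the target (in the paper: the variational GP approximation).
    approx = stats.multivariate_normal(mean=np.zeros(4), cov=0.5 * np.eye(4))

    def mixture_sampler(n_iter=5000, p_indep=0.3, rw_scale=0.2, seed=1):
        rng = np.random.default_rng(seed)
        x = approx.rvs(random_state=rng)
        samples = []
        for _ in range(n_iter):
            if rng.random() < p_indep:
                # Independence proposal drawn from the approximating distribution.
                x_new = approx.rvs(random_state=rng)
                log_alpha = (log_target(x_new) - log_target(x)
                             + approx.logpdf(x) - approx.logpdf(x_new))
            else:
                # Symmetric random walk proposal around the current state.
                x_new = x + rw_scale * rng.standard_normal(x.shape)
                log_alpha = log_target(x_new) - log_target(x)
            if np.log(rng.random()) < log_alpha:
                x = x_new
            samples.append(x)
        return np.array(samples)

    chain = mixture_sampler()
    print("posterior mean estimate:", chain[2000:].mean(axis=0).round(3))

The mixture lets the independence moves make large jumps between posterior modes when the approximation is good, while the random walk moves keep the chain exploring locally when it is not.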
Abstract:
This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies. © 2010 Elsevier B.V. All rights reserved.
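A hedged sketch of the forecasting comparison described above, on synthetic data rather than the US inflation series, using kernel ridge regression as a simple stand-in for the kernel recursive least squares predictor; the lag order and hyperparameters are illustrative choices (scikit-learn assumed).

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    # Synthetic stand-in for an inflation series (the paper uses US inflation data).
    rng = np.random.default_rng(0)
    n = 300
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.7 * y[t - 1] - 0.2 * y[t - 1]**2 + 0.1 * rng.standard_normal()

    p = 4  # autoregressive order (an illustrative choice)
    X = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])  # lagged regressors
    target = y[p:]

    split = 200
    model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=5.0)
    model.fit(X[:split - p], target[:split - p])

    # One-step-ahead forecasts on the hold-out period.
    kernel_pred = model.predict(X[split - p:])
    naive_pred = y[split - 1:n - 1]   # naive random walk: forecast = last observed value
    actual = y[split:]

    def rmse(a, b):
        return np.sqrt(np.mean((a - b)**2))

    print("kernel AR   RMSE:", round(rmse(kernel_pred, actual), 4))
    print("random walk RMSE:", round(rmse(naive_pred, actual), 4))

The paper's comparison is of exactly this out-of-sample kind: the candidate nonlinear models, with and without monetary aggregates as extra regressors, are judged by whether they beat the naive random walk benchmark.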
Abstract:
In this paper, we use the quantum Jensen-Shannon divergence as a means to establish the similarity between a pair of graphs and to develop a novel graph kernel. In quantum theory, the quantum Jensen-Shannon divergence is defined as a distance measure between quantum states. In order to compute the quantum Jensen-Shannon divergence between a pair of graphs, we first need to associate a density operator with each of them. Hence, we decide to simulate the evolution of a continuous-time quantum walk on each graph and we propose a way to associate a suitable quantum state with it. With the density operator of this quantum state to hand, the graph kernel is defined as a function of the quantum Jensen-Shannon divergence between the graph density operators. We evaluate the performance of our kernel on several standard graph datasets from bioinformatics. We use Principal Component Analysis (PCA) on the kernel matrix to embed the graphs into a feature space for classification. The experimental results demonstrate the effectiveness of the proposed approach. © 2013 Springer-Verlag.
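A minimal numerical sketch of the kernel's central quantity, under simplifying assumptions: the adjacency matrix is used as the walk Hamiltonian, the density operator is taken from the walk state at a single time rather than the paper's construction, and exp(-QJSD) is an illustrative choice of kernel function (numpy assumed).

    import numpy as np

    def von_neumann_entropy(rho):
        """Von Neumann entropy S(rho) = -Tr(rho log rho), via the eigenvalues of rho."""
        eigvals = np.linalg.eigvalsh(rho)
        eigvals = eigvals[eigvals > 1e-12]
        return float(-np.sum(eigvals * np.log2(eigvals)))

    def qjsd(rho, sigma):
        """Quantum Jensen-Shannon divergence between density operators rho and sigma."""
        mix = 0.5 * (rho + sigma)
        return von_neumann_entropy(mix) - 0.5 * (von_neumann_entropy(rho)
                                                 + von_neumann_entropy(sigma))

    def density_operator_from_walk(adjacency, t=1.0):
        """Density operator of a continuous-time quantum walk started in the uniform state.
        Using the adjacency matrix as the Hamiltonian is one common convention."""
        n = adjacency.shape[0]
        psi0 = np.ones(n, dtype=complex) / np.sqrt(n)
        eigvals, eigvecs = np.linalg.eigh(adjacency)
        U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
        psi_t = U @ psi0
        return np.outer(psi_t, psi_t.conj())

    # Toy example: a path graph and a star graph on 4 nodes.
    path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
    star = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]], dtype=float)

    rho, sigma = density_operator_from_walk(path), density_operator_from_walk(star)
    d = qjsd(rho, sigma)
    print("QJSD:", round(d, 4), "  kernel value exp(-QJSD):", round(np.exp(-d), 4))

With the full kernel matrix over a dataset of graphs in hand, PCA (or kernel PCA) of that matrix gives the feature-space embedding used for classification.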
Abstract:
Kernel methods provide a way to apply a wide range of learning techniques to complex and structured data by shifting the representational problem from one of finding an embedding of the data to that of defining a positive semidefinite kernel. In this paper, we propose a novel kernel on unattributed graphs where the structure is characterized through the evolution of a continuous-time quantum walk. More precisely, given a pair of graphs, we create a derived structure whose degree of symmetry is maximum when the original graphs are isomorphic. With this new graph to hand, we compute the density operators of the quantum systems representing the evolutions of two suitably defined quantum walks. Finally, we define the kernel between the two original graphs as the quantum Jensen-Shannon divergence between these two density operators. The experimental evaluation shows the effectiveness of the proposed approach. © 2013 Springer-Verlag.
Abstract:
This report seeks to make concrete some of the ideas we have been discussing about sensible priors for winds over the ocean. In particular, random field models are reviewed, as are permissible covariance functions. The criteria that these covariance functions must satisfy in order for the vorticity and divergence to exist and be continuous are defined. The use of the Helmholtz theorem is discussed, and possible choices for the covariances are suggested.
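The Helmholtz theorem mentioned above expresses the horizontal wind field through a velocity potential and a streamfunction, so the existence and continuity of divergence and vorticity translate into differentiability requirements on those two scalar fields. A standard statement, in generic 2D notation that may differ from the report's, is

    \mathbf{u} = \nabla \chi + \mathbf{k} \times \nabla \psi, \qquad
    \delta = \nabla \cdot \mathbf{u} = \nabla^{2} \chi, \qquad
    \zeta = \mathbf{k} \cdot (\nabla \times \mathbf{u}) = \nabla^{2} \psi,

where \chi is the velocity potential, \psi the streamfunction, \delta the divergence and \zeta the vorticity; for \delta and \zeta to exist and be continuous, the covariance functions chosen for \chi and \psi must be sufficiently differentiable.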
Abstract:
The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with "negative absorption" of the interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors, the random distributed feedback fibre laser, was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by the Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although the effective reflection due to Rayleigh scattering is extremely small (~0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the generation of a stationary near-Gaussian beam with a narrow spectrum. A random distributed feedback fibre laser has efficiency and performance that are comparable to and even exceed those of similar conventional fibre lasers. The key features of the generated radiation of random distributed feedback fibre lasers include a stationary narrow-band continuous modeless spectrum that is free of mode competition, nonlinear power broadening, and an output beam with a Gaussian profile in the fundamental transverse mode (generated both in single-mode and multi-mode fibres). This review presents the current status of research in the field of random fibre lasers and shows their potential and prospects. We start with an introductory overview of conventional distributed feedback lasers and traditional random lasers to set the stage for the discussion of random fibre lasers. We then present a theoretical analysis and experimental studies of various random fibre laser configurations, including widely tunable, multi-wavelength, narrow-band generation, and random fibre lasers operating in different spectral bands in the 1-1.6 μm range. Then we discuss existing and future applications of random fibre lasers, including telecommunication and distributed long-reach sensor systems. A theoretical description of random lasers is very challenging and is strongly linked with the theory of disordered systems and kinetic theory.
We outline two key models governing the generation of random fibre lasers: the average power balance model and the nonlinear Schrödinger equation based model. Recently invented random distributed feedback fibre lasers represent a new and exciting field of research that brings together such diverse areas of science as laser physics, the theory of disordered systems, fibre optics and nonlinear science. Stable random generation in optical fibre opens up new possibilities for research on wave transport and localization in disordered media. We hope that this review will provide background information for research in various fields and will stimulate cross-disciplinary collaborations on random fibre lasers. © 2014 Elsevier B.V.
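For reference, one common textbook form of the nonlinear Schrödinger equation based propagation model for the slowly varying field envelope A(z, t) in a fibre with distributed Raman gain is sketched below; the coefficients and the exact model used in the papers reviewed here may differ.

    \frac{\partial A}{\partial z}
    = -\frac{i\beta_{2}}{2}\,\frac{\partial^{2} A}{\partial t^{2}}
    + i\gamma |A|^{2} A
    + \frac{g_{R} - \alpha}{2}\, A,

where \beta_{2} is the group-velocity dispersion, \gamma the Kerr nonlinearity coefficient, g_{R} the distributed Raman gain and \alpha the fibre loss. The average power balance model is obtained by discarding the phase information and tracking only the average powers of the forward- and backward-propagating pump and Stokes waves along the fibre.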
Abstract:
The study of complex networks has recently attracted increasing interest because of the large variety of systems that can be modeled using graphs. A fundamental operation in the analysis of complex networks is that of measuring the centrality of a vertex. In this paper, we propose to measure vertex centrality using a continuous-time quantum walk. More specifically, we relate the importance of a vertex to the influence that its initial phase has on the interference patterns that emerge during the quantum walk evolution. To this end, we make use of the quantum Jensen-Shannon divergence between two suitably defined quantum states. We investigate how the importance varies as we change the initial state of the walk and the Hamiltonian of the system. We find that, for a suitable combination of the two, the importance of a vertex is almost linearly correlated with its degree. Finally, we evaluate the proposed measure on two commonly used networks. © 2014 Springer-Verlag Berlin Heidelberg.
Abstract:
The analysis of complex networks is usually based on key properties such as small-worldness and the vertex degree distribution. The presence of symmetric motifs, on the other hand, has been related to redundancy and thus to the robustness of the networks. In this paper we propose a method for detecting approximate axial symmetries in networks. For each pair of nodes, we define a continuous-time quantum walk which is evolved through time. By measuring the probability that the quantum walker visits each node of the network in this time frame, we are able to determine whether the two vertices are symmetrical with respect to any axis of the graph. Moreover, we show that we are able to successfully detect approximate axial symmetries too. We show the efficacy of our approach by analysing both synthetic and real-world data. © 2012 Springer-Verlag Berlin Heidelberg.
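A hedged sketch of the basic ingredient described above: evolve a continuous-time quantum walk from each node of a candidate pair and compare the resulting node-visiting probability profiles. The adjacency matrix as Hamiltonian, the localized initial states and the averaging over sample times are assumptions made for illustration; the paper's exact construction for a pair of nodes may differ (numpy assumed).

    import numpy as np

    def visiting_probabilities(adjacency, start, times):
        """Average probability of observing a continuous-time quantum walk at each node,
        over a set of sample times, when the walk starts at `start`.
        The adjacency matrix is used as the walk Hamiltonian (one common convention)."""
        n = adjacency.shape[0]
        eigvals, eigvecs = np.linalg.eigh(adjacency)
        psi0 = np.zeros(n, dtype=complex)
        psi0[start] = 1.0
        probs = np.zeros(n)
        for t in times:
            U = eigvecs @ np.diag(np.exp(-1j * eigvals * t)) @ eigvecs.conj().T
            probs += np.abs(U @ psi0) ** 2
        return probs / len(times)

    # A path graph on 5 nodes: nodes 0 and 4 (and 1 and 3) are related by an axial symmetry.
    A = np.zeros((5, 5))
    for i in range(4):
        A[i, i + 1] = A[i + 1, i] = 1.0

    times = np.linspace(0.1, 5.0, 50)
    p0 = visiting_probabilities(A, 0, times)
    p4 = visiting_probabilities(A, 4, times)
    p2 = visiting_probabilities(A, 2, times)

    # Symmetric node pairs give (approximately) mirror-image probability profiles.
    print("||p0 - reversed(p4)|| =", round(np.linalg.norm(p0 - p4[::-1]), 6))
    print("||p0 - reversed(p2)|| =", round(np.linalg.norm(p0 - p2[::-1]), 6))

For approximate symmetries, the comparison of the two probability profiles is thresholded rather than required to match exactly.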
Abstract:
A number of recent studies have investigated the introduction of decoherence in quantum walks and the resulting transition to classical random walks. Interestingly, it has been shown that algorithmic properties of quantum walks with decoherence, such as the spreading rate, are sometimes better than those of their purely quantum counterparts. Not only do quantum walks with decoherence provide a generalization of quantum walks that naturally encompasses both the quantum and classical cases, but they also give rise to new and different probability distributions. The application of quantum walks with decoherence to large graphs is limited by the necessity of evolving a state vector whose size is quadratic in the number of nodes of the graph, as opposed to the linear state vector of the purely quantum (or classical) case. In this technical report, we show how to use perturbation theory to reduce the computational complexity of evolving a continuous-time quantum walk subject to decoherence. More specifically, given a graph over n nodes, we show how to approximate the eigendecomposition of the n²×n² Lindblad super-operator from the eigendecomposition of the n×n graph Hamiltonian.
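For reference, a continuous-time quantum walk with decoherence evolves the density matrix according to a Lindblad master equation of the standard form below; the specific choice of decoherence operators L_k (for instance projectors onto the graph nodes) is a modelling decision and is only an assumption here.

    \frac{d\rho}{dt} = -i[H, \rho]
    + \sum_{k} \left( L_{k} \rho L_{k}^{\dagger}
    - \tfrac{1}{2}\{ L_{k}^{\dagger} L_{k}, \rho \} \right),

where H is the n×n graph Hamiltonian and \rho the n×n density matrix. Vectorising \rho turns the right-hand side into the n²×n² Lindblad super-operator whose eigendecomposition the report approximates perturbatively from that of H.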