14 results for Long memory stochastic process

in Aston University Research Archive


Relevance:

100.00%

Abstract:

The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. Among the nested model specifications that are investigated, in 11 out of 12 cases, daily returns are most appropriately characterized by a variant of the MMAR that applies a multifractal time-deformation process to NIID returns. There is no evidence of long memory.
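
As an illustration of the compounding idea behind the MMAR (not the authors' code), the following Python sketch generates NIID Gaussian returns in a multifractal trading time produced by a simple binomial multiplicative cascade; the cascade weight m0, the depth k and the scale sigma are illustrative assumptions.

```python
import numpy as np

def binomial_cascade(k, m0=0.6, rng=None):
    """Binomial multiplicative cascade on 2**k intervals: a toy trading-time density."""
    rng = np.random.default_rng() if rng is None else rng
    mass = np.ones(1)
    for _ in range(k):
        # split each interval, assigning weight m0 or 1-m0 at random
        left = rng.choice([m0, 1 - m0], size=mass.size)
        mass = np.column_stack([mass * left, mass * (1 - left)]).ravel()
    return mass / mass.sum()

def mmar_returns(k=12, sigma=0.01, rng=None):
    """NIID Gaussian returns compounded with multifractal trading time (illustrative)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = binomial_cascade(k, rng=rng)           # increments of the deformed time
    # Brownian increments in deformed time: std proportional to sqrt(d theta)
    return sigma * np.sqrt(theta * theta.size) * rng.standard_normal(theta.size)

r = mmar_returns(rng=np.random.default_rng(0))
print(r.std(), r.size)
```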

Relevance:

100.00%

Abstract:

The properties of an iterative procedure for the estimation of the parameters of an ARFIMA process are investigated in a Monte Carlo study. The estimation procedure is applied to stock returns data for 15 countries. © 2012.
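
For readers unfamiliar with ARFIMA processes, a minimal sketch of the fractional-differencing construction is given below: it simulates an ARFIMA(0, d, 0) series by applying the truncated (1 - L)^(-d) filter to white noise. This is not the paper's iterative estimation procedure, and the parameter values are illustrative.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1 - L)**d expanded to n lags (standard recursion)."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def arfima_0d0(n, d, sigma=1.0, rng=None):
    """Simulate ARFIMA(0, d, 0) by applying the truncated (1 - L)**(-d) filter to white noise."""
    rng = np.random.default_rng() if rng is None else rng
    eps = sigma * rng.standard_normal(n)
    w = frac_diff_weights(-d, n)
    return np.array([w[:t + 1] @ eps[t::-1] for t in range(n)])

x = arfima_0d0(1000, d=0.3, rng=np.random.default_rng(1))
print(x[:5])
```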

Relevance:

100.00%

Abstract:

This paper examines whether the observed long memory behavior of log-range series is to some extent spurious and whether it can be explained by the presence of structural breaks. Utilizing stock market data we show that the characterization of log-range series as long memory processes can be a strong assumption. Moreover, we find that all examined series experience a large number of significant breaks. Once the breaks are accounted for, the volatility persistence is eliminated. Overall, the findings suggest that volatility can be adequately represented, at least in-sample, through a multiple breaks process and a short run component.
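
A toy illustration of the mechanism (hypothetical data and break dates, not the paper's procedure): a short-memory series whose mean shifts at a few break points looks highly persistent until the segment means are removed.

```python
import numpy as np

def ar1_coef(x):
    """OLS slope of x_t on x_{t-1}: a simple persistence measure."""
    x0, x1 = x[:-1] - x[:-1].mean(), x[1:] - x[1:].mean()
    return (x0 @ x1) / (x0 @ x0)

def demean_by_segments(x, breaks):
    """Remove the segment mean between consecutive break dates."""
    y = x.copy()
    edges = [0, *breaks, len(x)]
    for a, b in zip(edges[:-1], edges[1:]):
        y[a:b] -= y[a:b].mean()
    return y

# short-memory noise around a mean that shifts at two (assumed known) break points
rng = np.random.default_rng(2)
breaks = [400, 800]
level = np.repeat([0.0, 0.8, -0.5], [400, 400, 400])
log_range = level + 0.3 * rng.standard_normal(1200)

print("raw persistence:   ", round(ar1_coef(log_range), 3))
print("break-adjusted:    ", round(ar1_coef(demean_by_segments(log_range, breaks)), 3))
```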

Relevance:

100.00%

Abstract:

The properties of statistical tests for hypotheses concerning the parameters of the multifractal model of asset returns (MMAR) are investigated, using Monte Carlo techniques. We show that, in the presence of multifractality, conventional tests of long memory tend to over-reject the null hypothesis of no long memory. Our test addresses this issue by jointly estimating long memory and multifractality. The estimation and test procedures are applied to exchange rate data for 12 currencies. In 11 cases, the exchange rate returns are accurately described by compounding a NIID series with a multifractal time-deformation process. There is no evidence of long memory.

Relevance:

100.00%

Abstract:

Stochastic anti-resonance, that is, resonant enhancement of randomness caused by polarization mode beatings, is analyzed both numerically and analytically using the example of a fibre Raman amplifier with randomly varying birefringence. As a result of such anti-resonance, growth of the polarization mode dispersion causes an escape of the signal state of polarization from a metastable state corresponding to the pulling of the signal towards the pump state of polarization. This phenomenon reveals itself in an abrupt growth of gain fluctuations as well as in a drop of the Hurst parameter and the Kramers length, which characterize long memory in the system and noise-induced escape from the polarization-pulling state. The results, based on an analytical multiscale averaging technique, agree perfectly with the data obtained by direct numerical simulation of the underlying stochastic differential equations. This outcome would allow the cumbersome numerical simulations to be replaced for real-world extra-long high-speed communication systems.
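
The Hurst parameter mentioned above is a standard measure of long memory; a minimal rescaled-range (R/S) estimator is sketched below on synthetic data. This is only one illustrative way of computing such a parameter, not the paper's multiscale averaging analysis.

```python
import numpy as np

def hurst_rs(x, min_chunk=16):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D series."""
    n = len(x)
    sizes = [s for s in (2 ** k for k in range(4, 20)) if min_chunk <= s <= n // 4]
    log_s, log_rs = [], []
    for s in sizes:
        rs = []
        for start in range(0, n - s + 1, s):
            seg = x[start:start + s]
            dev = np.cumsum(seg - seg.mean())      # cumulative deviation from the mean
            rng_, sd = dev.max() - dev.min(), seg.std()
            if sd > 0:
                rs.append(rng_ / sd)
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_s, log_rs, 1)[0]         # slope of log(R/S) vs log(scale)

print(hurst_rs(np.random.default_rng(3).standard_normal(4096)))  # ~0.5 for white noise
```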

Relevance:

100.00%

Abstract:

This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double-well system, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
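
For the Ornstein-Uhlenbeck benchmark mentioned above, a minimal sketch (with illustrative parameters, not taken from the thesis) shows why it is a convenient test case: the path can be simulated by Euler-Maruyama and its exact likelihood evaluated in closed form, since the OU transition density is Gaussian.

```python
import numpy as np

def simulate_ou(theta, sigma, x0, dt, n, rng):
    """Euler-Maruyama path of dX = -theta*X dt + sigma dW."""
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def ou_exact_loglik(x, theta, sigma, dt):
    """Exact log-likelihood of an observed OU path (Gaussian transition density)."""
    mean = x[:-1] * np.exp(-theta * dt)
    var = sigma ** 2 / (2 * theta) * (1 - np.exp(-2 * theta * dt))
    resid = x[1:] - mean
    return -0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

rng = np.random.default_rng(4)
path = simulate_ou(theta=2.0, sigma=1.0, x0=0.0, dt=0.01, n=2000, rng=rng)
print(ou_exact_loglik(path, theta=2.0, sigma=1.0, dt=0.01))
```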

Relevance:

100.00%

Abstract:

This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double-well system, and the multivariate chaotic stochastic Lorenz ’63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.

Relevance:

100.00%

Abstract:

The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
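
A minimal sketch of the contrast described above, with made-up data and an assumed metric (not taken from the thesis): Gaussian process regression with a squared-exponential kernel whose distance matrix M is either diagonal (the conventional choice) or a general positive definite matrix aligned with a hidden low-dimensional direction.

```python
import numpy as np

def se_kernel(X1, X2, M, amp=1.0):
    """Squared-exponential kernel with metric M: k(x, x') = amp * exp(-0.5 (x-x')' M (x-x'))."""
    d = X1[:, None, :] - X2[None, :, :]
    return amp * np.exp(-0.5 * np.einsum('ijk,kl,ijl->ij', d, M, d))

def gp_predict(Xtr, ytr, Xte, M, noise=1e-2):
    """GP posterior mean at test inputs, given training data and metric M."""
    K = se_kernel(Xtr, Xtr, M) + noise * np.eye(len(Xtr))
    Ks = se_kernel(Xte, Xtr, M)
    return Ks @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 3))
y = np.sin(X @ np.array([1.0, 1.0, 0.0]))          # signal lives along one hidden direction
M_diag = np.diag([1.0, 1.0, 1.0])                  # conventional diagonal metric
w = np.array([[1.0, 1.0, 0.0]])
M_gen = w.T @ w + 1e-3 * np.eye(3)                 # general metric aligned with the hidden direction
Xte = rng.standard_normal((5, 3))
print(gp_predict(X, y, Xte, M_diag))
print(gp_predict(X, y, Xte, M_gen))
print(np.sin(Xte @ np.array([1.0, 1.0, 0.0])))     # targets, for comparison
```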

Relevance:

100.00%

Abstract:

Deformation microstructures in two batches of commercially pure copper (A and B) of very similar composition have been studied after rolling reductions from 5% to 95%. X-ray diffraction, optical metallography, scanning electron microscopy in back-scattered mode, and transmission and scanning electron microscopy have been used to examine the deformation microstructure. At low strains (~10%) the deformation is accommodated by uniform octahedral slip. Microbands, which occur as sheet-like features usually on the {111} slip planes, are formed after 10% reduction. The misorientations between microbands and the matrix are usually small (1–2°) and the dislocations within the bands suggest that a single slip system has been operative. The number of microbands increases with strain; they start to cluster and rotate after 60% reduction and, after 90%, they become almost perfectly aligned with the rolling direction. There were no detectable differences in deformation microstructure between the two materials up to a deformation level of 60%, but subsequently copper B started to develop shear bands, which became very profuse by 90% reduction. By contrast, copper A at this stage of deformation developed a smooth laminated structure. This difference in the deformation microstructures has been attributed to traces of an unknown impurity in copper B which inhibit recovery of work hardening. The preferred orientations of both were typical of deformed copper, although the presence of shear bands was associated with a slightly weaker texture. The effects of rolling temperature and grain size on deformation microstructure were also investigated. It was concluded that lowering the rolling temperature or increasing the initial grain size encourages the material to develop shear bands after heavy deformation. Recovery and recrystallization have been studied in both materials during annealing. During recrystallization the growth of new grains showed quite different characteristics in the two cases. Where shear bands were present these acted as nucleation sites and produced a wide spread of recrystallized grain orientations; the resulting annealing textures were very weak. In the absence of shear bands, nucleation occurs by a remarkably long-range bulging process which creates the cube orientation and an intensely sharp annealing texture. Cube-oriented regions occur in long bands of highly elongated and well recovered cells which contain long-range cumulative misorientations. They are transition bands with structural characteristics ideally suited for nucleation of recrystallization. Shear banding inhibits the cube texture both by creating alternative nuclei and by destroying the microstructural features necessary for cube nucleation.

Relevance:

50.00%

Abstract:

Over recent years, evidence has been accumulating in favour of the importance of long-term information as a variable which can affect the success of short-term recall. Lexicality, word frequency, imagery and meaning have all been shown to augment short term recall performance. Two competing theories as to the causes of this long-term memory influence are outlined and tested in this thesis. The first approach is the order-encoding account, which ascribes the effect to the usage of resources at encoding, hypothesising that word lists which require less effort to process will benefit from increased levels of order encoding, in turn enhancing recall success. The alternative view, trace redintegration theory, suggests that order is automatically encoded phonologically, and that long-term information can only influence the interpretation of the resultant memory trace. The free recall experiments reported here attempted to determine the importance of order encoding as a facilitatory framework and to determine the locus of the effects of long-term information in free recall. Experiments 1 and 2 examined the effects of word frequency and semantic categorisation over a filled delay, and experiments 3 and 4 did the same for immediate recall. Free recall was improved by both long-term factors tested. Order information was not used over a short filled delay, but was evident in immediate recall. Furthermore, it was found that both long-term factors increased the amount of order information retained. Experiment 5 induced an order encoding effect over a filled delay, leaving a picture of short-term processes which are closely associated with long-term processes, and which fit conceptions of short-term memory being part of language processes rather better than either the encoding or the retrieval-based models. Experiments 6 and 7 aimed to determine to what extent phonological processes were responsible for the pattern of results observed. Articulatory suppression affected the encoding of order information where speech rate had no direct influence, suggesting that it is ease of lexical access which is the most important factor in the influence of long-term memory on immediate recall tasks. The evidence presented in this thesis does not offer complete support for either the retrieval-based account or the order encoding account of long-term influence. Instead, the evidence sits best with models that are based upon language-processing. The path urged for future research is to find ways in which this diffuse model can be better specified, and which can take account of the versatility of the human brain.

Relevance:

40.00%

Abstract:

Stochastic differential equations arise naturally in a range of contexts, from financial to environmental modeling. Current solution methods are limited in their representation of the posterior process in the presence of data. In this work, we present a novel Gaussian process approximation to the posterior measure over paths for a general class of stochastic differential equations in the presence of observations. The method is applied to two simple problems: the Ornstein-Uhlenbeck process, for which the exact solution is known and can be used for comparison, and the double-well system, for which standard approaches such as the ensemble Kalman smoother fail to provide a satisfactory result. Experiments show that our variational approximation is viable and the results are very promising, as the variational approximate solution outperforms standard Gaussian process regression for non-Gaussian Markov processes.
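
The double-well system referred to above is commonly written with drift f(x) = 4x(1 - x^2); a minimal Euler-Maruyama simulation (illustrative parameters, not the paper's setup) shows the bimodal, non-Gaussian behaviour that makes it a hard case for Gaussian-assumption smoothers.

```python
import numpy as np

def simulate_double_well(sigma=0.5, dt=0.01, n=20000, x0=0.0, rng=None):
    """Euler-Maruyama path of the double-well SDE dX = 4*X*(1 - X**2) dt + sigma dW."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        drift = 4.0 * x[t - 1] * (1.0 - x[t - 1] ** 2)
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_double_well(rng=np.random.default_rng(6))
# the marginal is bimodal (wells near +1 and -1), which a single Gaussian cannot capture
print(np.histogram(path, bins=[-2, -0.5, 0.5, 2])[0])
```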

Relevance:

40.00%

Abstract:

We report a new concept of a biochemical sensor device based on long-period grating structures UV-inscribed in D-fibre. The surrounding-medium refractive index sensitivity of the devices has been enhanced significantly by a hydrofluoric acid etching process. The devices have been used to measure sugar concentrations, clearly showing an encoding relation between the chemical concentration and the grating spectral response and demonstrating their capability for potential biochemical sensing applications.

Relevance:

40.00%

Abstract:

This article demonstrates the use of embedded fibre Bragg gratings as a vector bending sensor to monitor two-dimensional shape deformation of a shape memory polymer plate. The shape memory polymer plate was made using thermal-responsive epoxy-based shape memory polymer materials, and the two fibre Bragg grating sensors were orthogonally embedded, one on the top and the other on the bottom layer of the plate, in order to measure the strain distribution in the longitudinal and transverse directions separately and also to provide a temperature reference. When the shape memory polymer plate was bent at different angles, the Bragg wavelengths of the embedded fibre Bragg gratings showed a red-shift of 50 pm/° caused by the bend-induced tensile strain on the plate surface. The finite element method was used to analyse the stress distribution for the whole shape recovery process. The strain transfer rate between the shape memory polymer and the optical fibre, calculated by the finite element method and confirmed by experimental results, was around 0.25. During the experiment, the embedded fibre Bragg gratings showed very high temperature sensitivity due to the high thermal expansion coefficient of the shape memory polymer: around 108.24 pm/°C below the glass transition temperature (Tg) and 47.29 pm/°C above Tg. Therefore, the orthogonal arrangement of the two fibre Bragg grating sensors can provide a temperature compensation function, as one of the fibre Bragg gratings measures only the temperature while the other is subjected to the directional deformation. © The Author(s) 2013.
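
A small worked example of the temperature compensation described above, using the sensitivities quoted in the abstract and hypothetical wavelength shifts as inputs: the transverse grating is assumed to respond to temperature only, so subtracting its shift from the longitudinal grating's shift isolates the bend contribution.

```python
# Temperature-compensated bend reading from the two orthogonal FBGs described above.
# Sensitivities are taken from the abstract (below Tg); the measured shifts are hypothetical.
K_BEND = 50.0      # pm per degree of bend (longitudinal FBG)
K_TEMP = 108.24    # pm per degree Celsius (assumed equal for both embedded FBGs)

def decode(d_lambda_long_pm, d_lambda_trans_pm):
    """Transverse FBG sees only temperature; subtract its shift from the longitudinal FBG."""
    d_temp = d_lambda_trans_pm / K_TEMP                        # temperature change, degC
    d_bend = (d_lambda_long_pm - d_lambda_trans_pm) / K_BEND   # bend angle, degrees
    return d_bend, d_temp

print(decode(d_lambda_long_pm=1580.0, d_lambda_trans_pm=1082.4))  # -> (~9.95 deg, 10.0 degC)
```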