43 results for non-Gaussian volatility sequences
Abstract:
Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, peak-to-trough separation in the 2nd derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of 2 Gaussian edges (eg 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model, in which the input signature is the 2nd derivative of the luminance profile, and the templates are applied to this signature at the zero-crossings. The templates are Gaussian derivative receptive fields that covary in width and length to form a self-similar set (ie same shape, different sizes). This naturally predicts that shorter edges should look sharper. As edge length gets shorter, responses of longer templates drop more than shorter ones, and so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale-invariance implied by self-similarity, and a good fit was obtained from templates with a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
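As a rough sketch of the winning template model (a sketch under assumptions: odd-symmetric Gaussian-derivative templates matched, at unit norm, to the 2nd-derivative signature at the zero-crossing; the paper's 2-D receptive fields and their length/width covariation are not reproduced here), blur estimation could look like:

```python
import numpy as np
from scipy.special import erf

def gaussian_edge(x, sigma, contrast=0.3):
    """Luminance profile of a Gaussian-blurred edge (cumulative Gaussian)."""
    return contrast * 0.5 * (1.0 + erf(x / (sigma * np.sqrt(2.0))))

def template(x, s):
    """Odd-symmetric Gaussian-derivative receptive field of scale s.
    The edge's 2nd-derivative signature is odd about the zero-crossing,
    so an odd template is used here (an assumption of this sketch)."""
    return -(x / s**2) * np.exp(-x**2 / (2.0 * s**2))

def estimate_blur(profile, x, scales):
    """Scale of the best-fitting template applied to the 2nd-derivative
    signature at the edge's zero-crossing."""
    signature = np.gradient(np.gradient(profile, x), x)
    responses = []
    for s in scales:
        t = template(x, s)
        t /= np.linalg.norm(t)   # self-similar set: same shape, unit norm
        responses.append(abs(signature @ t))
    return scales[int(np.argmax(responses))]

x = np.linspace(-120.0, 120.0, 2401)       # position, min of arc
scales = np.linspace(2.0, 40.0, 77)
print(estimate_blur(gaussian_edge(x, 10.0), x, scales))   # ~10
```

Under these assumptions the best-fitting template scale recovers the edge's Gaussian blur, which is the quantity the matching experiments probe.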
Abstract:
We apply well-known nonlinear diffraction theory governing the focusing of a powerful light beam of arbitrary shape in a medium with Kerr nonlinearity to the analysis of femtosecond (fs) laser processing of dielectrics in the sub-critical regime (input power less than the critical power of self-focusing). Simple analytical expressions are derived for the input beam power and spatial focusing parameter (numerical aperture) required to reach the inscription threshold. The application of non-Gaussian laser beams for better-controlled fs inscription at higher powers is also discussed. © 2007 Optical Society of America.
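For orientation, the sub-critical condition can be made concrete with the standard Marburger estimate of the critical power for a Gaussian beam; this is a textbook formula with illustrative material constants, not values taken from the paper:

```python
import math

def critical_power(wavelength, n0, n2):
    """Critical power for self-focusing of a Gaussian beam (Marburger form):
    P_cr = 3.77 * lambda^2 / (8 * pi * n0 * n2)."""
    return 3.77 * wavelength**2 / (8 * math.pi * n0 * n2)

# Textbook values for fused silica at 800 nm (illustrative only):
lam = 800e-9              # wavelength, m
n0, n2 = 1.45, 2.5e-20    # linear index; Kerr index, m^2/W
P_cr = critical_power(lam, n0, n2)
print(f"P_cr ~ {P_cr / 1e6:.1f} MW")   # ~2.6 MW

P_in = 1.0e6              # example input power, W
print("sub-critical regime:", P_in < P_cr)
```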
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analyzing network failures caused by hardware faults or overload, where the network's reaction is modeled as rerouting of traffic away from failed or congested elements. Here we model another type of network reaction to congestion: a sharp reduction of the input traffic rate through congested routes, which occurs on much shorter time scales. We consider the onset of congestion in the Internet, where a local mismatch between demand and capacity results in traffic losses, and show that it can be described as a phase transition characterized by strong non-Gaussian loss fluctuations at a mesoscopic time scale. The fluctuations, caused by noise in the input traffic, are exacerbated by the heterogeneous nature of the network, manifested in a scale-free load distribution. They result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © Copyright EPLA, 2012.
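A deliberately minimal toy (not the paper's network model) showing how a local demand-capacity mismatch produces intermittent, strongly non-Gaussian losses near the onset of congestion:

```python
import numpy as np

rng = np.random.default_rng(0)

# One link of fixed capacity fed by noisy input traffic; losses occur
# whenever demand exceeds capacity in a time bin.
T, capacity = 100_000, 110.0
demand = rng.poisson(lam=100.0, size=T).astype(float)
loss = np.maximum(demand - capacity, 0.0)

# Near onset (mean demand just below capacity) most bins lose nothing and
# a few lose a lot, so the loss fluctuations are strongly non-Gaussian.
print("fraction of bins with any loss:", (loss > 0).mean())
z = (loss - loss.mean()) / loss.std()
print("skewness:", (z**3).mean())     # far from 0 for a Gaussian
```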
Abstract:
Recent advances in our ability to watch the molecular and cellular processes of life in action, such as atomic force microscopy, optical tweezers and Förster fluorescence resonance energy transfer, raise challenges for digital signal processing (DSP) of the resulting experimental data. This article explores the unique properties of such biophysical time series that set them apart from other signals, such as the prevalence of abrupt jumps and steps, multi-modal distributions and autocorrelated noise. It exposes the problems with classical linear DSP algorithms applied to this kind of data, and describes new nonlinear and non-Gaussian algorithms that are able to extract information that is of direct relevance to biological physicists. It is argued that these new methods applied in this context typify the nascent field of biophysical DSP. Practical experimental examples are supplied.
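A minimal illustration of the article's central point, using a generic nonlinear filter rather than any algorithm from the article itself: on a stepped, noisy series, a linear moving average smears the jumps while a median filter preserves them:

```python
import numpy as np

rng = np.random.default_rng(1)

# A piecewise-constant signal with abrupt steps plus noise: the kind of
# biophysical series that defeats classical linear DSP.
steps = np.repeat([0.0, 1.0, 3.0, 2.0], 250)
y = steps + 0.4 * rng.standard_normal(steps.size)

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def median_filter(x, w):
    half = w // 2
    pad = np.pad(x, half, mode="edge")
    return np.array([np.median(pad[i:i + w]) for i in range(x.size)])

# The linear filter smears each jump over ~w samples; the nonlinear
# median filter keeps the step edges sharp.
for name, est in [("mean", moving_average(y, 21)),
                  ("median", median_filter(y, 21))]:
    err_at_jumps = np.abs(est - steps)[[249, 250, 499, 500, 749, 750]].mean()
    print(name, round(err_at_jumps, 3))
```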
Abstract:
We derive rigorously the Fokker-Planck equation that governs the statistics of soliton parameters in optical transmission lines in the presence of additive amplifier spontaneous emission. We demonstrate that these statistics are generally non-Gaussian. We present exact marginal probability-density functions for soliton parameters for some cases. A WKB approach is applied to describe the tails of the probability-density functions. © 2005 Optical Society of America.
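The abstract does not reproduce the equation; for reference, a one-dimensional Fokker-Planck equation for the density p(q, t) of a soliton parameter q, with drift A(q) and diffusion B(q), has the generic form

```latex
\frac{\partial p(q,t)}{\partial t}
  = -\frac{\partial}{\partial q}\bigl[A(q)\,p(q,t)\bigr]
    + \frac{1}{2}\,\frac{\partial^{2}}{\partial q^{2}}\bigl[B(q)\,p(q,t)\bigr] .
```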
Abstract:
For the first time, we report full numerical NLSE-based modeling of the generation properties of a random distributed feedback (DFB) fiber laser based on Rayleigh scattering. The model, which takes into account the random backscattering only via its average strength, describes well the power and spectral properties of random DFB fiber lasers. The influence of dispersion and nonlinearity on spectral and statistical properties is investigated. Evidence of non-Gaussian intensity statistics is found. © 2013 Optical Society of America.
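The abstract does not specify the numerical scheme; the standard workhorse for NLSE-based laser modelling is the split-step Fourier method, sketched below for the bare scalar NLSE (the paper's model additionally includes gain, loss and distributed Rayleigh backscattering):

```python
import numpy as np

# Split-step Fourier integrator for the scalar NLSE
#   i u_z = (beta2/2) u_tt - gamma |u|^2 u
def ssfm(u, dz, nz, dt, beta2, gamma):
    w = 2 * np.pi * np.fft.fftfreq(u.size, d=dt)       # angular frequency grid
    half_disp = np.exp(1j * beta2 / 2 * w**2 * dz / 2)
    for _ in range(nz):
        u = np.fft.ifft(half_disp * np.fft.fft(u))     # half dispersion step
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz) # full nonlinear step
        u = np.fft.ifft(half_disp * np.fft.fft(u))     # half dispersion step
    return u

t = np.linspace(-20, 20, 1024, endpoint=False)
u0 = 1.0 / np.cosh(t)                                  # fundamental soliton
u = ssfm(u0, dz=0.01, nz=1000, dt=t[1] - t[0], beta2=-1.0, gamma=1.0)
print(np.max(np.abs(u)))                               # stays ~1 for a soliton
```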
Abstract:
In nonlinear and stochastic control problems, learning an efficient feed-forward controller is not amenable to conventional neurocontrol methods. For these approaches, estimating and then incorporating uncertainty in the controller and feed-forward models can produce more robust control results. Here, we introduce a novel inversion-based neurocontroller for solving control problems involving uncertain nonlinear systems that can also compensate for multi-valued systems. The approach uses recent developments in neural networks, especially in the context of modelling statistical distributions, which are applied to forward and inverse plant models. Provided that certain conditions are met, an estimate of the intrinsic uncertainty in the outputs of a neural network can be obtained from the statistical properties of the network. More generally, multi-component distributions can be modelled by the mixture density network. Based on importance sampling from these distributions, a novel robust inverse control approach is obtained. This importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider. A nonlinear multi-variable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis. © 2004 Elsevier Ltd. All rights reserved.
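A hedged sketch of the core sampling idea only (toy plant and hand-set mixture parameters; in the paper these come from trained forward and inverse network models): the inverse model's mixture proposes candidate controls, covering both branches of a multi-valued inverse, and the forward model scores them:

```python
import numpy as np

rng = np.random.default_rng(2)

def plant(u):
    """Toy plant: sin(u) is multi-valued in u for a given output."""
    return np.sin(u) + 0.05 * rng.standard_normal(np.shape(u))

# Inverse-model mixture for a given target (hand-set here; a mixture
# density network would produce these): weights, means, std devs.
weights = np.array([0.5, 0.5])
means = np.array([0.6, np.pi - 0.6])     # the two solution branches
stds = np.array([0.2, 0.2])

y_target = np.sin(0.6)
comp = rng.choice(2, size=500, p=weights)            # sample components
u_candidates = rng.normal(means[comp], stds[comp])   # candidate controls
scores = -(plant(u_candidates) - y_target) ** 2      # forward-model fit
print("chosen control:", u_candidates[np.argmax(scores)])
```

Sampling only from the mixture keeps the search for a control law confined to regions the inverse model considers plausible, which is the constraint the abstract describes.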
Abstract:
A closed-form expression for a lower bound on the per-soliton capacity of the nonlinear optical fibre channel in the presence of (optical) amplifier spontaneous emission (ASE) noise is derived. This bound is based on a non-Gaussian conditional probability density function for the soliton amplitude jitter induced by the ASE noise and is proven to grow logarithmically as the signal-to-noise ratio increases.
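A hedged formalisation of the stated asymptotic behaviour (the abstract gives neither the constants nor the exact closed form):

```latex
C_{\text{lower}}(\mathrm{SNR}) = \alpha \log \mathrm{SNR} + \beta,
\qquad \alpha > 0 .
```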
Abstract:
This report outlines the derivation and application of a Gaussian process with a non-zero mean and a polynomial-exponential covariance function, which forms the prior wind field model used in 'autonomous' disambiguation. It is principally used because the non-zero mean permits the computation of realistic local wind vector prior probabilities, as the marginals of the full wind field prior, which are required when applying the scaled-likelihood trick. As the full prior is multivariate normal, these marginals are very simple to compute.
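The last point is a standard property of the multivariate normal: the marginal over any subset of components keeps the corresponding sub-vector of the mean and sub-block of the covariance. A minimal sketch (the numbers and names are illustrative, not from the report):

```python
import numpy as np

mu = np.array([2.0, -1.0, 0.5, 3.0])          # full wind-field prior mean
Sigma = np.array([[1.0, 0.3, 0.1, 0.0],
                  [0.3, 1.0, 0.2, 0.1],
                  [0.1, 0.2, 1.0, 0.3],
                  [0.0, 0.1, 0.3, 1.0]])      # full prior covariance

idx = [0, 1]                                  # one local wind vector
mu_marg = mu[idx]                             # marginal mean: sub-vector
Sigma_marg = Sigma[np.ix_(idx, idx)]          # marginal covariance: sub-block
print(mu_marg, Sigma_marg, sep="\n")
```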
Abstract:
The present thesis tested the hypothesis of Stanovich, Siegel, and Gottardo (1997) that surface dyslexia is the result of a milder phonological deficit than that seen in phonological dyslexia, coupled with reduced reading experience. We found that a group of adults with surface dyslexia showed a phonological deficit that was commensurate with that shown by a group of adults with phonological dyslexia (matched for chronological age and verbal and non-verbal IQ) and normal reading experience. We also showed that surface dyslexia cannot be accounted for by a semantic impairment or a deficit in the verbal learning and recall of lexical-semantic information (such as meaningful words), as both dyslexic subgroups performed the same. This study replicated the results of our published study that surface dyslexia is not the consequence of mild retardation or reduced learning opportunities but a separate impairment linked to a deficit in written lexical learning, an ability needed to create novel lexical representations from a series of unrelated visual units, which is independent of the phonological deficit (Romani, Di Betta, Tsouknida & Olson, 2008). This thesis also provided evidence that a selective nonword reading deficit in developmental dyslexia persists beyond poor phonology. This was shown by finding a nonword reading deficit even in the presence of normal regularity effects in the dyslexics (when compared to both reading-age and spelling-age matched controls). A nonword reading deficit was also found in the surface dyslexics. Crucially, this deficit was as strong as in the phonological dyslexics despite better functioning of the sublexical route for the former. These results suggest that a nonword reading deficit cannot be explained solely by a phonological impairment. We thus suggested that nonword reading also involves another ability relating to the processing of novel visual orthographic strings, which we called 'orthographic coding'. We then investigated the ability to process series of independent units within multi-element visual arrays and its relationship with reading and spelling problems. We identified a deficit in encoding the order of visual sequences (involving both linguistic and nonlinguistic information) which was significantly associated with word and nonword processing. More importantly, we revealed significant contributions of order encoding to orthographic skills in both dyslexic and control individuals, even after age, performance IQ and phonological skills were controlled for. These results suggest that spelling and reading tap not only phonological skills but also order-encoding skills.
Abstract:
Rotation invariance is important for an iris recognition system, since changes in head orientation and binocular vergence may cause eye rotation. Conventional methods of iris recognition cannot achieve true rotation invariance; they achieve only approximate rotation invariance, by rotating the feature vector before matching or by unwrapping the iris ring at different initial angles. These methods increase the complexity of the system, and when the rotation is beyond a certain range, their error rates may increase substantially. To solve this problem, a new rotation-invariant approach for iris feature extraction based on the non-separable wavelet is proposed in this paper. Firstly, a bank of non-separable orthogonal wavelet filters is used to capture characteristics of the iris. Secondly, a Markov random field method is used to capture rotation-invariant iris features. Finally, two-class kernel Fisher classifiers are adopted for classification. Experimental results on public iris databases show that the proposed approach has a low error rate and achieves true rotation invariance. © 2010.
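For contrast, the conventional scheme the paper improves on can be sketched in a few lines: approximate rotation invariance via the minimum Hamming distance over circular shifts of a binary iris code, which fails once the rotation exceeds the shift range searched (illustrative code, not the paper's):

```python
import numpy as np

def match_score(code_a, code_b, max_shift=8):
    """Minimum normalised Hamming distance over circular shifts."""
    return min(np.mean(np.roll(code_a, s) != code_b)
               for s in range(-max_shift, max_shift + 1))

rng = np.random.default_rng(3)
code = rng.integers(0, 2, size=2048)
print(match_score(code, np.roll(code, 5)))    # ~0: rotation within range
print(match_score(code, np.roll(code, 40)))   # ~0.5: rotation beyond range
```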
Abstract:
Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that approximates the true variance well.
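A minimal sketch of the model structure only (with a fixed stand-in for the noise-variance process; the paper instead places a second GP on the noise variance and samples it with MCMC): the input-dependent noise variance enters the GP as a non-constant diagonal term:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between 1-D input vectors."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(4)
x = np.linspace(0, 5, 40)
r = 0.01 + 0.2 * x / 5.0            # stand-in for the second (noise) process
y = np.sin(x) + rng.standard_normal(x.size) * np.sqrt(r)

K = rbf(x, x) + np.diag(r)          # heteroscedastic Gram matrix
xs = np.linspace(0, 5, 200)
mean = rbf(xs, x) @ np.linalg.solve(K, y)   # posterior mean, noise-free output
print(mean[:3])
```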
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time dependent variance for many financial time series. However, such models are essentially linear in form and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic under-estimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
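The maximum-likelihood bias mentioned above can be seen in its simplest form: the ML estimate of a Gaussian variance divides by N and so under-estimates the true variance by a factor of (N-1)/N. A quick numerical check (illustration only, not the MDN training code):

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials, true_var = 5, 200_000, 1.0
samples = rng.standard_normal((trials, N)) * np.sqrt(true_var)

ml_var = samples.var(axis=1, ddof=0).mean()      # ML: divide by N
unbiased = samples.var(axis=1, ddof=1).mean()    # divide by N - 1
print(round(ml_var, 3), round(unbiased, 3))      # ~0.8 vs ~1.0 for N = 5
```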
Abstract:
In this paper we introduce and illustrate non-trivial upper and lower bounds on the learning curves for one-dimensional Gaussian Processes. The analysis is carried out emphasising the effects induced on the bounds by the smoothness of the random process described by the Modified Bessel and the Squared Exponential covariance functions. We present an explanation of the early, linearly-decreasing behavior of the learning curves and the bounds as well as a study of the asymptotic behavior of the curves. The effects of the noise level and the lengthscale on the tightness of the bounds are also discussed.
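For reference, the two covariance families discussed are commonly written as follows (lengthscale l; the order-nu Modified Bessel, or Matérn, form is shown up to its normalising constant, with K_nu the modified Bessel function of the second kind):

```latex
k_{\mathrm{SE}}(x, x') = \exp\!\left(-\frac{(x - x')^{2}}{2\,l^{2}}\right),
\qquad
k_{\nu}(x, x') \propto
\left(\frac{|x - x'|}{l}\right)^{\!\nu}
K_{\nu}\!\left(\frac{|x - x'|}{l}\right).
```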