11 results for fractal sets
in Aston University Research Archive
Abstract:
The dynamics of supervised learning in layered neural networks are studied in the regime where the size of the training set is proportional to the number of inputs. The evolution of macroscopic observables, including the two relevant performance measures, can be predicted using dynamical replica theory. Three approximation schemes, aimed at eliminating the need to solve a functional saddle-point equation at each time step, are derived.
Abstract:
We study the dynamics of on-line learning in multilayer neural networks where training examples are sampled with repetition and where the number of examples scales with the number of network weights. The analysis is carried out using the dynamical replica method aimed at obtaining a closed set of coupled equations for a set of macroscopic variables from which both training and generalization errors can be calculated. We focus on scenarios whereby training examples are corrupted by additive Gaussian output noise and regularizers are introduced to improve the network performance. The dependence of the dynamics on the noise level, with and without regularizers, is examined, as well as that of the asymptotic values obtained for both training and generalization errors. We also demonstrate the ability of the method to approximate the learning dynamics in structurally unrealizable scenarios. The theoretical results show good agreement with those obtained by computer simulations.
Abstract:
On 20 October 1997 the London Stock Exchange introduced a new trading system called SETS. This system was to replace the dealer system SEAQ, which had been in operation since 1986. Using the iterated cumulative sums of squares (ICSS) test introduced by Inclan and Tiao (1994), we investigate whether there was a change in the unconditional variance of opening and closing returns at the time SETS was introduced. We show that for the FTSE-100 stocks traded on SETS, on the days following its introduction, there was a widespread increase in the volatility of both opening and closing returns. However, no synchronous volatility changes were found to be associated with the FTSE-100 index or FTSE-250 stocks. We therefore conclude that the introduction of the SETS trading mechanism caused an increase in noise at the time the system was introduced.
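The Inclan and Tiao test is built on the centred cumulative sum of squares D_k = C_k/C_T - k/T: a break in the unconditional variance shows up as a large value of sqrt(T/2) * max|D_k|, compared against an asymptotic 5% critical value of about 1.36. A minimal sketch of the single-break version (function names and the simulated series are illustrative, not taken from the paper):

```python
import numpy as np

def icss_dk(returns):
    """Centred cumulative sum of squares D_k = C_k / C_T - k / T of
    Inclan and Tiao (1994); C_k is the running sum of squared returns."""
    r = np.asarray(returns, dtype=float)
    T = r.size
    C = np.cumsum(r ** 2)
    return C / C[-1] - np.arange(1, T + 1) / T

def icss_test(returns):
    """Scaled statistic sqrt(T/2) * max|D_k| and the estimated break point;
    values above ~1.36 reject constant variance at the 5% level."""
    d = icss_dk(returns)
    stat = np.sqrt(d.size / 2.0) * np.max(np.abs(d))
    return stat, int(np.argmax(np.abs(d))) + 1

# Simulated returns whose volatility doubles halfway through the sample
rng = np.random.default_rng(0)
r = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(0.0, 2.0, 500)])
stat, k = icss_test(r)  # large statistic, break located near observation 500
```

The iterative version of the procedure reapplies this statistic to sub-samples to locate multiple break points.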
Abstract:
This paper introduces a new technique for the investigation of limited-dependent variable models. It illustrates that variable precision rough set theory (VPRS), allied with a modern method of classification, or discretisation, of data, can outperform the more standard approaches employed in economics, such as the probit model. These approaches, along with certain inductive decision tree methods, are compared (through a Monte Carlo simulation approach) in the analysis of the decisions reached by the UK Monopolies and Mergers Commission. We show that, particularly in small samples, the VPRS model can improve on more traditional models, both in-sample and, especially, in out-of-sample prediction. A similar improvement in out-of-sample prediction over the decision tree methods is also shown.
Abstract:
We have measured the frequency dependence of the conductivity and the dielectric constant of various samples of porous Si in the regime 1 Hz-100 kHz at different temperatures. The conductivity data exhibit a strong frequency dependence. When normalized to the dc conductivity, our data obey a universal scaling law, with a well-defined crossover, in which the real part of the conductivity sigma' changes from a sqrt(omega) dependence to being proportional to omega. We explain this in terms of activated hopping in a fractal network. The low-frequency regime is governed by the fractal properties of porous Si, whereas the high-frequency dispersion comes from a broad distribution of activation energies. Calculations using the effective-medium approximation for activated hopping on a percolating lattice give fair agreement with the data.
Abstract:
This paper is a progress report on a research path I first outlined in my contribution to “Words in Context: A Tribute to John Sinclair on his Retirement” (Heffer and Sauntson, 2000). Therefore, I first summarize that paper here, in order to provide the relevant background. The second half of the current paper consists of some further manual analyses, exploring various parameters and procedures that might assist in the design of an automated computational process for the identification of lexical sets. The automation itself is beyond the scope of the current paper.
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
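The storage-and-inversion bottleneck described above is visible in the exact Gaussian process (kriging) predictive equations, where the full n x n covariance matrix must be factorised. A minimal sketch under illustrative assumptions (squared-exponential covariance, hypothetical function names; this is the standard exact predictor whose cost motivates the sparse method, not the authors' algorithm):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance, a common geostatistical choice."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_predict(X, y, Xstar, lengthscale=0.2, noise=1e-2):
    """Exact GP predictive mean and variance at locations Xstar.
    The Cholesky factorisation of the n x n matrix K is the O(n^3) step
    that sparse 'basis vector' methods are designed to avoid."""
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar, lengthscale)
    L = np.linalg.cholesky(K)                         # O(n^3) bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = (rbf_kernel(Xstar, Xstar, lengthscale).diagonal()
           + noise - np.sum(v ** 2, axis=0))
    return mean, var

# Smooth field observed at 20 locations, predicted at the same points
X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2.0 * np.pi * X).ravel()
mean, var = gp_predict(X, y, X)
```

Retaining only a subset of basis vectors replaces the n x n factorisation with one over the (much smaller) active set, which is what makes very large data sets tractable.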
Abstract:
Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, where the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation which adds several novel features. In particular we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.
Abstract:
We report an empirical analysis of long-range dependence in the returns of eight stock market indices, using Rescaled Range Analysis (RRA) to estimate the Hurst exponent. Monte Carlo and bootstrap simulations are used to construct critical values for the null hypothesis of no long-range dependence. The issue of disentangling short-range and long-range dependence is examined. Pre-filtering by fitting a (short-range) autoregressive model eliminates part of the long-range dependence when the latter is present, while failure to pre-filter leaves open the possibility of conflating short-range and long-range dependence. There is strong evidence of long-range dependence for the small central European Czech stock market index PX-glob, and weaker evidence for two smaller western European stock market indices, MSE (Spain) and SWX (Switzerland). There is little or no evidence of long-range dependence for the other five indices, including those with the largest capitalizations among those considered, DJIA (US) and FTSE350 (UK). These results are generally consistent with prior expectations concerning the relative efficiency of the stock markets examined. © 2011 Elsevier Inc.
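Rescaled Range Analysis estimates the Hurst exponent H from the scaling E[R/S] ~ c * n^H: compute the rescaled range over sub-series of increasing length n and regress log(R/S) on log n. A rough sketch (dyadic block lengths and function names are assumptions; the paper's AR pre-filtering and bootstrap critical values are not reproduced):

```python
import numpy as np

def rescaled_range(x):
    """R/S of a series: range of the mean-adjusted cumulative sum,
    divided by the sample standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std(ddof=1)

def hurst_rs(x, min_block=8):
    """Hurst exponent via classical RRA: average R/S over non-overlapping
    blocks of dyadic lengths n, then regress log(R/S) on log n."""
    x = np.asarray(x, dtype=float)
    N = x.size
    lengths, rs_values = [], []
    n = min_block
    while n <= N // 2:
        blocks = [x[i:i + n] for i in range(0, N - n + 1, n)]
        rs_values.append(np.mean([rescaled_range(b) for b in blocks]))
        lengths.append(n)
        n *= 2
    slope, _ = np.polyfit(np.log(lengths), np.log(rs_values), 1)
    return slope

rng = np.random.default_rng(1)
h_white = hurst_rs(rng.normal(size=4096))
# i.i.d. noise has no long-range dependence, so the estimate should sit
# near H = 0.5 (allowing for the well-known small-sample upward bias)
```

The small-sample bias of the naive estimator is precisely why simulated critical values, rather than H > 0.5 alone, are needed to declare long-range dependence.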
Abstract:
Purpose: The human retinal vasculature has been demonstrated to exhibit fractal, or statistically self-similar, properties. Fractal analysis offers a simple quantitative method to characterise the complexity of the branching vessel network in the retina. Several methods have been proposed to quantify the fractal properties of the retina. Methods: Twenty-five healthy volunteers underwent retinal photography, retinal oximetry and ocular biometry. A robust method to evaluate the fractal properties of the retinal vessels is proposed; it consists of manual vessel segmentation and box counting of 50-degree retinal photographs centred on the fovea. Results: Data are presented on the associations between the fractal properties of the retinal vessels and various functional properties of the retina. Conclusion: Fractal properties of the retina could offer a promising tool to assess the risk and prognostic factors that define retinal disease. Work remains to adopt a standardised protocol for assessing the fractal properties of the retina, and to further demonstrate its association with disease processes.
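Box counting, as used in the Methods above, estimates a fractal dimension as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s that contain any vessel pixel. A minimal sketch on a binary mask (illustrative only; the manual segmentation and imaging protocol are not modelled):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting dimension of a binary image: count occupied s x s
    boxes N(s) for each box size s, then fit the slope of
    log N(s) versus log(1/s)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # Trim the image so it tiles exactly into s x s boxes
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(int(boxes.any(axis=(1, 3)).sum()))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is 2-dimensional, a straight line 1-dimensional
square = np.ones((256, 256), dtype=bool)
line = np.zeros((256, 256), dtype=bool)
line[128, :] = True
d_square = box_counting_dimension(square)  # ≈ 2.0
d_line = box_counting_dimension(line)      # ≈ 1.0
```

Retinal vessel networks typically yield dimensions between these two extremes, which is what makes the slope a useful summary of branching complexity.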