26 results for Asymptotic Variance of Estimate

in Aston University Research Archive


Relevance:

100.00%

Abstract:

A horizontal fluid layer heated from below in the presence of a vertical magnetic field is considered. A simple asymptotic analysis is presented which demonstrates that a convection mode attached to the side walls of the layer sets in at Rayleigh numbers much below those required for the onset of convection in the bulk of the layer. The analysis complements an earlier analysis by Houchens [J. Fluid Mech. 469, 189 (2002)] which derived expressions for the critical Rayleigh number for the onset of convection in a vertical cylinder with an axial magnetic field in the cases of two aspect ratios. © 2008 American Institute of Physics.
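As a point of reference for the scale of the effect (this figure is not stated in the abstract itself), the classical asymptotic result for the onset of convection in the bulk of such a layer, in terms of the Chandrasekhar number Q measuring the strength of the vertical field, is:

```latex
% Bulk onset of Rayleigh-Benard convection in a vertical magnetic field
% (Chandrasekhar, 1961): the critical Rayleigh number grows linearly in Q.
Ra_c^{\mathrm{bulk}} \sim \pi^2 Q, \qquad Q \to \infty .
```

The sidewall-attached mode analysed in the paper sets in at Rayleigh numbers far below this bulk threshold.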

Relevance:

100.00%

Abstract:

There is an alternative model of the one-way ANOVA, called the 'random effects' model or 'nested' design, in which the objective is not to test specific effects but to estimate the degree of variation of a particular measurement and to compare different sources of variation that influence the measurement in space and/or time. The most important statistics from a random-effects model are the components of variance, which estimate the variance associated with each of the sources of variation influencing a measurement. The nested design is particularly useful in preliminary experiments designed to estimate different sources of variation and in the planning of appropriate sampling strategies.
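The variance-components calculation described above can be sketched for a balanced design using the expected mean squares of the one-way random-effects model; the simulated data, group counts, and function name below are illustrative, a minimal sketch rather than any particular study's analysis.

```python
import numpy as np

def variance_components(data):
    """Balanced one-way random-effects ANOVA.

    data: 2-D array, rows = groups (levels of the random factor),
          columns = replicate measurements within each group.
    Returns (sigma2_within, sigma2_between) variance-component estimates.
    """
    a, n = data.shape                      # groups, replicates per group
    group_means = data.mean(axis=1)
    grand_mean = data.mean()
    ms_within = ((data - group_means[:, None]) ** 2).sum() / (a * (n - 1))
    ms_between = n * ((group_means - grand_mean) ** 2).sum() / (a - 1)
    # Expected mean squares: E[MS_between] = sigma2_w + n * sigma2_b
    sigma2_b = max((ms_between - ms_within) / n, 0.0)
    return ms_within, sigma2_b

rng = np.random.default_rng(0)
# Simulate 8 groups x 10 replicates: between-group variance 4, within 1
data = rng.normal(0.0, 2.0, (8, 1)) + rng.normal(0.0, 1.0, (8, 10))
s2w, s2b = variance_components(data)
```

Because E[MS_between] = sigma2_within + n * sigma2_between, the between-group component is recovered as (MS_between - MS_within)/n, truncated at zero.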

Relevance:

100.00%

Abstract:

During 1999 and 2000 a large number of articles appeared in the financial press arguing that the concentration of the FTSE 100 had increased. Many of these reports suggested that stock market volatility in the UK had risen because the concentration of its stock markets had increased. This study undertakes a comprehensive measurement of stock market concentration using the FTSE 100 index. We find that during 1999, 2000 and 2001 stock market concentration was noticeably higher than at any other time since the index was introduced. When we measure the volatility of the FTSE 100 index, we do not find an association between concentration and volatility. When we examine the variances and covariances of the FTSE 100 constituents, we find that security volatility appears to be positively related to concentration changes, but concentration and the size of security covariances appear to be negatively related. We simulate the variance of four versions of the FTSE 100 index; in each version the weighting structure reflects either an equally weighted index or one with a low, intermediate or high level of concentration. We find that moving from low to high concentration has very little impact on the volatility of the index. To complete the study we estimate the minimum variance portfolio for the FTSE 100 and then compare the concentration levels of this index to those formed on the basis of market weighting. We find that realised FTSE index weightings are more concentrated than those of the minimum variance index.
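The simulation idea, index variance computed as w'Σw under different weighting structures, can be sketched as follows; the covariance matrix and weight vectors are toy assumptions, not the FTSE 100 data used in the study.

```python
import numpy as np

n = 100                                    # a FTSE 100-sized universe

# Toy covariance: common volatility and a uniform pairwise correlation
vol, rho = 0.25, 0.3
cov = np.full((n, n), rho * vol ** 2)
np.fill_diagonal(cov, vol ** 2)

def index_vol(w):
    """Volatility of an index with weight vector w: sqrt(w' cov w)."""
    return float(np.sqrt(w @ cov @ w))

w_equal = np.full(n, 1.0 / n)              # equally weighted index
w_conc = np.full(n, 0.4 / 90)              # 'high concentration' index:
w_conc[:10] = 0.6 / 10                     # top 10 names carry 60% weight

v_eq, v_conc = index_vol(w_equal), index_vol(w_conc)
```

With a uniform correlation structure the concentrated index comes out only a few percent more volatile than the equally weighted one, echoing the finding that concentration has very little impact on index volatility.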

Relevance:

100.00%

Abstract:

The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. In recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix over the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
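A minimal sketch of the 'general distance matrix' idea: an RBF covariance k(x, x') = exp(-0.5 (x-x')' M (x-x')) in which a full positive semi-definite M, rather than a diagonal matrix, measures distance between inputs; the rank of M exposes the effective dimensionality. The linear map A below is a made-up example, not from the thesis.

```python
import numpy as np

def rbf_general(x1, x2, M):
    """RBF covariance k(x, x') = exp(-0.5 (x - x')^T M (x - x')) with a
    general positive semi-definite distance matrix M (not just diagonal)."""
    d = x1 - x2
    return float(np.exp(-0.5 * d @ M @ d))

# Hypothetical hidden structure: the output depends on one linear feature
# of a 3-D input, so the effective dimensionality of the regression is 1.
A = np.array([[2.0, -1.0, 0.5]])           # linear map to the feature space
M = A.T @ A                                # rank-1 general distance matrix

# The rank of M recovers the effective input dimension
eigvals = np.linalg.eigvalsh(M)
effective_dim = int(np.sum(eigvals > 1e-10 * eigvals.max()))
```

A diagonal M could only rescale the original axes; the off-diagonal entries are what let the kernel discover the rotated one-dimensional feature space.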

Relevance:

100.00%

Abstract:

The entorhinal cortex (EC) controls hippocampal input and output, playing major roles in memory and spatial navigation. Different layers of the EC subserve different functions, and a number of studies have compared properties of neurones across layers. We have studied synaptic inhibition and excitation in EC neurones, and we have previously compared spontaneous synaptic release of glutamate and GABA using patch clamp recordings of synaptic currents in principal neurones of layers II (L2) and V (L5). Here, we add comparative studies in layer III (L3). Such studies essentially look at neuronal activity from a presynaptic viewpoint. To correlate this with the postsynaptic consequences of spontaneous transmitter release, we have determined global postsynaptic conductances mediated by the two transmitters, using a method to estimate conductances from membrane potential fluctuations. We have previously presented some of these data for L3 and now extend the analysis to L2 and L5. Inhibition dominates excitation in all layers, but the ratio follows a clear rank order (highest to lowest) of L2>L3>L5. The variance of the background conductances was markedly higher for excitation and inhibition in L2 compared to L3 or L5. We also show that induction of synchronized network epileptiform activity by blockade of GABA inhibition reveals a relative reluctance of L2 to participate in such activity. This was associated with maintenance of a dominant background inhibition in L2, whereas in L3 and L5 the absolute level of inhibition fell below that of excitation, coincident with the appearance of synchronized discharges. Further experiments identified potential roles for competition for bicuculline by ambient GABA at the GABAA receptor, and for strychnine-sensitive glycine receptors, in residual inhibition in L2. We discuss our results in terms of control of excitability in neuronal subpopulations of EC neurones and what these may suggest for their functional roles. © 2014 Greenhill et al.

Relevance:

100.00%

Abstract:

We analyse the dynamics of a number of second order on-line learning algorithms training multi-layer neural networks, using the methods of statistical mechanics. We first consider on-line Newton's method, which is known to provide optimal asymptotic performance. We determine the asymptotic generalization error decay for a soft committee machine, which is shown to compare favourably with the result for standard gradient descent. Matrix momentum provides a practical approximation to this method by allowing an efficient inversion of the Hessian. We consider an idealized matrix momentum algorithm which requires access to the Hessian and find close correspondence with the dynamics of on-line Newton's method. In practice, the Hessian will not be known on-line and we therefore consider matrix momentum using a single example approximation to the Hessian. In this case good asymptotic performance may still be achieved, but the algorithm is now sensitive to parameter choice because of noise in the Hessian estimate. On-line Newton's method is not appropriate during the transient learning phase, since a suboptimal unstable fixed point of the gradient descent dynamics becomes stable for this algorithm. A principled alternative is to use Amari's natural gradient learning algorithm and we show how this method provides a significant reduction in learning time when compared to gradient descent, while retaining the asymptotic performance of on-line Newton's method.
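The natural-gradient update w ← w − η F⁻¹∇E can be sketched on a toy quadratic problem, where the Fisher matrix of a Gaussian regression model is the input covariance; for this linear model the step coincides with on-line Newton's method, which is the correspondence the abstract discusses. All data below are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 200, 5
# Ill-conditioned inputs make plain gradient descent slow
X = rng.normal(size=(N, d)) * np.array([1.0, 2.0, 4.0, 8.0, 16.0])
w_true = rng.normal(size=d)
y = X @ w_true                             # noise-free targets for clarity

def grad(w):
    """Gradient of the mean squared error 0.5 * mean((x.w - y)^2)."""
    return X.T @ (X @ w - y) / N

F = X.T @ X / N                            # Fisher matrix of the Gaussian model

w = np.zeros(d)
for _ in range(50):
    w = w - np.linalg.solve(F, grad(w))    # natural gradient step, eta = 1
err = float(np.linalg.norm(w - w_true))
```

With noise-free targets and unit learning rate the very first step lands on w_true (the step equals the Newton step for this quadratic error), whereas plain gradient descent would crawl along the ill-conditioned directions.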

Relevance:

100.00%

Abstract:

We analyse natural gradient learning in a two-layer feed-forward neural network using a statistical mechanics framework which is appropriate for large input dimension. We find significant improvement over standard gradient descent in both the transient and asymptotic phases of learning.

Relevance:

100.00%

Abstract:

Mixture Density Networks are a principled method to model conditional probability density functions which are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian mixture model whose parameters are generated by a neural network. This thesis presents a novel method to introduce regularisation in this context for the special case where the mean and variance of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the 'early stopping' methods that have previously been used. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are presented.
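The constrained model can be sketched directly: only the mixing coefficients are produced by the network (stood in for here by a vector of hypothetical linear outputs z), while the means and common variance of the spherical Gaussian kernels are fixed in advance. Function and variable names are illustrative, not from the thesis.

```python
import numpy as np

def mdn_fixed_kernels(z, centres, sigma, t):
    """Conditional density p(t | x) from a mixture whose spherical Gaussian
    kernels have fixed, predetermined means and variance; the network only
    supplies the linear outputs z that become mixing coefficients."""
    alpha = np.exp(z - z.max())
    alpha = alpha / alpha.sum()            # softmax: mixing coefficients
    phi = np.exp(-0.5 * ((t - centres) / sigma) ** 2) / (
        np.sqrt(2.0 * np.pi) * sigma)      # fixed spherical kernels
    return float(alpha @ phi)

centres = np.linspace(-2.0, 2.0, 5)        # predetermined kernel means
sigma = 0.5                                # predetermined kernel width
z = np.array([0.1, 1.5, 0.3, -0.2, 0.0])   # hypothetical network outputs
p = mdn_fixed_kernels(z, centres, sigma, t=-1.0)
```

Because the softmax weights sum to one and each kernel is a normalised density, the mixture integrates to one for any network output, whatever z the network produces.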

Relevance:

100.00%

Abstract:

On 20 October 1997 the London Stock Exchange introduced a new trading system called SETS. This system was to replace the dealer system SEAQ, which had been in operation since 1986. Using the iterated cumulative sums of squares (ICSS) test introduced by Inclan and Tiao (1994), we investigate whether there was a change in the unconditional variance of opening and closing returns at the time SETS was introduced. We show that for the FTSE-100 stocks traded on SETS there was, on the days following its introduction, a widespread increase in the volatility of both opening and closing returns. However, no synchronous volatility changes were found to be associated with the FTSE-100 index or FTSE-250 stocks. We conclude therefore that the introduction of the SETS trading mechanism caused an increase in noise at the time the system was introduced.
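The Inclan-Tiao statistic is the centred cumulative sum of squares D_k = C_k/C_T − k/T, with a variance change flagged when sqrt(T/2)·max|D_k| exceeds its asymptotic critical value (about 1.358 at the 5% level). A sketch on synthetic returns whose variance changes mid-sample; the break location and sample are invented, not from the study.

```python
import numpy as np

def icss_statistic(returns):
    """Inclan-Tiao centred cumulative sum of squares.

    D_k = C_k / C_T - k / T, where C_k is the cumulative sum of squared
    returns. Returns the index of max|D_k| (candidate break point) and
    the scaled statistic sqrt(T/2) * max|D_k|."""
    a2 = np.asarray(returns) ** 2
    T = len(a2)
    C = np.cumsum(a2)
    k = np.arange(1, T + 1)
    D = C / C[-1] - k / T
    k_star = int(np.argmax(np.abs(D)))
    stat = float(np.sqrt(T / 2.0) * np.abs(D[k_star]))
    return k_star, stat

rng = np.random.default_rng(3)
# Standard deviation doubles halfway through the sample
r = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 2, 500)])
k_star, stat = icss_statistic(r)
```

On this series the statistic is far above 1.358 and the candidate break point lands near the true change at observation 500.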

Relevance:

100.00%

Abstract:

The growth curves of four common species of crustose lichens, viz. Buellia aethalea (Ach.) Th. Fr., Lecidea tumida Massal., Rhizocarpon geographicum (L.) DC., and Rhizocarpon reductum Th. Fr., were studied at a site in south Gwynedd, north Wales, UK. Radial growth rates (RGR, mm 1.5 yr-1) were greatest in thalli of R. reductum and least in R. geographicum. Variation in RGR between thalli was greater in B. aethalea and L. tumida than in the species of Rhizocarpon. The relationship between growth rate and thallus diameter was not asymptotic: RGR increased in smaller thalli to a maximum and then declined in larger-diameter thalli. A polynomial curve was fitted to the data, the growth curves being fitted best by a second-order (quadratic) curve, with the best fit to this model shown by B. aethalea. A significant linear regression with a negative slope was also fitted to the growth of the larger thalli of each species. The data suggest that the growth curves of the four crustose lichens differ significantly from the asymptotic curves of foliose lichen species. A phase of declining RGR in larger thalli appears to be characteristic of crustose lichens and is consistent with data from lichenometric studies.
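The quadratic growth-curve fit and the negative linear slope over the larger thalli can be reproduced on hypothetical data with the rise-then-decline shape reported above; the numbers are invented for illustration, not the measured lichen data.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical (diameter, RGR) data peaking at a diameter of 30 mm
diam = np.linspace(1.0, 60.0, 40)          # thallus diameter, mm
rgr = -0.0004 * (diam - 30.0) ** 2 + 0.5 + rng.normal(0, 0.02, diam.size)

# Second-order (quadratic) fit, as used for the growth curves
c2, c1, c0 = np.polyfit(diam, rgr, 2)
diam_at_max = -c1 / (2.0 * c2)             # diameter of maximum RGR

# Linear regression with negative slope over the larger thalli only
large = diam > diam_at_max
slope, intercept = np.polyfit(diam[large], rgr[large], 1)
```

The fitted curvature is negative (rise then decline, rather than an asymptotic plateau), and the vertex of the parabola estimates the diameter at which RGR peaks.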

Relevance:

100.00%

Abstract:

The techniques and insights from two distinct areas of financial economic modelling are combined to provide evidence of the influence of firm size on the volatility of stock portfolio returns. Portfolio returns are characterized by positive serial correlation induced by the varying levels of non-synchronous trading among the component stocks. This serial correlation is greatest for portfolios of small firms. The conditional volatility of stock returns has been shown to be well represented by the GARCH family of statistical processes. Using a GARCH model of the variance of capitalization-based portfolio returns, conditioned on the autocorrelation structure in the conditional mean, striking differences related to firm size are uncovered.
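The model class, an autoregressive conditional mean (capturing the serial correlation induced by non-synchronous trading) with GARCH(1,1) conditional variance, can be sketched by simulation; the parameter values are illustrative, with a larger AR coefficient standing in for a small-firm portfolio.

```python
import numpy as np

def simulate_ar_garch(T, phi, omega, alpha, beta, seed=5):
    """AR(1) conditional mean with GARCH(1,1) conditional variance:
        r_t = phi * r_{t-1} + e_t,  e_t = sqrt(h_t) * z_t,
        h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=T)
    r = np.zeros(T)
    e = np.zeros(T)
    h = np.full(T, omega / (1.0 - alpha - beta))  # unconditional variance
    for t in range(1, T):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * z[t]
        r[t] = phi * r[t - 1] + e[t]
    return r

# Higher phi mimics the stronger serial correlation of small-firm portfolios
r_small = simulate_ar_garch(5000, phi=0.30, omega=0.05, alpha=0.1, beta=0.85)
r_large = simulate_ar_garch(5000, phi=0.05, omega=0.05, alpha=0.1, beta=0.85)

def lag1_autocorr(x):
    x = x - x.mean()
    return float((x[:-1] @ x[1:]) / (x @ x))

ac_small = lag1_autocorr(r_small)
ac_large = lag1_autocorr(r_large)
```

Conditioning the GARCH variance on this autocorrelation structure in the mean is what allows the size-related volatility differences described above to be isolated.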

Relevance:

100.00%

Abstract:

Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data-logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass-transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
A very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (single-input single-output) system as well as a centralised MIMO (multi-input multi-output) system, using both conventional and model-based control techniques such as IMC (internal model control) and MPC (model predictive control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint-tracking capability and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loop pairings rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction.
Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and input-output variable constraints.
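The stagewise backflow idea can be sketched minimally for one phase: each stage exchanges material with its neighbours through a backmixing coefficient b, with a first-order sink standing in for interphase mass transfer, integrated to steady state by forward Euler. All parameter names and values are illustrative assumptions, not the thesis's fitted model.

```python
import numpy as np

def backflow_column(N=10, L=1.0, b=0.3, V=1.0, k=0.5, x_in=1.0,
                    dt=0.01, steps=20000):
    """Stagewise backflow model of one phase of an extraction column.

    Illustrative stage mass balance:
      V dx_i/dt = L(1+b)(x_{i-1} - x_i) + L*b*(x_{i+1} - x_i) - V*k*x_i
    with forward flow L, backmixing coefficient b, stage holdup V and a
    first-order mass-transfer sink k. Integrated to steady state."""
    x = np.zeros(N)
    for _ in range(steps):
        xp = np.concatenate(([x_in], x))       # upstream neighbours
        xn = np.concatenate((x[1:], [x[-1]]))  # downstream (no backflow past exit)
        dxdt = (L * (1 + b) * (xp[:-1] - x) + L * b * (xn - x)) / V - k * x
        x = x + dt * dxdt
    return x

profile = backflow_column()
```

Raising b flattens the steady-state concentration profile along the column, which is exactly the axial-dispersion effect the backmixing coefficients were introduced to capture.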

Relevance:

100.00%

Abstract:

Crustose species are the slowest growing of all lichens. Their slow growth and longevity, especially of the yellow-green Rhizocarpon group, has made them important for surface-exposure dating (‘lichenometry’). This review considers various aspects of the growth of crustose lichens revealed by direct measurement including: 1) early growth and development, 2) radial growth rates (RGR, mm yr-1), 3) the growth rate-size curve, and 4) the influence of environmental factors. Many crustose species comprise discrete areolae that contain the algal partner growing on the surface of a non-lichenised fungal hypothallus. Recent data suggest that ‘primary’ areolae may develop from free-living algal cells on the substratum while ‘secondary’ areolae develop from zoospores produced within the thallus. In more extreme environments, the RGR of crustose species may be exceptionally slow but considerably faster rates of growth have been recorded under more favourable conditions. The growth curves of crustose lichens with a marginal hypothallus may differ from the ‘asymptotic’ type of curve recorded in foliose and placodioid species and the latter are characterized by a phase of increasing RGR to a maximum and may be followed by a phase of decreasing growth. The decline in RGR in larger thalli may be attributable to a reduction in the efficiency of translocation of carbohydrate to the thallus margin or to an increased allocation of carbon to support mature ‘reproductive’ areolae. Crustose species have a low RGR accompanied by a low demand for nutrients and an increased allocation of carbon for stress resistance; therefore enabling colonization of more extreme environments.

Relevance:

100.00%

Abstract:

Regression problems are concerned with predicting the values of one or more continuous quantities, given the values of a number of input variables. For virtually every application of regression, however, it is also important to have an indication of the uncertainty in the predictions. Such uncertainties are expressed in terms of error bars, which specify the standard deviation of the distribution of predictions about the mean. Accurate estimation of error bars is of practical importance, especially when safety and reliability are at issue. The Bayesian view of regression leads naturally to two contributions to the error bars. The first arises from the intrinsic noise on the target data, while the second comes from the uncertainty in the values of the model parameters, which manifests itself in the finite width of the posterior distribution over the space of these parameters. The Hessian matrix, which involves the second derivatives of the error function with respect to the weights, is needed for implementing the Bayesian formalism in general and for estimating the error bars in particular. A study of different methods for evaluating this matrix is given, with special emphasis on the outer-product approximation method. The contribution of the uncertainty in model parameters to the error bars is a finite-data-size effect, which becomes negligible as the number of data points in the training set increases. A study of this contribution is given in relation to the distribution of data in input space. It is shown that the addition of data points to the training set can only reduce the local magnitude of the error bars or leave it unchanged. Using the asymptotic limit of an infinite data set, it is shown that the error bars have an approximate relation to the density of data in input space.
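The two contributions to the error bars can be sketched for a model linear in its weights, for which the outer-product approximation to the Hessian is exact (the second-derivative term of the sum-of-squares error vanishes); the data and hyperparameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
N, d = 50, 3
X = rng.normal(size=(N, d))                # basis activations (design matrix)
w_gen = rng.normal(size=d)
noise_var = 0.1                            # intrinsic noise on the targets
t = X @ w_gen + rng.normal(0.0, np.sqrt(noise_var), N)

# Outer-product approximation to the Hessian of the sum-of-squares error:
# H ~= sum_n g_n g_n^T with g_n = dy/dw = x_n; exact here because the
# model is linear in the weights.
H_outer = X.T @ X

alpha = 1.0                                # weight-decay (prior) coefficient
A = H_outer / noise_var + alpha * np.eye(d)   # posterior Hessian

def error_bar_var(x_new):
    """Predictive variance: intrinsic noise plus the parameter-uncertainty
    term g^T A^{-1} g from the finite-width posterior."""
    g = x_new                              # gradient of the output wrt weights
    return noise_var + float(g @ np.linalg.solve(A, g))

x_new = np.array([1.0, -0.5, 2.0])
var_total = error_bar_var(x_new)
```

Adding a data point augments A by a positive semi-definite rank-one term, which can only shrink (or leave unchanged) the parameter-uncertainty contribution g'A⁻¹g, consistent with the final result quoted above.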