19 results for Hyperbolic smoothing
in Aston University Research Archive
Abstract:
The performance of feed-forward neural networks in real applications can often be improved significantly if use is made of a priori information. For interpolation problems this prior knowledge frequently includes smoothness requirements on the network mapping, and can be imposed by the addition to the error function of suitable regularization terms. The new error function, however, now depends on the derivatives of the network mapping, and so the standard back-propagation algorithm cannot be applied. In this paper, we derive a computationally efficient learning algorithm, for a feed-forward network of arbitrary topology, which can be used to minimize the new error function. Networks having a single hidden layer, for which the learning algorithm simplifies, are treated as a special case.
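As an illustration of the kind of error function involved (a minimal sketch only, not the extended back-propagation algorithm derived in the paper), the following Python/numpy fragment evaluates a sum-of-squares error plus a curvature penalty for a hypothetical single-hidden-layer tanh network, approximating the second derivative of the mapping by finite differences; the parameter names W1, b1, w2, b2 and the regularization weight lam are assumptions made for the example:

    import numpy as np

    def network(x, W1, b1, w2, b2):
        # Single-hidden-layer tanh network: y(x) = w2 . tanh(W1 x + b1) + b2
        return w2 @ np.tanh(W1 @ x + b1) + b2

    def regularized_error(X, t, params, lam=1e-2, h=1e-3):
        """Sum-of-squares error plus a curvature (second-derivative) penalty,
        approximated here by central finite differences over scalar inputs."""
        W1, b1, w2, b2 = params
        err, penalty = 0.0, 0.0
        for x, target in zip(X, t):
            y = network(np.array([x]), W1, b1, w2, b2)
            err += 0.5 * (y - target) ** 2
            # crude second-derivative estimate d2y/dx2
            y_plus = network(np.array([x + h]), W1, b1, w2, b2)
            y_minus = network(np.array([x - h]), W1, b1, w2, b2)
            penalty += 0.5 * ((y_plus - 2 * y + y_minus) / h ** 2) ** 2
        return err + lam * penalty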
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve the mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
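A minimal sketch of the proposal mixture described above, in Python/numpy and written for a generic target density rather than a distribution over diffusion paths: with probability p_indep a state is proposed from an approximating Gaussian N(m, S) (independence move), otherwise a Gaussian random walk step is taken. The names log_target, m, S and the tuning constants are assumptions for the example:

    import numpy as np

    def mh_mixture_sampler(log_target, m, S, n_iter=5000, p_indep=0.5,
                           rw_scale=0.1, seed=0):
        """Metropolis-Hastings with a mixture proposal: an independence proposal
        drawn from an approximating Gaussian N(m, S), or a Gaussian random walk.
        Purely illustrative; the paper applies this idea to whole diffusion paths."""
        rng = np.random.default_rng(seed)
        L = np.linalg.cholesky(S)
        Sinv = np.linalg.inv(S)

        def log_q(x):
            # log density of the Gaussian approximation (up to a constant)
            d = x - m
            return -0.5 * d @ Sinv @ d

        x = m.copy()
        samples = []
        for _ in range(n_iter):
            if rng.random() < p_indep:      # independence move
                x_new = m + L @ rng.standard_normal(len(m))
                log_alpha = (log_target(x_new) - log_target(x)) + (log_q(x) - log_q(x_new))
            else:                           # random-walk move (symmetric proposal)
                x_new = x + rw_scale * rng.standard_normal(len(m))
                log_alpha = log_target(x_new) - log_target(x)
            if np.log(rng.random()) < log_alpha:
                x = x_new
            samples.append(x.copy())
        return np.array(samples)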
Abstract:
We show that net equity payouts from the corporate sector play a crucial role in helping individuals manage their consumption path across the business cycle. In particular, we show that, as investors' desire to smooth consumption increases, optimal aggregate dividends become both more volatile and more counter-cyclical to help counterbalance pro-cyclical labor income. These findings are robust to whether or not agency conflicts exist in the economy.
Abstract:
In this paper we develop a set of novel Markov Chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves the mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
Abstract:
The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
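For concreteness (a sketch under assumed choices, here a squared-exponential covariance and one-dimensional inputs, not tied to this paper's general treatment), the posterior-mean computation and the finite-sample analogue of the leading-eigenfunction basis look like this in Python/numpy:

    import numpy as np

    def rbf_kernel(X, Z, lengthscale=1.0):
        d2 = (X[:, None] - Z[None, :]) ** 2
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    def posterior_mean(X, y, Xstar, noise=0.1):
        """Exact posterior mean of Gaussian-process regression; the linear solve
        against the n x n covariance matrix is the O(n^3) step mentioned above."""
        K = rbf_kernel(X, X)
        Ks = rbf_kernel(Xstar, X)
        alpha = np.linalg.solve(K + noise ** 2 * np.eye(len(X)), y)
        return Ks @ alpha

    def top_m_eigenbasis(X, m):
        """Leading m eigenvectors of the covariance (kernel) matrix: a finite-sample
        analogue of the eigenfunction expansion / principal-component picture."""
        K = rbf_kernel(X, X)
        evals, evecs = np.linalg.eigh(K)          # eigenvalues in ascending order
        return evals[::-1][:m], evecs[:, ::-1][:, :m]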
Abstract:
The thrust of this report concerns spline theory and some of the background to spline theory and follows the development in (Wahba, 1991). We also review methods for determining hyper-parameters, such as the smoothing parameter, by Generalised Cross Validation. Splines have an advantage over Gaussian Process based procedures in that we can readily impose atmospherically sensible smoothness constraints and maintain computational efficiency. Vector splines enable us to penalise gradients of vorticity and divergence in wind fields. Two similar techniques are summarised, and improvements based on robust error functions and restricted numbers of basis functions are given. A final, brief discussion of the application of vector splines to the problem of scatterometer data assimilation highlights the problems of ambiguous solutions.
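As a rough illustration of choosing the smoothing parameter by Generalised Cross Validation (a sketch for a generic penalised least-squares smoother, not the vector-spline construction of the report; the basis matrix B and roughness-penalty matrix Omega are assumed inputs):

    import numpy as np

    def gcv_score(y, B, Omega, lam):
        """GCV score for a penalised least-squares smoother y_hat = S(lam) y,
        with S(lam) = B (B'B + lam*Omega)^{-1} B'."""
        n = len(y)
        S = B @ np.linalg.solve(B.T @ B + lam * Omega, B.T)
        resid = y - S @ y
        return n * (resid @ resid) / (n - np.trace(S)) ** 2

    def choose_lambda(y, B, Omega, grid):
        """Pick the smoothing parameter minimising the GCV score over a grid."""
        scores = [gcv_score(y, B, Omega, lam) for lam in grid]
        return grid[int(np.argmin(scores))]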
Abstract:
Spectral and coherence methodologies are ubiquitous for the analysis of multiple time series. Partial coherence analysis may be used to try to determine graphical models for brain functional connectivity. The outcome of such an analysis may be considerably influenced by factors such as the degree of spectral smoothing, line and interference removal, matrix inversion stabilization and the suppression of effects caused by side-lobe leakage, the combination of results from different epochs and people, and multiple hypothesis testing. This paper examines each of these steps in turn and provides a possible path which produces relatively ‘clean’ connectivity plots. In particular we show how spectral matrix diagonal up-weighting can simultaneously stabilize spectral matrix inversion and reduce effects caused by side-lobe leakage, and use the stepdown multiple hypothesis test procedure to help formulate an interaction strength.
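A minimal sketch of the diagonal up-weighting idea, in Python/numpy, for a single-frequency spectral matrix; the weighting constant and the normalisation are illustrative assumptions, not the paper's exact procedure:

    import numpy as np

    def partial_coherence(S, diag_weight=0.01):
        """Partial coherence estimated from a (channels x channels) spectral matrix S
        after up-weighting its diagonal, which both stabilises the inversion and
        damps effects caused by side-lobe leakage."""
        S_reg = S + diag_weight * np.diag(np.diag(S).real)   # diagonal up-weighting
        G = np.linalg.inv(S_reg)
        d = np.diag(G).real
        pcoh = np.abs(G) ** 2 / np.outer(d, d)               # |G_ij|^2 / (G_ii G_jj)
        np.fill_diagonal(pcoh, 1.0)
        return pcoh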
Abstract:
Purpose - To provide a framework of accounting policy choice associated with the timing of adoption of the UK Statement of Standard Accounting Practice (SSAP) No. 20, "Foreign Currency Translation". The conceptual framework describes the accounting policy choices that firms face in a setting that is influenced by: their financial characteristics; the flexible foreign exchange rates; and the stock market response to accounting decisions. Design/methodology/approach - Following the positive accounting theory context, this paper puts into a framework the motives and choices of UK firms with regard to the adoption or deferment of the adoption of SSAP 20. The paper utilises the theoretical and empirical findings of previous studies to form and substantiate the conceptual framework. Given the UK foreign exchange setting, the framework identifies the initial stage: lack of regulation and flexibility in financial reporting; the intermediate stage: accounting policy choice; and the final stage: accounting choice and policy review. Findings - There are situations where accounting regulation contrasts with the needs and business objectives of firms and vice-versa. Thus, firms may delay the adoption up to the point where the increase in political costs can just be tolerated. Overall, the study infers that firms might have chosen to defer the adoption of SSAP 20 until they reach a certain corporate goal, or the adverse impact (if any) of the accounting change on firms' financial numbers is minimal. Thus, the determination of the timing of the adoption is a matter which is subject to the objectives of the managers in association with the market and economic conditions. The paper suggests that the flexibility in financial reporting, which may enhance the scope for income-smoothing, can be mitigated by the appropriate standardisation of accounting practice. Research limitations/implications - First, the study encompassed a period when firms and investors were less sophisticated users of financial information. Second, it is difficult to ascertain the decisions that firms would have taken, had the pound appreciated over the period of adoption and had the firms incurred translation losses rather than translation gains. Originality/value - This paper is useful to accounting standards setters, professional accountants, academics and investors. The study can give the accounting standard-setting bodies useful information when they prepare a change in the accounting regulation or set an appropriate date for the implementation of an accounting standard. The paper provides significant insight about the behaviour of firms and the associated impacts of financial markets and regulation on the decision-making process of firms. The framework aims to assist the market and other authorities to reduce information asymmetry and to reinforce the efficiency of the market. © Emerald Group Publishing Limited.
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40 dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well known methods, such as the ensemble Kalman filter / smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. Empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is provided.
Abstract:
In this paper, we present a framework for Bayesian inference in continuous-time diffusion processes. The new method is directly related to the recently proposed variational Gaussian Process approximation (VGPA) approach to Bayesian smoothing of partially observed diffusions. By adopting a basis function expansion (BF-VGPA), both the time-dependent control parameters of the approximate GP process and its moment equations are projected onto a lower-dimensional subspace. This allows us both to reduce the computational complexity and to eliminate the time discretisation used in the previous algorithm. The new algorithm is tested on an Ornstein-Uhlenbeck process. Our preliminary results show that the BF-VGPA algorithm provides reasonably accurate state estimation using a small number of basis functions.
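As a purely illustrative sketch of the basis-function idea (the choice of Gaussian radial basis functions in time, and the names centres, width and coeffs, are assumptions, not the BF-VGPA construction itself):

    import numpy as np

    def rbf_basis(t, centres, width):
        """Gaussian radial basis functions evaluated at the time points t."""
        return np.exp(-0.5 * ((t[:, None] - centres[None, :]) / width) ** 2)

    def expand(t, centres, width, coeffs):
        """Represent a time-dependent quantity (e.g. a variational control parameter)
        by a low-dimensional expansion A(t) = sum_k c_k * phi_k(t), removing the need
        for a fine time discretisation of A itself."""
        return rbf_basis(t, centres, width) @ coeffs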
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz ’63 (3D model). As a special case the algorithm is also applied to the 40 dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well known methods, such as the hybrid Monte Carlo, dual unscented Kalman filter and full weak-constraint 4D-Var algorithm, and empirically analyse their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.
Abstract:
In the present work the neutron emission spectra from a graphite cube, and from natural uranium, lithium fluoride, graphite, lead and steel slabs bombarded with 14.1 MeV neutrons, were measured to test nuclear data and calculational methods for D-T fusion reactor neutronics. The neutron spectra were measured with an organic scintillator using a pulse-shape discrimination technique based on a charge comparison method to reject the gamma-ray counts. A computer programme was used to analyse the experimental data by the differentiation unfolding method. The 14.1 MeV neutron source was obtained from the T(d,n)4He reaction by the bombardment of a T-Ti target with a deuteron beam of energy 130 keV. The total neutron yield was monitored by the associated particle method using a silicon surface barrier detector. The numerical calculations were performed using the one-dimensional discrete-ordinate neutron transport code ANISN with the ZZ-FEWG 1/31-1F cross section library. A computer programme based on a Gaussian smoothing function was used to smooth the calculated data and to match the experimental data. There was general agreement between measured and calculated spectra for the range of materials studied. The ANISN calculations, carried out with P3-S8 approximations and with the slab assemblies represented by a hollow sphere with no reflection at the internal boundary, were adequate to model the experimental data; hence it appears that the cross-section set is satisfactory and, for the materials tested, needs no modification in the range 14.1 MeV to 2 MeV. It would also be possible to carry out a study on fusion reactor blankets, using cylindrical geometry and including a series of concentric cylindrical shells to represent the torus wall, possible neutron converter and breeder regions, and reflector and shielding regions.
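The Gaussian smoothing step mentioned above amounts to broadening the calculated spectrum towards the detector resolution; a minimal Python/numpy sketch, in which the FWHM value and the normalised-kernel form are assumptions made for illustration:

    import numpy as np

    def gaussian_smooth(energies, spectrum, fwhm):
        """Smooth a calculated spectrum with a Gaussian of the given FWHM
        (same units as the energy grid)."""
        sigma = fwhm / 2.3548                      # convert FWHM to standard deviation
        smoothed = np.zeros_like(spectrum, dtype=float)
        for i, E0 in enumerate(energies):
            w = np.exp(-0.5 * ((energies - E0) / sigma) ** 2)
            smoothed[i] = np.sum(w * spectrum) / np.sum(w)
        return smoothed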
Abstract:
An investigation is carried out into the design of a small local computer network for eventual implementation on the University of Aston campus. Microprocessors are investigated as a possible choice for use as a node controller for reasons of cost and reliability. Since the network will be local, high-speed lines of megabit order are proposed. After an introduction to several well known networks, various aspects of networks are discussed, including packet switching, functions of a node and host-node protocol. Chapter three develops the network philosophy with an introduction to microprocessors. Various organisations of microprocessors into multicomputer and multiprocessor systems are discussed, together with methods of achieving reliable computing. Chapter four presents the simulation model and its implementation as a computer program. The major modelling effort is to study the behaviour of messages queueing for access to the network and the message delay experienced on the network. Use is made of spectral analysis to determine the sampling frequency, while Exponentially Weighted Moving Averages are used for data smoothing.
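The smoothing technique named in the abstract reduces to the standard recursion below; a minimal Python/numpy sketch, with the smoothing constant alpha chosen arbitrarily for illustration:

    import numpy as np

    def ewma(x, alpha=0.2):
        """Exponentially Weighted Moving Average:
        s[t] = alpha * x[t] + (1 - alpha) * s[t-1]."""
        s = np.empty(len(x))
        s[0] = x[0]
        for t in range(1, len(x)):
            s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
        return s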
Abstract:
Small scale laboratory experiments, in which the specimen is considered to represent an element of soil in the soil mass, are essential to the evolution of fundamental theories of mechanical behaviour. In this thesis, plane strain and axisymmetric compression tests, performed on a fine sand, are reported and the results are compared with various theoretical predictions. A new apparatus is described in which cuboidal samples can be tested in either axisymmetric compression or plane strain. The plane strain condition is simulated either by rigid side platens, in the conventional manner, or by flexible side platens which also measure the intermediate principal stress. Close control of the initial porosity of the specimens is achieved by a vibratory method of sample preparation. The strength of sand is higher in plane strain than in axisymmetric compression, and the strains required to mobilize peak strength are much smaller. The difference between plane strain and axisymmetric compression behaviour is attributed to the restrictions on particle movement enforced by the plane strain condition; this results in an increase in the frictional component of shear strength. The stress conditions at failure in plane strain, including the intermediate principal stress, are accurately predicted by a theory based on the stress-dilatancy interpretation of Mohr's circles. Detailed observations of rupture modes are presented and measured rupture plane inclinations are predicted by the stress-dilatancy theory. Although good correlation with the stress-dilatancy theory is obtained during virgin loading, in both axisymmetric compression and plane strain, the stress-dilatancy rule is only obeyed during reloading if the specimen has been unloaded to approximate ambient stress conditions. The shape of the stress-strain curves during pre-peak deformation, in both plane strain and axisymmetric compression, is accurately described by a combined parabolic-hyperbolic specification.
Abstract:
The research objectives were: 1. To review the literature to establish the factors which have traditionally been regarded as most crucial to the design of effective exhaust ventilation systems. 2. To design, construct, install and calibrate a wind tunnel. 3. To develop procedures for air velocity measurement, followed by a comprehensive programme of aerodynamic data collection and data analysis for a variety of conditions. The major research findings were: a) The literature in the subject is inadequate. There is a particular need for a much greater understanding of the aerodynamics of the suction flow field. b) The discrepancies between the experimentally observed centre-line velocities and those predicted by conventional formulae are unacceptably large. c) There was little agreement between theoretically calculated and observed velocities in the suction zone of captor hoods. d) Improved empirical formulae for the prediction of centre-line velocity applicable to the classical geometrically shaped suction openings and the flanged condition could be (and were) derived. Further analysis of data revealed that: i) Point velocity is directly proportional to the suction flow rate, and the ratio of the point velocity to the average face velocity is constant. ii) Both shape and size of the suction opening are significant factors, as the coordinates of their points govern the extent of the effect of the suction flow field. iii) The hypothetical ellipsoidal potential function and hyperbolic streamlines were found experimentally to be correct. iv) The effect of guide plates depends on the size, shape and angle of fitting. The effect was to very approximately double the suction velocity, but the exact effect is difficult to predict. v) The axially symmetric openings produce practically symmetric flow fields. Similarity of connection pieces between the suction opening and the main duct in each case is essential in order to induce a similar suction flow field. Additionally a pilot study was made in which an artificial extraneous air flow was created, measured and its interaction with the suction flow field measured and represented graphically.