54 results for Non-linear Dynamics
Abstract:
We review the recent progress of information theory in optical communications, and describe the current experimental results and associated advances in the various individual technologies which increase the information capacity. We confirm the widely held belief that the reported capacities are approaching the fundamental limits imposed by signal-to-noise ratio and the distributed non-linearity of conventional optical fibres, which is slowing the growth rate of communication capacity. We also discuss techniques that show promise for increasing and/or approaching the information capacity limit.
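As a quick back-of-the-envelope companion to the limits discussed above, the sketch below evaluates the linear Shannon capacity C = B log2(1 + SNR) for a few signal-to-noise ratios; the non-linear fibre capacity limit treated in the review is lower and is not modelled here. The bandwidth and SNR values are illustrative assumptions only.

```python
import numpy as np

# Shannon capacity C = B * log2(1 + SNR) for an assumed usable bandwidth.
# The non-linear fibre limit discussed in the review is lower than this.
bandwidth_hz = 5e12                       # ~5 THz of amplifier bandwidth (assumed)

for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)             # convert dB to a linear ratio
    capacity = bandwidth_hz * np.log2(1 + snr)
    print(f"SNR {snr_db:2d} dB -> {capacity / 1e12:.1f} Tbit/s")
```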
Abstract:
We present measurements on the non-linear temperature response of fibre Bragg gratings recorded in pure and trans-4-stilbenemethanol-doped polymethyl methacrylate (PMMA) holey fibres.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
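As a rough, self-contained illustration of the general path-sampling setting described above (not the authors' variational diffusion bridge proposal), the sketch below runs a blocked random-walk Metropolis sampler over a discretised double-well diffusion observed sparsely with Gaussian noise. All parameter values and the crude blocking rule are assumptions made for the example.

```python
import numpy as np

# Random-walk Metropolis over a discretised latent path of a double-well
# diffusion dx = 4x(1 - x^2) dt + sigma dW, partially observed with noise.
rng = np.random.default_rng(0)
dt, sigma, obs_std = 0.01, 0.5, 0.2
T = 500                                   # number of time steps
obs_idx = np.arange(0, T, 50)             # sparse observation times

def drift(x):
    return 4.0 * x * (1.0 - x ** 2)       # double-well potential drift

# Simulate a "true" path and noisy observations (synthetic data).
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = (x_true[t - 1] + drift(x_true[t - 1]) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal())
y = x_true[obs_idx] + obs_std * rng.standard_normal(len(obs_idx))

def log_posterior(path):
    # Euler-Maruyama transition densities + Gaussian observation likelihood.
    inc = path[1:] - path[:-1] - drift(path[:-1]) * dt
    log_prior = -0.5 * np.sum(inc ** 2) / (sigma ** 2 * dt)
    log_lik = -0.5 * np.sum((y - path[obs_idx]) ** 2) / obs_std ** 2
    return log_prior + log_lik

# Blocked random-walk Metropolis on the path (crude blocking: suffix blocks).
path = np.zeros(T)
lp = log_posterior(path)
for it in range(5000):
    block = slice(rng.integers(0, T - 50), None)
    prop = path.copy()
    prop[block] += 0.05 * rng.standard_normal(prop[block].shape)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        path, lp = prop, lp_prop
```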
Abstract:
We describe a parallel multi-threaded approach for high performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been performed using the highly parallel capabilities of a programmable graphics processor. © 2011 SPIE.
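For orientation, the split-step Fourier method sketched below is a standard way models of this kind (here the scalar non-linear Schrödinger equation) are integrated numerically; this NumPy version ignores the GPU parallelisation the abstract describes, and the fibre parameters are illustrative assumptions.

```python
import numpy as np

# Symmetric split-step Fourier integration of the scalar non-linear
# Schrodinger equation: half dispersion step, full non-linear step, half dispersion.
n_t, t_span = 2 ** 12, 100.0                      # time grid (ps)
t = np.linspace(-t_span / 2, t_span / 2, n_t, endpoint=False)
omega = 2 * np.pi * np.fft.fftfreq(n_t, d=t[1] - t[0])

beta2, gamma = -20e-3, 1.3e-3                     # dispersion (ps^2/m), non-linearity (1/(W m))
dz, n_steps = 1.0, 1000                           # 1 m steps, 1 km of fibre

field = 1.0 / np.cosh(t / 5.0)                    # sech input pulse, ~1 W peak power
disp_half = np.exp(0.5j * beta2 * omega ** 2 * dz / 2)

for _ in range(n_steps):
    field = np.fft.ifft(disp_half * np.fft.fft(field))   # half dispersion (frequency domain)
    field *= np.exp(1j * gamma * np.abs(field) ** 2 * dz) # full non-linear phase rotation
    field = np.fft.ifft(disp_half * np.fft.fft(field))   # half dispersion
```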
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To be of more general interest than the test cases they consider, the technique is applied in this paper to the subset sum problem, which is a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single spin flip dynamics. It is a problem which exhibits an interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
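A minimal toy version of this setting, assuming a simple Boltzmann-selection GA, is sketched below: it evolves a population on a subset sum instance and records the energy cumulants and mean pairwise correlation used as macroscopic quantities in the formalism. The instance, selection scheme and parameter values are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

# Toy GA on the subset sum problem, tracking population cumulants and
# the mean pairwise correlation (overlap) each generation.
rng = np.random.default_rng(1)
n, pop_size, target = 40, 100, 0
weights = rng.integers(-50, 51, size=n)

def energy(pop):
    # |subset sum - target|: strongly non-linear in the bit representation.
    return np.abs(pop @ weights - target)

pop = rng.integers(0, 2, size=(pop_size, n))
for gen in range(200):
    E = energy(pop)
    # Boltzmann selection, then per-bit mutation.
    p = np.exp(-E / (E.std() + 1e-9)); p /= p.sum()
    pop = pop[rng.choice(pop_size, size=pop_size, p=p)]
    flips = rng.random(pop.shape) < (1.0 / n)
    pop = np.where(flips, 1 - pop, pop)

    # Macroscopics: first two cumulants of the energy and the mean correlation.
    k1, k2 = E.mean(), E.var()
    spins = 2 * pop - 1
    q = spins @ spins.T / n
    mean_corr = (q.sum() - np.trace(q)) / (pop_size * (pop_size - 1))
```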
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
Abstract:
This study used magnetoencephalography (MEG) to examine the dynamic patterns of neural activity underlying the auditory steady-state response. We examined the continuous time-series of responses to a 32-Hz amplitude modulation. Fluctuations in the amplitude of the evoked response were found to be mediated by non-linear interactions with oscillatory processes both at the same source, in the alpha and beta frequency bands, and in the opposite hemisphere. © 2005 Elsevier Ireland Ltd. All rights reserved.
Abstract:
This thesis examines the dynamics of firm-level financing and investment decisions for six Southeast Asian countries. The study provides empirical evidence on the impacts of changes in firm-level financing decisions during the period of financial liberalization by considering the debt and equity financing decisions of a set of non-financial firms. The empirical results show that firms in Indonesia, Pakistan, and South Korea have a relatively faster speed of adjustment than firms in the other Southeast Asian countries in attaining optimal debt and equity ratios in response to banking sector and stock market liberalization. In addition, contrary to the widely held belief that firms adjust their financial ratios to industry levels, the results indicate that industry factors do not have a significant impact on the speed of capital structure adjustments. This study also shows that non-linear estimation methods are more appropriate than linear estimation methods for capturing changes in capital structure. The empirical results also show that the international stock market integration of these countries has significantly reduced the equity risk premium as well as the firm-level cost of equity capital. Thus stock market liberalization is associated with a decrease in the cost of equity capital of the firms. Developments in the securities markets infrastructure have also reduced the cost of equity capital. However, with increased integration there is the possibility of capital outflows from the emerging markets, which might reverse the pattern of decreasing cost of capital in these markets.
Abstract:
Linear models reach their limitations in applications with nonlinearities in the data. In this paper new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed to a certain extent to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
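The sketch below gives the flavour of such a linear-versus-non-linear comparison on a synthetic series, assuming an AR(2) benchmark fitted by least squares and a small single-hidden-layer network trained by gradient descent; the data generator, lag choice and network size are illustrative assumptions rather than the models used in the paper.

```python
import numpy as np

# Synthetic non-linear autoregressive data.
rng = np.random.default_rng(3)
T = 400
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.6 * y[t-1] - 0.3 * y[t-2] + 0.4 * np.tanh(y[t-1]) + 0.1 * rng.standard_normal()

X = np.column_stack([y[1:-1], y[:-2]])    # lagged regressors
target = y[2:]
split = 300
Xtr, Xte, ytr, yte = X[:split], X[split:], target[:split], target[split:]

# Linear benchmark: AR(2) via ordinary least squares.
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(Xtr)), Xtr]), ytr, rcond=None)
lin_pred = np.column_stack([np.ones(len(Xte)), Xte]) @ beta

# Non-linear model: one hidden layer, full-batch gradient descent.
H, lr = 8, 0.01
W1, b1 = 0.1 * rng.standard_normal((2, H)), np.zeros(H)
W2, b2 = 0.1 * rng.standard_normal(H), 0.0
for _ in range(2000):
    h = np.tanh(Xtr @ W1 + b1)
    err = h @ W2 + b2 - ytr
    gW2 = h.T @ err / len(ytr); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = Xtr.T @ gh / len(ytr); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
nn_pred = np.tanh(Xte @ W1 + b1) @ W2 + b2

print("linear RMSE :", np.sqrt(np.mean((lin_pred - yte) ** 2)))
print("network RMSE:", np.sqrt(np.mean((nn_pred - yte) ** 2)))
```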
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of different systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate and highly non-linear stochastic double well, and the multivariate chaotic stochastic Lorenz '63 (3-dimensional model). The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
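As a small, concrete companion to the benchmark systems mentioned above, the sketch below simulates an Ornstein-Uhlenbeck process with the Euler-Maruyama scheme and evaluates its exact Gaussian transition likelihood, recovering the drift parameter by a crude grid search; the parameter values and grid are illustrative assumptions, and this stands in for, rather than reproduces, the variational machinery of the thesis.

```python
import numpy as np

# Ornstein-Uhlenbeck process dx = -theta*x dt + sigma dW, for which the
# exact transition density (and hence the likelihood) is Gaussian.
rng = np.random.default_rng(2)
theta, sigma, dt, T = 2.0, 1.0, 0.05, 400

x = np.zeros(T)
for t in range(1, T):                      # Euler-Maruyama simulation
    x[t] = x[t-1] - theta * x[t-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def ou_log_likelihood(x, theta, sigma, dt):
    # Exact transition: x_{t+dt} | x_t ~ N(x_t e^{-theta dt},
    #                                      sigma^2 (1 - e^{-2 theta dt}) / (2 theta))
    mean = x[:-1] * np.exp(-theta * dt)
    var = sigma ** 2 * (1 - np.exp(-2 * theta * dt)) / (2 * theta)
    resid = x[1:] - mean
    return -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

# Crude grid search over the drift parameter as a stand-in for full inference.
thetas = np.linspace(0.5, 4.0, 36)
theta_hat = thetas[np.argmax([ou_log_likelihood(x, th, sigma, dt) for th in thetas])]
```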
Abstract:
Blurred edges appear sharper in motion than when they are stationary. We (Vision Research 38 (1998) 2108) have previously shown how such distortions in perceived edge blur may be accounted for by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. If the form of the transducer is fixed (independent of contrast) for a given speed, then a strong prediction of the model is that motion sharpening should increase with increasing contrast. We measured the sharpening of periodic patterns over a large range of contrasts, blur widths and speeds. The results indicate that whilst sharpening increases with speed it is practically invariant with contrast. The contrast invariance of motion sharpening is not explained by an early, static compressive non-linearity alone. However, several alternative explanations are also inconsistent with these results. We show that if a dynamic contrast gain control precedes the static non-linear transducer then motion sharpening, its speed dependence, and its invariance with contrast, can be predicted with reasonable accuracy. © 2003 Elsevier Science Ltd. All rights reserved.
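A minimal sketch of the two-stage account described above, assuming simple divisive gain control followed by a power-law compressive transducer (functional forms and constants chosen purely for illustration, not the fitted model from the study), is given below.

```python
import numpy as np

# Dynamic contrast gain control followed by a static compressive transducer,
# applied to a blurred edge profile.
x = np.linspace(-2.0, 2.0, 401)
blur = 0.3
edge = np.tanh(x / blur)                  # smooth (blurred) edge, contrast in [-1, 1]

def gain_control(c, semi_saturation=0.2):
    # Divisive gain control: response normalised by local contrast magnitude,
    # making the later stage largely contrast-invariant.
    return c / (semi_saturation + np.abs(c))

def transducer(r, exponent=0.7):
    # Static compressive non-linearity (exponent < 1): compresses large
    # responses, steepening the apparent edge (perceived sharpening).
    return np.sign(r) * np.abs(r) ** exponent

sharpened = transducer(gain_control(edge))
```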
Abstract:
The dynamics of switching and transmission of an optical signal comprising individual OTDM channels of unequal amplitudes in a dispersion-managed link with in-line non-linear fibre loop mirrors is investigated for the first time.
Abstract:
This thesis experimentally examines the use of different techniques for optical fibre transmission over ultra-long-haul distances. It firstly examines the use of dispersion management as a means of achieving long-haul communications. Secondly, it examines the use of concatenated NOLMs for dispersion-managed (DM) autosoliton ultra-long-haul propagation, comparing their performance with a generic system without NOLMs. Thirdly, timing jitter in the concatenated NOLM system is examined and compared with the generic system, and lastly issues of OTDM amplitude non-uniformity from channel to channel in a saturable absorber, specifically a NOLM, are raised. Transmission at a rate of 40Gbit/s is studied in an all-Raman amplified standard fibre link with amplifier spacing of the order of 80km. We demonstrate in this thesis that the detrimental effects associated with high power Raman amplification can be minimized by dispersion map optimization. As a result, a transmission distance of 1600km (2000km including dispersion compensating fibre) has been achieved in standard single-mode fibre. The use of concatenated NOLMs to provide a stable propagation regime has been proposed theoretically. In this thesis, autosoliton propagation is observed experimentally for the first time in a dispersion-managed optical transmission system. The system is based on a strong dispersion map with large amplifier spacing. Operation at transmission rates of 10, 40 and 80Gbit/s is demonstrated. With the insertion of a stabilizing element into the NOLM, the transmission of 10 and 20Gbit/s data streams was extended and demonstrated experimentally. Error-free propagation over 100 and 20 thousand kilometres has been achieved at 10 and 20Gbit/s respectively, with terrestrial amplifier spacing. The monitoring of timing jitter is of importance to all optical systems. The evolution of timing jitter in a DM autosoliton system has been studied in this thesis and analyzed at bit rates from 10Gbit/s to 80Gbit/s. Non-linear guiding by in-line regenerators considerably changes the dynamics of jitter accumulation. As transmission systems require higher data rates, the use of OTDM will become more prolific. The dynamics of switching and transmission of an optical signal comprising individual OTDM channels of unequal amplitudes in a dispersion-managed link with in-line non-linear fibre loop mirrors is investigated.
Abstract:
This thesis describes an experimental and analytic study of the effects of magnetic non-linearity and finite length on the loss and field distribution in solid iron due to a travelling mmf wave. In the first half of the thesis, a two-dimensional solution is developed which accounts for the effects of both magnetic non-linearity and eddy-current reaction; this solution is extended, in the second half, to a three-dimensional model. In the two-dimensional solution, new equations for loss and flux/pole are given; these equations contain the primary excitation, the machine parameters and factors describing the shape of the normal B-H curve. The solution applies to machines of any air-gap length. The conditions for maximum loss are defined, and generalised torque/frequency curves are obtained. A relationship between the peripheral component of magnetic field on the surface of the iron and the primary excitation is given. The effects of magnetic non-linearity and finite length are combined analytically by introducing an equivalent constant permeability into a linear three-dimensional analysis. The equivalent constant permeability is defined from the non-linear solution for the two-dimensional magnetic field at the axial centre of the machine to avoid iterative solutions. In the linear three-dimensional analysis, the primary excitation in the passive end-regions of the machine is set equal to zero and the secondary end faces are developed onto the air-gap surface. The analyses, and the assumptions on which they are based, were verified on an experimental machine which consists of a three-phase rotor and alternative solid iron stators, one with copper end rings, and one without copper end rings; the main dimensions of the two stators are identical. Measurements of torque, flux/pole, surface current density and radial power flow were obtained for both stators over a range of frequencies and excitations. Comparison of the measurements on the two stators enabled the individual effects of finite length and saturation to be identified, and the definition of constant equivalent permeability to be verified. The penetration of the peripheral flux into the stator with copper end rings was measured and compared with theoretical penetration curves. Agreement between measured and theoretical results was generally good.
Abstract:
The spatial patterns of diffuse, primitive, classic and compact beta-amyloid (Abeta) deposits were studied in the medial temporal lobe in 14 elderly, non-demented patients (ND) and in nine patients with Alzheimer’s disease (AD). In both patient groups, Abeta deposits were clustered and in a number of tissues, a regular periodicity of Abeta deposit clusters was observed parallel to the tissue boundary. The primitive deposit clusters were significantly larger in the AD cases but there were no differences in the sizes of the diffuse and classic deposit clusters between patient groups. In AD, the relationship between Abeta deposit cluster size and density in the tissue was non-linear. This suggested that cluster size increased with increasing Abeta deposit density in some tissues while in others, Abeta deposit density was high but contained within smaller clusters. It was concluded that the formation of large clusters of primitive deposits could be a factor in the development of AD.