49 results for Non-Linear Optimization


Relevance: 100.00%

Abstract:

We review the recent progress of information theory in optical communications, and describe current experimental results and the associated advances in the individual technologies that increase information capacity. We confirm the widely held belief that the reported capacities are approaching the fundamental limits imposed by signal-to-noise ratio and the distributed non-linearity of conventional optical fibres, resulting in a reduction in the growth rate of communication capacity. We also discuss techniques that show promise for increasing and/or approaching the information capacity limit.
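As a pointer to the limit the abstract invokes, the sketch below evaluates the linear-channel Shannon limit against which non-linear fibre capacities are compared. The 5 THz bandwidth and the SNR sweep are illustrative assumptions, not figures from the paper.

    import math

    def shannon_capacity(bandwidth_hz, snr_linear):
        # Shannon-Hartley limit for a linear AWGN channel:
        # C = B * log2(1 + SNR). Real fibre channels fall below this
        # once distributed non-linearity is taken into account.
        return bandwidth_hz * math.log2(1.0 + snr_linear)

    # Illustrative numbers only: 5 THz of usable band, SNR swept in dB.
    bandwidth = 5e12
    for snr_db in (10, 20, 30):
        snr = 10 ** (snr_db / 10)
        capacity_tbit = shannon_capacity(bandwidth, snr) / 1e12
        print(f"SNR = {snr_db} dB -> C = {capacity_tbit:.1f} Tbit/s")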

Relevance: 100.00%

Abstract:

We present measurements on the non-linear temperature response of fibre Bragg gratings recorded in pure and trans-4-stilbenemethanol-doped polymethyl methacrylate (PMMA) holey fibres.
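A minimal sketch of how such a non-linear response is typically quantified: fitting linear and quadratic models to Bragg-wavelength shift versus temperature. The data and coefficients below are synthetic assumptions for illustration, not the paper's measurements.

    import numpy as np

    # Synthetic quadratic wavelength response; coefficients are
    # assumptions for demonstration only.
    temperature = np.linspace(20, 80, 13)            # degrees C
    shift_nm = -0.05 * (temperature - 20) - 4e-4 * (temperature - 20) ** 2

    # A quadratic fit captures the curvature a linear model misses.
    for name, deg in (("linear", 1), ("quadratic", 2)):
        coeffs = np.polyfit(temperature, shift_nm, deg)
        residual = shift_nm - np.polyval(coeffs, temperature)
        print(f"{name}: max residual = {np.abs(residual).max():.2e} nm")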

Relevance: 100.00%

Abstract:

In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The new algorithms' accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
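The core sampling idea can be illustrated on a toy problem: an independence Metropolis-Hastings sampler whose proposal is a fixed Gaussian approximation, standing in for the variational approximation. This one-dimensional sketch is not the paper's diffusion bridge construction; target, proposal and seeds are assumptions. Because the Gaussian proposal covers only one mode of the double-well target, it also hints at why multi-modal posteriors are the difficult case noted above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy double-well target density (unnormalised), echoing the
    # double-well drift example; it stands in for the posterior over paths.
    log_target = lambda x: -(x**2 - 1.0) ** 2 / 0.5

    # Fixed Gaussian approximation used as the proposal, playing the
    # role of the variational approximation in the paper.
    mu, sigma = 1.0, 0.4
    log_proposal = lambda x: -0.5 * ((x - mu) / sigma) ** 2

    def independence_sampler(n_steps, x0=1.0):
        x, samples = x0, []
        for _ in range(n_steps):
            y = rng.normal(mu, sigma)
            # Independence MH ratio: pi(y) q(x) / (pi(x) q(y)).
            log_alpha = (log_target(y) - log_target(x)
                         + log_proposal(x) - log_proposal(y))
            if np.log(rng.random()) < log_alpha:
                x = y
            samples.append(x)
        return np.array(samples)

    print("posterior mean estimate:", independence_sampler(20000).mean())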

Relevance: 100.00%

Abstract:

We describe a parallel multi-threaded approach for high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been developed using the highly parallel capabilities of a programmable graphics processor. © 2011 SPIE.
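A typical computational kernel in this setting is split-step Fourier integration of the non-linear Schrödinger equation. The minimal NumPy sketch below (parameters are illustrative, not the paper's) shows the FFT and pointwise stages that a GPU implementation would execute in parallel.

    import numpy as np

    # Minimal split-step Fourier integrator for the scalar NLSE
    # du/dz = -i(beta2/2) d2u/dt2 + i*gamma*|u|^2*u, a common kernel in
    # ultrafast non-linear optics modelling.
    n, t_max = 1024, 20.0
    t = np.linspace(-t_max, t_max, n, endpoint=False)
    w = 2 * np.pi * np.fft.fftfreq(n, d=t[1] - t[0])
    beta2, gamma, dz, n_steps = -1.0, 1.0, 1e-3, 5000

    u = 1.0 / np.cosh(t)                              # fundamental soliton
    half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)  # half-step dispersion
    for _ in range(n_steps):
        u = np.fft.ifft(half_disp * np.fft.fft(u))    # linear half-step
        u *= np.exp(1j * gamma * np.abs(u)**2 * dz)   # non-linear full step
        u = np.fft.ifft(half_disp * np.fft.fft(u))    # linear half-step
    print("peak power after propagation:", round(float(np.abs(u).max()**2), 4))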

Relevance: 100.00%

Abstract:

This paper addresses the problem of obtaining detailed 3d reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This system is known to capture highly detailed deforming 3d surfaces at high frame rates without requiring expensive hardware or a synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lighting setup and how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3d model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and to simultaneously estimate the diffuse reflectance model. In the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
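The stratified calibration step can be sketched on synthetic one-dimensional data: a RANSAC-style search for diffuse (Lambertian) pixels, followed by a non-linear least-squares fit of a specular term to the outliers. The reflectance model, thresholds and data below are assumptions for illustration, not the paper's formulation.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)

    # Synthetic stand-in for per-pixel observations: intensity versus
    # cos(angle between normal and light direction).
    cosang = rng.uniform(0.1, 1.0, 500)
    intensity = 0.8 * cosang + rng.normal(0, 0.01, cosang.size)
    specular = rng.random(cosang.size) < 0.2          # 20% specular pixels
    intensity[specular] += 0.6 * cosang[specular] ** 30.0

    # Step 1: RANSAC-style search for purely diffuse pixels (I ~ albedo*cos).
    best_albedo, best_inliers = None, 0
    for _ in range(100):
        i = rng.integers(cosang.size)
        albedo = intensity[i] / cosang[i]
        inliers = np.abs(intensity - albedo * cosang) < 0.03
        if inliers.sum() > best_inliers:
            best_albedo, best_inliers, diffuse = albedo, inliers.sum(), inliers

    # Step 2: non-linear least squares on the outliers for the specular term.
    resid = lambda p: (intensity[~diffuse]
                       - best_albedo * cosang[~diffuse]
                       - p[0] * cosang[~diffuse] ** p[1])
    fit = least_squares(resid, x0=[0.5, 10.0])
    print("albedo ~", round(best_albedo, 2), "specular params ~", fit.x.round(1))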

Relevance: 100.00%

Abstract:

There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, to train such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
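For readers unfamiliar with EM-based training of latent variable models, the sketch below runs EM on a deliberately simple model, a two-component Gaussian mixture with synthetic data. The paper's model is far richer and non-linear, but the alternating E-step/M-step pattern is the same.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic data drawn from two Gaussian "causes".
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
    pi, mu, var = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])

    for _ in range(50):
        # E-step: posterior responsibility of component 1 for each point
        # (the 1/sqrt(2*pi) factor cancels in the ratio).
        p0 = (1 - pi) * np.exp(-(x - mu[0])**2 / (2*var[0])) / np.sqrt(var[0])
        p1 = pi * np.exp(-(x - mu[1])**2 / (2*var[1])) / np.sqrt(var[1])
        r = p1 / (p0 + p1)
        # M-step: re-estimate mixing weight, means and variances.
        pi = r.mean()
        mu = np.array([((1-r)*x).sum()/(1-r).sum(), (r*x).sum()/r.sum()])
        var = np.array([((1-r)*(x-mu[0])**2).sum()/(1-r).sum(),
                        (r*(x-mu[1])**2).sum()/r.sum()])

    print("means:", mu.round(2), "weight:", round(pi, 2))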

Relevance: 100.00%

Abstract:

This paper explores the use of the optimization procedures in SAS/OR software with application to the ordered weighted averaging (OWA) operators of decision-making units (DMUs). OWA, originally introduced by Yager (IEEE Trans Syst Man Cybern 18(1):183-190, 1988), has gained much interest among researchers, and many applications have been proposed in areas such as decision making, expert systems, data mining, approximate reasoning, fuzzy systems and control. SAS, in turn, is powerful software capable of running various optimization tools, such as linear and non-linear programming with all types of constraints. To facilitate the use of the OWA operator by SAS users, a macro was implemented. The SAS macro developed in this paper selects the criteria and alternatives from a SAS dataset and calculates a set of OWA weights. An example is given to illustrate the features of SAS/OWA software. © Springer-Verlag 2009.
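Although the paper's implementation is a SAS macro, the OWA aggregation itself is compact: sort the arguments, then apply position-based weights. The Python sketch below shows the defining computation, with weight vectors that recover max, min and mean as special cases; the example scores are arbitrary.

    import numpy as np

    def owa(values, weights):
        # OWA (Yager 1988): sort arguments in descending order, then take
        # the weighted sum with position-based (not argument-based) weights.
        values = np.sort(values)[::-1]
        weights = np.asarray(weights, dtype=float)
        assert np.isclose(weights.sum(), 1.0) and (weights >= 0).all()
        return float(values @ weights)

    scores = [0.4, 0.9, 0.6, 0.7]          # one DMU's criteria scores
    print(owa(scores, [1, 0, 0, 0]))       # max operator
    print(owa(scores, [0, 0, 0, 1]))       # min operator
    print(owa(scores, [0.25] * 4))         # arithmetic mean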

Relevance: 100.00%

Abstract:

The non-linear programming algorithms for the minimum weight design of structural frames are presented in this thesis. The first, which is applied to rigidly jointed and pin jointed plane frames subject to deflexion constraints, consists of a search in a feasible design space. Successive trial designs are developed so that the feasibility and the optimality of the designs are improved simultaneously. It is found that this method is restricted to the design of structures with few unknown variables. The second non-linear programming algorithm is presented in a general form. This consists of two types of search, one improving feasibility and the other optimality. The method speeds up the 'feasible direction' approach by obtaining a constant weight direction vector that is influenced by dominating constraints. For pin jointed plane and space frames this method is used to obtain a 'minimum weight' design which satisfies restrictions on stresses and deflexions. The matrix force method enables the design requirements to be expressed in a general form and the design problem is automatically formulated within the computer. Examples are given to explain the method and the design criteria are extended to include member buckling. Fundamental theorems are proposed and proved to confirm that structures are inter-related. These theorems are applicable to linear elastic structures and facilitate the prediction of the behaviour of one structure from the results of analysing another, more general, or related structure. It becomes possible to evaluate the significance of each member in the behaviour of a structure, and the problem of minimum weight design is extended to include shape. A method is proposed to design structures of optimum shape with stress and deflexion limitations. Finally, a detailed investigation is carried out into the design of structures to study the factors that influence their shape.
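A minimal modern counterpart of such a minimum weight design problem, posed as a non-linear program with stress and deflexion constraints. The member forces, lengths and limits below are illustrative assumptions, and scipy's SLSQP solver stands in for the thesis's search algorithms.

    import numpy as np
    from scipy.optimize import minimize

    # Toy sizing problem: choose two member areas to minimise weight
    # subject to stress and deflexion limits (all values illustrative).
    lengths = np.array([1.0, 1.5])          # m
    forces = np.array([50e3, 80e3])         # N, axial, from a fixed analysis
    stress_limit, defl_limit, E = 150e6, 5e-3, 200e9

    weight = lambda a: float((lengths * a).sum())   # proportional to volume
    constraints = [
        # stress: F/A <= limit  ->  limit - F/A >= 0
        {"type": "ineq", "fun": lambda a: stress_limit - forces / a},
        # deflexion: sum(F*L / (E*A)) <= limit
        {"type": "ineq",
         "fun": lambda a: defl_limit - (forces * lengths / (E * a)).sum()},
    ]
    res = minimize(weight, x0=np.array([1e-3, 1e-3]),
                   bounds=[(1e-6, None)] * 2, constraints=constraints)
    print("areas (m^2):", res.x, "weight index:", res.fun)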

Relevance: 90.00%

Abstract:

A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To show that the technique is of more general interest than the test cases they consider, it is applied in this paper to the subset sum problem, a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single spin flip dynamics. It is a problem which exhibits an interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
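For concreteness, a bare-bones genetic algorithm on the subset sum problem is sketched below. The operators and settings are generic illustrations, not the dynamics-matched parameters analysed in the paper.

    import numpy as np

    rng = np.random.default_rng(3)

    # Subset sum: minimise |target - sum(chosen items)|.
    items = rng.integers(1, 100, 40)
    target = items.sum() // 2
    energy = lambda pop: np.abs(target - pop @ items)   # to be minimised

    pop = rng.integers(0, 2, (100, items.size))
    for _ in range(200):
        e = energy(pop)
        # Tournament selection: keep the better of random pairs.
        a, b = rng.integers(0, len(pop), (2, len(pop)))
        parents = np.where((e[a] < e[b])[:, None], pop[a], pop[b])
        # Uniform crossover between consecutive parents, then mutation.
        mask = rng.integers(0, 2, pop.shape).astype(bool)
        pop = np.where(mask, parents, np.roll(parents, 1, axis=0))
        pop ^= rng.random(pop.shape) < 0.01             # bit-flip mutation
    print("best energy found:", energy(pop).min())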

Relevance: 90.00%

Abstract:

A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
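The macroscopics the formalism tracks are straightforward to compute for any given population. The sketch below evaluates the first four cumulants of an additive toy phenotype and the mean pairwise correlation for a random binary population; the population and phenotype are generic illustrative choices, not the paper's test problems.

    import numpy as np
    from scipy.stats import kstat

    rng = np.random.default_rng(4)

    # Random binary population and an additive toy phenotype.
    pop = rng.integers(0, 2, (60, 100))            # population of genotypes
    fitness = pop.sum(axis=1)

    # First four cumulants of the fitness distribution (k-statistics
    # are unbiased cumulant estimators).
    cumulants = [kstat(fitness, n) for n in (1, 2, 3, 4)]

    # Mean correlation between members (related to mean Hamming distance).
    spins = 2 * pop - 1                            # map {0,1} -> {-1,+1}
    overlap = spins @ spins.T / pop.shape[1]
    mean_corr = overlap[np.triu_indices(len(pop), 1)].mean()
    print("cumulants k1..k4:", np.round(cumulants, 2))
    print("mean correlation:", round(mean_corr, 3))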

Relevance: 90.00%

Abstract:

Linear models reach their limitations in applications with nonlinearities in the data. In this paper new empirical evidence is provided on the relative Euro inflation forecasting performance of linear and non-linear models. The well established and widely used univariate ARIMA and multivariate VAR models are used as linear forecasting models, whereas neural networks (NN) are used as non-linear forecasting models. The level of subjectivity in the NN building process is kept to a minimum in an attempt to exploit the full potential of the NN. It is also investigated whether the historically poor performance of the theoretically superior measure of the monetary services flow, Divisia, relative to the traditional Simple Sum measure could be attributed to a certain extent to the evaluation of these indices within a linear framework. The results obtained suggest that non-linear models provide better within-sample and out-of-sample forecasts, and that linear models are simply a subset of them. The Divisia index also outperforms the Simple Sum index when evaluated in a non-linear framework. © 2005 Taylor & Francis Group Ltd.
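A toy version of the linear-versus-non-linear comparison is sketched below, using synthetic data and a generic scikit-learn network rather than the paper's Euro inflation series or NN design.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)

    # Synthetic series with a built-in non-linearity, forecast from 4 lags.
    t = np.arange(400)
    y = np.sin(0.1 * t) ** 2 + 0.1 * rng.normal(size=t.size)
    X = np.column_stack([y[i:i - 4] for i in range(4)])
    target, split = y[4:], 300

    for name, model in [("linear AR", LinearRegression()),
                        ("neural net", MLPRegressor((16,), max_iter=5000,
                                                    random_state=0))]:
        model.fit(X[:split], target[:split])
        err = np.mean((model.predict(X[split:]) - target[split:]) ** 2)
        print(f"{name}: out-of-sample MSE = {err:.4f}")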

Relevance: 90.00%

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We (Vision Research 38 (1998) 2108) have previously shown how such distortions in perceived edge blur may be accounted for by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. If the form of the transducer is fixed (independent of contrast) for a given speed, then a strong prediction of the model is that motion sharpening should increase with increasing contrast. We measured the sharpening of periodic patterns over a large range of contrasts, blur widths and speeds. The results indicate that whilst sharpening increases with speed it is practically invariant with contrast. The contrast invariance of motion sharpening is not explained by an early, static compressive non-linearity alone. However, several alternative explanations are also inconsistent with these results. We show that if a dynamic contrast gain control precedes the static non-linear transducer then motion sharpening, its speed dependence, and its invariance with contrast, can be predicted with reasonable accuracy. © 2003 Elsevier Science Ltd. All rights reserved.
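The two model stages under discussion can be written down compactly, as in the sketch below. The exponent and semi-saturation constant are assumptions, and only the paper's full dynamic treatment of the gain control reproduces contrast invariance; the static cascade shown here merely fixes the notation.

    import numpy as np

    def transducer(c, p=0.5):
        # Static compressive non-linearity; in the model, p falls
        # (compression deepens) as speed increases.
        return np.sign(c) * np.abs(c) ** p

    def gain_control(c, c50=0.2):
        # Naka-Rushton-style contrast gain control before the transducer.
        return c / (c + c50)

    contrast = np.array([0.1, 0.2, 0.4, 0.8])
    # Output/input ratio for the static transducer alone: compression
    # varies strongly with contrast, which by itself would predict
    # contrast-dependent sharpening.
    print(np.round(transducer(contrast) / contrast, 2))
    # Response of the two-stage cascade; in the paper it is the dynamic
    # behaviour of the gain control that restores contrast invariance.
    print(np.round(transducer(gain_control(contrast)), 3))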

Relevance: 90.00%

Abstract:

This thesis experimentally examines the use of different techniques for optical fibre transmission over ultra-long-haul distances. It first examines the use of dispersion management as a means of achieving long-haul communications. Secondly, it examines the use of concatenated NOLMs for DM autosoliton ultra-long-haul propagation, comparing their performance with that of a generic system without NOLMs. Thirdly, timing jitter in the concatenated NOLM system is examined and compared to the generic system, and lastly issues of OTDM amplitude non-uniformity from channel to channel in a saturable absorber, specifically a NOLM, are raised. Transmission at a rate of 40Gbit/s is studied in an all-Raman amplified standard fibre link with amplifier spacing of the order of 80km. We demonstrate in this thesis that the detrimental effects associated with high power Raman amplification can be minimized by dispersion map optimization. As a result, a transmission distance of 1600km (2000km including dispersion compensating fibre) has been achieved in standard single mode fibre. The use of concatenated NOLMs to provide a stable propagation regime has been proposed theoretically. In this thesis, autosoliton propagation is observed experimentally for the first time in a dispersion managed optical transmission system. The system is based on a strong dispersion map with large amplifier spacing. Operation at transmission rates of 10, 40 and 80Gbit/s is demonstrated. With the insertion of a stabilizing element into the NOLM, the transmission of 10 and 20Gbit/s data streams was extended and demonstrated experimentally: error-free propagation over 100,000 and 20,000 kilometres has been achieved at 10 and 20Gbit/s respectively, with terrestrial amplifier spacing. The monitoring of timing jitter is of importance to all optical systems. The evolution of timing jitter in a DM autosoliton system has been studied in this thesis and analyzed at bit rates from 10Gbit/s to 80Gbit/s. Non-linear guiding by in-line regenerators considerably changes the dynamics of jitter accumulation. As transmission systems require higher data rates, the use of OTDM will become more prolific. The dynamics of switching and transmission of an optical signal comprising individual OTDM channels of unequal amplitudes in a dispersion-managed link with in-line non-linear fibre loop mirrors are investigated.
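The saturable-absorber action of a NOLM mentioned in the abstract follows from its standard power transfer function (after Doran and Wood). The parameter values in this sketch are illustrative, not those of the experiments.

    import numpy as np

    # Standard NOLM power transfer function:
    # T(P) = 1 - 2a(1-a) * [1 + cos((1-2a) * gamma * P * L)]
    def nolm_transmission(power_w, alpha=0.4, gamma=2e-3, length_m=1e3):
        phase = (1 - 2 * alpha) * gamma * power_w * length_m
        return 1 - 2 * alpha * (1 - alpha) * (1 + np.cos(phase))

    power = np.linspace(0, 20, 5)    # peak power, W (illustrative)
    print(np.round(nolm_transmission(power), 3))
    # Low-power light (pedestals, noise) sees low transmission while
    # pulses near the switching power pass: the saturable-absorber
    # behaviour exploited for autosoliton guiding.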

Relevance: 90.00%

Abstract:

This thesis describes an experimental and analytic study of the effects of magnetic non-linearity and finite length on the loss and field distribution in solid iron due to a travelling mmf wave. In the first half of the thesis, a two-dimensional solution is developed which accounts for the effects of both magnetic non-linearity and eddy-current reaction; this solution is extended, in the second half, to a three-dimensional model. In the two-dimensional solution, new equations for loss and flux/pole are given; these equations contain the primary excitation, the machine parameters and factors describing the shape of the normal B-H curve. The solution applies to machines of any air-gap length. The conditions for maximum loss are defined, and generalised torque/frequency curves are obtained. A relationship between the peripheral component of magnetic field on the surface of the iron and the primary excitation is given. The effects of magnetic non-linearity and finite length are combined analytically by introducing an equivalent constant permeability into a linear three-dimensional analysis. The equivalent constant permeability is defined from the non-linear solution for the two-dimensional magnetic field at the axial centre of the machine, to avoid iterative solutions. In the linear three-dimensional analysis, the primary excitation in the passive end-regions of the machine is set equal to zero and the secondary end faces are developed onto the air-gap surface. The analyses, and the assumptions on which they are based, were verified on an experimental machine which consists of a three-phase rotor and alternative solid iron stators, one with copper end rings and one without; the main dimensions of the two stators are identical. Measurements of torque, flux/pole, surface current density and radial power flow were obtained for both stators over a range of frequencies and excitations. Comparison of the measurements on the two stators enabled the individual effects of finite length and saturation to be identified, and the definition of the constant equivalent permeability to be verified. The penetration of the peripheral flux into the stator with copper end rings was measured and compared with theoretical penetration curves. Agreement between measured and theoretical results was generally good.
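The role of an equivalent constant permeability can be illustrated with the classical skin-depth formula it feeds into. The material constants below are typical textbook values for solid iron, not the thesis data.

    import numpy as np

    # Electromagnetic penetration (skin) depth:
    # delta = sqrt(2 / (omega * mu * sigma)), evaluated with an assumed
    # constant "equivalent" relative permeability.
    mu0 = 4e-7 * np.pi
    sigma = 1e7                      # S/m, solid iron (approximate)
    mu_r = 200                       # assumed equivalent relative permeability

    for f in (5, 50, 500):           # frequency, Hz (illustrative sweep)
        omega = 2 * np.pi * f
        delta = np.sqrt(2 / (omega * mu_r * mu0 * sigma))
        print(f"f = {f:3d} Hz -> skin depth = {delta * 1e3:.2f} mm")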

Relevance: 90.00%

Abstract:

The spatial patterns of diffuse, primitive, classic and compact beta-amyloid (Abeta) deposits were studied in the medial temporal lobe in 14 elderly, non-demented patients (ND) and in nine patients with Alzheimer’s disease (AD). In both patient groups, Abeta deposits were clustered and in a number of tissues, a regular periodicity of Abeta deposit clusters was observed parallel to the tissue boundary. The primitive deposit clusters were significantly larger in the AD cases but there were no differences in the sizes of the diffuse and classic deposit clusters between patient groups. In AD, the relationship between Abeta deposit cluster size and density in the tissue was non-linear. This suggested that cluster size increased with increasing Abeta deposit density in some tissues while in others, Abeta deposit density was high but contained within smaller clusters. It was concluded that the formation of large clusters of primitive deposits could be a factor in the development of AD.