43 results for Sign Data LMS algorithm.
in Aston University Research Archive
Abstract:
We propose an artificial neural network (ANN) equalizer for transmission performance enhancement of coherent optical OFDM (C-OOFDM) signals. The ANN equalizer is more effective at combating both chromatic dispersion (CD) and single-mode fibre (SMF)-induced non-linearities than the least mean square (LMS) algorithm. The equalizer offers a 1.5 dB improvement in optical signal-to-noise ratio (OSNR) over the LMS algorithm for 40 Gbit/s C-OOFDM signals when considering only CD. It is also shown that the ANN equalizer can double the transmission distance, up to 320 km of SMF, compared with LMS, providing a nonlinearity tolerance improvement of ∼0.7 dB OSNR.
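For reference, the LMS baseline mentioned above can be sketched as a standard complex-valued adaptive filter. The Python/NumPy fragment below is a generic illustration only (the tap count, step size `mu` and pilot-driven error are assumptions, not the authors' implementation); a final comment notes how the sign-data variant from the search query would differ.

```python
import numpy as np

def lms_equalizer(rx, desired, num_taps=1, mu=0.01):
    """Generic complex LMS sketch: adapt taps w so that w^H x tracks the desired symbols."""
    w = np.zeros(num_taps, dtype=complex)
    buf = np.zeros(num_taps, dtype=complex)
    out = np.zeros(len(rx), dtype=complex)
    for n in range(len(rx)):
        buf = np.roll(buf, 1)
        buf[0] = rx[n]
        y = np.vdot(w, buf)            # filter output, w^H x
        e = desired[n] - y             # error against known pilot / decision
        w += mu * buf * np.conj(e)     # LMS weight update
        # a sign-data LMS variant would use np.sign(buf.real) + 1j*np.sign(buf.imag)
        # in place of buf in the update above
        out[n] = y
    return out, w
```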
Abstract:
Switched mode power supplies (SMPSs) are essential components in many applications, and electromagnetic interference is an important consideration in SMPS design. Spread-spectrum-based PWM strategies have been used in SMPS designs to reduce switching harmonics. This paper proposes a novel method to integrate a communication function into a spread-spectrum-based PWM strategy without extra hardware cost. Direct sequence spread spectrum (DSSS) and phase shift keying (PSK) data modulation are applied to the PWM of the SMPS, so that switching harmonics are reduced and the input and output power-line voltage ripples carry data. A data demodulation algorithm has been developed for receivers, and the code division multiple access (CDMA) concept is employed as the communication method for a system with multiple SMPSs. The proposed method has been implemented in both buck and boost converters, and the experimental results validate the proposed DSSS-based PWM strategy for both harmonic reduction and communication.
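To illustrate the DSSS/PSK idea in isolation (the integration into the PWM carrier and the ripple-based reception are not reproduced here), a minimal spreading/despreading sketch in Python/NumPy; the 31-chip PN code and the noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dsss_spread(bits, pn_code):
    """Illustrative DSSS spreading: map bits 0/1 to -1/+1 and multiply by the PN chips."""
    symbols = 2 * np.asarray(bits) - 1
    return np.concatenate([b * pn_code for b in symbols])

def dsss_despread(chips, pn_code):
    """Recover bits by correlating each chip frame against the PN code (matched filter)."""
    frames = np.reshape(chips, (-1, len(pn_code)))
    corr = frames @ pn_code
    return (corr > 0).astype(int)

pn = 2 * rng.integers(0, 2, 31) - 1                    # hypothetical 31-chip PN code
data = rng.integers(0, 2, 8)                           # example data bits
noisy = dsss_spread(data, pn) + 0.5 * rng.standard_normal(8 * len(pn))  # toy "ripple"
assert np.array_equal(dsss_despread(noisy, pn), data)
```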
Abstract:
We propose a novel template matching approach for the discrimination of handwritten and machine-printed text. We first pre-process the scanned document images by performing denoising, circle/line exclusion and word-block-level segmentation. We then align and match characters in a flexibly sized gallery with the segmented regions, using parallelised normalised cross-correlation. Experimental results on the Pattern Recognition & Image Analysis Research Lab-Natural History Museum (PRImA-NHM) dataset show that the algorithm is remarkably robust in classifying cluttered, occluded and noisy samples, as well as those with a significant amount of missing data. The algorithm, which achieves an 84.0% classification rate with a false positive rate of 0.16 on the dataset, requires no training samples and compares favourably with training-based approaches evaluated on the same benchmark.
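The core similarity measure named above, normalised cross-correlation, can be sketched as follows. This shows only the zero-mean NCC kernel on equal-sized arrays, not the paper's parallelised, flexibly sized gallery matching; the function names are hypothetical.

```python
import numpy as np

def normalised_cross_correlation(patch, template):
    """Zero-mean normalised cross-correlation between two equal-sized greyscale arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_gallery_score(region, gallery):
    """Score a segmented region against a (hypothetical) gallery of character templates."""
    return max(normalised_cross_correlation(region, g) for g in gallery)
```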
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS) in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which terminates the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solutions increase, is also presented. The MFS-based iterative algorithms with relaxation are tested on Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
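The Tikhonov/GCV machinery referred to above can be illustrated on a generic linear system. The sketch below assumes a dense matrix A assembled elsewhere (e.g. from fundamental-solution coefficients) and simply selects the regularization parameter by minimising the GCV function over a candidate grid; it is not the paper's MFS implementation.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Tikhonov-regularised least squares: minimise ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def gcv_score(A, b, lam):
    """Generalised cross-validation function G(lam) for a candidate parameter."""
    m, n = A.shape
    residual = A @ tikhonov_solve(A, b, lam) - b
    # trace of the influence matrix A (A^T A + lam I)^{-1} A^T
    trace = np.trace(A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T))
    return (residual @ residual) / (m - trace) ** 2

def choose_lambda(A, b, candidates):
    """Pick the regularization parameter minimising the GCV score (illustrative grid search)."""
    return min(candidates, key=lambda lam: gcv_score(A, b, lam))
```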
Abstract:
Conventional DEA models assume deterministic, precise and non-negative input and output observations. However, real applications may be characterized by observations that are given in the form of intervals and include negative numbers. For instance, the consumption of electricity in decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending on, e.g., external temperature and real-time outtake. Complementing earlier work that addressed the two problems, interval data and negative data, separately, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the intervals may have upper or lower bounds with different signs. The proposed method determines upper and lower bounds for the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes: strictly efficient, weakly efficient and inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches.
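One common way to obtain such upper and lower efficiency bounds from interval data is to solve a standard DEA model at favourable and unfavourable interval endpoints. The sketch below uses an input-oriented CCR envelopment LP with SciPy and assumes non-negative data, so it does not reproduce the paper's more general treatment of bounds with different signs.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(x, y, j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form, non-negative data).
    x: (m_inputs, n_dmus), y: (s_outputs, n_dmus). LP variables: [theta, lambda_1..n]."""
    m, n = x.shape
    s = y.shape[0]
    c = np.zeros(1 + n); c[0] = 1.0                      # minimise theta
    A_ub = np.zeros((m + s, 1 + n)); b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -x[:, j0]; A_ub[:m, 1:] = x            # sum lambda*x <= theta*x0
    A_ub[m:, 1:] = -y; b_ub[m:] = -y[:, j0]              # sum lambda*y >= y0
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun                                       # optimal theta

def interval_efficiency_bounds(x_lo, x_hi, y_lo, y_hi, j0):
    """Pessimistic/optimistic efficiency of DMU j0 under interval data (one common scheme)."""
    # optimistic: most favourable data for j0, least favourable for the others
    x_opt, y_opt = x_hi.copy(), y_lo.copy()
    x_opt[:, j0], y_opt[:, j0] = x_lo[:, j0], y_hi[:, j0]
    # pessimistic: least favourable data for j0, most favourable for the others
    x_pes, y_pes = x_lo.copy(), y_hi.copy()
    x_pes[:, j0], y_pes[:, j0] = x_hi[:, j0], y_lo[:, j0]
    return ccr_efficiency(x_pes, y_pes, j0), ccr_efficiency(x_opt, y_opt, j0)
```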
Abstract:
This paper reviews some basic issues and methods involved in using neural networks to respond in a desired fashion to a temporally-varying environment. Some popular network models and training methods are introduced. A speech recognition example is then used to illustrate the central difficulty of temporal data processing: learning to notice and remember relevant contextual information. Feedforward network methods are applicable to cases where this problem is not severe. The application of these methods is explained, and applications are discussed in the areas of pure mathematics, chemical and physical systems, and economic systems. A more powerful but less practical algorithm for temporal problems, the moving targets algorithm, is sketched and discussed. For completeness, a few remarks are made on reinforcement learning.
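The feedforward route to temporal processing mentioned above usually amounts to presenting the network with a sliding window of recent inputs (a tapped delay line). A minimal sketch of constructing such windowed training pairs follows; the window length and names are illustrative only.

```python
import numpy as np

def tapped_delay_windows(signal, targets, window=5):
    """Turn a 1-D time series into (window, target) pairs for a feedforward net.
    The example at time t sees signal[t-window+1 .. t] and predicts targets[t]."""
    X, y = [], []
    for t in range(window - 1, len(signal)):
        X.append(signal[t - window + 1 : t + 1])
        y.append(targets[t])
    return np.array(X), np.array(y)
```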
Abstract:
The 'moving targets' algorithm for training recurrent networks is reviewed and applied to a task which demonstrates the ability of this algorithm to use distant contextual information. Some practical difficulties are discussed, especially with regard to the minimization process. Results on the performance and computational requirements of several different second-order minimization algorithms are presented for moving targets problems.
Abstract:
Visualization has proven to be a powerful and widely-applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines and to data in 36 dimensions derived from satellite images.
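As a rough, non-probabilistic analogue of the hierarchy described above (the paper itself uses a mixture of latent variable models fitted by EM with soft responsibilities), one can picture a top-level linear projection followed by separate local projections of each cluster; the cluster labels here are assumed given.

```python
import numpy as np

def pca_project(X, dim=2):
    """Project centred data onto its top principal components (illustrative only)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:dim].T

def hierarchical_views(X, labels, dim=2):
    """Top-level view of all data, plus one local view per cluster of points."""
    top = pca_project(X, dim)
    local = {k: pca_project(X[labels == k], dim) for k in np.unique(labels)}
    return top, local
```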
Abstract:
The Generative Topographic Mapping (GTM) algorithm of Bishop et al. (1997) has been introduced as a principled alternative to the Self-Organizing Map (SOM). As well as avoiding a number of deficiencies in the SOM, the GTM algorithm has the key property that the smoothness properties of the model are decoupled from the reference vectors, and are described by a continuous mapping from a lower-dimensional latent space into the data space. Magnification factors, which are approximated by the difference between code-book vectors in SOMs, can therefore be evaluated for the GTM model as continuous functions of the latent variables using the techniques of differential geometry. They play an important role in data visualization by highlighting the boundaries between data clusters, and are illustrated here for both a toy data set and a problem involving the identification of crab species from morphological data.
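The magnification factor of a smooth mapping from latent space to data space can be computed from the Jacobian of that mapping. The sketch below assumes a plain Gaussian RBF mapping y(u) = W phi(u) (GTM implementations typically also include bias and linear basis terms, omitted here) and evaluates sqrt(det(J^T J)) at a latent point.

```python
import numpy as np

def rbf_basis_and_grad(u, centres, sigma):
    """Gaussian RBF activations phi(u) and their gradients d phi / d u."""
    diff = u - centres                                  # (M, L)
    phi = np.exp(-np.sum(diff ** 2, axis=1) / (2 * sigma ** 2))   # (M,)
    dphi = -(phi[:, None] / sigma ** 2) * diff          # (M, L)
    return phi, dphi

def magnification_factor(u, W, centres, sigma):
    """Area element sqrt(det(J^T J)) of the manifold y(u) = W phi(u) at latent point u."""
    _, dphi = rbf_basis_and_grad(np.asarray(u, dtype=float), centres, sigma)
    J = W @ dphi                                        # (D, L) Jacobian of the mapping
    return np.sqrt(np.linalg.det(J.T @ J))
```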
Abstract:
We analyse how the Generative Topographic Mapping (GTM) can be modified to cope with missing values in the training data. Our approach is based on an Expectation-Maximisation (EM) method which estimates the parameters of the mixture components and at the same time deals with the missing values. We incorporate this algorithm into a hierarchical GTM. We verify the method on a toy data set (using a single GTM) and a realistic data set (using a hierarchical GTM). The results show that our algorithm can help to construct informative visualisation plots, even when some of the training points are corrupted with missing values.
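One standard E-step modification for missing values is to evaluate each data point against the mixture components over its observed dimensions only. A minimal sketch of such a responsibility computation follows (the accompanying M-step changes are not shown, `beta` denotes the usual GTM inverse noise variance, and this is not necessarily the authors' exact formulation).

```python
import numpy as np

def responsibilities_with_missing(X, Y, beta):
    """Posterior responsibilities of GTM grid centres Y (K, D) for data X (N, D)
    containing NaNs; each point is evaluated only over its observed dimensions."""
    R = np.zeros((X.shape[0], Y.shape[0]))
    for n, x in enumerate(X):
        obs = ~np.isnan(x)                               # observed dimensions of this point
        d2 = np.sum((Y[:, obs] - x[obs]) ** 2, axis=1)   # squared distances over observed dims
        log_p = -0.5 * beta * d2
        log_p -= log_p.max()                             # subtract max for numerical stability
        p = np.exp(log_p)
        R[n] = p / p.sum()
    return R
```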
Abstract:
Hierarchical visualization systems are desirable because a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex high-dimensional data sets. We extend an existing locally linear hierarchical visualization system, PhiVis [1], in several directions: (1) we allow for non-linear projection manifolds (the basic building block is the Generative Topographic Mapping, GTM), (2) we introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree, (3) we describe folding patterns of the low-dimensional projection manifold in the high-dimensional data space by computing and visualizing the manifold's local directional curvatures. Quantities such as magnification factors [3] and directional curvatures are helpful for understanding the layout of the non-linear projection manifold in the data space and for further refinement of the hierarchical visualization plot. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. We demonstrate the principle of the approach on a complex 12-dimensional data set and mention possible applications in the pharmaceutical industry.
Abstract:
We propose a generative topographic mapping (GTM) based data visualization with simultaneous feature selection (GTM-FS) approach which not only provides a better visualization by modeling irrelevant features ("noise") using a separate shared distribution, but also gives a saliency value for each feature, helping the user to assess its significance. This technical report presents a variant of the Expectation-Maximization (EM) algorithm for GTM-FS.
Abstract:
We report the formation and structural properties of co-crystals containing gemfibrozil and hydroxy derivatives of t-butylamine, H2NC(CH3)3-n(CH2OH)n with n = 0, 1, 2 and 3. In each case, a 1:1 co-crystal is formed, with transfer of a proton from the carboxylic acid group of gemfibrozil to the amino group of the t-butylamine derivative. All of the co-crystal materials prepared are polycrystalline powders and do not contain single crystals of suitable size and/or quality for single-crystal X-ray diffraction studies. Structure determination of these materials has therefore been carried out directly from powder X-ray diffraction data, using the direct-space Genetic Algorithm technique for structure solution followed by Rietveld refinement. The structural chemistry of this series of co-crystal materials reveals well-defined structural trends within the first three members of the family (n = 0, 1, 2), but significantly contrasting structural properties for the member with n = 3.
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised, a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problem of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with minimum variance. The user must provide estimates of the variance of the theoretical parameter values and of the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent; this will generally be the case during the first iteration but not subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters, and a method related to a weighted equation error algorithm is used to update the parameters. After each iteration the weighting changes, so that on convergence the output error is minimised. The suggested methods are extensively tested using simulated data, and an H-frame is then used to demonstrate the algorithms on a physical structure.
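The first (minimum-variance) updating step described above has the general form of a covariance-weighted gain applied to the modal residual. The following generic sketch is not the thesis's exact formulation; it makes the assumption, noted above to hold only at the first iteration, that the parameter estimates and the measured data are uncorrelated.

```python
import numpy as np

def minimum_variance_update(theta, V_theta, S, residual, V_meas):
    """One generic minimum-variance update step (illustrative sketch).
    theta: current model parameters; V_theta: their covariance;
    S: sensitivity of predicted modal quantities to theta;
    residual: measured minus predicted modal data; V_meas: measurement covariance."""
    gain = V_theta @ S.T @ np.linalg.inv(S @ V_theta @ S.T + V_meas)
    theta_new = theta + gain @ residual          # corrected parameter estimates
    V_theta_new = V_theta - gain @ S @ V_theta   # reduced parameter covariance
    return theta_new, V_theta_new
```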