50 results for least mean-square methods

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

We present a comparative analysis of three carrier-phase extraction approaches in a coherent transmission system affected by equalization-enhanced phase noise: a one-tap normalized least mean square (NLMS) method, a block-average method, and a Viterbi-Viterbi method. © OSA 2012.
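As a rough illustration of the first approach, the sketch below implements a decision-directed one-tap normalized LMS phase tracker in Python. It is a generic textbook scheme under assumed settings (step size, decision rule, alphabet), not code from the paper.

```python
import numpy as np

def nlms_one_tap(rx, constellation, mu=0.1, eps=1e-12):
    """One-tap normalized LMS carrier-phase tracker (illustrative sketch).

    rx: received complex symbols; constellation: complex symbol alphabet.
    Returns the phase-corrected symbol sequence.
    """
    w = 1.0 + 0j                        # single complex tap
    out = np.empty_like(rx)
    for n, r in enumerate(rx):
        y = w * r                                                # apply tap
        d = constellation[np.argmin(np.abs(constellation - y))]  # decision
        e = d - y                                                # error
        w += mu * e * np.conj(r) / (np.abs(r) ** 2 + eps)        # NLMS update
        out[n] = y
    return out

# Example alphabet (QPSK): np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
```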

Relevance:

100.00%

Publisher:

Abstract:

A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
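As a concrete instance of the "simplest approximation" mentioned above, the sketch below runs a damped naive mean-field fixed-point iteration for a pairwise binary (Ising/Boltzmann) model, solving m_i = tanh(h_i + Σ_j J_ij m_j). This is a generic textbook scheme, not code from the book; J is assumed symmetric with zero diagonal.

```python
import numpy as np

def naive_mean_field(J, h, iters=200, damping=0.5):
    """Damped fixed-point iteration for the naive mean-field magnetizations
    of p(s) ∝ exp(s·J·s/2 + h·s) with s_i ∈ {-1, +1}."""
    m = np.zeros_like(h)
    for _ in range(iters):
        m_new = np.tanh(h + J @ m)               # mean-field update
        m = damping * m + (1 - damping) * m_new  # damping aids convergence
    return m
```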

Relevance:

100.00%

Publisher:

Abstract:

We discuss the application of TAP mean field methods, known from the statistical mechanics of disordered systems, to Bayesian classification with Gaussian processes. In contrast to previous applications, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.
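For orientation, the TAP correction to naive mean field is easiest to state for a pairwise binary model, where the Onsager reaction term is subtracted from the effective field; the Gaussian process classifier treated in the paper requires more machinery than this generic sketch shows.

```python
import numpy as np

def tap_mean_field(J, h, iters=200, damping=0.5):
    """TAP equations for a pairwise binary model (illustrative):
    m_i = tanh(h_i + Σ_j J_ij m_j - m_i Σ_j J_ij² (1 - m_j²))."""
    m = np.zeros_like(h)
    for _ in range(iters):
        reaction = m * ((J ** 2) @ (1 - m ** 2))  # Onsager reaction term
        m_new = np.tanh(h + J @ m - reaction)
        m = damping * m + (1 - damping) * m_new
    return m
```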

Relevance:

100.00%

Publisher:

Abstract:

Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services, by prior arrangement.

Relevance:

100.00%

Publisher:

Abstract:

We propose an artificial neural network (ANN) equalizer to enhance the transmission performance of coherent optical OFDM (CO-OFDM) signals. The ANN equalizer is more effective than the least mean square (LMS) algorithm at combating both chromatic dispersion (CD) and single-mode fibre (SMF)-induced nonlinearities. When only CD is considered, the equalizer offers a 1.5 dB improvement in optical signal-to-noise ratio (OSNR) over the LMS algorithm for 40 Gbit/s CO-OFDM signals. It is also revealed that the ANN can double the transmission distance to 320 km of SMF compared with LMS, providing a nonlinearity tolerance improvement of ∼0.7 dB OSNR.
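The LMS baseline referred to above is, in essence, an adaptive FIR filter trained on known symbols. The sketch below is a generic complex-valued LMS equalizer; the tap count, step size and framing are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def lms_equalizer(rx, training, n_taps=11, mu=1e-3):
    """Complex LMS FIR equalizer (training-directed sketch).

    rx: received samples; training: known symbols aligned with the output.
    Returns the equalized symbols and the converged taps.
    """
    w = np.zeros(n_taps, dtype=complex)
    out = np.empty(len(rx) - n_taps + 1, dtype=complex)
    for n in range(len(out)):
        x = rx[n:n + n_taps][::-1]   # regressor, most recent sample first
        y = np.conj(w) @ x           # equalizer output y = w^H x
        e = training[n] - y          # error against known symbol
        w += mu * x * np.conj(e)     # LMS weight update
        out[n] = y
    return out, w
```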

Relevance:

100.00%

Publisher:

Abstract:

We propose a Wiener-Hammerstein (W-H) channel estimation algorithm for Long-Term Evolution (LTE) systems. The LTE standard provides known data in the form of pilot symbols and exploits them through coherent detection to improve system performance. These pilots are placed in a hybrid pattern covering both the time and frequency domains. Our aim is to adapt the W-H equalizer (W-H/E) to the LTE standard to compensate for both the linear and nonlinear effects induced by power amplifiers and multipath channels. We evaluate the performance of the W-H/E for a downlink LTE system in terms of BLER, EVM, and throughput versus SNR, and then compare the results with a traditional least mean square (LMS) equalizer. The W-H/E is shown to significantly reduce both linear and nonlinear distortions compared to LMS and to improve LTE downlink system performance.
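For reference, a Wiener-Hammerstein channel is a cascade of a linear filter, a static nonlinearity and a second linear filter. The forward-model sketch below uses placeholder coefficients; the paper's actual contribution, estimating these blocks from LTE pilot symbols, is not reproduced here.

```python
import numpy as np

def wiener_hammerstein(x, h_in, poly, h_out):
    """Forward model of a W-H channel: linear -> memoryless nonlinear -> linear.

    poly holds odd-order coefficients [c1, c3, ...] of a PA-like baseband
    nonlinearity u = Σ_k c_k · v·|v|^(2k). All coefficients are placeholders.
    """
    v = np.convolve(x, h_in, mode="full")        # input linear block
    u = sum(c * v * np.abs(v) ** (2 * k) for k, c in enumerate(poly))
    return np.convolve(u, h_out, mode="full")    # output linear block
```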

Relevance:

100.00%

Publisher:

Abstract:

Data fluctuation across multiple measurements in laser-induced breakdown spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a robust least squares support vector machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or standardizing the spectra over a number of laser shots; the proposed method focuses instead on enhancing the robustness of the quantitative regression model itself. The RLS-SVM regression model originates from the weighted least squares support vector machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model, while information from outliers is down-weighted or removed. Copper concentration analysis experiments were carried out on 16 certified standard brass samples. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieves better prediction accuracy and better modeling robustness than quantitative analysis methods based on partial least squares (PLS) regression, the standard support vector machine (SVM) and the WLS-SVM. The improved weighting function was also shown to offer better overall model robustness and convergence speed than the four known weighting functions.
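The paper's improved segmented weighting function is not reproduced in the abstract. As a stand-in, the sketch below applies a classical Hampel-style three-part weight to standardized residuals, which illustrates the same principle: residuals within the normal spread keep full weight, while outliers are down-weighted or removed.

```python
import numpy as np

def segmented_weights(residuals, a=2.0, b=3.0, c=8.0):
    """Hampel-style segmented weights (a generic stand-in, not the paper's
    function). Returns weights in [0, 1] per residual."""
    med = np.median(residuals)
    scale = 1.4826 * np.median(np.abs(residuals - med))  # robust scale (MAD)
    r = np.abs(residuals - med) / max(scale, 1e-12)
    w = np.ones_like(r)
    mid = (r > a) & (r <= b)
    tail = (r > b) & (r <= c)
    w[mid] = a / r[mid]                                  # down-weight
    w[tail] = a * (c - r[tail]) / ((c - b) * r[tail])    # taper to zero
    w[r > c] = 0.0                                       # reject outliers
    return w
```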

Relevance:

100.00%

Publisher:

Abstract:

Fluoroscopic images exhibit severe signal-dependent quantum noise, generally modelled as Poisson-distributed, owing to the reduced X-ray dose involved in image formation. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential for improving the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. The performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restoration than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective on both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
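One standard route for letting AWGN-oriented denoisers such as BM3D handle signal-dependent Poisson noise is variance stabilization before filtering. The Anscombe transform pair below is a generic sketch of that idea, not the processing pipeline used in the study.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to data with approximately
    unit variance, so an AWGN denoiser can be applied afterwards."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse; at low counts an unbiased exact inverse
    is preferable."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0
```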

Relevance:

100.00%

Publisher:

Abstract:

Receptor activity modifying proteins (RAMPs) are a family of single-pass transmembrane proteins that dimerize with G-protein-coupled receptors. They may alter the ligand recognition properties of the receptors (particularly for the calcitonin receptor-like receptor, CLR). Very little structural information is available about RAMPs. Here, an ab initio model has been generated for the extracellular domain of RAMP1. The disulfide bond arrangement (Cys27-Cys82, Cys40-Cys72, and Cys57-Cys104) was determined by site-directed mutagenesis. The secondary structure (α-helices from residues 29-51, 60-80, and 87-100) was established from a consensus of predictive routines. Using these constraints, an assemblage of 25,000 structures was constructed and these were ranked using an all-atom statistical potential. The best 1000 conformations were energy minimized. The lowest scoring model was refined by molecular dynamics simulation. To validate our strategy, the same methods were applied to three proteins of known structure: PDB:1HP8, PDB:1V54 chain H (residues 21-85), and PDB:1T0P. When compared to the crystal structures, the models had root mean-square deviations of 3.8 Å, 4.1 Å, and 4.0 Å, respectively. The model of RAMP1 suggested that Phe93, Tyr100, and Phe101 form a binding interface for CLR, whereas Trp74 and Phe92 may interact with ligands that bind to the CLR/RAMP1 heterodimer. © 2006 by the Biophysical Society.
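The validation metric quoted above, root mean-square deviation against the crystal structure, is conventionally computed after optimal superposition of the two coordinate sets. A generic Kabsch-algorithm sketch (atom-for-atom correspondence between the sets is assumed):

```python
import numpy as np

def rmsd_kabsch(P, Q):
    """RMSD between two (N, 3) coordinate arrays after optimal rigid-body
    superposition (Kabsch algorithm)."""
    P = P - P.mean(axis=0)             # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)  # covariance SVD gives the rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflection
    return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))
```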

Relevance:

100.00%

Publisher:

Abstract:

Neural networks can be regarded as statistical models and can be analysed in a Bayesian framework. Generalisation is measured by the performance on independent test data drawn from the same distribution as the training data. Such performance can be quantified by the posterior average of the information divergence between the true and the model distributions. Averaging over the Bayesian posterior guarantees internal coherence; using information divergence guarantees invariance with respect to representation. The theory generalises the least mean squares theory for linear Gaussian models to general problems of statistical estimation. The main results are: (1) the ideal optimal estimate is always given by the average over the posterior; (2) the optimal estimate within a computational model is given by the projection of the ideal estimate onto the model. This incidentally shows that some currently popular methods of dealing with hyperpriors are in general unnecessary and misleading. The extension of information divergence to positive normalisable measures reveals a remarkable relation between the δ dual affine geometry of statistical manifolds and the geometry of the dual pair of Banach spaces L_δ and L_δ*. It therefore offers a conceptual simplification of information geometry. The general conclusion on the issue of evaluating neural network learning rules and other statistical inference methods is that such evaluations are only meaningful under three assumptions: the prior P(p), describing the environment of all the problems; the divergence D_δ, specifying the requirement of the task; and the model Q, specifying the available computing resources.
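In symbols, the quantity being optimized can be sketched as follows (notation assumed from the abstract's description, not taken from the paper):

```latex
% Generalisation error of an estimate q: the posterior average of an
% information divergence, \pi being the Bayesian posterior over true
% distributions p given the training data.
\langle D \rangle(q) = \int D(p \,\|\, q) \, \mathrm{d}\pi(p \mid \text{data}),
\qquad
\hat{q} = \operatorname*{arg\,min}_{q \in Q} \langle D \rangle(q).
% Result (1): with q unrestricted, \hat{q} is the posterior average of p.
% Result (2): within a computational model Q, \hat{q} is the projection
% of that ideal estimate onto Q.
```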

Relevance:

100.00%

Publisher:

Abstract:

Purpose: To evaluate the effects of instrument realignment and angular misalignment during the clinical determination of wavefront aberrations by simulation in model eyes. Setting: Aston Academy of Life Sciences, Aston University, Birmingham, United Kingdom. Methods: Six model eyes were examined with wavefront-aberration-supported cornea ablation (WASCA) (Carl Zeiss Meditec) in 4 sessions of 10 measurements each: sessions 1 and 2, consecutive repeated measures without realignment; session 3, realignment of the instrument between readings; session 4, measurements without realignment but with the model eye shifted 6 degrees angularly. Intersession repeatability and the effects of realignment and misalignment were obtained by comparing the measurements in the various sessions for coma, spherical aberration, and higher-order aberrations (HOAs). Results: The mean differences between the 2 sessions without realignment of the instrument were 0.020 ± 0.076 (SD) μm for Z3^-1 (P = .551), 0.009 ± 0.139 μm for Z3^1 (P = .877), 0.004 ± 0.037 μm for Z4^0 (P = .820), and 0.005 ± 0.01 μm for higher-order root mean square (HO RMS) (P = .301). Differences between the nonrealigned and realigned instrument were -0.017 ± 0.026 μm for Z3^-1 (P = .159), 0.009 ± 0.028 μm for Z3^1 (P = .475), 0.007 ± 0.014 μm for Z4^0 (P = .296), and 0.002 ± 0.007 μm for HO RMS (P = .529). Differences between the centered and misaligned instrument were -0.355 ± 0.149 μm for Z3^-1 (P = .002), 0.007 ± 0.034 μm for Z3^1 (P = .620), -0.005 ± 0.081 μm for Z4^0 (P = .885), and 0.012 ± 0.020 μm for HO RMS (P = .195). Realignment increased the standard deviation by a factor of 3 compared with the first session without realignment. Conclusions: Repeatability of the WASCA was excellent in all situations tested. Realignment substantially increased the variance of the measurements. Angular misalignment can result in significant errors, particularly in the determination of coma. These findings are important when assessing highly aberrated eyes during follow-up or before surgery. © 2007 ASCRS and ESCRS.
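For reference, the higher-order RMS reported above is, for an orthonormal (OSA/ANSI-normalized) Zernike expansion, the square root of the sum of squared coefficients of radial order three and above. A generic sketch, not the WASCA's internal computation:

```python
import numpy as np

def higher_order_rms(coeffs_by_order):
    """HO RMS (same units as the coefficients, e.g. µm) from a dict mapping
    radial order -> list of Zernike coefficients of that order."""
    return np.sqrt(sum(c ** 2
                       for order, cs in coeffs_by_order.items() if order >= 3
                       for c in cs))

# Example (hypothetical values, µm), orders 3 and 4 only:
# higher_order_rms({3: [0.02, -0.35, 0.01, 0.0], 4: [0.0, 0.0, -0.005, 0.0, 0.0]})
```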

Relevance:

100.00%

Publisher:

Abstract:

Purpose: A clinical evaluation of the Grand Seiko Auto Ref/Keratometer WAM-5500 (Japan) was performed to assess its validity and repeatability against non-cycloplegic subjective refraction and Javal–Schiotz keratometry. An investigation into the dynamic recording capabilities of the instrument was also conducted. Methods: Refractive error measurements were obtained from 150 eyes of 75 subjects (aged 25.12 ± 9.03 years), subjectively by a masked optometrist and objectively with the WAM-5500 at a second session. Keratometry measurements from the WAM-5500 were compared to Javal–Schiotz readings. Intratest variability was examined on all subjects, whilst intertest variability was assessed on a subgroup of 44 eyes 7–14 days after the initial objective measures. The accuracy of the dynamic recording mode of the instrument and its tolerance to longitudinal movement were evaluated using a model eye. An additional evaluation of the dynamic mode was performed using a human eye in relaxed and accommodated states. Results: Refractive error determined by the WAM-5500 was very similar (p = 0.77) to subjective refraction (difference, -0.01 ± 0.38 D). The instrument was accurate and reliable over a wide range of refractive errors (-6.38 to +4.88 D). WAM-5500 keratometry values were steeper by approximately 0.05 mm in both the vertical and horizontal meridians. High intertest repeatability was demonstrated for all parameters measured: for sphere, cylinder power and mean spherical equivalent (MSE), over 90% of retest values fell within ±0.50 D of initial testing. In dynamic (high-speed) mode, the root mean square of the fluctuations was 0.005 ± 0.0005 D, and a high level of recording accuracy was maintained when the measurement ring was significantly blurred by longitudinal movement of the instrument head. Conclusion: The WAM-5500 Auto Ref/Keratometer is a reliable and valid objective refraction tool for general optometric practice, with important additional features allowing pupil size determination and easy conversion into high-speed mode, increasing its usefulness post-surgically following accommodating intra-ocular lens implantation and as a research tool in the study of accommodation.

Relevance:

100.00%

Publisher:

Abstract:

Recently Homer and Percival have postulated that intermolecular van der Waals dispersion forces can be characterized by three mechanisms. The first arises via the mean square reaction field ⟨R1²⟩ due to the transient dipole of a particular solute molecule considered to be situated in a cavity surrounded by solvent molecules; this was characterized by an extended Onsager approach. The second stems from the extra-cavity mean square reaction field ⟨R2²⟩ of the near-neighbour solvent molecules. The third originates from the mean square electric fields ⟨E²B⟩ due to a newly characterized effect in which solute atoms are 'buffeted' by the peripheral atoms of adjacent solvent molecules. The present work concerns more detailed studies of buffeting screening, which is governed by a sterically controlled parameter of the form (2T' - T)², where T and T' are geometric structural parameters. The original approach is used to characterise the buffeting shifts induced by large solvent molecules and is found to be inadequate; consequently, improved methods of calculating T and T' are reported. Using the improved approach it is shown that buffeting depends on the nature of the solvent as well as on the nature of the solute molecule. Detailed investigation of the buffeting component of the van der Waals chemical shifts of selected solutes, in a range of solvents containing either H or Cl as peripheral atoms, has enabled the determination of a theoretically acceptable value of the classical screening coefficient B for protons. 1H and 13C resonance studies of tetraethylmethane, and 1H, 13C and 29Si resonance studies of TMS, have been used to support the original contention that three components (⟨R1²⟩, ⟨R2²⟩ and ⟨E²B⟩) of intermolecular van der Waals dispersion fields are required to characterise van der Waals chemical shifts.

Relevance:

100.00%

Publisher:

Abstract:

Nearest feature line-based subspace analysis is first proposed in this paper. Compared with conventional methods, the newly proposed one brings better generalization performance and supports incremental analysis. The projection point and the feature line distance are expressed as functions of a subspace, which is obtained by minimizing the mean square feature line distance. Moreover, by adopting a stochastic approximation rule to minimize the objective function in a gradient manner, the new method can be performed in an incremental mode, which makes it work well on future data. Experimental results on the FERET face database and the UCI satellite image database demonstrate its effectiveness.
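The geometric quantity at the heart of the method is the distance from a query point to the feature line through two prototype points. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def feature_line_distance(x, x1, x2, eps=1e-12):
    """Distance from query x to the feature line through prototypes x1, x2,
    plus the projection point itself. Minimizing the mean of these squared
    distances over a subspace is the objective described above."""
    d = x2 - x1
    t = np.dot(x - x1, d) / (np.dot(d, d) + eps)  # position along the line
    p = x1 + t * d                                # projection point
    return np.linalg.norm(x - p), p
```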

Relevance:

100.00%

Publisher:

Abstract:

Few-mode fiber transmission systems are typically impaired by mode-dependent loss (MDL). In an MDL-impaired link, maximum-likelihood (ML) detection yields a significant performance advantage over linear equalizers such as zero-forcing and minimum mean-square error equalizers. However, the computational effort of ML detection increases exponentially with the number of modes and the cardinality of the constellation. We present two methods that achieve near-ML performance without the enormous computational complexity of ML detection: improved reduced-search ML detection and sphere decoding. Both algorithms are tested with respect to performance and computational complexity in simulations of three and six spatial modes with QPSK and 16QAM constellations.
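To make the complexity point concrete, the brute-force ML detector below enumerates every candidate symbol vector, so its cost grows as |constellation|^modes; reduced-search ML detection and sphere decoding exist precisely to avoid this search. A generic sketch, not the paper's implementation:

```python
import itertools
import numpy as np

def ml_detect(y, H, constellation):
    """Exhaustive ML detection: return the symbol vector x minimizing
    ||y - Hx||² over all |constellation|^modes candidates."""
    best, best_metric = None, np.inf
    for cand in itertools.product(constellation, repeat=H.shape[1]):
        x = np.array(cand)
        metric = np.linalg.norm(y - H @ x) ** 2
        if metric < best_metric:
            best, best_metric = x, metric
    return best
```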