13 results for Generalized Gaussian-noise
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model, using an extended non-negative sparse coding (NNSC) algorithm that we previously proposed. This algorithm converges to feature basis vectors that exhibit locality and orientation in the spatial and frequency domains. Here, we demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, by exploiting an NIG-based maximum a posteriori (MAP) estimator for an image corrupted by additive Gaussian noise, the noise can be reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using values of the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. We also compare the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter. The simulation results show that our method outperforms the three denoising approaches mentioned above.
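As a rough sketch of the shrinkage pipeline described above (our illustration, not the authors' implementation), the snippet below projects noisy data onto a sparse basis, shrinks the coefficients, and projects back. The random basis W and the soft-threshold rule are placeholders for the learned NNSC basis and the NIG-based MAP shrinkage function derived in the paper.

    import numpy as np

    def shrinkage_denoise(patches, W, threshold):
        """Denoise by shrinking sparse-domain coefficients.

        patches   : (n_samples, n_dims) noisy data vectors
        W         : (n_dims, n_atoms) sparse-coding basis (placeholder here)
        threshold : shrinkage strength, standing in for the NIG MAP rule
        """
        s = patches @ W                                           # to sparse domain
        s = np.sign(s) * np.maximum(np.abs(s) - threshold, 0.0)   # soft shrinkage
        return s @ np.linalg.pinv(W)                              # back to data domain

    # toy usage: random "basis" and Gaussian-corrupted data
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64))
    clean = rng.standard_normal((10, 64))
    noisy = clean + 0.3 * rng.standard_normal((10, 64))
    print(shrinkage_denoise(noisy, W, threshold=0.3).shape)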
Abstract:
The least-mean-fourth (LMF) algorithm is known for its fast convergence and low steady-state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers, weighted by a fixed mixed-power parameter. Unfortunately, the algorithm's performance depends on the selection of this mixing parameter. In this work, a time-varying mixed-power parameter technique is introduced to overcome this dependency. The convergence, transient behaviour, and steady-state performance of the proposed algorithm are analysed and verified through simulations. An enhancement in performance is obtained through the use of this technique in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarity: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size.
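A minimal sketch of an XE-NLMF-style update in the spirit of the abstract, assuming the normalisation mixes input and error powers with a fixed weight alpha; that fixed alpha is exactly the parameter the paper replaces with a time-varying one, and all values below are illustrative.

    import numpy as np

    def xe_nlmf(x, d, num_taps=8, mu=0.01, alpha=0.5, eps=1e-6):
        """Normalised least-mean-fourth adaptive filter (XE-NLMF-style).

        alpha : fixed mixing weight between input power and error power;
                the paper proposes making this parameter time-varying.
        """
        w = np.zeros(num_taps)
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]         # current regressor
            e = d[n] - w @ u                            # a priori error
            norm = alpha * (u @ u) + (1 - alpha) * e**2 + eps
            w += mu * e**3 * u / norm                   # fourth-moment update
        return w

    # toy system identification with sub-Gaussian (uniform) noise
    rng = np.random.default_rng(1)
    x = rng.standard_normal(5000)
    h = np.array([1.0, 0.5, -0.3, 0.1, 0.0, 0.0, 0.0, 0.0])
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.uniform(-1, 1, len(x))
    print(np.round(xe_nlmf(x, d), 2))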
Abstract:
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability, our method can detect 95 per cent of flares with S/N less than 20, as compared to an S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
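The flare shape assumed above is easy to write down; here is a small sketch with a half-Gaussian rise and an exponential decay on a polynomial background. The parameter names (t0, amp, sigma_rise, tau_decay) are our own illustrative notation, not the paper's.

    import numpy as np

    def flare_model(t, t0, amp, sigma_rise, tau_decay):
        """Flare template: half-Gaussian rise followed by exponential decay."""
        return np.where(
            t < t0,
            amp * np.exp(-0.5 * ((t - t0) / sigma_rise) ** 2),  # rising edge
            amp * np.exp(-(t - t0) / tau_decay),                # decaying tail
        )

    # toy light curve: low-order polynomial background + flare + Gaussian noise
    t = np.linspace(0.0, 10.0, 500)
    background = 1.0 + 0.02 * t - 0.001 * t**2
    rng = np.random.default_rng(2)
    flux = background + flare_model(t, 4.0, 0.5, 0.05, 0.8) \
           + 0.02 * rng.standard_normal(t.size)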
Abstract:
The stochastic nature of oil price fluctuations is investigated over a twelve-year period, using data from an existing database (the USA Energy Information Administration database, available online). We evaluate the scaling exponents of the fluctuations by employing different statistical analysis methods, namely rescaled range analysis (R/S), scaled windowed variance analysis (SWV) and the generalized Hurst exponent (GH) method. Relying on the scaling exponents obtained, we apply a rescaling procedure to investigate the complex characteristics of the probability density functions (PDFs) of oil price fluctuations. It is found that the PDFs exhibit scale invariance and in fact collapse onto a single curve when increments are measured over microscales (typically less than 30 days). The time evolution of the distributions is well fitted by a Lévy-type stable distribution. The relevance of a Lévy distribution is made plausible by a simple model of nonlinear transfer. Our results also exhibit a degree of multifractality, as the PDFs change and converge toward a Gaussian distribution at the macroscales.
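Of the three estimators named above, rescaled-range (R/S) analysis is the most compact to sketch; the window sizes and log-log fit below are illustrative choices, not the paper's settings.

    import numpy as np

    def hurst_rs(series, window_sizes=None):
        """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
        average R/S over non-overlapping windows of each size n, then fit
        the slope of log(R/S) against log(n)."""
        x = np.asarray(series, dtype=float)
        if window_sizes is None:
            window_sizes = np.unique(
                np.logspace(1, np.log10(len(x) // 2), 10).astype(int))
        rs = []
        for n in window_sizes:
            chunks = x[:len(x) // n * n].reshape(-1, n)
            z = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
            r = z.max(axis=1) - z.min(axis=1)      # range of cumulative deviations
            s = chunks.std(axis=1)
            rs.append((r[s > 0] / s[s > 0]).mean())
        slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
        return slope

    rng = np.random.default_rng(3)
    print(hurst_rs(rng.standard_normal(4096)))     # ~0.5 for uncorrelated noise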
Abstract:
In this paper we investigate the influence of a power-law noise model on the performance of a feed-forward neural network used to predict time series. We introduce an optimization procedure that optimizes the parameters of the neural network by maximizing the likelihood function based on the power-law noise model. We show that our optimization procedure minimizes the mean squared error, leading to an optimal prediction. Further, we present numerical results applying this method to time series from the logistic map and the annual number of sunspots, and demonstrate that a power-law noise model gives better results than a Gaussian noise model.
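The abstract does not give the exact power-law density, so the sketch below uses a Student-t-like stand-in, p(e) proportional to (1 + (e/scale)^2)^(-beta), to contrast the resulting negative log-likelihood with the Gaussian one, which reduces to the familiar mean-squared-error criterion; minimising either over the network parameters is the kind of optimization the paper describes.

    import numpy as np

    def powerlaw_nll(residuals, scale=1.0, beta=2.0):
        """Negative log-likelihood under an assumed power-law noise density
        p(e) ~ (1 + (e/scale)**2)**(-beta); a stand-in for the paper's model."""
        e = np.asarray(residuals) / scale
        return beta * np.sum(np.log1p(e**2))

    def gaussian_nll(residuals, sigma=1.0):
        """Gaussian negative log-likelihood (up to constants): the usual
        mean-squared-error criterion."""
        e = np.asarray(residuals) / sigma
        return 0.5 * np.sum(e**2)

    # a heavy outlier is penalised far less under the power-law model
    res = np.array([0.1, -0.2, 0.05, 5.0])
    print(gaussian_nll(res), powerlaw_nll(res))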
Abstract:
Aiming to establish a rigorous link between macroscopic random motion (described, e.g., by Langevin-type theories) and microscopic dynamics, we have undertaken a kinetic-theoretical study of the dynamics of a classical test particle weakly coupled to a large heat bath in thermal equilibrium. Both subsystems are subject to an external force field. From the (time-non-local) generalized master equation, a Fokker-Planck-type equation follows as a "quasi-Markovian" approximation. The kinetic operator thus defined is shown to be ill-defined; specifically, it does not preserve the positivity of the test-particle distribution function f(x, v; t). An alternative approach, previously introduced for open quantum systems, is proposed; it leads to a correct kinetic operator that yields all the expected properties. A set of explicit expressions for the diffusion and drift coefficients is obtained, allowing macroscopic diffusion and dynamical friction phenomena to be modelled in terms of an external field and intrinsic physical parameters.
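The abstract does not display the kinetic equation itself; for orientation, the generic form of a Fokker-Planck-type equation for the distribution f(x, v; t), with drift and diffusion coefficients A(v) and D(v) of the kind referred to above (the external force term and the notation are our assumptions, not the paper's), is

    % generic Fokker-Planck-type kinetic equation for f(x, v; t)
    \frac{\partial f}{\partial t} + v\,\frac{\partial f}{\partial x}
      + \frac{F_{\mathrm{ext}}}{m}\,\frac{\partial f}{\partial v}
      = \frac{\partial}{\partial v}\left[ A(v)\,f
      + \frac{\partial}{\partial v}\bigl( D(v)\,f \bigr) \right]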
Abstract:
This paper presents generalized Laplacian eigenmaps, a novel dimensionality-reduction approach designed to address stylistic variations in time series. It generates compact and coherent continuous spaces whose geometry is data-driven. This paper also introduces a graph-based particle filter, a novel methodology conceived for efficient tracking in the low-dimensional space derived from a spectral dimensionality-reduction method. Its strengths are a propagation scheme, which facilitates prediction in time and style, and a noise model coherent with the manifold, which prevents divergence and increases robustness. Experiments show that a combination of both techniques achieves state-of-the-art performance for human pose tracking in underconstrained scenarios.
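For reference, here is a minimal sketch of standard Laplacian eigenmaps, the baseline that the paper generalizes (the stylistic extension and the graph-based particle filter are not reproduced here); the graph-construction parameters are illustrative.

    import numpy as np
    from scipy.linalg import eigh

    def laplacian_eigenmaps(X, n_components=2, k=10, sigma=1.0):
        """Standard Laplacian eigenmaps: k-NN graph with heat-kernel weights,
        then the smallest nontrivial solutions of L y = lambda D y."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
        W = np.zeros_like(d2)
        idx = np.argsort(d2, axis=1)[:, 1:k + 1]              # k nearest neighbours
        rows = np.repeat(np.arange(len(X)), k)
        W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / (2 * sigma**2))
        W = np.maximum(W, W.T)                                # symmetric adjacency
        D = np.diag(W.sum(axis=1))
        vals, vecs = eigh(D - W, D)                           # generalized eigenproblem
        return vecs[:, 1:n_components + 1]                    # skip the constant mode

    rng = np.random.default_rng(4)
    print(laplacian_eigenmaps(rng.standard_normal((200, 5))).shape)   # (200, 2)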
Abstract:
We propose transmit antenna selection with receive generalized selection combining (TAS/GSC) in dual-hop cognitive decode-and-forward (DF) relay networks for reliability enhancement and interference relaxation. In this paradigm, the single antenna that maximizes the receive signal-to-noise ratio (SNR) is selected at the secondary transmitter, and the subset of receive antennas with the highest SNRs is combined at the secondary receiver. To demonstrate the impact of multiple primary users on the cognitive relay network, we derive new closed-form expressions for the exact and asymptotic outage probability with TAS/GSC in the secondary network. Several important design insights are reached. We corroborate that the full diversity gain is achieved, and that it is entirely determined by the total number of antennas in the secondary network. The negative impact of the primary network on the secondary network is reflected in the SNR gain.
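A Monte Carlo sketch of the TAS/GSC selection rule under Rayleigh fading is given below; the primary-user interference constraint and the closed-form analysis are the paper's contribution and are not reproduced, so all parameters are illustrative.

    import numpy as np

    def tas_gsc_outage(n_t=2, n_r=4, l_c=2, snr_db=10.0, gamma_th=1.0,
                       trials=100_000):
        """Outage probability of TAS/GSC over Rayleigh fading, by simulation.

        n_t      : transmit antennas (the best one is selected)
        n_r, l_c : receive antennas, of which the l_c strongest are combined
        """
        rng = np.random.default_rng(5)
        snr = 10 ** (snr_db / 10)
        g = snr * rng.exponential(size=(trials, n_t, n_r))    # per-branch SNRs
        gsc = np.sort(g, axis=2)[:, :, -l_c:].sum(axis=2)     # GSC per TX antenna
        best = gsc.max(axis=1)                                # TAS: best TX antenna
        return np.mean(best < gamma_th)

    print(tas_gsc_outage())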
Abstract:
The generalized Langevin equation (GLE) has recently been suggested for simulating the time evolution of classical solid and molecular systems when considering general nonequilibrium processes. In this approach, a part of the whole system (an open system), which interacts and exchanges energy with its dissipative environment, is studied. Because the GLE is derived by projecting out the harmonic environment exactly, the coupling to it is realistic, while the equations of motion are non-Markovian. Although the GLE formalism has already found promising applications, e.g., in nanotribology and as a powerful thermostat for equilibration in classical molecular dynamics simulations, efficient algorithms to solve the GLE for realistic memory kernels are highly nontrivial, especially if the memory kernels decay nonexponentially. This is because one has to generate colored noise and account for the memory effects in a consistent manner. In this paper, we present a simple yet efficient algorithm for solving the GLE for practical memory kernels, and we demonstrate its capability for the exactly solvable case of a harmonic oscillator coupled to a Debye bath.
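For the special case of an exponentially decaying kernel, the GLE can be solved without storing the trajectory history by a standard Markovian-embedding trick; the sketch below uses that trick (our assumption for illustration, not necessarily the paper's algorithm) for a harmonic oscillator.

    import numpy as np

    def gle_exponential_kernel(steps=200_000, dt=1e-3, m=1.0, k=1.0,
                               gamma=1.0, tau=0.5, kT=1.0):
        """GLE for a harmonic oscillator with memory kernel
        K(t) = (gamma/tau) * exp(-t/tau), via an auxiliary variable z that
        carries both the memory force and the colored noise."""
        rng = np.random.default_rng(6)
        x, v, z = 1.0, 0.0, 0.0
        vs = np.empty(steps)
        amp = np.sqrt(2.0 * gamma * kT) / tau    # fluctuation-dissipation amplitude
        for i in range(steps):
            v += dt * (-k * x + z) / m           # memory force enters through z
            x += dt * v
            z += -dt * (z + gamma * v) / tau \
                 + amp * np.sqrt(dt) * rng.standard_normal()
            vs[i] = v
        return vs

    vs = gle_exponential_kernel()
    print(vs[50_000:].var())   # should approach kT/m = 1.0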
Abstract:
The generalized Langevin equation (GLE) method, as developed previously [L. Stella et al., Phys. Rev. B 89, 134303 (2014)], is used to calculate the dissipative dynamics of systems described at the atomic level. The GLE scheme goes beyond the commonly used bilinear coupling between the central system and the bath, and permits a realistic description of both the dissipative central system and its surrounding bath. We show how to obtain the vibrational properties of a realistic bath and how to convey these properties into an extended Langevin dynamics by mapping the bath's vibrational properties onto a set of auxiliary variables. Our calculations for a model of a Lennard-Jones solid show that our GLE scheme provides stable dynamics, with the dissipative/relaxation processes properly described. The total kinetic energy of the central system always thermalizes toward the expected bath temperature, with appropriate fluctuations around the mean value. More importantly, we obtain a velocity distribution for the individual atoms in the central system that follows the expected canonical distribution at the corresponding temperature. This confirms that both our GLE scheme and our mapping procedure onto an extended Langevin dynamics provide the correct thermostat. We also examine the velocity autocorrelation functions and compare our results with those of more conventional Langevin dynamics.
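The velocity autocorrelation function examined above is straightforward to compute from a stored trajectory; a minimal sketch follows, with a random array standing in for real MD velocities.

    import numpy as np

    def velocity_autocorrelation(v, max_lag):
        """Normalized VACF from a (n_steps, n_atoms, 3) velocity trajectory,
        averaged over atoms, Cartesian components, and time origins."""
        c = np.array([
            np.mean(np.sum(v[:len(v) - lag] * v[lag:], axis=-1))
            for lag in range(max_lag)
        ])
        return c / c[0]

    rng = np.random.default_rng(7)
    v = rng.standard_normal((1000, 32, 3))       # placeholder for MD output
    print(velocity_autocorrelation(v, max_lag=100)[:5])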
Abstract:
As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure. Thus, significant effort is being invested into post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one promising area of PQC, offering versatile, efficient, and high-performance security services. However, the vulnerability of these implementations to side-channel attacks (SCA) remains significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing-analysis attack can break the whole cryptosystem, making the discrete Gaussian sampler the module most vulnerable to SCA. This research proposes countermeasures against timing information leakage: FPGA-based designs of CDT-based discrete Gaussian samplers with constant response time, targeting encryption and signature scheme parameters. The proposed designs are compared against the state of the art and are shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster than the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
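The constant-time idea is easy to model in software, even though the paper's designs are FPGA hardware: a CDT sampler that always scans the full table has a running time independent of the secret sample. The sketch below is such a model; sigma, the tail cut, and the 32-bit precision are illustrative, and the sign handling is simplified (zero is double-counted).

    import math
    import numpy as np

    def build_cdt(sigma=3.33, tail_cut=9):
        """Cumulative distribution table of a half discrete Gaussian,
        scaled to 32-bit fixed point (illustrative parameters)."""
        bound = int(math.ceil(tail_cut * sigma))
        probs = np.array([math.exp(-(i * i) / (2 * sigma * sigma))
                          for i in range(bound + 1)])
        cdf = np.cumsum(probs) / probs.sum()
        return (cdf * (2**32 - 1)).astype(np.uint64)

    def sample_constant_time(cdt, rng):
        """Scan the whole table with no early exit, so the timing does not
        depend on the value being sampled (a software model of the hardware
        countermeasure)."""
        r = rng.integers(0, 2**32 - 1, dtype=np.uint64)
        idx = 0
        for entry in cdt:             # fixed iteration count
            idx += int(r >= entry)    # branch-free comparison accumulate
        sign = 1 - 2 * int(rng.integers(0, 2))
        return sign * idx

    rng = np.random.default_rng(8)
    cdt = build_cdt()
    print([sample_constant_time(cdt, rng) for _ in range(8)])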