93 results for Speckle Noise
Abstract:
The thesis will show how to equalise the effect of quantal noise across spatial frequencies by keeping the retinal flux (If⁻²) constant. In addition, quantal noise is used to study the effect of grating area and spatial frequency on contrast sensitivity, leading to an extended contrast detection model that describes the human contrast detection system as a simple image processor. According to the model, the human contrast detection system comprises low-pass filtering due to ocular optics, addition of light-dependent noise at the event of quantal absorption, high-pass filtering due to the neural visual pathways, and addition of internal neural noise, after which detection takes place by a local matched filter whose sampling efficiency decreases as grating area is increased. Furthermore, this work will demonstrate how to extract both the optical and neural modulation transfer functions of the human eye. The neural transfer function is found to be proportional to spatial frequency up to the local cut-off frequency at eccentricities of 0-37 deg across the visual field. The optical transfer function of the human eye is proposed to be more affected by the Stiles-Crawford effect than generally assumed in the literature. Similarly, this work questions the prevailing ideas about the factors limiting peripheral vision by showing that peripheral optics act as a low-pass filter in normal viewing conditions, and that their effect is therefore worse than generally assumed.
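The equalisation described above can be sketched as follows, assuming the retinal flux expression is I·f⁻² (illuminance divided by spatial frequency squared): holding I·f⁻² constant requires scaling the retinal illuminance in proportion to f². The function name and reference values below are illustrative assumptions, not the thesis's actual parameters.

```python
# Hypothetical helper: illuminance needed at spatial frequency f_cpd
# (cycles/deg) to keep I * f**-2 constant, relative to an assumed
# reference condition of 10 trolands at 1 cycle/deg.
def illuminance_for_frequency(f_cpd, ref_f_cpd=1.0, ref_illuminance_td=10.0):
    # I / f**2 = ref_I / ref_f**2  =>  I = ref_I * (f / ref_f)**2
    return ref_illuminance_td * (f_cpd / ref_f_cpd) ** 2

# Doubling the spatial frequency quadruples the required illuminance.
I_at_2cpd = illuminance_for_frequency(2.0)
```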
Abstract:
This thesis consisted of two major parts, one determining the masking characteristics of pixel noise and the other investigating the properties of the detection filter employed by the visual system. The theoretical cut-off frequency of white pixel noise can be defined from the size of the noise pixel. The empirical cut-off frequency, i.e. the largest noise pixel size that still mimics the effect of white noise in detection, was determined by measuring contrast energy thresholds for grating stimuli in the presence of spatial noise consisting of noise pixels of various sizes and shapes. The critical, i.e. minimum, number of noise pixels per grating cycle needed to mimic the effect of white noise in detection was found to decrease with the bandwidth of the stimulus. The shape of the noise pixels did not have any effect on the whiteness of pixel noise as long as there was at least the minimum number of noise pixels in all spatial dimensions. Furthermore, the masking power of white pixel noise is best described when the spectral density is calculated by taking into account all the dimensions of noise pixels, i.e. width, height, and duration, even when there is random luminance in only one of these dimensions. The properties of the detection mechanism employed by the visual system were studied by measuring contrast energy thresholds for complex spatial patterns as a function of area in the presence of white pixel noise. Human detection efficiency was obtained by comparing human performance with an ideal detector. The stimuli consisted of band-pass filtered symbols, uniform and patched gratings, and point stimuli with randomised phase spectra. In agreement with the existing literature, detection performance was found to decline with the increasing amount of detail and contour in the stimulus. A measure of image complexity was developed and successfully applied to the data.
The accuracy of the detection mechanism seems to depend on the spatial structure of the stimulus and the spatial spread of contrast energy.
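The spectral-density rule stated in the abstract, that all noise-pixel dimensions enter the calculation, can be sketched as a one-line formula. The function, units, and example values below are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch (assumed formula, following the abstract's description):
# the spectral density of white pixel noise is the contrast variance
# multiplied by ALL pixel dimensions -- width, height, and duration --
# even when luminance varies randomly along only one of them.
def noise_spectral_density(contrast_rms, pixel_width_deg,
                           pixel_height_deg, pixel_duration_s):
    # Units: deg^2 * s (contrast variance is dimensionless)
    return (contrast_rms ** 2
            * pixel_width_deg * pixel_height_deg * pixel_duration_s)

# Example: 0.2 RMS contrast, 0.05 x 0.05 deg pixels refreshed every 20 ms.
N = noise_spectral_density(0.2, 0.05, 0.05, 0.020)
```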
Abstract:
The work described in this thesis is directed towards the reduction of noise levels in the Hoover Turbopower upright vacuum cleaner. The experimental work embodies a study of such factors as the application of noise source identification techniques, investigation of the noise generating principles for each major source, and evaluation of noise reducing treatments. It was found that the design of the vacuum cleaner had not been optimised from the standpoint of noise emission. Important factors such as noise 'windows', isolation of vibration at the source, panel rattle, resonances, and critical speeds had not been considered. Therefore, a number of experimentally validated treatments are proposed. Their noise reduction benefit, together with material and tooling costs, are presented. The solutions to the noise problems were evaluated on a standard Turbopower, and the sound power level of the cleaner was reduced from 87.5 dB(A) to 80.4 dB(A) at a cost of 93.6 pence per cleaner. The designers' lack of experience in noise reduction was identified as one of the factors behind the low priority given to noise during design of the cleaner. Consequently, the fundamentals of acoustics, principles of noise prediction and absorption, and guidelines for good acoustical design were collated into a Handbook and circulated at Hoover plc. Mechanical variations during production of the motor and the cleaner were found to be important. These caused a wide spread in the noise levels of the cleaners. Subsequently, the manufacturing processes were briefly studied to identify their source, and recommendations for improvement are made. Noise of a product is quality related, and a high level of noise is considered a bad feature.
This project suggested that the noise level be used constructively, both as a test on the production line to identify cleaners above a certain noise level and also to promote the product by 'designing' the characteristics of the sound so that the appliance is pleasant to the user. This project showed that good noise control principles should be implemented early in the design stage. As yet there are no mandatory noise limits or noise-labelling requirements for household appliances. However, the literature suggests that noise-labelling is likely in the near future and that the requirement will be to display the A-weighted sound power level. The 'noys' scale of perceived noisiness was nevertheless found more appropriate for rating appliance noise: it is linear, so a sound that seems twice as loud has twice the value in noys, and it also takes into account the presence of pure tones, which can cause annoyance even in the absence of a high overall noise level.
Abstract:
In this chapter we present the relevant mathematical background to address two well-defined signal and image processing problems: the problem of structured noise filtering and the problem of interpolation of missing data. The former is addressed by recourse to oblique-projection-based techniques, whilst the latter, which can be considered equivalent to impulsive noise filtering, is tackled by appropriate interpolation methods.
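The oblique-projection idea mentioned above can be illustrated with a small sketch, assuming the standard construction: given a matrix A spanning the signal subspace and B spanning the structured-noise subspace, the oblique projector onto range(A) along range(B) passes the signal unchanged and annihilates the noise component. The helper name and the random test subspaces are illustrative assumptions.

```python
import numpy as np

def oblique_projector(A, B):
    """Oblique projector onto range(A) along range(B) (hypothetical helper).

    E satisfies E @ A == A (signal passes unchanged) and E @ B == 0
    (structured noise lying in range(B) is annihilated).
    """
    # Orthogonal projector onto the complement of range(B)
    Q = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    return A @ np.linalg.pinv(Q @ A) @ Q

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))   # signal subspace (full column rank)
B = rng.standard_normal((20, 2))   # structured-noise subspace
E = oblique_projector(A, B)

s = A @ rng.standard_normal(3)     # clean signal component
n = B @ rng.standard_normal(2)     # structured noise component
recovered = E @ (s + n)            # projection removes the noise part
```

Note that, unlike an orthogonal projection, this separation works even when the signal and noise subspaces are not orthogonal to each other, which is the typical situation for structured noise.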
Abstract:
The problem of structured noise suppression is addressed by i) modelling the subspaces hosting the components of the signal conveying the information and ii) applying a nonlinear non-extensive technique to effect the right separation. Although the approach is applicable to all situations satisfying the hypotheses of the proposed framework, this work is motivated by a particular scenario, namely, the cancellation of low frequency noise in broadband seismic signals.
Abstract:
Binaural pitches are auditory percepts that emerge from combined inputs to the ears but that cannot be heard if the stimulus is presented to either ear alone. Here, we describe a binaural pitch that is not easily accommodated within current models of binaural processing. Convergent magnetoencephalography (MEG) and psychophysical measurements were used to characterize the pitch, heard when band-limited noise had a rapidly changing interaural phase difference. Several interesting features emerged: First, the pitch was perceptually lateralized, in agreement with the lateralization of the evoked changes in MEG spectral power, and its salience depended on dichotic binaural presentation. Second, the frequency of the pure tone that matched the binaural pitch lay within a lower spectral sideband of the phase-modulated noise and followed the frequency of that sideband when the modulation frequency or center frequency and bandwidth of the noise changed. Thus, the binaural pitch depended on the processing of binaural information in that lower sideband.
Abstract:
Impairment characterization and performance evaluation of Raman-amplified unrepeated DP-16QAM transmissions are conducted. Experimental results indicate that a small gain in the forward direction enhances the system signal-to-noise ratio for longer reach without introducing a noticeable penalty.
Abstract:
The transmission of weak signals through the visual system is limited by internal noise. Its level can be estimated by adding external noise, which increases the variance within the detecting mechanism, causing masking. But experiments with white noise fail to meet three predictions: (a) noise has too small an influence on the slope of the psychometric function, (b) masking occurs even when the noise sample is identical in each two-alternative forced-choice (2AFC) interval, and (c) double-pass consistency is too low. We show that much of the energy of 2D white noise masks extends well beyond the pass-band of plausible detecting mechanisms and that this suppresses signal activity. These problems are avoided by restricting the external noise energy to the target mechanisms by introducing a pedestal with a mean contrast of 0% and independent contrast jitter in each 2AFC interval (termed zero-dimensional [0D] noise). We compared the jitter condition to masking from 2D white noise in double-pass masking and (novel) contrast matching experiments. Zero-dimensional noise produced the strongest masking, greatest double-pass consistency, and no suppression of perceived contrast, consistent with a noisy ideal observer. Deviations from this behavior for 2D white noise were explained by cross-channel suppression with no need to appeal to induced internal noise or uncertainty. We conclude that (a) results from previous experiments using white pixel noise should be re-evaluated and (b) 0D noise provides a cleaner method for investigating internal variability than pixel noise. Ironically then, the best external noise stimulus does not look noisy.
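The contrast between 2D white pixel noise and the zero-dimensional (0D) noise described above can be sketched in a few lines: instead of adding an independent random value to every pixel, 0D noise multiplies the whole target template by a single random contrast drawn independently for each 2AFC interval. The grating parameters, jitter level, and observer below are illustrative assumptions, not the study's actual stimuli.

```python
import numpy as np

rng = np.random.default_rng(1)

def grating(size=64, cycles=4):
    # Simple vertical sinusoidal grating used as the target template
    x = np.linspace(0, 2 * np.pi * cycles, size)
    return np.sin(x)[None, :] * np.ones((size, 1))

template = grating()
signal_contrast = 0.1   # assumed target contrast
jitter_sd = 0.05        # assumed SD of the 0D contrast jitter

# Two intervals of one 2AFC trial: only one contains the signal, but both
# receive independent contrast jitter applied to the SAME template
# (a pedestal with 0% mean contrast plus contrast jitter = 0D noise).
interval_signal = (signal_contrast + rng.normal(0, jitter_sd)) * template
interval_blank = rng.normal(0, jitter_sd) * template

# A noisy ideal observer cross-correlates each interval with the template
# and chooses the interval giving the larger response.
r_sig = float(np.sum(interval_signal * template))
r_blank = float(np.sum(interval_blank * template))
choice_correct = r_sig > r_blank
```

Because all of the external variability is confined to the target mechanism itself, none of the noise energy falls outside the detecting filter's pass-band, which is the point the abstract makes against 2D white noise.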
Abstract:
The major challenge of MEG, the inverse problem, is to estimate the very weak primary neuronal currents from the measurements of extracranial magnetic fields. The non-uniqueness of this inverse solution is compounded by the fact that MEG signals contain large environmental and physiological noise that further complicates the problem. In this paper, we evaluate the effectiveness of magnetic noise cancellation by synthetic gradiometers and the beamformer analysis method of synthetic aperture magnetometry (SAM) for source localisation in the presence of large stimulus-generated noise. We demonstrate that activation of primary somatosensory cortex can be accurately identified using SAM despite the presence of significant stimulus-related magnetic interference. This interference was generated by a contact heat evoked potential stimulator (CHEPS), recently developed for thermal pain research, but which to date has not been used in a MEG environment. We also show that in a reduced shielding environment the use of higher order synthetic gradiometry is sufficient to obtain signal-to-noise ratios (SNRs) that allow for accurate localisation of cortical sensory function.
Abstract:
We demonstrate a novel Rayleigh interferometric noise mitigation scheme for applications in carrier-distributed dense wavelength division multiplexed (DWDM) passive optical networks at 10 Gbit/s using carrier suppressed subcarrier-amplitude modulated phase shift keying modulation. The required optical signal to Rayleigh noise ratio is reduced by 12 dB, while achieving excellent tolerance to dispersion, subcarrier frequency and drive amplitude variations.
Abstract:
Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods, and comparisons between these methods performed on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
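One of the PWC denoising methods named above, iterated running medians, is simple enough to sketch directly: each sample is repeatedly replaced by the median of a sliding window until the signal stops changing (a "root" signal). The window size and test data are illustrative assumptions.

```python
import numpy as np

def iterated_running_median(x, half_window=3, max_iter=100):
    """Iterate a running-median filter to a fixed point (PWC denoising)."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for _ in range(max_iter):
        y = np.array([np.median(x[max(0, i - half_window):
                                  min(n, i + half_window + 1)])
                      for i in range(n)])
        if np.array_equal(y, x):   # reached a root signal: stop iterating
            break
        x = y
    return x

# Noisy two-level step signal: the filter flattens the constant segments
# while largely preserving the jump, unlike a linear smoother.
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(100)
denoised = iterated_running_median(noisy)
```

The jump-preserving behaviour illustrated here is exactly why the abstract notes that conventional linear signal processing methods are fundamentally unsuited to PWC signals.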
Abstract:
Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory, building on results from the image processing community, to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising problem. It showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods, which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering are touched on.
Abstract:
We present a simplified model for a simple estimation of the eye-closure penalty for amplitude noise-degraded signals. Using a typical 40-Gbit/s return-to-zero amplitude-shift-keying transmission, we demonstrate agreement between the model predictions and the results obtained from the conventional numerical estimation method over several thousand kilometers.