34 results for Signal detection theory
in Aston University Research Archive
Abstract:
An efficient new Bayesian inference technique is employed for studying critical properties of the Ising linear perceptron and for signal detection in code division multiple access (CDMA). The approach is based on a recently introduced message passing technique for densely connected systems. Here we study both critical and non-critical regimes. Results obtained in the non-critical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also studied. © 2006 Elsevier B.V. All rights reserved.
Abstract:
Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al, 2012) and found it to be well-described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target and then sums over signal area. Modelling shows our results to be consistent with a wide field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
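The noisy energy account can be sketched numerically (a minimal illustration, not the authors' fitted model): with square-law transduction, independent noise added at each stimulus location, and linear summation over area, d′ grows as c²√n, so threshold contrast falls as the fourth root of area. All values below are illustrative.

```python
import numpy as np

def threshold(n_locs, criterion=1.0):
    # Noisy energy model: summed signal is n * c^2; summing independent
    # noise over n locations gives a noise standard deviation of sqrt(n),
    # so d' = n * c^2 / sqrt(n) = c^2 * sqrt(n).
    # Solving d' = criterion for c gives the threshold contrast.
    return np.sqrt(criterion / np.sqrt(n_locs))

areas = np.array([1, 2, 4, 8, 16, 32])
thr = threshold(areas)
slope = np.polyfit(np.log10(areas), np.log10(thr), 1)[0]
print(round(slope, 3))  # -0.25: fourth-root summation on log-log axes
```

At any fixed criterion the predicted slope is exactly −1/4, matching the straight summation functions observed once the retinal inhomogeneity is compensated.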
Abstract:
An improved inference method for densely connected systems is presented. The approach is based on passing condensed messages between variables, representing macroscopic averages of microscopic messages. We extend previous work that showed promising results in cases where the solution space is contiguous to cases where fragmentation occurs. We apply the method to the signal detection problem of Code Division Multiple Access (CDMA) for demonstrating its potential. A highly efficient practical algorithm is also derived on the basis of insight gained from the analysis. © EDP Sciences.
Abstract:
With luminance gratings, psychophysical thresholds for detecting a small increase in the contrast of a weak ‘pedestal’ grating are 2–3 times lower than for detection of a grating when the pedestal is absent. This is the ‘dipper effect’ – a reliable improvement whose interpretation remains controversial. Analogies between luminance and depth (disparity) processing have attracted interest in the existence of a ‘disparity dipper’. Are thresholds for disparity modulation (corrugated surfaces), facilitated by the presence of a weak disparity-modulated pedestal? We used a 14-bit greyscale to render small disparities accurately, and measured 2AFC discrimination thresholds for disparity modulation (0.3 or 0.6 c/deg) of a random texture at various pedestal levels. In the first experiment, a clear dipper was found. Thresholds were about 2× lower with weak pedestals than without. But here the phase of modulation (0 or 180 deg) was varied from trial to trial. In a noisy signal-detection framework, this creates uncertainty that is reduced by the pedestal, which thus improves performance. When the uncertainty was eliminated by keeping phase constant within sessions, the dipper effect was weak or absent. Monte Carlo simulations showed that the influence of uncertainty could account well for the results of both experiments. A corollary is that the visual depth response to small disparities is probably linear, with no threshold-like nonlinearity.
Abstract:
Measurement of detection and discrimination thresholds yields information about visual signal processing. For luminance contrast, we are 2 - 3 times more sensitive to a small increase in the contrast of a weak 'pedestal' grating, than when the pedestal is absent. This is the 'dipper effect' - a reliable improvement whose interpretation remains controversial. Analogies between luminance and depth (disparity) processing have attracted interest in the existence of a 'disparity dipper' - are thresholds for disparity, or disparity modulation (corrugated surfaces), facilitated by the presence of a weak pedestal? Lunn and Morgan (1997 Journal of the Optical Society of America A 14 360 - 371) found no dipper for disparity-modulated gratings, but technical limitations (8-bit greyscale) might have prevented the necessary measurement of very small disparity thresholds. We used a true 14-bit greyscale to render small disparities accurately, and measured 2AFC discrimination thresholds for disparity modulation (0.6 cycle deg-1) of a random texture at various pedestal levels. Which interval contained greater modulation of depth? In the first experiment, a clear dipper was found. Thresholds were about 2× lower with weak pedestals than without. But here the phase of modulation (0° or 180°) was randomised from trial to trial. In a noisy signal-detection framework, this creates uncertainty that is reduced by the pedestal, thus improving performance. When the uncertainty was eliminated by keeping phase constant within sessions, the dipper effect disappeared, confirming Lunn and Morgan's result. The absence of a dipper, coupled with shallow psychometric slopes, suggests that the visual response to small disparities is essentially linear, with no threshold-like nonlinearity.
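The uncertainty explanation can be illustrated with a toy Monte Carlo in the signal-detection framework (a sketch with invented numbers, not the simulations reported here): when modulation phase is randomised, the observer must base the 2AFC decision on the magnitude of the internal response, and a pedestal moves responses away from zero, where the magnitude rule folds the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_2afc(pedestal, dc, n_trials=200_000, noise_sd=1.0):
    # Phase (sign) of the modulation is unknown to the observer,
    # so the decision variable is the absolute internal response.
    sign = rng.choice([-1.0, 1.0], size=n_trials)
    test = sign * (pedestal + dc) + rng.normal(0, noise_sd, n_trials)
    ref = sign * pedestal + rng.normal(0, noise_sd, n_trials)
    return np.mean(np.abs(test) > np.abs(ref))

# Same increment, with and without a weak pedestal:
print(pc_2afc(0.0, 1.0))  # sign uncertainty limits performance
print(pc_2afc(1.5, 1.0))  # pedestal resolves the uncertainty
```

Under this rule the same increment yields better discrimination on a weak pedestal, producing a dipper that vanishes once phase is held constant.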
Abstract:
We studied the visual mechanisms that serve to encode spatial contrast at threshold and supra-threshold levels. In a 2AFC contrast-discrimination task, observers had to detect the presence of a vertical 1 cycle deg-1 test grating (of contrast dc) that was superimposed on a similar vertical 1 cycle deg-1 pedestal grating, whereas in pattern masking the test grating was accompanied by a very different masking grating (horizontal 1 cycle deg-1, or oblique 3 cycles deg-1). When expressed as threshold contrast (dc at 75% correct) versus mask contrast (c) our results confirm previous ones in showing a characteristic 'dipper function' for contrast discrimination but a smoothly increasing threshold for pattern masking. However, fresh insight is gained by analysing and modelling performance (p; percent correct) as a joint function of (c, dc) - the performance surface. In contrast discrimination, psychometric functions (p versus logdc) are markedly less steep when c is above threshold, but in pattern masking this reduction of slope did not occur. We explored a standard gain-control model with six free parameters. Three parameters control the contrast response of the detection mechanism and one parameter weights the mask contrast in the cross-channel suppression effect. We assume that signal-detection performance (d') is limited by additive noise of constant variance. Noise level and lapse rate are also fitted parameters of the model. We show that this model accounts very accurately for the whole performance surface in both types of masking, and thus explains the threshold functions and the pattern of variation in psychometric slopes. The cross-channel weight is about 0.20. The model shows that the mechanism response to contrast increment (dc) is linearised by the presence of pedestal contrasts but remains nonlinear in pattern masking.
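The shape of a dipper under such a gain-control transducer can be sketched as follows; the exponents, saturation constant and noise level are illustrative stand-ins, not the six fitted parameters reported above. A bisection search finds the increment dc giving d′ = 1 on each pedestal.

```python
import numpy as np

def resp(c, p=2.4, q=2.0, z=0.1):
    # Accelerating-then-compressive contrast response (illustrative values).
    return c**p / (z + c**q)

def dprime(c, dc, sigma=1.0):
    # Additive noise of constant variance limits performance.
    return (resp(c + dc) - resp(c)) / sigma

def threshold(c, crit=1.0, lo=1e-6, hi=10.0):
    # Bisection for the increment dc giving d' = crit on pedestal c.
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if dprime(c, mid) < crit:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

peds = [0.0, 0.05, 0.1, 0.2, 0.5, 1.0]
ths = [threshold(c) for c in peds]
# Dips below the c = 0 detection threshold at weak pedestals, then rises.
print([round(t, 3) for t in ths])
```

The dip arises because a weak pedestal pushes the operating point onto the accelerating part of the transducer, linearising the response to the increment.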
Abstract:
An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric- (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance. © 2007 The American Physical Society.
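As a concrete (toy) statement of the CDMA detection problem, the sketch below builds a small random-spreading channel and computes the exact marginal-posterior-mode detector by brute-force enumeration; message passing approximates these same marginals at a cost that stays tractable when the user count makes enumeration impossible. User count, spreading factor and noise level are all invented for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
K, N, sigma = 4, 16, 0.5                           # users, spreading factor, noise sd
S = rng.choice([-1.0, 1.0], (N, K)) / np.sqrt(N)   # random spreading codes
b_true = rng.choice([-1.0, 1.0], K)                # transmitted bits
y = S @ b_true + rng.normal(0, sigma, N)           # received chips

# Exact Bayesian detection: average each bit over the posterior,
# enumerating all 2^K bit vectors (feasible only for tiny K).
posts = np.zeros(K)
for b in product([-1.0, 1.0], repeat=K):
    b = np.array(b)
    w = np.exp(-np.sum((y - S @ b) ** 2) / (2 * sigma**2))
    posts += w * b
b_hat = np.sign(posts)        # marginal posterior mode per bit
acc = np.mean(b_hat == b_true)
print(acc)                    # fraction of bits recovered
```

Each user's bit estimate is the sign of its posterior mean, which is exactly the marginal that the condensed messages approximate for large K and N.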
Abstract:
A dual-peak LPFG (long-period fibre grating), inscribed in an optical fibre, has been employed to sense DNA hybridization in real time, over a 1 h period. One strand of the DNA was immobilized on the fibre, while the other was free in solution. After hybridization, the fibre was stripped and repeated detection of hybridization was achieved, so demonstrating reusability of the device. Neither strand of DNA was fluorescently or otherwise labelled. The present paper will provide an overview of our early-stage experimental data and methodology, examine the potential of fibre gratings for use as biosensors to monitor both nucleic acid and other biomolecular interactions and then give a summary of the theory and fabrication of fibre gratings from a biological standpoint. Finally, the potential of improving signal strength and possible future directions of fibre grating biosensors will be addressed.
Abstract:
We analyze theoretically the interplay between optical return-to-zero signal degradation due to timing jitter and additive amplified-spontaneous-emission noise. The impact of these two factors on the performance of a square-law direct detection receiver is also investigated. We derive an analytical expression for the bit-error probability and quantitatively determine the conditions when the contributions of the effects of timing jitter and additive noise to the bit error rate can be treated separately. The analysis of patterning effects is also presented. © 2007 IEEE.
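In the regime where the two contributions separate, receiver performance reduces to the textbook Gaussian Q-factor approximation; the sketch below (with invented current and noise values, not the paper's derived expression) treats the jitter-induced and ASE-induced fluctuations as independent, so their variances add.

```python
import math

def ber_from_q(q):
    # Gaussian approximation for a binary direct-detection receiver:
    # BER = (1/2) * erfc(Q / sqrt(2)).
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_factor(i1, i0, s1, s0):
    # Q = (I1 - I0) / (sigma1 + sigma0) for marks and spaces.
    return (i1 - i0) / (s1 + s0)

# Independent impairments: variances add on the marks.
s_ase, s_jitter = 0.08, 0.06
s1 = math.hypot(s_ase, s_jitter)
ber = ber_from_q(q_factor(1.0, 0.1, s1, 0.04))
print(ber)
```

When the impairments interact (e.g. pattern-dependent jitter), this separation breaks down and the joint analytical treatment is required.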
Abstract:
A novel approach, based on statistical mechanics, to analyze typical performance of optimum code-division multiple-access (CDMA) multiuser detectors is reviewed. A 'black-box' view of the basic CDMA channel is introduced, based on which the CDMA multiuser detection problem is regarded as a 'learning-from-examples' problem of the 'binary linear perceptron' in the neural network literature. Adopting the Bayesian framework, analysis of the performance of the optimum CDMA multiuser detectors is reduced to evaluation of the average of the cumulant generating function of a relevant posterior distribution. The evaluation of the average cumulant generating function is done, based on formal analogy with a similar calculation appearing in the spin glass theory in statistical mechanics, by making use of the replica method, a method developed in the spin glass theory.
Abstract:
A wire-drive pulse-echo method of measuring the spectrum of solid bodies is described. Using an 's'-plane representation, a general analysis of the transient response of such solids has been carried out. This was used for the study of the stepped amplitude transient of high-order modes of disks and for the case where there are two adjacent resonant frequencies. The techniques developed have been applied to the measurement of the elasticities of refractory materials at high temperatures. In the experimental study of the high-order in-plane resonances of thin disks it was found that the energy travelled at the edge of the disk, and this initiated the work on one-dimensional Rayleigh waves. Their properties were established for the straight-edge condition by following an analysis similar to that of the two-dimensional case. Experiments were then carried out on the velocity dispersion of various circuits including the disk and a hole in a large plate - the negative-curvature condition. Theoretical analysis established the phase and group velocities for these cases, and experimental tests on aluminium and glass gave good agreement with theory. At high frequencies all velocities approach that of the one-dimensional Rayleigh waves. When applied to crack detection it was observed that a signal burst travelling round a disk showed an anomalous amplitude effect: in certain cases the signal which travelled the greater distance had the greater amplitude. An experiment was designed to investigate the phenomenon, and it was established that the energy travelled in two modes with different velocities. It was found by analysis that, as well as the Rayleigh surface wave on the edge, a second mode travelling at about the shear velocity was excited, and the calculated results gave reasonable agreement with the experiments.
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems by use of digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
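One such numerical route can be sketched by sampling a z-transform at the roots of unity and inverting with an FFT. Here the closed-form M/M/1 queue-length generating function stands in for a transform whose inverse is wanted numerically (the choice of queue and all values are illustrative):

```python
import numpy as np

rho = 0.6                                  # server utilisation
G = lambda z: (1 - rho) / (1 - rho * z)    # M/M/1 queue-length PGF

# The PGF evaluated at the N-th roots of unity is a DFT of the
# queue-length probabilities, so a forward FFT (scaled by 1/N)
# recovers them, with aliasing error of order rho**N.
N = 64
z = np.exp(2j * np.pi * np.arange(N) / N)
p = np.fft.fft(G(z)).real / N

exact = (1 - rho) * rho ** np.arange(8)    # known answer for comparison
print(np.allclose(p[:8], exact, atol=1e-6))  # True
```

The same sampling-and-inversion idea applies when the transform has no convenient closed-form inverse, which is the usual situation for the queue-length distributions discussed above.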
Abstract:
This thesis discusses the need for nondestructive testing and highlights some of the limitations of present-day techniques. Special interest has been given to ultrasonic examination techniques and the problems encountered when they are applied to thick welded plates; some suggestions are given using signal processing methods. Chapter 2 treats the need for nondestructive testing as seen in the light of economy and safety; a short review of present-day techniques in nondestructive testing is also given. The special problems of using ultrasonic techniques on welded structures are discussed in Chapter 3, with some examples of elastic wave propagation in welded steel. The limitations in applying sophisticated signal processing techniques to ultrasonic NDT are mainly found in the transducers generating or receiving the ultrasound; Chapter 4 deals with the different transducers used. One of the difficulties with ultrasonic testing is the interpretation of the signals encountered. Similar problems are found with SONAR/RADAR techniques, and Chapter 5 draws some analogies between SONAR/RADAR and ultrasonic nondestructive testing. This chapter also includes a discussion of some of the techniques used in signal processing in general. A signal processing technique found especially useful is cross-correlation detection, and this technique is treated in Chapter 6. Electronic digital computers have made signal processing techniques easier to implement; Chapter 7 discusses the use of digital computers in ultrasonic NDT. Experimental equipment used to test cross-correlation detection of ultrasonic signals is described in Chapter 8. Chapter 9 summarises the conclusions drawn during this investigation.
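Cross-correlation detection of the kind treated in Chapter 6 reduces to a few lines in modern numerical code; this sketch (pulse shape, echo delay and noise level all invented for illustration) correlates a noisy trace against the known probe pulse, and the correlation peak marks the echo arrival.

```python
import numpy as np

rng = np.random.default_rng(2)

# Transmitted probe pulse: a short windowed tone burst.
t = np.arange(128)
pulse = np.sin(2 * np.pi * t / 8) * np.hanning(128)

# Received trace: the echo returns at a known delay, in additive noise.
delay = 300
trace = rng.normal(0, 1.0, 1024)
trace[delay:delay + 128] += 2.0 * pulse

# Cross-correlation detection: correlating with the known pulse shape
# concentrates the pulse energy at the echo's arrival lag.
xc = np.correlate(trace, pulse, mode='valid')
est = int(np.argmax(xc))
print(est)   # estimated echo delay, in samples
```

The processing gain is the pulse energy, which is why correlation detection pulls weak echoes out of noise that would swamp a simple amplitude threshold.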
Abstract:
One of the major problems associated with communication via a loudspeaking telephone (LST) is that, using analogue processing, duplex transmission is limited to low-loss lines and produces a low acoustic output. An architecture for an instrument has been developed and tested which uses digital signal processing to provide duplex transmission between an LST and a telephone handset over most of the B.T. network. Digital adaptive filters are used in the duplex LST to cancel coupling between the loudspeaker and microphone, and across the transmit-to-receive paths of the 2-to-4-wire converter. Normal movement of a person in the acoustic path causes a loss of stability by increasing the level of coupling from the loudspeaker to the microphone, since there is a lag associated with the adaptive filters learning about a non-stationary path. Control of the loop stability, and of the level of sidetone heard by the handset user, is by a microprocessor which continually monitors the system and regulates the gain. The result is a system which offers the best compromise available based on a set of measured parameters. A theory has been developed which gives the loop stability requirements based on the error between the parameters of the filter and those of the unknown path. The programme to develop a low-cost adaptive filter in the LST produced a unique architecture which has a number of features not available in any similar system. These include automatic compensation for the rate of adaptation over a 36 dB range of output level, 4 rates of adaptation (with a maximum of 465 dB/s), plus the ability to cascade up to 4 filters without loss of performance. A complex theory has been developed to determine the adaptation which can be achieved using finite-precision arithmetic. This enabled the development of an architecture which distributed the normalisation required to achieve the optimum rate of adaptation over the useful input range.
Comparison of theory and measurement for the adaptive filter shows very close agreement. A single experimental LST was built and tested on connections to handset telephones over the BT network. The LST demonstrated that duplex transmission was feasible using signal processing, and produced a more comfortable means of communication between people than methods employing deep voice-switching to regulate the local-loop gain. However, with the current level of processing power it is not a panacea, and attention must be directed toward the physical acoustic isolation between loudspeaker and microphone.
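The normalised adaptation idea can be sketched with a standard NLMS filter identifying a toy loudspeaker-to-microphone path (the path taps, step size and signals are invented for illustration; this is not the instrument's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown acoustic path from loudspeaker to microphone (toy FIR).
path = np.array([0.8, -0.4, 0.2, 0.1])
M = len(path)

x = rng.normal(0, 1.0, 5000)          # loudspeaker (far-end) signal
d = np.convolve(x, path)[:len(x)]     # microphone pickup of that signal

# NLMS adaptation: dividing by the input power keeps the rate of
# adaptation independent of the signal level.
w = np.zeros(M)
mu, eps = 0.5, 1e-6
for n in range(M, len(x)):
    u = x[n - M + 1:n + 1][::-1]      # most recent M input samples
    e = d[n] - w @ u                  # residual echo
    w += mu * e * u / (u @ u + eps)   # normalised LMS update

print(np.round(w, 2))                 # converges toward the true path
```

The normalisation in the update is the feature the abstract describes distributing through the architecture: without it, the convergence rate would vary with the 36 dB range of output level.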