935 results for Adaptive signal detection


Relevance:

80.00%

Publisher:

Abstract:

An improved inference method for densely connected systems is presented. The approach is based on passing condensed messages between variables, representing macroscopic averages of microscopic messages. We extend previous work, which showed promising results in cases where the solution space is contiguous, to cases where fragmentation occurs. We apply the method to the signal detection problem of Code Division Multiple Access (CDMA) to demonstrate its potential. A highly efficient practical algorithm is also derived on the basis of insight gained from the analysis. © EDP Sciences.
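The CDMA detection setting referred to above can be sketched in a few lines. This is an illustrative toy only, in the spirit of message-passing multiuser detection, not the paper's algorithm; all parameter values and names are assumptions.

```python
import numpy as np

# CDMA model: y = S b + noise, with K users sharing N chips.
rng = np.random.default_rng(0)
K, N = 8, 32                                   # users, spreading-code length
sigma = 0.25                                   # noise standard deviation
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # random spreading codes
b = rng.choice([-1.0, 1.0], size=K)            # transmitted bits
y = S @ b + sigma * rng.normal(size=N)         # received chip sequence

# Soft iterative interference cancellation: each user's bit estimate m_k
# is the posterior mean tanh(z_k), where z_k is the matched-filter output
# after cancelling the other users' current estimates (the self-term
# s_k^T s_k * m_k subtracted inside the residual is added back).
m = np.zeros(K)
for _ in range(25):
    residual = y - S @ m
    z = (S.T @ residual + (S**2).sum(axis=0) * m) / sigma**2
    m = np.tanh(z)                             # soft bit estimate in [-1, 1]
b_hat = np.sign(m)                             # hard decisions
```

The tanh nonlinearity is what a Gaussian-noise, ±1-prior posterior mean looks like; condensed message passing refines exactly this kind of update with macroscopic averages.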

Relevance:

80.00%

Publisher:

Abstract:

With luminance gratings, psychophysical thresholds for detecting a small increase in the contrast of a weak ‘pedestal’ grating are 2–3 times lower than for detection of a grating when the pedestal is absent. This is the ‘dipper effect’ – a reliable improvement whose interpretation remains controversial. Analogies between luminance and depth (disparity) processing have attracted interest in the existence of a ‘disparity dipper’. Are thresholds for disparity modulation (corrugated surfaces) facilitated by the presence of a weak disparity-modulated pedestal? We used a 14-bit greyscale to render small disparities accurately, and measured 2AFC discrimination thresholds for disparity modulation (0.3 or 0.6 c/deg) of a random texture at various pedestal levels. In the first experiment, a clear dipper was found. Thresholds were about 2× lower with weak pedestals than without. But here the phase of modulation (0 or 180 deg) was varied from trial to trial. In a noisy signal-detection framework, this creates uncertainty that is reduced by the pedestal, which thus improves performance. When the uncertainty was eliminated by keeping phase constant within sessions, the dipper effect was weak or absent. Monte Carlo simulations showed that the influence of uncertainty could account well for the results of both experiments. A corollary is that the visual depth response to small disparities is probably linear, with no threshold-like nonlinearity.
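The uncertainty account lends itself to a quick Monte Carlo check. Below is a toy sketch with assumed parameters, not the authors' simulation: the observer monitors two phase channels (0 and 180 deg); with phase known she reads the matched channel, and with phase randomised she takes the max of the rectified channel responses.

```python
import numpy as np

rng = np.random.default_rng(2)
TRIALS = 200_000

def prop_correct(pedestal, dc, phase_known):
    """2AFC proportion correct; the signal adds dc to the 0-deg channel,
    unit-variance Gaussian noise perturbs both channels in both intervals."""
    test = np.array([pedestal + dc, 0.0]) + rng.normal(size=(TRIALS, 2))
    null = np.array([pedestal, 0.0]) + rng.normal(size=(TRIALS, 2))
    if phase_known:
        return np.mean(test[:, 0] > null[:, 0])       # matched template
    return np.mean(np.abs(test).max(axis=1) > np.abs(null).max(axis=1))

p_unc_no_ped = prop_correct(0.0, 1.0, phase_known=False)
p_unc_ped = prop_correct(1.0, 1.0, phase_known=False)  # pedestal helps
p_known = prop_correct(0.0, 1.0, phase_known=True)     # no uncertainty cost
```

With phase randomised, a weak pedestal raises proportion correct purely by tagging the relevant channel, an uncertainty-driven "dipper"; with phase known, the comparison is shift-invariant, so a pedestal confers no such benefit.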

Relevance:

80.00%

Publisher:

Abstract:

Measurement of detection and discrimination thresholds yields information about visual signal processing. For luminance contrast, we are 2-3 times more sensitive to a small increase in the contrast of a weak 'pedestal' grating than when the pedestal is absent. This is the 'dipper effect' - a reliable improvement whose interpretation remains controversial. Analogies between luminance and depth (disparity) processing have attracted interest in the existence of a 'disparity dipper' - are thresholds for disparity, or disparity modulation (corrugated surfaces), facilitated by the presence of a weak pedestal? Lunn and Morgan (1997, Journal of the Optical Society of America A 14 360-371) found no dipper for disparity-modulated gratings, but technical limitations (8-bit greyscale) might have prevented the necessary measurement of very small disparity thresholds. We used a true 14-bit greyscale to render small disparities accurately, and measured 2AFC discrimination thresholds for disparity modulation (0.6 cycles deg⁻¹) of a random texture at various pedestal levels. Which interval contained greater modulation of depth? In the first experiment, a clear dipper was found. Thresholds were about 2× lower with weak pedestals than without. But here the phase of modulation (0° or 180°) was randomised from trial to trial. In a noisy signal-detection framework, this creates uncertainty that is reduced by the pedestal, thus improving performance. When the uncertainty was eliminated by keeping phase constant within sessions, the dipper effect disappeared, confirming Lunn and Morgan's result. The absence of a dipper, coupled with shallow psychometric slopes, suggests that the visual response to small disparities is essentially linear, with no threshold-like nonlinearity.

Relevance:

80.00%

Publisher:

Abstract:

We studied the visual mechanisms that serve to encode spatial contrast at threshold and supra-threshold levels. In a 2AFC contrast-discrimination task, observers had to detect the presence of a vertical 1 cycle deg⁻¹ test grating (of contrast dc) that was superimposed on a similar vertical 1 cycle deg⁻¹ pedestal grating, whereas in pattern masking the test grating was accompanied by a very different masking grating (horizontal 1 cycle deg⁻¹, or oblique 3 cycles deg⁻¹). When expressed as threshold contrast (dc at 75% correct) versus mask contrast (c), our results confirm previous ones in showing a characteristic 'dipper function' for contrast discrimination but a smoothly increasing threshold for pattern masking. However, fresh insight is gained by analysing and modelling performance (p; percent correct) as a joint function of (c, dc) - the performance surface. In contrast discrimination, psychometric functions (p versus log dc) are markedly less steep when c is above threshold, but in pattern masking this reduction of slope did not occur. We explored a standard gain-control model with six free parameters. Three parameters control the contrast response of the detection mechanism and one parameter weights the mask contrast in the cross-channel suppression effect. We assume that signal-detection performance (d') is limited by additive noise of constant variance. Noise level and lapse rate are also fitted parameters of the model. We show that this model accounts very accurately for the whole performance surface in both types of masking, and thus explains the threshold functions and the pattern of variation in psychometric slopes. The cross-channel weight is about 0.20. The model shows that the mechanism response to contrast increment (dc) is linearised by the presence of pedestal contrasts but remains nonlinear in pattern masking.
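A gain-control model of this general shape can be sketched numerically. The parameter values below are illustrative assumptions (only the cross-channel weight w = 0.2 is taken from the text), and the functional form is a generic divisive gain-control response, not the authors' exact parameterisation.

```python
import numpy as np

# Mechanism response to target contrast c with a cross-channel mask of
# contrast m: r(c, m) = c**p / (z + c**q + w * m**q). Performance is
# limited by additive noise of constant standard deviation sigma, so
# d'(c, dc, m) = (r(c + dc, m) - r(c, m)) / sigma.
p, q, z, w, sigma = 2.4, 2.0, 0.1, 0.2, 0.01   # assumed values; w from text

def resp(c, m=0.0):
    return c**p / (z + c**q + w * m**q)

def dprime(c, dc, m=0.0):
    return (resp(c + dc, m) - resp(c, m)) / sigma

def threshold(c, m=0.0):
    """Increment dc giving d' = 1, by bisection (d' is monotonic in dc
    because p > q makes the response function increasing)."""
    lo, hi = 1e-6, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dprime(c, mid, m) < 1.0 else (lo, mid)
    return hi

pedestals = [0.0, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0]
thresholds = [threshold(c) for c in pedestals]
# Thresholds dip below the c = 0 detection threshold at weak pedestals
# and rise again at high pedestal contrast: a dipper function.
```

Sweeping mask contrast m instead of pedestal contrast c reproduces the other regime: a smoothly rising threshold with no dip, since the mask only feeds the divisive denominator.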

Relevance:

80.00%

Publisher:

Abstract:

An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach, it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA, while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance. © 2007 The American Physical Society.

Relevance:

80.00%

Publisher:

Abstract:

A generalization of the Gram-Schmidt procedure is achieved by providing equations for updating and downdating oblique projectors. The work is motivated by the problem of adaptive signal representation outside the orthogonal basis setting. The proposed techniques are shown to be relevant to the problem of discriminating signals produced by different phenomena when the order of the signal model needs to be adjusted. © 2007 IOP Publishing Ltd.
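The orthogonal special case of such an updating scheme is the classical Gram-Schmidt step: when the model order grows by one atom, the projector is updated from the residual rather than recomputed from scratch. A minimal sketch of that special case (the paper's contribution is the generalization to oblique projectors, not shown here):

```python
import numpy as np

def update_basis(Q, v, tol=1e-12):
    """Add vector v to the orthonormal basis Q (columns) via a
    Gram-Schmidt update; skip v if it is already in span(Q)."""
    r = v - Q @ (Q.T @ v)          # component of v outside span(Q)
    n = np.linalg.norm(r)
    if n < tol:
        return Q                   # nothing new to add
    return np.column_stack([Q, r / n])

rng = np.random.default_rng(1)
Q = np.zeros((5, 0))               # start from the empty basis
for _ in range(3):
    Q = update_basis(Q, rng.normal(size=5))
P = Q @ Q.T                        # orthogonal projector onto span(Q)
```

Downdating is the reverse operation, removing an atom when the model order must shrink; outside the orthogonal setting both directions require the oblique-projector update equations derived in the paper.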

Relevance:

80.00%

Publisher:

Abstract:

On the basis of the convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM), optimal receiver operating characteristics (ROCs) have been derived analytically for an intact two-layer autoassociative Hopfield network. A method is introduced for explicitly taking into account the a priori probabilities of alternative hypotheses about the structure of the information initiating memory-trace retrieval, together with modified ROCs (mROCs: a posteriori probabilities of correct recall vs. false-alarm probability). Comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, so the intensities of the cues used in the corresponding experiments can be estimated. It was also found that basic ROC properties, among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM.
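The relation between an ROC and its a-posteriori (mROC-style) counterpart is easy to illustrate with a textbook equal-variance Gaussian observer. The values below are hypothetical and the model generic; these are not the NNAMM's analytically derived curves.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Sweep the decision criterion to trace hit rate against false-alarm
# rate for sensitivity d'; then convert to a posterior-probability curve
# for a given prior on "cue present" via Bayes' rule.
d_prime, p_signal = 1.5, 0.5
criteria = np.linspace(-4.0, 4.0, 17)
far = np.array([1.0 - Phi(c) for c in criteria])            # false alarms
hit = np.array([1.0 - Phi(c - d_prime) for c in criteria])  # correct recalls
posterior = p_signal * hit / (p_signal * hit + (1 - p_signal) * far)
```

Changing `p_signal` shifts the posterior curve while leaving (far, hit) untouched, which is the sense in which a priori hypothesis probabilities enter the mROC but not the ordinary ROC.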

Relevance:

80.00%

Publisher:

Abstract:

A solar power satellite is attracting attention as a clean, inexhaustible, large-scale base-load power supply. The following beam-control technology is used: a pilot signal is sent from the power-receiving site and, after direction-of-arrival estimation, the beam is directed back to Earth along the same direction. A novel direction-finding algorithm based on the linear prediction technique is explored that exploits cyclostationary statistical information (spatial and temporal). Many modulated communication signals exhibit a cyclostationarity (or periodic correlation) property, corresponding to the underlying periodicity arising from carrier frequencies or baud rates. The problem was solved by using both cyclic second-order statistics and cyclic higher-order statistics. By evaluating the corresponding cyclic statistics of the received data at certain cycle frequencies, we can extract the cyclic correlations of only those signals with the same cycle frequency and null out the cyclic correlations of stationary additive noise and of all other co-channel interferences with different cycle frequencies. Thus, the signal detection capability can be significantly improved. The proposed algorithms employ cyclic higher-order statistics of the array output and suppress additive Gaussian noise of unknown spectral content, even when the noise shares common cycle frequencies with the non-Gaussian signals of interest. The proposed method fully exploits temporal information (multiple lags) and can correctly estimate the direction of arrival of desired signals while suppressing undesired signals. Our approach is also generalized to direction-of-arrival estimation of coherent cyclostationary signals. In this paper, we propose a new approach to exploiting cyclostationarity that appears more advanced than existing direction-finding algorithms.
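The selectivity that cyclic statistics provide can be demonstrated with a minimal estimator. This is a generic sketch under one common convention for the cyclic correlation, not the paper's array algorithm; frequencies are normalised to the sample rate and all values are assumptions.

```python
import numpy as np

def cyclic_correlation(x, alpha, lag):
    """Estimate the cyclic correlation of x at cycle frequency alpha and
    integer lag: time average of x[n+lag] * conj(x[n]) * exp(-j*2*pi*alpha*n)."""
    n = np.arange(len(x) - lag)
    prod = x[n + lag] * np.conj(x[n])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(3)
N, f0 = 16384, 0.1
t = np.arange(N)
carrier = np.cos(2 * np.pi * f0 * t)        # cyclostationary: feature at 2*f0
noise = rng.normal(size=N)                  # stationary: no cyclic feature

R_sig = cyclic_correlation(carrier, 2 * f0, 0)    # large (about 0.25 here)
R_noise = cyclic_correlation(noise, 2 * f0, 0)    # averages toward zero
```

Evaluating at the cycle frequency 2·f0 retains the carrier's contribution while the stationary noise, which has no energy at that cycle frequency, is averaged out; this is the mechanism by which co-channel interferers with different cycle frequencies are nulled.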

Relevance:

80.00%

Publisher:

Abstract:

Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al., 2012) and found it to be well-described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target and then sums over signal area. Modelling shows our results to be consistent with a wide field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
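Why a noisy energy model produces a -1/4 log-log slope follows from a short d' calculation. The sketch below is a back-of-envelope version with an assumed noise level, not the fitted model from the study.

```python
import numpy as np

# Noisy energy model: at each of n stimulus locations the contrast c is
# square-law transduced, independent noise of variance sigma**2 is added,
# and the n responses are summed. The summed signal grows as n * c**2
# while the summed noise SD grows as sigma * sqrt(n), giving
#   d'(c, n) = n * c**2 / (sigma * sqrt(n)) = sqrt(n) * c**2 / sigma.
sigma = 1.0   # assumed noise SD

def threshold(n, target_dprime=1.0):
    # solve sqrt(n) * c**2 / sigma = target_dprime for the contrast c
    return np.sqrt(target_dprime * sigma / np.sqrt(n))

areas = np.array([1.0, 4.0, 16.0, 64.0])   # relative stimulus area (cycles)
log_slopes = np.diff(np.log(threshold(areas))) / np.diff(np.log(areas))
# slope of -1/4 on double log axes: fourth-root summation of contrast
```

Threshold contrast thus falls as area^(-1/4) without any probabilistic pooling, which is why fourth-root summation alone cannot discriminate the energy model from probability summation; the psychometric slopes and the inhomogeneity-compensated wide-field data do.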

Relevance:

80.00%

Publisher:

Abstract:

Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another as to magnitude along some continuum. The observer must report in which interval the stimulus had the larger magnitude. The standard difference model from signal detection theory analyses posits that order of presentation should not affect the results of the comparison, something known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented illustrating the conventional failure of the balance condition and testing the hypothesis that time-order errors result from contamination by the factors included in the model.
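The balance condition is easy to state in code. The sketch below implements the textbook difference model with illustrative magnitudes (not the authors' formal model, which adds the contaminating factors):

```python
from math import erf, sqrt

def p_correct(mu1, mu2, sigma=1.0):
    """Standard difference model: the observer reports interval 2 when
    X2 - X1 > 0, with Xi ~ N(mu_i, sigma**2) independent, so
    P(report 2) = Phi((mu2 - mu1) / (sigma * sqrt(2)))."""
    d = (mu2 - mu1) / (sigma * sqrt(2.0))
    return 0.5 * (1.0 + erf(d / sqrt(2.0)))   # Phi via the error function

# Balance condition: accuracy does not depend on presentation order.
p_larger_second = p_correct(1.0, 1.2)         # correct answer: interval 2
p_larger_first = 1.0 - p_correct(1.2, 1.0)    # correct answer: interval 1
```

Under this model the two proportions are identical; a time-order error is precisely an empirical inequality between them, which the paper attributes to desensitization, short interstimulus/intertrial intervals, and response bias rather than to an asymmetric comparison mechanism.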

Relevance:

80.00%

Publisher:

Abstract:

We examine the performance of a nonlinear fiber gyroscope for improved signal detection, beating the quantum limits of its linear counterparts. The performance is examined when the nonlinear gyroscope is illuminated by practical field states, such as coherent and quadrature squeezed states. This is compared with the case of more ideal probes such as photon-number states.

Relevance:

80.00%

Publisher:

Abstract:

Supported by Royal Society of London (University Research Fellowship), Medical Research Council (New Investigator Research Grant) and CNRS.

Relevance:

80.00%

Publisher:

Abstract:

Stimuli that cannot be perceived (i.e., that are subliminal) can still elicit neural responses in an observer, but can such stimuli influence behavior and higher-order cognition? Empirical evidence for such effects has periodically been accepted and rejected over the last six decades. Today, many psychologists seem to consider such effects well-established and recent studies have extended the power of subliminal processing to new limits. In this thesis, I examine whether this shift in zeitgeist is matched by a shift in evidential strength for the phenomenon. This thesis consists of three empirical studies involving more than 250 participants, a simulation study, and a quantitative review. The conclusion based on these efforts is that several methodological, statistical, and theoretical issues remain in studies of subliminal processing. These issues mean that claimed subliminal effects might be caused by occasional or weak percepts (given the experimenters’ own definitions of perception) and that it is still unclear what evidence there is for the cognitive processing of subliminal stimuli. New data are presented suggesting that even in conditions traditionally claimed as “subliminal”, occasional or weak percepts may in fact influence cognitive processing more strongly than do the physical stimuli, possibly leading to reversed priming effects. I also summarize and provide methodological, statistical, and theoretical recommendations that could benefit future research aspiring to provide solid evidence for subliminal cognitive processing.

Relevance:

80.00%

Publisher:

Abstract:

Recent legislation and initiatives set forth high academic expectations for all high school graduates in the area of reading (National Governors Association Center for Best Practices, 2010; Every Student Succeeds Act, 2015). To determine which students need additional support to meet these reading standards, teachers can conduct universal screening using formative assessments. Maze Curriculum-Based Measurement (Maze-CBM) is a commonly used screening and progress monitoring assessment that the National Center on Intensive Intervention (2013) and the Center on Instruction (Torgesen & Miller, 2009) recommend. Despite the recommendation to use Maze-CBM, little research has been conducted on the reliability and validity of Maze-CBM for measuring reading ability for students at the secondary level (Mitchell & Wexler, 2016). In the papers included in this dissertation, I present an initial investigation into the use of Maze-CBM for secondary students. In the first paper, I investigated prior studies of Maze-CBM for students in Grades 6 through 12. Next, in the second paper, I investigated the alternate-form reliability and validity for screening students in Grades 9 and 10 using signal detection theory methods. In the third paper, I examined the effect of genre on Maze-CBM scores with a sample of students in Grades 9 and 10 using multilevel modeling. When writing these three papers, I discovered several important findings related to Maze-CBM. First, there are few studies that have investigated the technical adequacy of Maze-CBM for screening and progress monitoring students in Grades 6 through 12. Additionally, only two studies (McMaster, Wayman, & Cao, 2006; Pierce, McMaster, & Deno, 2010) examined the technical adequacy of Maze-CBM for high school students. 
A second finding is that the reliability of Maze-CBM is often below acceptable levels for making screening decisions or progress monitoring decisions (.80 and above and .90 and above, respectively; Salvia, Ysseldyke, & Bolt, 2007) for secondary students. A third finding is that Maze-CBM scores show promise of being a valid screening tool for reading ability of secondary students. Finally, I found that the genre of the text used in the Maze-CBM assessment does impact scores on Maze-CBM for students in Grades 9 and 10.
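Screening validity of the kind examined in the second paper is commonly summarized by the area under the ROC curve. A minimal sketch with hypothetical scores (not the dissertation's data or its exact analysis):

```python
import numpy as np

def auc(scores_at_risk, scores_not_at_risk):
    """Area under the ROC curve via pairwise comparison (equivalent to
    the Mann-Whitney U statistic): the probability that a randomly
    chosen not-at-risk student outscores a randomly chosen at-risk
    student on the screener, with ties counted as half."""
    a = np.asarray(scores_at_risk, dtype=float)[:, None]
    b = np.asarray(scores_not_at_risk, dtype=float)[None, :]
    return float(np.mean(b > a) + 0.5 * np.mean(b == a))

# Hypothetical Maze-CBM-style scores for two groups of students:
example = auc([12, 15, 18, 20], [19, 22, 25, 30])
```

An AUC of 0.5 means the screener separates the groups no better than chance, while values approaching 1.0 indicate scores that reliably rank not-at-risk students above at-risk students; cut scores are then chosen along the curve to trade sensitivity against specificity.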

Relevance:

80.00%

Publisher:

Abstract:

The design of molecular sensors plays a very important role within nanotechnology, especially in the development of devices for biomedical applications. Biosensors can be classified according to various criteria, such as the type of interaction established between the recognition element and the analyte, or the type of signal detection from the analyte (transduction). When Raman spectroscopy is used as an optical transduction technique, the variations in the Raman signal due to the physical or chemical interaction between the analyte and the recognition element have to be detected. Therefore, any significant improvement in the amplification of the optical sensor signal represents a breakthrough in the design of molecular sensors. In this sense, Surface-Enhanced Raman Spectroscopy (SERS) involves an enormous enhancement of the Raman signal from a molecule in the vicinity of a metal surface. The main objective of this work is to evaluate the effect of a monolayer of graphene oxide (GO) on the distribution of metal nanoparticles (NPs) and on the global SERS enhancement of p-aminothiophenol (pATP) and 4-mercaptobenzoic acid (4MBA) adsorbed on this substrate. These aromatic bifunctional molecules are able to interact with metal NPs and also offer the possibility of linking to biomolecules. Additionally, when Au or Ag NPs are decorated onto graphene sheets, a coupled electromagnetic effect caused by the aggregation of the NPs, together with strong electronic interactions between the Au or Ag NPs and the graphene sheets, is considered responsible for the significantly enhanced Raman signal of the analytes [1-2]. Since there is an increasing need for methods to conduct reproducible and sensitive Raman measurements, Graphene-Enhanced Raman Scattering (GERS) is emerging as an important method [3].