872 results for Signal detection theory
Abstract:
Disturbances in reward processing have been implicated in bulimia nervosa (BN). Abnormalities in processing reward-related stimuli might be linked to dysfunctions of the catecholaminergic neurotransmitter system, but findings have been inconclusive. A powerful way to investigate the relationship between catecholaminergic function and behavior is to examine behavioral changes in response to experimental catecholamine depletion (CD). The purpose of this study was to uncover putative catecholaminergic dysfunction in remitted subjects with BN who performed a reinforcement-learning task after CD. CD was achieved by oral alpha-methyl-para-tyrosine (AMPT) in 19 unmedicated female subjects with remitted BN (rBN) and 28 demographically matched healthy female controls (HC). For sham depletion, identical capsules containing diphenhydramine were administered. The study design consisted of a randomized, double-blind, placebo-controlled crossover, single-site experimental trial. The main outcome measure was reward learning in a probabilistic reward task analyzed using signal-detection theory. Secondary outcome measures included self-report assessments, including the Eating Disorder Examination-Questionnaire. Relative to healthy controls, rBN subjects were characterized by blunted reward learning in the AMPT condition but not in the placebo condition. Highlighting the specificity of these findings, groups did not differ in their ability to perceptually distinguish between stimuli. Increased CD-induced anhedonic (but not eating disorder) symptoms were associated with a reduced response bias toward the more frequently rewarded stimulus. In conclusion, under CD, rBN subjects showed reduced reward learning compared with healthy control subjects. These deficits reveal a disturbance of the central reward-processing systems in rBN related to altered brain catecholamine levels, which might reflect a trait-like deficit increasing vulnerability to BN.
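The response-bias and discriminability measures referred to here are usually derived with signal-detection formulas applied to correct and incorrect responses toward the more frequently rewarded ("rich") and less frequently rewarded ("lean") stimulus. A minimal sketch of that standard computation (illustrative only; the study's exact analysis pipeline is not reproduced here):

import numpy as np

def reward_learning_measures(rich_correct, rich_incorrect, lean_correct, lean_incorrect):
    # Response bias (log b) and discriminability (log d) as commonly computed for
    # probabilistic reward tasks under signal-detection theory; 0.5 is added to
    # every cell to avoid division by zero.
    rc, ri = rich_correct + 0.5, rich_incorrect + 0.5
    lc, li = lean_correct + 0.5, lean_incorrect + 0.5
    log_b = 0.5 * np.log10((rc * li) / (ri * lc))  # bias toward the rich stimulus
    log_d = 0.5 * np.log10((rc * lc) / (ri * li))  # ability to tell the stimuli apart
    return log_b, log_d

# Example: a participant who favours the more frequently rewarded stimulus
print(reward_learning_measures(80, 20, 60, 40))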
Abstract:
We investigated the influence of playing a video game on children’s ability to distinguish between fantasy and reality. School-age children played a platform game for 15 min and then performed a fantasy/reality distinction task in which they were to judge whether images (from the platform game and from other games) were fantasy images or reality images. Unlike those in the control group (who played a memory game), the children in the experimental group showed a response bias toward mistakenly classifying reality images from the video game as fantasy images (as determined by means of an analysis based on signal detection theory). We conclude that playing the video game exerted a short-term influence on children’s ability to distinguish between fantasy and reality.
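The response-bias analysis mentioned above is typically reported with the standard yes/no signal-detection measures, sensitivity d' and criterion c. A brief illustrative computation with hypothetical counts (not the study's data):

from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Sensitivity (d') and criterion (c) for a yes/no classification: here,
    # "fantasy" responses to fantasy images count as hits and "fantasy"
    # responses to reality images count as false alarms. A log-linear
    # correction keeps the rates away from 0 and 1.
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))  # negative c = bias toward "fantasy"
    return d_prime, criterion

# Example: a child who misclassifies many reality images from the game as fantasy
print(sdt_measures(hits=40, misses=10, false_alarms=25, correct_rejections=25))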
Abstract:
On the basis of the convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM), optimal receiver operating characteristics (ROCs) have been derived analytically for an intact two-layer autoassociative Hopfield network. A method for explicitly taking into account the a priori probabilities of alternative hypotheses about the structure of the information initiating memory trace retrieval is introduced, together with modified ROCs (mROCs: a posteriori probabilities of correct recall vs. false alarm probability). The comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, and in this way the intensities of cues used in the corresponding experiments may be estimated. It has been found that basic ROC properties, which are among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM.
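As a rough illustration of the relationship between an ROC and an mROC-style curve, an equal-variance signal-detection ROC can be reweighted by a prior via Bayes' rule to give posterior probabilities of correct recall. This generic sketch is not the NNAMM derivation itself, and the d' and prior values are arbitrary illustrative choices:

import numpy as np
from scipy.stats import norm

d_prime = 1.5            # illustrative separation between old and new memory strengths
prior = 0.3              # a priori probability that the cue matches a stored trace
criteria = np.linspace(-3.0, 4.0, 200)

hit_rate = norm.cdf(d_prime - criteria)   # P("old" | old item) along the ROC
fa_rate = norm.cdf(-criteria)             # P("old" | new item)

# mROC-style ordinate: posterior probability of correct recall, given the prior
posterior = prior * hit_rate / (prior * hit_rate + (1.0 - prior) * fa_rate)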
Abstract:
Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al., 2012) and found it to be well described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, and more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target and then sums over signal area. Modelling shows our results to be consistent with a wide-field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
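The -1/4 slope has a simple rationale under a noisy energy model: square-law transduction makes the mean response grow with the square of contrast, while summing independent noise over n locations makes its standard deviation grow with sqrt(n), so d' grows roughly as contrast squared times sqrt(n) and the threshold contrast falls as n to the power -1/4. A toy Monte Carlo check of that scaling (my own illustration, not the authors' model code):

import numpy as np

rng = np.random.default_rng(0)

def dprime(contrast, n_loc, n_trials=4000):
    # Energy integrator: square the local responses (signal plus unit-variance
    # noise) and sum over all stimulated locations.
    sig = ((contrast + rng.standard_normal((n_trials, n_loc))) ** 2).sum(axis=1)
    noi = (rng.standard_normal((n_trials, n_loc)) ** 2).sum(axis=1)
    return (sig.mean() - noi.mean()) / np.sqrt(0.5 * (sig.var() + noi.var()))

# Threshold contrast (d' = 1) falls roughly as area^(-1/4): slope -1/4 on log-log axes
contrasts = np.linspace(0.05, 2.0, 40)
for n_loc in (4, 16, 64, 256):
    d = np.array([dprime(c, n_loc) for c in contrasts])
    print(n_loc, round(contrasts[np.argmin(np.abs(d - 1.0))], 3))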
Abstract:
Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another as to magnitude along some continuum. The observer must report in which interval the stimulus had the larger magnitude. The standard difference model used in signal detection theory analyses posits that order of presentation should not affect the results of the comparison, a property known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented illustrating the conventional failure of the balance condition and testing the hypothesis that time-order errors result from contamination by the factors included in the model.
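For a concrete instance of the balance condition under the standard difference model, assume the two internal representations are independent Gaussians with a common variance; the probability of judging the comparison larger then depends only on the mean difference, regardless of which interval came first. Attenuating the first interval's trace, used here as a crude stand-in for temporary desensitization (my own simplification, not the paper's full model), breaks that symmetry:

import numpy as np
from scipy.stats import norm

sigma = 1.0
mu_standard, mu_comparison = 10.0, 10.5

# Standard difference model: the decision variable is the difference of the two
# noisy magnitudes, so order of presentation does not matter (balance condition).
p_balanced = norm.cdf((mu_comparison - mu_standard) / (np.sqrt(2) * sigma))

# Attenuate whichever stimulus occupies the first interval to mimic a time-order error.
attenuation = 0.95
p_comparison_second = norm.cdf((mu_comparison - attenuation * mu_standard) / (np.sqrt(2) * sigma))
p_comparison_first = norm.cdf((attenuation * mu_comparison - mu_standard) / (np.sqrt(2) * sigma))
print(p_balanced, p_comparison_second, p_comparison_first)  # the last two now differ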
Abstract:
Supported by Royal Society of London (University Research Fellowship), Medical Research Council (New Investigator Research Grant) and CNRS.
Abstract:
Stimuli that cannot be perceived (i.e., that are subliminal) can still elicit neural responses in an observer, but can such stimuli influence behavior and higher-order cognition? Empirical evidence for such effects has periodically been accepted and rejected over the last six decades. Today, many psychologists seem to consider such effects well-established and recent studies have extended the power of subliminal processing to new limits. In this thesis, I examine whether this shift in zeitgeist is matched by a shift in evidential strength for the phenomenon. This thesis consists of three empirical studies involving more than 250 participants, a simulation study, and a quantitative review. The conclusion based on these efforts is that several methodological, statistical, and theoretical issues remain in studies of subliminal processing. These issues mean that claimed subliminal effects might be caused by occasional or weak percepts (given the experimenters’ own definitions of perception) and that it is still unclear what evidence there is for the cognitive processing of subliminal stimuli. New data are presented suggesting that even in conditions traditionally claimed as “subliminal”, occasional or weak percepts may in fact influence cognitive processing more strongly than do the physical stimuli, possibly leading to reversed priming effects. I also summarize and provide methodological, statistical, and theoretical recommendations that could benefit future research aspiring to provide solid evidence for subliminal cognitive processing.
Abstract:
Recent legislation and initiatives set forth high academic expectations for all high school graduates in the area of reading (National Governors Association Center for Best Practices, 2010; Every Student Succeeds Act, 2015). To determine which students need additional support to meet these reading standards, teachers can conduct universal screening using formative assessments. Maze Curriculum-Based Measurement (Maze-CBM) is a commonly used screening and progress monitoring assessment that the National Center on Intensive Intervention (2013) and the Center on Instruction (Torgesen & Miller, 2009) recommend. Despite the recommendation to use Maze-CBM, little research has been conducted on the reliability and validity of Maze-CBM for measuring reading ability for students at the secondary level (Mitchell & Wexler, 2016). In the papers included in this dissertation, I present an initial investigation into the use of Maze-CBM for secondary students. In the first paper, I investigated prior studies of Maze-CBM for students in Grades 6 through 12. Next, in the second paper, I investigated the alternate-form reliability and validity for screening students in Grades 9 and 10 using signal detection theory methods. In the third paper, I examined the effect of genre on Maze-CBM scores with a sample of students in Grades 9 and 10 using multilevel modeling. When writing these three papers, I discovered several important findings related to Maze-CBM. First, there are few studies that have investigated the technical adequacy of Maze-CBM for screening and progress monitoring students in Grades 6 through 12. Additionally, only two studies (McMaster, Wayman, & Cao, 2006; Pierce, McMaster, & Deno, 2010) examined the technical adequacy of Maze-CBM for high school students. A second finding is that the reliability of Maze-CBM is often below acceptable levels for making screening decisions or progress monitoring decisions (.80 and above and .90 and above, respectively; Salvia, Ysseldyke, & Bolt, 2007) for secondary students. A third finding is that Maze-CBM scores show promise of being a valid screening tool for reading ability of secondary students. Finally, I found that the genre of the text used in the Maze-CBM assessment does impact scores on Maze-CBM for students in Grades 9 and 10.
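The signal detection theory methods referred to for screening decisions typically amount to ROC analysis of screening scores against a later reading outcome, reporting the area under the curve along with sensitivity and specificity at candidate cut scores. A small illustrative example with made-up numbers (not data from the dissertation):

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical Maze-CBM scores and a binary indicator of later reading difficulty
maze_scores = np.array([12, 25, 8, 31, 22, 18, 15, 28, 10, 35, 17, 24])
at_risk = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0])

# Overall screening accuracy as area under the ROC curve; low scores flag risk,
# hence the negation so that larger values indicate greater risk.
auc = roc_auc_score(at_risk, -maze_scores)

# Sensitivity and specificity at one candidate cut score
cut = 20
flagged = maze_scores < cut
sensitivity = np.mean(flagged[at_risk == 1])
specificity = np.mean(~flagged[at_risk == 0])
print(auc, sensitivity, specificity)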
Abstract:
In this paper, we consider the application of belief propagation (BP) to achieve near-optimal signal detection in large multiple-input multiple-output (MIMO) systems at low complexities. Large-MIMO architectures based on spatial multiplexing (V-BLAST) as well as non-orthogonal space-time block codes (STBC) from cyclic division algebra (CDA) are considered. We adopt graphical models based on Markov random fields (MRF) and factor graphs (FG). In the MRF-based approach, we use pairwise compatibility functions although the graphical models of MIMO systems are fully/densely connected. In the FG approach, we employ a Gaussian approximation (GA) of the multi-antenna interference, which significantly reduces the complexity while achieving very good performance for large dimensions. We show that i) both MRF- and FG-based BP approaches exhibit large-system behavior, where performance moves increasingly closer to optimal as the number of dimensions grows, and ii) damping of messages/beliefs significantly improves the bit error performance.
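To make the Gaussian-approximation idea concrete, a simplified real-valued BPSK sketch is given below: in each iteration the interference seen by every symbol is summarised by its mean and variance under the current beliefs, per-symbol log-likelihood ratios are recomputed, and the messages are damped. This is my own toy rendering of the general approach, not the paper's exact factor-graph formulation:

import numpy as np

def ga_bp_detect(y, H, sigma2, n_iter=20, damping=0.5):
    # Iterative BPSK detection with a Gaussian approximation (GA) of the
    # multi-antenna interference and damped belief updates.
    n_rx, n_tx = H.shape
    llr = np.zeros(n_tx)                      # per-symbol log-likelihood ratios
    for _ in range(n_iter):
        x_mean = np.tanh(llr / 2.0)           # E[x_k] under current beliefs (BPSK)
        x_var = 1.0 - x_mean ** 2             # Var[x_k]
        llr_new = np.empty(n_tx)
        for k in range(n_tx):
            hk = H[:, k]
            # interference affecting symbol k, summarised per receive antenna
            mu_int = H @ x_mean - hk * x_mean[k]
            var_int = (H ** 2) @ x_var - hk ** 2 * x_var[k] + sigma2
            # combine the Gaussian-approximated per-antenna observations
            llr_new[k] = 2.0 * np.sum(hk * (y - mu_int) / var_int)
        llr = (1.0 - damping) * llr + damping * llr_new   # damped message update
    return np.where(llr >= 0.0, 1.0, -1.0)    # hard decisions

# Example: 32x32 real-valued channel with BPSK symbols at roughly 10 dB SNR
rng = np.random.default_rng(0)
H = rng.standard_normal((32, 32)) / np.sqrt(32)
x = rng.choice([-1.0, 1.0], size=32)
sigma2 = 0.1
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(32)
print(np.mean(ga_bp_detect(y, H, sigma2) != x))   # bit error fraction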
Abstract:
In this paper, we propose a low-complexity algorithm based on the Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in the conventional MCMC algorithm and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexities is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for large numbers of BS antennas and users (e.g., 64, 128, 256 BS antennas/users).
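A toy rendering of the randomized-MCMC idea follows, using real-valued symbols for simplicity (a complex MIMO model can be decomposed into an equivalent real-valued one). The mixing probability, restart rule and sweep count here are placeholders rather than the paper's tuned choices:

import numpy as np

def rmcmc_detect(y, H, sigma2, symbols, n_sweeps=100, n_restarts=4, q_mix=None):
    # Randomized MCMC detection: each coordinate update makes a probabilistic
    # choice between a Gibbs draw and a purely random symbol (which helps avoid
    # stalling at high SNR), with multiple restarts keeping the best candidate
    # vector found so far.
    n_tx = H.shape[1]
    if q_mix is None:
        q_mix = 1.0 / (2.0 * n_tx)            # illustrative mixing probability
    rng = np.random.default_rng(0)
    cost = lambda x: np.sum((y - H @ x) ** 2)
    best_x, best_cost = None, np.inf
    for _ in range(n_restarts):
        x = rng.choice(symbols, size=n_tx)    # random restart
        for _ in range(n_sweeps):
            for k in range(n_tx):
                if rng.random() < q_mix:
                    x[k] = rng.choice(symbols)            # random-sampling branch
                else:
                    costs = np.empty(len(symbols))        # Gibbs branch
                    for j, s in enumerate(symbols):
                        x[k] = s
                        costs[j] = cost(x)
                    p = np.exp(-(costs - costs.min()) / sigma2)
                    x[k] = rng.choice(symbols, p=p / p.sum())
                c = cost(x)
                if c < best_cost:                          # keep the best vector visited
                    best_cost, best_x = c, x.copy()
    return best_x

For 4-PAM, for instance, symbols would be np.array([-3.0, -1.0, 1.0, 3.0]).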
Abstract:
Traditional mathematical tools, like Fourier analysis, have proven efficient when analyzing steady-state distortions; however, the growing utilization of electronically controlled loads and the new dynamics they introduce into industrial-environment signals suggest the need for a powerful tool to analyze non-stationary distortions, overcoming the limitations of frequency-domain techniques. Wavelet Theory provides a new approach to harmonic analysis, focusing on the decomposition of a signal into non-sinusoidal components that are translated and scaled in time, generating a time-frequency basis. The correct choice of the waveshape to be used in the decomposition is very important and is discussed in this work. A brief theoretical introduction to the Wavelet Transform is presented and some cases (practical and simulated) are discussed. Distortions commonly found in industrial environments, such as the current waveform of a switched-mode power supply and the input phase voltage waveform of a motor fed by an inverter, are analyzed using Wavelet Theory. Applications such as extracting the fundamental frequency of a non-sinusoidal current signal, or using the compactness of the representation to detect non-repetitive disturbances, are presented.
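A brief sketch of the kind of analysis described, using the PyWavelets package: a multilevel discrete wavelet decomposition of a distorted current waveform, where the deepest approximation isolates the fundamental and the fine-scale details localise a non-repetitive disturbance. The 'db10' mother wavelet and the sampling setup are illustrative choices only, since the text stresses that the waveshape must be chosen carefully:

import numpy as np
import pywt

# Toy distorted current: 60 Hz fundamental plus 5th and 7th harmonics and a
# short non-repetitive disturbance, sampled at 15.36 kHz.
fs = 15360
t = np.arange(0, 0.2, 1.0 / fs)
current = (np.sin(2 * np.pi * 60 * t)
           + 0.3 * np.sin(2 * np.pi * 300 * t)
           + 0.2 * np.sin(2 * np.pi * 420 * t))
current[1500:1520] += 0.8                       # transient disturbance

# Multilevel discrete wavelet decomposition (time-frequency basis)
coeffs = pywt.wavedec(current, 'db10', level=6)

# The level-6 approximation spans roughly 0-120 Hz, so reconstructing it alone
# gives an estimate of the fundamental, free of the higher harmonics.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
fundamental = pywt.waverec(approx_only, 'db10')[:len(current)]

# Finest-scale detail coefficients localise the non-repetitive disturbance in time.
finest_detail = coeffs[-1]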
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stably operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model.
The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard. The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
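As a rough illustration of the Energy Based Detector's rationale (a toy sketch, not the thesis's algorithm or threshold derivation): the energy of the measured signal in successive windows is compared against a threshold set from the statistics of windows recorded during normal, stable operation, since lightly damped modes decay slowly and therefore raise the windowed disturbance energy.

import numpy as np

def energy_change_detector(signal, fs, window_s=10.0, n_baseline=20, k=3.0):
    # Split the measurement into fixed windows, compute each window's energy,
    # fit a simple statistical threshold to the baseline (normal-operation)
    # windows, and flag windows whose energy exceeds it.
    n = int(window_s * fs)
    n_windows = len(signal) // n
    energy = np.array([np.sum(signal[i * n:(i + 1) * n] ** 2) for i in range(n_windows)])
    baseline = energy[:n_baseline]                    # assumed normal operation
    threshold = baseline.mean() + k * baseline.std()  # illustrative threshold rule
    alarms = np.flatnonzero(energy > threshold)       # windows flagged as suspect
    return energy, threshold, alarms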
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, with these signals being referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest-growing industries. Since that time DSP has not only impacted traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the Lecture Notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
Abstract:
We propose a multi-layer spectrum sensing optimisation algorithm to maximise sensing efficiency by computing the optimal sensing and transmission durations for a fast-changing, dynamic primary user. Dynamic primary user traffic is modelled as a random process, where the primary user changes states during both the sensing period and the transmission period to reflect a more realistic scenario. Furthermore, we formulate joint constraints to correctly reflect interference to the primary user and lost opportunity for the secondary user during the transmission period. Finally, we implement a novel duty-cycle-based detector that is optimised with respect to primary user traffic to accurately detect primary user activity during the sensing period. Simulation results show that, unlike currently used detection models, the proposed algorithm can jointly optimise the sensing and transmission durations to simultaneously satisfy the optimisation constraints for the considered primary user traffic.
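The trade-off being optimised can be seen with a toy Monte Carlo energy detector: lengthening the sensing period improves the probability of detecting primary-user activity but shrinks the fraction of the frame left for secondary transmission. This static-PU sketch is only a caricature of the problem; the paper's duty-cycle-based detector additionally models primary-user state changes within both the sensing and transmission periods.

import numpy as np

rng = np.random.default_rng(1)
fs = 100e3               # sampling rate (samples/s), illustrative
frame_s = 0.1            # total frame duration
snr = 10 ** (-10 / 10)   # -10 dB primary-user SNR at the sensing receiver
target_pfa = 0.1

def detection_probability(sense_s, n_mc=2000):
    # Energy detector: threshold set empirically for the target false-alarm
    # probability; detection probability estimated with the PU present.
    n = int(sense_s * fs)
    noise_energy = np.sum(rng.standard_normal((n_mc, n)) ** 2, axis=1)
    threshold = np.quantile(noise_energy, 1.0 - target_pfa)
    rx = np.sqrt(snr) + rng.standard_normal((n_mc, n))        # PU active throughout
    return np.mean(np.sum(rx ** 2, axis=1) > threshold)

for sense_ms in (1, 2, 5, 10, 20):
    sense_s = sense_ms / 1000.0
    efficiency = (frame_s - sense_s) / frame_s   # fraction of the frame left to the SU
    print(sense_ms, round(detection_probability(sense_s), 3), round(efficiency, 3))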