921 results for Signal detection
Abstract:
One major assumption in all orthogonal space-time block coding (O-STBC) schemes is that the channel remains static over the entire length of the codeword. However, time-selective fading channels do exist, and in such cases conventional O-STBC detectors can suffer from a large error floor at high signal-to-noise ratio (SNR). This paper addresses this issue by introducing a parallel interference cancellation (PIC) based detector for G_i-coded systems (i = 3 and 4).
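The abstract gives no detector equations; as a hedged sketch of the parallel interference cancellation idea it builds on, here is a generic multi-stage PIC loop for a linear model y = Hs + n with BPSK symbols (the model and all names are our simplification; the G_i code structure and the time-selective channel are omitted):

    import numpy as np

    def pic_detect(y, H, n_stages=3):
        """Multi-stage parallel interference cancellation (PIC) for a
        linear model y = H s + n with BPSK symbols s in {-1, +1}.
        Stage 0 is a matched-filter estimate; each later stage cancels,
        for every symbol in parallel, the interference reconstructed
        from the previous stage's decisions, then re-detects."""
        K = H.shape[1]
        s_hat = np.sign(H.T @ y)       # initial matched-filter decisions
        s_hat[s_hat == 0] = 1.0
        for _ in range(n_stages):
            s_new = np.empty(K)
            for k in range(K):
                # reconstruct and subtract interference from all other symbols
                interference = H @ s_hat - H[:, k] * s_hat[k]
                z = H[:, k] @ (y - interference)
                s_new[k] = 1.0 if z >= 0 else -1.0
            s_hat = s_new
        return s_hat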
Abstract:
Fuzzy signal detection analysis can be a useful complementary technique to traditional signal detection theory analysis methods, particularly in applied settings. For example, traffic situations are better conceived as lying on a continuum from no potential for hazard to high potential, rather than as either having or lacking hazard potential. This study examined the relative contribution of sensitivity and response bias to explaining differences in the hazard perception performance of novice and experienced drivers, and the effect of a training manipulation. Novice drivers and experienced drivers were compared (N = 64). Half the novices received training, while the experienced drivers and the other half of the novices remained untrained. Participants completed a hazard perception test and rated potential for hazard in occluded scenes. The response latency of participants on the hazard perception test replicated previous findings of experienced/novice differences and trained/untrained differences. Fuzzy signal detection analysis of both the hazard perception task and the occluded rating task suggested that response bias may be more central to hazard perception test performance than sensitivity, with trained and experienced drivers responding faster and with a more liberal bias than untrained novices. Implications for driver training and the hazard perception test are discussed.
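For readers unfamiliar with the method: in fuzzy signal detection analysis (e.g., Parasuraman, Masalonis, & Hancock, 2000), both the state of the world and the response take values in [0, 1], so each trial contributes partial degrees of hit, miss, false alarm, and correct rejection. A minimal sketch of those assignments (variable names ours):

    import numpy as np

    def fuzzy_sdt_rates(s, r):
        """Fuzzy SDT outcome assignment (Parasuraman et al., 2000).
        s: degree to which each event is a 'signal' (e.g., hazard), in [0, 1]
        r: degree to which the response asserted 'signal', in [0, 1]
        Each trial contributes partially to all four outcome categories,
        and the four degrees sum to 1 per trial."""
        s, r = np.asarray(s, float), np.asarray(r, float)
        hit = np.minimum(s, r)
        miss = np.maximum(s - r, 0)
        fa = np.maximum(r - s, 0)
        cr = np.minimum(1 - s, 1 - r)
        hit_rate = hit.sum() / (hit + miss).sum()  # hits over total signal content
        fa_rate = fa.sum() / (fa + cr).sum()       # FAs over total noise content
        return hit_rate, fa_rate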
Abstract:
An efficient new Bayesian inference technique is employed for studying critical properties of the Ising linear perceptron and for signal detection in code division multiple access (CDMA). The approach is based on a recently introduced message-passing technique for densely connected systems. Here we study both critical and non-critical regimes. Results obtained in the non-critical regime give rise to a highly efficient signal detection algorithm in the context of CDMA, while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite-size effects are also studied.
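The message-passing equations themselves are not given in the abstract; as a hedged illustration of iterative detection for synchronous CDMA, y = Sb + n with bits b_k in {-1, +1}, here is a simple soft interference-cancellation loop in the spirit of dense-graph message passing (not the authors' exact algorithm; names ours):

    import numpy as np

    def soft_ic_cdma(y, S, sigma2, n_iter=10):
        """Iterative soft interference cancellation for synchronous CDMA,
        y = S b + n, with bits b_k in {-1, +1} and unit-norm spreading
        sequences as the columns of S (shape (N, K)).
        Each user's soft estimate m_k = E[b_k] is refreshed from a matched
        filter applied after cancelling every other user's soft estimate."""
        K = S.shape[1]
        m = np.zeros(K)                               # soft bit estimates
        for _ in range(n_iter):
            for k in range(K):
                residual = y - S @ m + S[:, k] * m[k] # keep own contribution
                z = S[:, k] @ residual                # matched-filter statistic
                m[k] = np.tanh(z / sigma2)            # posterior mean of b_k
        return np.sign(m)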
Abstract:
Similar to classic Signal Detection Theory (SDT), the recent optimal Binary Signal Detection Theory (BSDT), and the Neural Network Assembly Memory Model (NNAMM) based on it, can successfully reproduce Receiver Operating Characteristic (ROC) curves, although the BSDT/NNAMM parameters (intensity of cue and neuron threshold) and the classic SDT parameters (perception distance and response bias) are essentially different. In the present work, BSDT/NNAMM optimal likelihood and posterior probabilities are analytically analyzed and used to generate ROCs, modified (posterior) mROCs, and the optimal overall likelihood and posterior. It is shown that, for describing basic discrimination experiments in psychophysics within the BSDT, a 'neural space' can be introduced in which sensory stimuli are represented as neural codes and decision processes are defined; that the BSDT's isobias curves can simultaneously be interpreted as universal psychometric functions satisfying the Neyman-Pearson objective; that the just noticeable difference (jnd) can be defined and interpreted as an atom of experience; and that near-neutral values of bias are observers' natural choice. The uniformity (no-priming) hypothesis, concerning the 'in-mind' distribution of false-alarm probabilities during ROC or overall-probability estimations, is introduced. The sensitivity, bias, ROCs, and decision spaces of the BSDT and classic SDT are compared.
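The BSDT's own likelihoods cannot be reconstructed from the abstract alone; for reference, here is a minimal sketch of how the classic equal-variance Gaussian SDT ROC it is compared against is generated by sweeping the response criterion (symbols ours):

    import numpy as np
    from scipy.stats import norm

    def classic_sdt_roc(d_prime, n_points=200):
        """ROC for equal-variance Gaussian SDT: noise ~ N(0, 1),
        signal ~ N(d', 1). Sweeping the criterion c traces out
        (FA, hit) = (Phi(-c), Phi(d' - c))."""
        c = np.linspace(-4.0, 4.0 + d_prime, n_points)
        fa = norm.cdf(-c)            # P(respond 'yes' | noise)
        hit = norm.cdf(d_prime - c)  # P(respond 'yes' | signal)
        return fa, hit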
Abstract:
Given the growing number of wrongful convictions involving faulty eyewitness evidence and the strong reliance by jurors on eyewitness testimony, researchers have sought to develop safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known regarding the psychological processes that govern identification responses. The purpose of the current research was to expand the theoretical knowledge of eyewitness identification decisions by exploring two separate memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether it might add significantly to the understanding of eyewitness memory. In two separate experiments, both an encoding and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through the use of remember-know judgments (Gardiner & Richardson-Klavehn, 2000). In Experiment 1, the effects of divided attention and lineup presentation format (simultaneous vs. sequential) were examined. In Experiment 2, perceptual distance and lineup response deadline were examined. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality, while response criterion and know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult, there was a shift toward lenient choosing and more reliance on familiarity. The application of signal detection theory and dual-process theory in the current experiments produced predictable results on both system and estimator variables. These theories were also compared to measures of general confidence, calibration, and diagnosticity. The application of the additional confidence measures in conjunction with signal detection theory and dual-process theory gave a more in-depth explanation than either theory alone. Therefore, the general conclusion is that eyewitness identifications can be understood in a more complete manner by applying theory and examining confidence. Future directions and policy implications are discussed.
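The abstract does not state how the dual-process estimates were computed; one standard way of converting remember-know judgments into recollection and familiarity estimates is the independence remember-know (IRK) correction, sketched below (an assumption for illustration, not necessarily the procedure used here):

    def irk_estimates(p_remember, p_know):
        """Independence remember-know (IRK) estimates.
        Recollection is taken as the 'remember' rate; familiarity is the
        'know' rate conditional on recollection having failed:
            R = P(remember),  F = P(know) / (1 - R)
        (requires p_remember < 1)."""
        recollection = p_remember
        familiarity = p_know / (1.0 - p_remember)
        return recollection, familiarity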
Abstract:
Background: Financial abuse of elders is an under-acknowledged problem, and professionals' judgements contribute both to the prevalence of abuse and to the ability to prevent and intervene. In the absence of a definitive "gold standard" for the judgement, it is desirable to try to bring novice professionals' judgemental risk thresholds to the level of competent professionals as quickly and effectively as possible. This study aimed to test whether a training intervention was able to bring novices' risk thresholds for financial abuse in line with expert opinion. Methods: A signal detection analysis, within a randomised controlled trial of an educational intervention, was undertaken to examine the effect on the ability of novices to efficiently detect financial abuse. Novices (n = 154) and experts (n = 33) judged "certainty of risk" across 43 scenarios; whether a scenario constituted a case of financial abuse or not was a function of expert opinion. The novices were randomised to receive either an on-line educational intervention to improve financial abuse detection (n = 78) or no intervention (control group, n = 76). Both groups examined 28 scenarios of abuse (11 "signal" scenarios of risk and 17 "noise" scenarios of no risk). After the intervention group had received the on-line training, both groups examined 15 further scenarios (5 "signal" and 10 "noise" scenarios). Results: Experts were more certain than the novices, both pre-intervention (mean 70.61 vs. 58.04) and post-intervention (mean 70.84 vs. 63.04), and more consistent. The intervention group (mean 64.64) was more certain of abuse post-intervention than the control group (mean 61.41, p = 0.02). Signal detection analysis of sensitivity (d′) and bias (C) revealed that this was due to the intervention shifting the novices' tendency towards saying "at risk" (post-intervention C = -0.34) and away from their pre-intervention level of bias (C = -0.12). Receiver operating characteristic curves revealed more efficient judgements in the intervention group. Conclusion: An educational intervention can improve judgements of financial abuse amongst novice professionals.
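For reference, the sensitivity and bias measures reported above have standard equal-variance Gaussian definitions; a minimal sketch (the example rates are invented, not the study's data):

    from scipy.stats import norm

    def dprime_criterion(hit_rate, fa_rate):
        """Equal-variance Gaussian SDT measures:
            d' = z(H) - z(F)          (sensitivity)
            C  = -(z(H) + z(F)) / 2   (response bias; negative = liberal)"""
        zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return zh - zf, -(zh + zf) / 2.0

    # e.g., a liberal responder: high hits but also high false alarms
    d, c = dprime_criterion(0.85, 0.40)  # d' ≈ 1.29, C ≈ -0.39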
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and such signals are referred to as analog signals. Before the advent of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size, and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering, and various other applications. This book is based on the Lecture Notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators, and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
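Among the applications listed is signal detection in noise; as a flavour of that topic, here is a minimal sketch of the classic matched-filter detector (a textbook-standard technique, not code from the book):

    import numpy as np

    def matched_filter_detect(x, template, threshold):
        """Classic matched-filter detection of a known pulse in noise:
        correlate the received samples with the known template and
        compare the peak statistic against a threshold."""
        stat = np.correlate(x, template, mode="valid")
        peak = float(np.max(np.abs(stat)))
        return peak > threshold, peak

    # hypothetical example: a sinusoidal pulse buried in white Gaussian noise
    rng = np.random.default_rng(0)
    pulse = np.sin(2 * np.pi * 0.1 * np.arange(40))
    x = rng.normal(0.0, 1.0, 500)
    x[200:240] += pulse
    detected, peak = matched_filter_detect(x, pulse, threshold=12.0)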
Abstract:
We propose a multi-layer spectrum sensing optimisation algorithm to maximise sensing efficiency by computing the optimal sensing and transmission durations for a fast-changing, dynamic primary user. Dynamic primary user (PU) traffic is modelled as a random process, where the primary user changes states during both the sensing period and the transmission period, to reflect a more realistic scenario. Furthermore, we formulate joint constraints to correctly reflect interference to the primary user and the lost opportunity of the secondary user during the transmission period. Finally, we implement a novel duty-cycle-based detector that is optimised with respect to PU traffic to accurately detect primary user activity during the sensing period. Simulation results show that, unlike currently used detection models, the proposed algorithm can jointly optimise the sensing and transmission durations to simultaneously satisfy the optimisation constraints for the considered primary user traffic.
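The duty-cycle detector itself is not specified in the abstract; for orientation, here is a minimal sketch of the baseline energy detector commonly used during the sensing period (a generic stand-in, not the proposed detector; names ours):

    import numpy as np
    from scipy.stats import norm

    def energy_detect(samples, noise_var, p_fa=0.01):
        """Baseline energy detector for the sensing period.
        Declares the primary user present if the mean energy of N real
        samples exceeds a threshold chosen for a target false-alarm
        probability (Gaussian approximation of the chi-square statistic):
            T = noise_var * (1 + Q^{-1}(p_fa) * sqrt(2 / N))"""
        N = len(samples)
        energy = np.mean(np.asarray(samples, float) ** 2)
        thresh = noise_var * (1.0 + norm.isf(p_fa) * np.sqrt(2.0 / N))
        return energy > thresh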
Abstract:
In this paper, we consider the application of belief propagation (BP) to achieve near-optimal signal detection in large multiple-input multiple-output (MIMO) systems at low complexity. Large-MIMO architectures based on spatial multiplexing (V-BLAST) as well as non-orthogonal space-time block codes (STBC) from cyclic division algebra (CDA) are considered. We adopt graphical models based on Markov random fields (MRF) and factor graphs (FG). In the MRF-based approach, we use pairwise compatibility functions even though the graphical models of MIMO systems are fully/densely connected. In the FG approach, we employ a Gaussian approximation (GA) of the multi-antenna interference, which significantly reduces the complexity while achieving very good performance for large dimensions. We show that (i) both the MRF and FG based BP approaches exhibit large-system behaviour, with performance moving increasingly close to optimal as the number of dimensions increases, and (ii) damping of messages/beliefs significantly improves the bit error performance.
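Damping, which the authors find significantly improves bit error performance, is a standard BP stabilisation device; schematically (a generic sketch, not the paper's exact message update):

    import numpy as np

    def damped_update(m_old, m_new, alpha=0.5):
        """Damped belief propagation message update: rather than
        replacing a message outright, blend the freshly computed
        message with its previous value to suppress oscillations:
            m <- (1 - alpha) * m_old + alpha * m_new
        alpha = 1 recovers undamped BP; smaller alpha damps more."""
        return (1.0 - alpha) * np.asarray(m_old) + alpha * np.asarray(m_new)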
Abstract:
We develop several novel signal detection algorithms for two-dimensional intersymbol-interference (2-D ISI) channels. The contribution of the paper is two-fold: (1) we extend the one-dimensional maximum a posteriori (MAP) detection algorithm to operate over multiple rows and columns in an iterative manner; we study the performance vs. complexity trade-offs for various algorithmic options, ranging from single row/column non-iterative detection to a multi-row/column iterative scheme, and analyze the performance of the algorithm. (2) We develop a self-iterating 2-D linear minimum mean-squared error (LMMSE) equalizer by extending the 1-D linear equalizer framework, and present an analysis of the algorithm. The iterative multi-row/column detector and the self-iterating equalizer are further connected together within a turbo framework. We characterize the combined 2-D iterative equalization and detection engine through analysis and simulations. The performance of the overall equalizer and detector is near the MAP estimate with tractable complexity, and beats the Marrow-Wolf detector by at least 0.8 dB over certain 2-D ISI channels. The coded performance indicates an SNR gain of about 8 dB over the uncoded 2-D equalizer-detector system.
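For concreteness, a 2-D ISI channel of the kind these detectors target can be modelled as 2-D convolution of the data array with a small interference mask plus additive noise; a minimal sketch (the mask values are hypothetical):

    import numpy as np
    from scipy.signal import convolve2d

    def twod_isi_channel(bits, h, sigma, rng=np.random):
        """2-D ISI channel model: y = h ** x + n, where ** denotes 2-D
        convolution, x is the bipolar data array, h a small ISI mask,
        and n is white Gaussian noise."""
        x = 2.0 * np.asarray(bits, float) - 1.0   # {0,1} -> {-1,+1}
        y = convolve2d(x, h, mode="same")
        return y + sigma * rng.standard_normal(y.shape)

    # hypothetical 3x3 ISI mask with a dominant centre tap
    h = np.array([[0.1, 0.2, 0.1],
                  [0.2, 1.0, 0.2],
                  [0.1, 0.2, 0.1]])
    y = twod_isi_channel(np.random.randint(0, 2, (64, 64)), h, sigma=0.3)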
Abstract:
In this paper, we propose a low-complexity algorithm based on the Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in conventional MCMC algorithms and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple-restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for large numbers of BS antennas and users (e.g., 64, 128, or 256 BS antennas/users).
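A hedged sketch of the randomized sampling idea: in each coordinate update the sampler draws, with some probability, a uniformly random symbol instead of sampling from the Gibbs conditional, which is what lets it escape high-SNR stalling (illustrative only; parameter choices and names are ours, and the multiple-restart logic is omitted):

    import numpy as np

    def r_mcmc_detect(y, H, sigma2, alphabet, n_iter=100, q=None):
        """Randomized MCMC (R-MCMC) detection sketch for y = H x + n,
        with each x_i drawn from a finite alphabet (e.g., QAM symbols).
        With probability q a coordinate is re-drawn uniformly at random
        (the randomization that breaks stalling); otherwise it is
        Gibbs-sampled from p(x_i | rest) ~ exp(-||y - Hx||^2 / sigma2).
        The lowest-cost vector visited is returned."""
        K = H.shape[1]
        q = 1.0 / K if q is None else q
        alphabet = np.asarray(alphabet)
        x = np.random.choice(alphabet, K)
        best, best_cost = x.copy(), np.linalg.norm(y - H @ x) ** 2
        for _ in range(n_iter):
            for i in range(K):
                if np.random.rand() < q:
                    x[i] = np.random.choice(alphabet)  # random move
                else:
                    costs = np.empty(len(alphabet))
                    for j, a in enumerate(alphabet):
                        x_try = x.copy()
                        x_try[i] = a
                        costs[j] = np.linalg.norm(y - H @ x_try) ** 2
                    p = np.exp(-(costs - costs.min()) / sigma2)
                    x[i] = np.random.choice(alphabet, p=p / p.sum())
                cost = np.linalg.norm(y - H @ x) ** 2
                if cost < best_cost:
                    best, best_cost = x.copy(), cost
        return best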
Abstract:
Two-dimensional magnetic recording (TDMR) is a promising technology for next-generation magnetic storage systems, based on a systems-level framework with sophisticated signal processing at its core. The TDMR channel suffers from severe jitter noise, along with electronic noise, that needs to be mitigated during signal detection and recovery. Recently, we developed noise-prediction-based techniques coupled with advanced signal detectors to work with these systems. However, it is important to understand the role of harmful patterns that can be avoided during the encoding process. In this paper, we investigate the Voronoi-based media model to study harmful patterns over multitrack shingled recording systems. Through realistic quasi-micromagnetic simulation studies, we identify 2-D data patterns that contribute to high media noise. We examine the generic Voronoi model and present our analysis of multitrack detection with constrained coded data. We show that the 2-D constraints imposed on input patterns result in an order-of-magnitude improvement in the bit-error rate for TDMR systems. The use of constrained codes can also reduce the complexity of 2-D intersymbol interference (ISI) signal detection, since a smaller 2-D ISI span can be accommodated at the cost of a nominal code-rate loss. However, a system must be designed carefully so that the rate loss incurred by a 2-D constraint does not offset the detector performance gain due to more distinguishable readback signals.
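The paper identifies its harmful patterns via micromagnetic study; purely to illustrate how a 2-D constraint on input patterns can be checked, here is a sketch that flags a hypothetical forbidden pattern, an isolated bit surrounded by its complement (the actual constraints used in the paper may differ):

    import numpy as np

    def violates_isolated_bit(bits):
        """Check a hypothetical 2-D constraint: no bit may differ from
        all four of its vertical/horizontal neighbours (an 'isolated'
        bit, a pattern type often associated with high media noise)."""
        b = np.asarray(bits)
        core = b[1:-1, 1:-1]
        up, down = b[:-2, 1:-1], b[2:, 1:-1]
        left, right = b[1:-1, :-2], b[1:-1, 2:]
        isolated = (core != up) & (core != down) & (core != left) & (core != right)
        return bool(isolated.any())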
Abstract:
An up-converting phosphor technology (UPT)-based lateral-flow immunoassay has been developed for the rapid and specific quantitative detection of Yersinia pestis. In this assay, 400 nm up-converting phosphor particles were used as the reporter. A sandwich immunoassay was employed, using a polyclonal antibody against the F1 antigen of Y. pestis immobilized on the nitrocellulose membrane and the same antibody conjugated to the UPT particles. Signal detection of the strips was performed by a UPT-based biosensor that provides a 980 nm IR laser to excite the phosphor particles, collects the visible luminescence emitted by the UPT particles, and finally converts it to a voltage signal. V_T and V_C stand for the voltage readings for the test and the control line, respectively, and the ratio V_T/V_C is directly proportional to the quantity of Y. pestis in a sample. We observed good linearity between the ratio and log CFU/ml of Y. pestis above the detection limit, which was approximately 10^4 CFU/ml. The precision of the intra- and inter-assay was below 15% (coefficient of variation, CV). Cross-reactivity with related Gram-negative enteric bacteria was not found. The UPT-LF immunoassay system presented here takes less than 30 min to perform, from sample treatment to data analysis. The current paper includes only preliminary data concerning the biomedical aspects of the assay, and concentrates more on the technical details of establishing a rapid manual assay using a state-of-the-art label chemistry.
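The reported linearity between V_T/V_C and log CFU/ml implies a straight-line calibration; a minimal sketch of fitting and inverting such a curve (the calibration points below are hypothetical, not data from the paper):

    import numpy as np

    # hypothetical calibration points: log10(CFU/ml) vs measured V_T/V_C
    log_cfu = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
    ratio = np.array([0.10, 0.21, 0.33, 0.44, 0.57])

    slope, intercept = np.polyfit(log_cfu, ratio, 1)  # ratio = a*log_cfu + b

    def quantify(vt_over_vc):
        """Invert the linear calibration to estimate log10 CFU/ml from a
        measured V_T/V_C ratio (valid only above the detection limit)."""
        return (vt_over_vc - intercept) / slope

    print(quantify(0.30))  # ~5.7 log10 CFU/ml for this hypothetical fit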