973 results for Detection Probability
Abstract:
Early detection surveillance programs aim to find invasions of exotic plant pests and diseases before they are too widespread to eradicate. However, the value of these programs can be difficult to justify when no positive detections are made. To demonstrate the value of pest absence information provided by these programs, we use a hierarchical Bayesian framework to model estimates of incursion extent with and without surveillance. A model for the latent invasion process provides the baseline against which surveillance data are assessed. Ecological knowledge and pest management criteria are introduced into the model using informative priors for invasion parameters. Observation models assimilate information from spatio-temporal presence/absence data to accommodate imperfect detection and generate posterior estimates of pest extent. When applied to an early detection program operating in Queensland, Australia, the framework demonstrates that this typical surveillance regime provides a modest reduction in the estimated probability that a surveyed district is infested. More importantly, the model suggests that early detection surveillance programs can provide a dramatic reduction in the putative area of incursion and therefore offer a substantial benefit to incursion management. By mapping spatial estimates of the point probability of infestation, the model identifies where future surveillance resources can be most effectively deployed.
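A minimal sketch of the kind of absence-of-detection update such a framework formalizes, not the authors' hierarchical model: the posterior probability that a district is infested after a number of negative surveys, assuming a fixed prior and a fixed per-survey detection sensitivity (the numbers used are illustrative).

```python
# Minimal sketch (not the paper's hierarchical Bayesian model): posterior probability
# that a district is infested after n negative surveys, assuming a fixed prior and a
# fixed per-survey detection sensitivity. All values are illustrative.

def posterior_infested(prior: float, sensitivity: float, n_negative_surveys: int) -> float:
    """P(infested | n negative surveys), with per-survey detection probability `sensitivity`."""
    p_neg_given_infested = (1.0 - sensitivity) ** n_negative_surveys
    p_neg_given_free = 1.0
    num = prior * p_neg_given_infested
    den = num + (1.0 - prior) * p_neg_given_free
    return num / den

# Illustrative numbers only: 5% prior, 60% per-survey sensitivity, 4 negative surveys.
print(posterior_infested(prior=0.05, sensitivity=0.6, n_negative_surveys=4))
```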
Abstract:
Mass occurrences (blooms) of cyanobacteria are common in aquatic environments worldwide. These blooms are often toxic, due to the presence of hepatotoxins or neurotoxins. The most common cyanobacterial toxins are hepatotoxins: microcystins and nodularins. In freshwaters, the main producers of microcystins are Microcystis, Anabaena, and Planktothrix. Nodularins are produced by strains of Nodularia spumigena in brackish waters. Toxic and nontoxic strains of cyanobacteria co-occur and cannot be differentiated by conventional microscopy. Molecular biological methods based on microcystin and nodularin synthetase genes enable detection of potentially hepatotoxic cyanobacteria. In the present study, molecular detection methods for hepatotoxin-producing cyanobacteria were developed, based on microcystin synthetase gene E (mcyE) and the orthologous nodularin synthetase gene F (ndaF) sequences. General primers were designed to amplify the mcyE/ndaF gene region from microcystin-producing Anabaena, Microcystis, Planktothrix, and Nostoc, and nodularin-producing Nodularia strains. The sequences were used for phylogenetic analyses to study how cyanobacterial mcy genes have evolved. The results showed that mcy genes and microcystin are very old and were already present in the ancestor of many modern cyanobacterial genera. The results also suggested that the sporadic distribution of biosynthetic genes in modern cyanobacteria is caused by repeated gene losses in the more derived lineages of cyanobacteria and not by horizontal gene transfer. Phylogenetic analysis also proposed that nda genes evolved from mcy genes. The frequency and composition of the microcystin producers in 70 lakes in Finland were studied by conventional polymerase chain reaction (PCR). Potential microcystin producers were detected in 84% of the lakes, using general mcyE primers, and in 91% of the lakes with the three genus-specific mcyE primers. Potential microcystin-producing Microcystis were detected in 70%, Planktothrix in 63%, and Anabaena in 37% of the lakes. The presence and co-occurrence of potential microcystin producers were more frequent in eutrophic lakes, where the total phosphorus concentration was high. The PCR results could also be associated with various environmental factors by correlation and regression analyses. In these analyses, the total nitrogen concentration and pH were both associated with the presence of multiple microcystin-producing genera and partly explained the probability of occurrence of mcyE genes. In general, the results showed that higher nutrient concentrations increased the occurrence of potential microcystin producers and the risk for toxic bloom formation. Genus-specific probe pairs for microcystin-producing Anabaena, Microcystis, Planktothrix, and Nostoc, and nodularin-producing Nodularia were designed to be used in a DNA-chip assay. The DNA-chip can be used to simultaneously detect all these potential microcystin/nodularin producers in environmental water samples. The probe pairs detected the mcyE/ndaF genes specifically and sensitively when tested with cyanobacterial strains. In addition, potential microcystin/nodularin producers were identified in lake and Baltic Sea samples by the DNA-chip almost as sensitively as by quantitative real-time PCR (qPCR), which was used to validate the DNA-chip results. Further improvement of the DNA-chip assay was achieved by optimization of the PCR, the first step in the assay. 
Analysis of the mcy and nda gene clusters from various hepatotoxin-producing cyanobacteria was informative: it revealed that the genes are ancient. In addition, new methods for detecting all the main producers of hepatotoxins could be developed. Interestingly, potential microcystin-producing cyanobacterial strains of Microcystis, Planktothrix, and Anabaena co-occurred especially in eutrophic and hypertrophic lakes. Protecting waters from eutrophication and restoring lakes may thus decrease the prevalence of toxic cyanobacteria and the frequency of toxic blooms.
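A small sketch of the kind of regression analysis mentioned above, relating the probability of mcyE occurrence to total nitrogen and pH through a logistic regression; the lake data below are synthetic placeholders and scikit-learn is assumed to be available, so this only illustrates the approach, not the study's actual analysis.

```python
# Sketch of a logistic regression linking mcyE presence/absence to nutrient variables.
# The data are synthetic placeholders, not the study's lake measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_lakes = 70
total_n = rng.uniform(300, 2000, n_lakes)   # total nitrogen, ug/L (illustrative range)
ph = rng.uniform(6.0, 9.5, n_lakes)
# Synthetic presence/absence of mcyE, made more likely at high TN and pH.
logit = -8.0 + 0.003 * total_n + 0.6 * ph
y = rng.random(n_lakes) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([total_n, ph])
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.coef_, model.intercept_)
print(model.predict_proba([[1200.0, 8.5]])[0, 1])  # predicted P(mcyE present)
```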
Abstract:
We consider the problem of quickest detection of an intrusion using a sensor network, keeping only a minimal number of sensors active. By using a minimal number of sensor devices, we ensure that the energy expenditure for sensing, computation and communication is minimized (and the lifetime of the network is maximized). We model the intrusion detection (or change detection) problem as a Markov decision process (MDP). Based on the theory of MDPs, we develop two closed-loop sleep/wake scheduling algorithms: (1) optimal control of M_{k+1}, the number of sensors in the wake state in time slot k+1, and (2) optimal control of q_{k+1}, the probability of a sensor being in the wake state in time slot k+1; we also develop an open-loop sleep/wake scheduling algorithm that (3) computes q, the optimal probability of a sensor being in the wake state (which does not vary with time), based on the sensor observations obtained up to time slot k. Our results show that optimal closed-loop control of M_{k+1} significantly decreases the cost compared to keeping any fixed number of sensors active all the time. Also, among the three algorithms described, the total cost is lowest for the optimal control of M_{k+1} and highest for the optimal open-loop control of q.
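A rough sketch of the kind of Bayesian recursion that underlies such sleep/wake schemes, not the paper's MDP-optimal policies: a Shiryaev-style posterior update of the change probability from the awake sensors' observations, together with a deliberately naive rule that wakes more sensors as the posterior grows. The Gaussian observation model, the geometric change prior with rate rho, and all numerical values are illustrative assumptions.

```python
# Rough sketch (not the paper's MDP-optimal policy): Shiryaev-style posterior update of
# the change probability from the awake sensors' observations, plus a naive rule that
# wakes more sensors as the posterior grows. All parameters are illustrative.
import math

def update_posterior(p, observations, rho=0.05, mu=1.0, sigma=1.0):
    """One slot of the Bayesian recursion; `observations` come from the awake sensors."""
    p_pred = p + (1.0 - p) * rho                      # prob. change has occurred by this slot
    lr = 1.0
    for x in observations:                            # likelihood ratio f1(x)/f0(x) per sensor
        lr *= math.exp((mu * x - 0.5 * mu**2) / sigma**2)
    return (p_pred * lr) / (p_pred * lr + (1.0 - p_pred))

def sensors_to_wake(p, n_total=10):
    """Illustrative (non-optimal) policy: wake more sensors when the change looks likely."""
    return max(1, min(n_total, round(p * n_total)))
```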
Abstract:
Recently, we reported a low-complexity likelihood ascent search (LAS) detection algorithm for large MIMO systems with several tens of antennas that can achieve high spectral efficiencies of the order of tens to hundreds of bps/Hz. Through simulations, we showed that this algorithm achieves increasingly near-SISO AWGN performance for an increasing number of antennas in i.i.d. Rayleigh fading. However, no bit error performance analysis of the algorithm was reported. In this paper, we extend our work on this low-complexity large-MIMO detector in two directions: i) We report an asymptotic bit error probability analysis of the LAS algorithm in the large system limit, where N_t, N_r -> infinity keeping N_t = N_r, where N_t and N_r are the number of transmit and receive antennas, respectively. Specifically, we prove that the error performance of the LAS detector for V-BLAST with 4-QAM in i.i.d. Rayleigh fading converges to that of the maximum-likelihood (ML) detector as N_t, N_r -> infinity keeping N_t = N_r. ii) We present simulated BER and nearness-to-capacity results for V-BLAST as well as high-rate non-orthogonal STBCs from Division Algebras (DA), in a more realistic spatially correlated MIMO channel model. Our simulation results show that a) at an uncoded BER of 10^-3, the performance of the LAS detector in decoding the 16 x 16 STBC from DA with N_t = N_r = 16 and 16-QAM degrades in spatially correlated fading by about 7 dB compared to that in i.i.d. fading, and b) with a rate-3/4 outer turbo code and 48 bps/Hz spectral efficiency, the performance degrades by about 6 dB at a coded BER of 10^-4. Our results further show that, by providing asymmetry in the number of antennas such that N_r > N_t while keeping the total receiver array length the same as for N_r = N_t, the detector is able to pick up the extra receive diversity, thereby significantly improving the BER performance.
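The search idea behind a likelihood ascent detector can be illustrated with a small sketch. The version below is for a real-valued BPSK V-BLAST model y = Hx + n rather than the 4-QAM/complex setting of the paper, and the MMSE initialization and noise level are illustrative choices, not the authors' exact detector.

```python
# Sketch of a one-symbol-update likelihood ascent search for a real-valued BPSK model
# y = H x + n: greedily flip the symbol whose flip most reduces the ML metric ||y - Hx||^2.
import numpy as np

def las_detect(y, H, x_init):
    """Greedy 1-bit-flip search that monotonically decreases ||y - H x||^2."""
    x = x_init.copy()
    r = y - H @ x                               # current residual
    col_norm2 = np.sum(H**2, axis=0)            # ||h_i||^2 for each column
    while True:
        # Cost change if symbol i is flipped (x_i -> -x_i):
        delta = 4.0 * x * (H.T @ r) + 4.0 * col_norm2
        i = int(np.argmin(delta))
        if delta[i] >= 0.0:                     # no flip improves the metric: local optimum
            return x
        r = r + 2.0 * x[i] * H[:, i]            # update residual for the accepted flip
        x[i] = -x[i]

# Illustrative use: MMSE estimate hard-sliced to BPSK, then refined by LAS.
rng = np.random.default_rng(1)
nt = nr = 16
H = rng.standard_normal((nr, nt)) / np.sqrt(nt)
x_true = rng.choice([-1.0, 1.0], nt)
y = H @ x_true + 0.1 * rng.standard_normal(nr)
x_mmse = np.linalg.solve(H.T @ H + 0.01 * np.eye(nt), H.T @ y)
print(np.mean(las_detect(y, H, np.sign(x_mmse)) != x_true))   # bit error fraction
```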
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes essentially rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter, driven by the occurrence of SESs, reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
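A toy sketch of the counter-threshold idea described above, not the paper's scheme: each second is marked as an SES or not (here with a simplified i.i.d. per-second model rather than the paper's Markov model), a counter increments on SES seconds and leaks down otherwise, and failure is declared when the counter crosses a threshold. All parameter values are made up.

```python
# Illustrative sketch (not the paper's exact scheme): a counter goes up on each severely
# errored second (SES) and leaks down otherwise; failure is declared at a threshold.
import random

def run_detector(p_ses, threshold=5, n_seconds=100_000, seed=0):
    """Return the second at which failure is first declared, or None if never declared."""
    rng = random.Random(seed)
    counter = 0
    for t in range(n_seconds):
        ses = rng.random() < p_ses               # simplified i.i.d. SES model
        counter = counter + 1 if ses else max(0, counter - 1)
        if counter >= threshold:
            return t
    return None

# Healthy trunk (rare SES) vs. soft failure (frequent SES): false alarms should be rare
# in the first case and detection fast in the second.
print(run_detector(p_ses=0.001), run_detector(p_ses=0.3))
```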
Abstract:
In this article we consider a finite queue with its arrivals controlled by the random early detection (RED) algorithm, one of the most prominent congestion avoidance schemes in Internet routers. The aggregate arrival stream from the population of transmission control protocol sources is locally modelled as a stationary renewal or Markov-modulated Poisson process with a general packet length distribution. We study the exact dynamics of this queue, establish stability and rates of convergence to the stationary distribution, and obtain the packet loss probability and the waiting time distribution. We then extend these results to a two-traffic-class case in which each arrival stream is a renewal process. However, computing the performance indices for this system becomes computationally prohibitive. Thus, in the latter half of the article, we approximate the dynamics of the average queue length process asymptotically via an ordinary differential equation. We estimate the error term via a diffusion approximation. We use these results to obtain approximate transient and stationary performance of the system. Finally, we provide some computational examples to show the accuracy of these approximations.
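For reference, a minimal sketch of the standard random early detection drop decision, without the count-since-last-drop and "gentle" refinements; the thresholds, maximum drop probability, and averaging weight below are illustrative defaults, not values from the article.

```python
# Minimal sketch of the RED drop decision: an exponentially weighted moving average of
# the queue length, and a drop probability rising linearly between two thresholds.
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight
        self.avg = 0.0

    def admit(self, queue_len: int) -> bool:
        """Update the average queue length and decide whether to accept an arriving packet."""
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return True
        if self.avg >= self.max_th:
            return False
        drop_p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() >= drop_p
```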
Abstract:
The paper outlines a technique for sensitive measurement of conduction phenomena in liquid dielectrics. The special features of this technique are the simplicity of the electrical system, the inexpensive instrumentation and the high accuracy. Detection, separation and analysis of a random fluctuating component of current that is superimposed on the prebreakdown direct current form the basis of this investigation. Here, the prebreakdown direct current is the output of a test cell with large electrodes immersed in a liquid medium subjected to high direct voltages. Measurement of the probability-distribution function of the random fluctuating component of current provides insight into the mechanism of conduction in a liquid medium subjected to high voltages and into the processes responsible for the existence of this fluctuating component.
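A small sketch of the analysis step described above, not the authors' instrumentation: separate the fluctuating component from the prebreakdown direct current and estimate its probability-distribution function with a histogram. The current trace below is synthetic.

```python
# Sketch: remove the direct-current level from a (synthetic) current trace and estimate
# the probability distribution of the remaining fluctuating component.
import numpy as np

rng = np.random.default_rng(2)
current = 5e-9 + 2e-10 * rng.standard_normal(100_000)   # synthetic DC level plus fluctuations (A)

fluctuation = current - np.mean(current)                # remove the direct-current component
pdf, edges = np.histogram(fluctuation, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(centers[np.argmax(pdf)])                          # mode of the fluctuating component
```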
Abstract:
We consider a small-extent sensor network for event detection, in which nodes periodically take samples and then contend over a random access network to transmit their measurement packets to the fusion center. We consider two procedures at the fusion center for processing the measurements. The Bayesian setting is assumed, that is, the fusion center has a prior distribution on the change time. In the first procedure, the decision algorithm at the fusion center is network-oblivious and makes a decision only when a complete vector of measurements taken at a sampling instant is available. In the second procedure, the decision algorithm at the fusion center is network-aware and processes measurements as they arrive, but in a time-causal order. In this case, the decision statistic depends on the network delays, whereas in the network-oblivious case it does not. This yields a Bayesian change-detection problem with a trade-off between the random network delay and the decision delay: a higher sampling rate reduces the decision delay but increases the random access delay. Under periodic sampling, in the network-oblivious case, the structure of the optimal stopping rule is the same as that without the network, and the optimal change detection delay decouples into the network delay and the optimal decision delay without the network. In the network-aware case, the optimal stopping problem is analyzed as a partially observable Markov decision process, in which the states of the queues and the delays in the network need to be maintained. A sufficient decision statistic is the network state together with the posterior probability of change having occurred, given the measurements received and the state of the network. The optimal regimes are studied using simulation.
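A sketch of the flavor of the network-oblivious procedure, under assumed Gaussian observations and a geometric change-time prior (the paper's exact formulation, including the network-aware statistic, is not reproduced here): the fusion center updates the posterior probability of change once a complete measurement vector is available and stops when the posterior crosses a threshold.

```python
# Sketch (not the paper's exact formulation): Shiryaev-style posterior updates applied
# only when a complete measurement vector for a sampling instant has been received.
import math

def shiryaev_step(p, samples, rho=0.01, mu=1.0, sigma=1.0):
    """One update of P(change has occurred) from one complete measurement vector."""
    p_pred = p + (1 - p) * rho
    lr = math.prod(math.exp((mu * x - 0.5 * mu**2) / sigma**2) for x in samples)
    return p_pred * lr / (p_pred * lr + (1 - p_pred))

def detect(batches, threshold=0.95):
    """Return the index of the first complete batch at which the posterior exceeds the threshold."""
    p = 0.0
    for k, samples in enumerate(batches):
        p = shiryaev_step(p, samples)
        if p >= threshold:
            return k
    return None
```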
Abstract:
Efficient photon detection in gaseous photomultipliers requires maximum photoelectron yield from the photocathode surface and also the detection of these photoelectrons. In this work we have investigated the parameters that affect the photoelectron yield from the photocathode surface and methods to improve it, thus ensuring high detection efficiency of the gaseous photomultiplier. The parameters studied are the electric field at the photocathode surface, the surface properties of the photocathode and the pressure of the gas mixture inside the gaseous photomultiplier. It was observed that an optimized electric field at the photocathode ensures high detection efficiency. A lower fill-gas pressure increases the photoelectron yield from the photocathode surface but reduces the focusing probability of the electrons inside the electron multiplier. Evacuation for a longer duration before gas filling also increases the photoelectron yield.
Abstract:
This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the array signal vector. One non-parametric and three parametric (model-based) signal estimators are presented. It is shown that there is a strong correlation between the detector performance and the mean-square signal estimation error. Theoretical expressions for probability of false alarm and probability of detection are derived for all the detectors, and the theoretical predictions are compared with simulation results. It is shown that the detection performance of an AVS array with a certain number of sensors is equal to or slightly better than that of a conventional acoustic pressure sensor array with thrice as many sensors.
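In the simplest non-parametric case, where the array signal vector is left completely unknown and the noise is white Gaussian, the GLRT reduces to an energy detector; the sketch below illustrates that reduction with a chi-square threshold set for a target probability of false alarm. Real-valued snapshots and a known noise variance are simplifying assumptions, not the article's model-based estimators.

```python
# Sketch of the GLRT in its simplest non-parametric form: with a completely unknown
# signal vector in white Gaussian noise it reduces to an energy detector.
import numpy as np
from scipy.stats import chi2

def energy_glrt(snapshots, noise_var, pfa=1e-3):
    """Return (detected, statistic, threshold) for stacked real-valued array snapshots."""
    y = np.ravel(snapshots)
    statistic = np.sum(y**2) / noise_var          # ~ chi^2 with len(y) dof under H0
    threshold = chi2.ppf(1.0 - pfa, df=y.size)
    return statistic > threshold, statistic, threshold
```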
Abstract:
In this paper, a nonlinear suboptimal detector whose performance in heavy-tailed noise is significantly better than that of the matched filter is proposed. The detector consists of a nonlinear wavelet denoising filter to enhance the signal-to-noise ratio, followed by a replica correlator. Performance of the detector is investigated through an asymptotic theoretical analysis as well as Monte Carlo simulations. The proposed detector offers the following advantages over the optimal (in the Neyman-Pearson sense) detector: it is easier to implement, and it is more robust with respect to error in modeling the probability distribution of noise.
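A sketch of the detector structure described above, assuming PyWavelets is available: wavelet decomposition, soft thresholding with a universal threshold, reconstruction, and then a replica correlator. The wavelet, threshold rule, and decision statistic are illustrative choices rather than the paper's exact design.

```python
# Sketch: nonlinear wavelet denoising (soft thresholding) followed by a replica correlator.
import numpy as np
import pywt

def denoise_and_correlate(x, replica, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))                 # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)[: len(x)]
    return float(np.dot(denoised, replica))                   # replica-correlator statistic
```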
Abstract:
This paper considers cooperative spectrum sensing algorithms for Cognitive Radios which focus on reducing the number of samples needed to make a reliable detection. We propose algorithms based on decentralized sequential hypothesis testing in which the Cognitive Radios sequentially collect observations, make local decisions and send them to the fusion center for further processing to make a final decision on spectrum usage. The reporting channel between the Cognitive Radios and the fusion center is modeled, more realistically, as a Multiple Access Channel (MAC) with receiver noise. Furthermore, the communication used for reporting is limited, thereby reducing the communication cost. We start with an algorithm in which the fusion center uses an SPRT-like (Sequential Probability Ratio Test) procedure and theoretically analyze its performance. Asymptotically, its performance is close to that of the optimal centralized test without fusion center noise. We further modify this algorithm to improve its performance at practical operating points. We later generalize these algorithms to handle uncertainties in SNR and fading.
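A minimal sketch of the sequential probability ratio test that such schemes build on, for i.i.d. Gaussian observations at a single node; the decentralized reporting, MAC noise, and SNR/fading uncertainties treated in the paper are not modeled. The thresholds follow Wald's approximations.

```python
# Minimal SPRT sketch for Gaussian observations: accumulate the log-likelihood ratio and
# stop when it crosses either of Wald's approximate thresholds.
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Return ('H1' or 'H0', samples used), or (None, n) if the data run out undecided."""
    a, b = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma**2)
        if llr >= b:
            return "H1", n
        if llr <= a:
            return "H0", n
    return None, len(samples)
```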
Abstract:
We consider a quantum particle, moving on a lattice with a tight-binding Hamiltonian, which is subjected to measurements to detect its arrival at a particular chosen set of sites. The projective measurements are made at regular time intervals tau, and we consider the evolution of the wave function until the time a detection occurs. We study the probabilities of its first detection at some time and, conversely, the probability of it not being detected (i.e., surviving) up to that time. We propose a general perturbative approach for understanding the dynamics which maps the evolution operator, which consists of unitary transformations followed by projections, to one described by a non-Hermitian Hamiltonian. For some examples of a particle moving on one- and two-dimensional lattices with one or more detection sites, we use this approach to find exact expressions for the survival probability and find excellent agreement with direct numerical results. A mean-field model with hopping between all pairs of sites and detection at one site is solved exactly. For the one- and two-dimensional systems, the survival probability is shown to have a power-law decay with time, where the power depends on the initial position of the particle. Finally, we show an interesting and nontrivial connection between the dynamics of the particle in our model and the evolution of a particle under a non-Hermitian Hamiltonian with a large absorbing potential at some sites.
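A short numerical sketch of the stroboscopic protocol described above: evolve the wave function unitarily for a time tau under a 1D tight-binding Hamiltonian, project onto the "not detected" subspace at a single detection site, and record the survival probability after each measurement. The lattice size, tau, and the initial and detection sites are illustrative.

```python
# Sketch: repeated unitary evolution and projective measurement at one detection site on
# a 1D tight-binding lattice; the recorded norm is the survival (no-detection) probability.
import numpy as np
from scipy.linalg import expm

L, tau, detect_site, x0 = 50, 0.25, 25, 10
H = -(np.eye(L, k=1) + np.eye(L, k=-1))          # nearest-neighbour hopping Hamiltonian
U = expm(-1j * tau * H)

psi = np.zeros(L, dtype=complex)
psi[x0] = 1.0                                    # particle starts at site x0
survival = []
for _ in range(200):
    psi = U @ psi                                # unitary evolution between measurements
    psi[detect_site] = 0.0                       # projection onto "not detected"
    survival.append(np.vdot(psi, psi).real)      # survival probability up to this step
print(survival[-1])
```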
Abstract:
We consider carrier frequency offset (CFO) estimation in the context of multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems over noisy frequency-selective wireless channels with both single- and multiuser scenarios. We conceived a new approach for parameter estimation by discretizing the continuous-valued CFO parameter into a discrete set of bins and then invoked detection theory, analogous to the minimum-bit-error-ratio optimization framework for detecting the finite-alphabet received signal. Using this radical approach, we propose a novel CFO estimation method and study its performance using both analytical results and Monte Carlo simulations. We obtain expressions for the variance of the CFO estimation error and the resultant BER degradation with the single-user scenario. Our simulations demonstrate that the overall BER performance of a MIMO-OFDM system using the proposed method is substantially improved for all the modulation schemes considered, albeit this is achieved at increased complexity.
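A sketch of the bin-based estimation idea in its simplest form, a single-antenna pilot-aided search rather than the MIMO-OFDM detector of the paper: derotate the received pilot by each candidate CFO in a discrete grid and keep the bin with the largest correlation against the known pilot. The grid, sampling rate, and pilot are all illustrative assumptions.

```python
# Sketch: treat CFO estimation as detection over a discrete grid of candidate offsets by
# derotating the received pilot and correlating against the known pilot.
import numpy as np

def estimate_cfo(rx, pilot, cfo_grid, fs):
    """Return the candidate CFO (Hz) whose derotated pilot correlates best with the known pilot."""
    n = np.arange(len(rx))
    best_cfo, best_metric = None, -np.inf
    for cfo in cfo_grid:
        derotated = rx * np.exp(-2j * np.pi * cfo * n / fs)   # undo the candidate offset
        metric = np.abs(np.vdot(pilot, derotated))
        if metric > best_metric:
            best_cfo, best_metric = cfo, metric
    return best_cfo
```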