956 results for Wald's sequential probability ratio test


Relevance:

100.00%

Publisher:

Abstract:

Researchers typically tackle questions by constructing powerful, highly replicated sampling protocols or experimental designs. Such approaches often demand large sample sizes and are usually conducted only on a once-off basis. In contrast, many industries need to continually monitor phenomena such as equipment reliability, water quality, or the abundance of a pest. In such instances, the costs and time inherent in sampling preclude the use of highly intensive methods. Ideally, one wants to collect the absolute minimum number of samples needed to make an appropriate decision. Sequential sampling, wherein the sample size is a function of the results of the sampling process itself, offers a practicable solution. But smaller sample sizes equate to less knowledge about the population, and thus an increased risk of making an incorrect management decision. There are various statistical techniques to account for and measure risk in sequential sampling plans. We illustrate these methods and assess them using examples relating to the management of arthropod pests in commercial crops, but they can be applied to any situation where sequential sampling is used.
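
The risk-controlled decision making described above is typically formalized with Wald's SPRT. As a rough illustration (not the authors' specific plan), the sketch below runs a sequential test on per-plant pest counts; the Poisson model, the hypothesized densities mu0 and mu1, and the error rates alpha and beta are assumptions made for the example.

```python
import math

def sprt_poisson(counts, mu0=1.0, mu1=2.0, alpha=0.05, beta=0.10):
    """Wald SPRT for H0: mean density mu0 vs H1: mean density mu1,
    applied to a stream of per-sample Poisson counts. All parameter
    values here are illustrative, not taken from the study."""
    upper = math.log((1 - beta) / alpha)   # cross above: decide H1
    lower = math.log(beta / (1 - alpha))   # cross below: decide H0
    llr = 0.0
    for n, x in enumerate(counts, start=1):
        # Log-likelihood ratio increment of one Poisson observation.
        llr += x * math.log(mu1 / mu0) - (mu1 - mu0)
        if llr >= upper:
            return "stop: density high, act", n
        if llr <= lower:
            return "stop: density low, no action", n
    return "keep sampling", len(counts)

# Example: pest counts from successively sampled plants.
print(sprt_poisson([0, 1, 3, 2, 4, 3]))
```

Sampling stops as soon as the cumulative log-likelihood ratio leaves the continuation band, which is what keeps the expected sample size small.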

Relevance:

100.00%

Publisher:

Abstract:

We consider cooperative spectrum sensing for cognitive radios. We develop an energy-efficient detector with low detection delay using sequential hypothesis testing. The Sequential Probability Ratio Test (SPRT) is used at both the local nodes and the fusion center. We also analyse the performance of this algorithm and compare the analysis with simulations. Modelling uncertainties in the distribution parameters are considered. Slow fading, with and without perfect channel state information at the cognitive radios, is taken into account.
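
A minimal sketch of the two-level scheme described above, assuming Gaussian observations, one-bit reports, and illustrative thresholds (the paper's actual detector design may differ):

```python
import math
import random

def gauss_llr(x, mu0=0.0, mu1=1.0, sigma=1.0):
    """Log-likelihood ratio of one Gaussian sample under H1 vs H0."""
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)

def two_level_sprt(node_streams, local_thresh=4.0, fusion_thresh=2):
    """Each node runs a local SPRT and, on crossing +/-local_thresh,
    reports +1 or -1 once; the fusion center declares H1/H0 when the
    running sum of reports reaches +/-fusion_thresh. Thresholds are
    placeholders, not the paper's design."""
    llr = [0.0] * len(node_streams)
    reported = [False] * len(node_streams)
    fusion_sum = 0
    for t in range(len(node_streams[0])):
        for i, stream in enumerate(node_streams):
            if reported[i]:
                continue
            llr[i] += gauss_llr(stream[t])
            if abs(llr[i]) >= local_thresh:
                fusion_sum += 1 if llr[i] > 0 else -1
                reported[i] = True
        if fusion_sum >= fusion_thresh:
            return "H1 (primary present)", t + 1
        if fusion_sum <= -fusion_thresh:
            return "H0 (spectrum free)", t + 1
    return "undecided", len(node_streams[0])

# Example: 3 radios observing noise plus a weak primary signal (H1 true).
random.seed(1)
streams = [[1.0 + random.gauss(0, 1) for _ in range(50)] for _ in range(3)]
print(two_level_sprt(streams))
```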

Relevance:

100.00%

Publisher:

Abstract:

This paper considers cooperative spectrum sensing in Cognitive Radios. In our previous work we developed DualSPRT, a distributed algorithm for cooperative spectrum sensing that uses the Sequential Probability Ratio Test (SPRT) at the Cognitive Radios as well as at the fusion center. This algorithm works well, but it is not optimal. In this paper we propose an improved algorithm, SPRT-CSPRT, which is motivated by cumulative sum (CUSUM) procedures. We analyse it theoretically. We also modify this algorithm to handle uncertainties in SNRs and fading.

Relevance:

100.00%

Publisher:

Abstract:

This paper considers cooperative spectrum sensing algorithms for Cognitive Radios which focus on reducing the number of samples needed to make a reliable detection. We propose algorithms based on decentralized sequential hypothesis testing in which the Cognitive Radios sequentially collect observations, make local decisions, and send them to the fusion center for further processing to make a final decision on spectrum usage. The reporting channel between the Cognitive Radios and the fusion center is modelled more realistically as a Multiple Access Channel (MAC) with receiver noise. Furthermore, the communication for reporting is limited, thereby reducing the communication cost. We start with an algorithm where the fusion center uses an SPRT-like (Sequential Probability Ratio Test) procedure and theoretically analyze its performance. Asymptotically, its performance is close to that of the optimal centralized test without fusion center noise. We further modify this algorithm to improve its performance at practical operating points. Finally, we generalize these algorithms to handle uncertainties in SNR and fading.

Relevance:

100.00%

Publisher:

Abstract:

The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized under constraints on both the error probabilities and the communication cost. Two types of problems are investigated under this communication-efficient formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the statistics reported by local sensors but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.

Relevance:

100.00%

Publisher:

Abstract:

The four main activities of therapeutic risk management are risk identification, evaluation, minimization, and communication. This thesis addresses issues related to risk identification and minimization through two studies whose objectives were to: 1) develop and validate a data mining tool for signal detection from Quebec healthcare databases; 2) conduct a systematic review to characterize the risk minimization interventions (RMIs) that have been implemented. The signal detection tool is based on the maximized sequential probability ratio test (MaxSPRT), applied to dispensed-drug and medical-services data collected in a retrospective cohort of 87,389 community-dwelling elderly members of the Quebec health insurance plan between 2000 and 2009. Four known drug-adverse event (AE) associations and two negative controls were used. The systematic review drew on the literature as well as on the websites of six major regulatory agencies. The nature of the RMIs was described and gaps in their implementation were identified. The analytical method detected a signal in one of the four drug-AE combinations. The main contributions are: a) the first signal detection tool based on Canadian administrative databases; b) methodological contributions through accounting for the depletion-of-susceptibles effect and controlling for patient health status. The review identified 119 RMIs in the literature and 1,112 RMIs on the regulatory agencies' websites. The review showed that the number of RMIs has increased since the introduction of regulatory guidelines in 2005, but their effectiveness remains poorly demonstrated.
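
For context, the MaxSPRT referred to above monitors a cumulative count of adverse events against its expectation under the null hypothesis. Below is a minimal sketch for the Poisson case; the critical value and the example counts are placeholders, whereas in the actual tool the critical value is calibrated to the desired overall type-I error.

```python
import math

def maxsprt_poisson(observed, expected, critical_value=2.85):
    """MaxSPRT surveillance for Poisson counts: the log-likelihood ratio
    maximized over relative risks RR >= 1 equals c*ln(c/mu) - (c - mu)
    when the cumulative observed count c exceeds its expectation mu,
    and 0 otherwise. critical_value is an illustrative placeholder."""
    c, mu, llr = 0.0, 0.0, 0.0
    for obs, exp in zip(observed, expected):
        c += obs
        mu += exp
        llr = c * math.log(c / mu) - (c - mu) if c > mu else 0.0
        if llr >= critical_value:
            return "signal", round(llr, 3)
    return "no signal", round(llr, 3)

# Example: monthly observed adverse-event counts vs expected counts.
print(maxsprt_poisson([2, 3, 5, 6], [1.5, 1.6, 1.4, 1.7]))
```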

Relevance:

100.00%

Publisher:

Abstract:

To implement appropriate management of the cotton leafworm, it is necessary to construct a sampling plan that allows the pest's population density to be estimated quickly and accurately. This research aimed to determine a sequential sampling plan for Alabama argillacea (Hübner) in cotton, cultivar CNPA ITA-90. Data were collected in the 1998/99 growing season at Fazenda Itamarati Sul S/A, in the municipality of Ponta Porã, MS, Brazil, in three areas of 10,000 m² each. The sampling areas consisted of 100 plots of 100 m². The numbers of small, medium, and large larvae were determined weekly on five plants taken at random in each plot. After verifying that all larval instars followed an aggregated spatial pattern, fitting the negative binomial distribution throughout the infestation period, a sequential sampling plan was constructed according to the Sequential Probability Ratio Test (SPRT). A control threshold of two larvae per plant was adopted for the construction of the plan. The analysis yielded two decision lines: the upper line, representing the condition under which adoption of a control method is recommended, defined by S1 = 4.8784 + 1.4227n; and the lower line, representing the condition under which control is not necessary, defined by S0 = -4.8784 + 1.4227n. The sequential sampling plan required an expected maximum of 16 sample units to decide whether or not control is needed.
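
A small sketch of how the two decision lines reported above would be applied in the field; the line coefficients are the ones given in the abstract, while the example counts are invented.

```python
def sampling_decision(cumulative_larvae, n):
    """Classify the cumulative larva count after n sample units using the
    decision lines from the abstract (control threshold: 2 larvae/plant)."""
    upper = 4.8784 + 1.4227 * n    # S1: control is recommended
    lower = -4.8784 + 1.4227 * n   # S0: control is not needed
    if cumulative_larvae >= upper:
        return "apply control"
    if cumulative_larvae <= lower:
        return "no control needed"
    return "keep sampling"

# Example: 18 larvae accumulated after 9 sample units (invented numbers).
print(sampling_decision(18, 9))
```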

Relevance:

100.00%

Publisher:

Abstract:

Integer ambiguity resolution is an indispensable procedure for all high-precision GNSS applications. The correctness of the estimated integer ambiguities is the key to achieving highly reliable positioning, but the solution cannot be validated with classical hypothesis testing methods. Integer aperture estimation theory unifies all existing ambiguity validation tests and provides a new perspective from which to review existing methods, enabling a better understanding of the ambiguity validation problem. This contribution analyses two simple but efficient ambiguity validation tests, the ratio test and the difference test, from three aspects: acceptance region, probability basis, and numerical results. The major contributions of this paper can be summarized as follows: (1) the ratio test acceptance region is an overlap of ellipsoids, while the difference test acceptance region is an overlap of half-spaces; (2) the probability basis of these two popular tests is analysed for the first time: the difference test is an approximation to the optimal integer aperture estimator, while the ratio test follows an exponential relationship in probability; (3) the limitations of the two tests are identified for the first time: both may underestimate the failure risk if the model is not strong enough or the float ambiguities fall in particular regions; (4) extensive numerical results are used to compare the performance of the two tests. The simulation results show that the ratio test outperforms the difference test in some models, while the difference test performs better in others. In particular, in the medium-baseline kinematic model the difference test outperforms the ratio test; this superiority is independent of the number of frequencies, the observation noise, and the satellite geometry, but depends on the success rate and the failure rate tolerance. A smaller failure rate leads to a larger performance discrepancy.
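
As a point of reference for the comparison above, the two tests have very simple operational forms. Given the smallest and second-smallest squared residual norms from the integer ambiguity search, a sketch follows; the threshold values are typical placeholders, not recommendations from the paper.

```python
def ratio_test(q_best, q_second, c=3.0):
    """Accept the best integer candidate if the second-best residual is at
    least c times larger; c is a commonly used placeholder threshold."""
    return q_second / q_best >= c

def difference_test(q_best, q_second, d=12.0):
    """Accept the best candidate if the residual gap exceeds a fixed
    tolerance d (placeholder value)."""
    return q_second - q_best >= d

# q_best and q_second are the two smallest squared norms of the float
# ambiguities minus the integer candidates, measured in the metric of the
# ambiguity covariance matrix, as produced by an integer search.
print(ratio_test(0.8, 3.1), difference_test(0.8, 3.1))
```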

Relevance:

100.00%

Publisher:

Abstract:

In this paper we deal with the issue of performing accurate testing inference on a scalar parameter of interest in structural errors-in-variables models. The error terms are allowed to follow a multivariate distribution in the class of elliptical distributions, which has the multivariate normal distribution as a special case. We derive a modified signed likelihood ratio statistic that follows a standard normal distribution with a high degree of accuracy. Our Monte Carlo results show that the modified test is much less size-distorted than its unmodified counterpart. An application is presented.
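
The paper's specific statistic is not reproduced here, but modified signed likelihood ratio statistics of this kind generally take the Barndorff-Nielsen form, in which the signed likelihood root r is adjusted by a model-dependent correction term u:

$$
r = \operatorname{sgn}(\hat{\psi} - \psi)\sqrt{2\{\ell(\hat{\theta}) - \ell(\tilde{\theta})\}},
\qquad
r^{*} = r + \frac{1}{r}\log\frac{u}{r},
$$

where \(\hat{\theta}\) is the unrestricted maximum likelihood estimate, \(\tilde{\theta}\) is the estimate under the null value \(\psi\) of the scalar parameter of interest, and \(r^{*}\) is standard normal to a higher order of accuracy than \(r\).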

Relevance:

100.00%

Publisher:

Abstract:

Classifier selection is a problem encountered by multi-biometric systems that aim to improve performance through fusion of decisions. A particular decision fusion architecture that combines multiple instances (n classifiers) and multiple samples (m attempts at each classifier) has been proposed in previous work to achieve a controlled trade-off between false alarms and false rejects. Although analysis on text-dependent speaker verification has demonstrated better performance for fusion of decisions with favourable dependence compared to statistically independent decisions, the performance is not always optimal. Given a pool of instances, the best performance with this architecture is obtained for certain combinations of instances. Heuristic rules and diversity measures have been commonly used for classifier selection, but it is shown that optimal performance is achieved with the 'best combination performance' rule. As the search complexity for this rule increases exponentially with the addition of classifiers, a measure, the sequential error ratio (SER), is proposed in this work that is specifically adapted to the characteristics of the sequential fusion architecture. The proposed measure can be used to select the classifier that is most likely to produce a correct decision at each stage. Error rates for fusion of text-dependent HMM-based speaker models using SER are compared with those of other classifier selection methodologies. SER is shown to achieve near-optimal performance for sequential fusion of multiple instances with or without the use of multiple samples. The methodology applies to multiple speech utterances for telephone- or internet-based access control, and to other systems such as identity verification based on multiple fingerprints or multiple handwriting samples.

Relevance:

100.00%

Publisher:

Abstract:

This paper develops a general method for constructing similar tests based on the conditional distribution of nonpivotal statistics in a simultaneous equations model with normal errors and a known reduced-form covariance matrix. The test based on the likelihood ratio statistic is particularly simple and has good power properties. When identification is strong, the power curve of this conditional likelihood ratio test is essentially equal to the power envelope for similar tests. Monte Carlo simulations also suggest that this test dominates the Anderson-Rubin test and the score test. Dropping the restrictive assumption of disturbances normally distributed with a known covariance matrix, approximate conditional tests are found that behave well in small samples even when identification is weak.

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)