74 results for Likelihood Ratio
at Indian Institute of Science - Bangalore - India
Abstract:
Merton's model views equity as a call option on the assets of the firm; the assets are thus only partially observed, through the equity. Using nonlinear filtering, an explicit expression for the likelihood ratio of the underlying parameters is obtained in terms of the nonlinear filter. As the evolution of the filter itself depends on the parameters in question, this does not permit direct maximum likelihood estimation, but it does pave the way for the 'Expectation-Maximization' method of parameter estimation. (C) 2010 Elsevier B.V. All rights reserved.
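Since the abstract's premise is that equity is priced as a European call on the firm's assets, the observation map can be sketched with the Black-Scholes call formula. This is a minimal illustration of the equity-as-call idea only; the parameter values below are illustrative, not from the paper.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_equity(asset, debt, r, sigma, T):
    """Equity value as a European call on firm assets (Black-Scholes),
    with strike equal to the face value of debt maturing at T."""
    d1 = (log(asset / debt) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return asset * norm_cdf(d1) - debt * exp(-r * T) * norm_cdf(d2)
```

Inverting this map (equity observed, assets latent) is what makes the filtering problem in the abstract nonlinear.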
Abstract:
This paper considers the problem of spectrum sensing, i.e., the detection by a cognitive radio of whether or not a primary user is transmitting data. The Bayesian framework is adopted, with the probability of detection error as the performance measure. A decentralized setup is considered, in which N sensors use M observations each to arrive at individual decisions, which are combined at a fusion center to form the overall decision. The unknown fading channel between the primary user and the cognitive radios makes the individual decision rule computationally complex; hence, a generalized likelihood ratio test (GLRT)-based approach is adopted. Analysis of the probabilities of false alarm and missed detection of the proposed method reveals that the error exponent with respect to M is zero. Also, the fusion of the N individual decisions offers a diversity advantage, similar to diversity reception in communication systems, and a tight bound on the error exponent is presented. Through an analysis in the low-power regime, the number of observations needed to achieve a given probability of error is determined as a function of received power. Monte Carlo simulations confirm the accuracy of the analysis.
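A minimal sketch of the decentralized setup: a per-sensor energy statistic stands in for the paper's GLRT, individual one-bit decisions are fused by majority vote, and the Gaussian fading model, threshold, and all names are illustrative assumptions.

```python
import random

def sensor_decision(samples, noise_var, threshold):
    # Per-sensor statistic: average energy normalised by noise power
    # (an energy detector standing in for the GLRT of the paper).
    stat = sum(x * x for x in samples) / (len(samples) * noise_var)
    return 1 if stat > threshold else 0

def fusion_center(decisions):
    # Majority-vote fusion of the N one-bit decisions.
    return 1 if sum(decisions) > len(decisions) / 2 else 0

def simulate(N=11, M=50, snr=1.0, signal_present=True, threshold=1.5, seed=1):
    # Toy simulation: each sensor sees an unknown constant fading gain
    # (when the primary user transmits) in unit-variance Gaussian noise.
    rng = random.Random(seed)
    decisions = []
    for _ in range(N):
        gain = rng.gauss(0, 1) * snr ** 0.5 if signal_present else 0.0
        samples = [gain + rng.gauss(0, 1) for _ in range(M)]
        decisions.append(sensor_decision(samples, noise_var=1.0, threshold=threshold))
    return fusion_center(decisions)
```

The diversity advantage mentioned in the abstract corresponds to the fusion step: independent per-sensor fading gains make it unlikely that a majority of sensors are simultaneously in deep fades.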
Abstract:
Low-density parity-check (LDPC) codes are a class of linear block codes decoded by running the belief propagation (BP) algorithm, or its log-likelihood ratio form (LLR-BP), over the factor graph of the code. One disadvantage of LDPC codes is the onset of an error floor at high values of signal-to-noise ratio, caused by trapping sets. In this paper, we propose a two-stage decoder to deal with different types of trapping sets: oscillating trapping sets are handled by the first stage of the decoder and elementary trapping sets by the second. Simulation results on the regular PEG (504,252,3,6) code and the irregular PEG (1024,518,15,8) code show that the proposed two-stage decoder performs significantly better than the standard decoder.
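As a concrete baseline, here is a toy min-sum LLR-BP decoder over the parity-check matrix of the (7,4) Hamming code. This stands in for the standard iterative decoder that the two-stage scheme builds on; the PEG-constructed LDPC codes of the paper are far larger, and the trapping-set handling stages are not reproduced here.

```python
# Parity-check matrix of the (7,4) Hamming code (toy stand-in for an LDPC code).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def min_sum_decode(channel_llr, H, iters=5):
    """Min-sum approximation of LLR belief propagation on the factor graph."""
    m, n = len(H), len(channel_llr)
    c2v = [[0.0] * n for _ in range(m)]  # check-to-variable messages
    for _ in range(iters):
        # Variable-to-check: channel LLR plus messages from the other checks.
        v2c = [[0.0] * n for _ in range(m)]
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    v2c[i][j] = channel_llr[j] + sum(
                        c2v[k][j] for k in range(m) if k != i and H[k][j]
                    )
        # Check-to-variable: min-sum approximation of the tanh rule.
        for i in range(m):
            for j in range(n):
                if H[i][j]:
                    others = [v2c[i][k] for k in range(n) if k != j and H[i][k]]
                    sign = 1.0
                    for x in others:
                        sign *= 1.0 if x >= 0 else -1.0
                    c2v[i][j] = sign * min(abs(x) for x in others)
    posterior = [
        channel_llr[j] + sum(c2v[i][j] for i in range(m) if H[i][j])
        for j in range(n)
    ]
    return [0 if L >= 0 else 1 for L in posterior]
```

On small loopy graphs like this one, certain error patterns already make the messages oscillate between iterations, which is the behaviour the paper's first decoding stage targets on real LDPC codes.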
Abstract:
This paper considers the problem of weak-signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak-signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood-ratio-based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to navigation data bits and to frequency, phase and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance of the proposed detector in the presence of model uncertainties.
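The contrast between fully coherent integration and the conventional non-coherent PDI accumulation referred to in the abstract can be sketched as follows; the block sizes and the ±1 data-bit toy signal are illustrative.

```python
def pdi_statistic(samples, n_blocks, block_len):
    """Non-coherent post-detection integration: coherently sum each block,
    then accumulate squared magnitudes across blocks, so that data-bit
    sign flips between blocks do not cancel the accumulated energy."""
    total = 0.0
    for b in range(n_blocks):
        block = samples[b * block_len:(b + 1) * block_len]
        coherent = sum(block)          # coherent integration within a block
        total += abs(coherent) ** 2    # non-coherent accumulation
    return total
```

With a data-bit transition between two blocks, a single long coherent sum cancels to zero while the PDI statistic keeps the full energy, which is exactly why PDI is the conventional remedy for unknown data bits.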
Abstract:
This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the array signal vector. One non-parametric and three parametric (model-based) signal estimators are presented. It is shown that there is a strong correlation between the detector performance and the mean-square signal estimation error. Theoretical expressions for probability of false alarm and probability of detection are derived for all the detectors, and the theoretical predictions are compared with simulation results. It is shown that the detection performance of an AVS array with a certain number of sensors is equal to or slightly better than that of a conventional acoustic pressure sensor array with thrice as many sensors.
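For the simplest case of an unknown but constant array signal vector in white noise, the GLRT reduces to the energy of the ML signal estimate, which is just the snapshot mean. This is a hedged sketch in the spirit of the paper's non-parametric estimator; the model-based estimators and the AVS measurement model are not reproduced.

```python
def glrt_statistic(snapshots, noise_var):
    """GLRT for an unknown constant signal vector s in white noise:
    under H1 the ML estimate of s is the snapshot mean, and the
    generalized log-likelihood ratio is proportional to the energy
    of that estimate, scaled by the number of snapshots."""
    n_snap = len(snapshots)
    n_sens = len(snapshots[0])
    mean = [sum(snap[i] for snap in snapshots) / n_snap for i in range(n_sens)]
    energy = sum(m * m for m in mean)
    return n_snap * energy / noise_var
```

The correlation noted in the abstract between detection performance and mean-square signal estimation error is visible here: a poorer estimate of the signal vector directly lowers the statistic under H1.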
Abstract:
Selection of relevant features is an open problem in brain-computer interface (BCI) research. Features extracted from brain signals are often high-dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves classifier performance and reduces the computational cost of the system. In this study, we use a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG) based BCI dataset. We employ the Discrete Wavelet Transform to obtain a high-dimensional feature set and classify it with the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
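A minimal wrapper-style sketch of the pipeline: a nearest-mean classifier stands in for the Distance Likelihood Ratio Test, and a plain random subset search stands in for the Bacterial Foraging Optimization / Learning Automata selector. The toy data and all names are illustrative assumptions, not the paper's method.

```python
import random

def nearest_mean_accuracy(X, y, subset):
    # Toy classifier: assign each sample to the nearer class mean
    # over the chosen feature subset (stand-in for the DLRT).
    means = {}
    for cls in set(y):
        rows = [x for x, label in zip(X, y) if label == cls]
        means[cls] = [sum(r[j] for r in rows) / len(rows) for j in subset]
    correct = 0
    for x, label in zip(X, y):
        pred = min(means, key=lambda c: sum((x[j] - mu) ** 2
                                            for j, mu in zip(subset, means[c])))
        correct += pred == label
    return correct / len(y)

def random_subset_search(X, y, n_features, n_trials=200, seed=0):
    # Random wrapper search over feature subsets (stand-in for BFO + LA).
    rng = random.Random(seed)
    best_subset, best_acc = None, -1.0
    for _ in range(n_trials):
        k = rng.randint(1, n_features)
        subset = sorted(rng.sample(range(n_features), k))
        acc = nearest_mean_accuracy(X, y, subset)
        if acc > best_acc:
            best_subset, best_acc = subset, acc
    return best_subset, best_acc
```

The wrapper structure is the point: each candidate subset is scored by the downstream classifier, so the selector directly optimises the quantity reported in the abstract (classification accuracy).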
Abstract:
Speech enhancement in stationary noise is addressed using the ideal channel selection framework. To estimate the binary mask, we propose to classify each time-frequency (T-F) bin of the noisy signal as speech or noise using Discriminative Random Fields (DRF). The DRF objective contains two terms: an enhancement function and a smoothing term. On each T-F bin, we use an enhancement function based on a likelihood ratio test for speech presence, while an Ising model serves as the smoothing function, enforcing spectro-temporal continuity in the estimated binary mask. Over successive iterations, the smoothing function is found to reduce musical noise compared with using the enhancement function alone. The binary mask is inferred from the noisy signal using the Iterated Conditional Modes (ICM) algorithm. Sentences from the NOIZEUS corpus are evaluated from 0 dB to 15 dB Signal-to-Noise Ratio (SNR) in four kinds of additive noise: white Gaussian noise, car noise, street noise and pink noise. The reconstructed speech is evaluated in terms of average segmental SNR, Perceptual Evaluation of Speech Quality (PESQ) and Mean Opinion Score (MOS).
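The inference step can be sketched as a toy ICM update over a small T-F grid: each bin's label trades off a per-bin likelihood score (the enhancement term) against agreement with its four neighbours (the Ising smoothing term). The LLR values and the weight beta below are illustrative, not from the paper.

```python
def icm_binary_mask(llr, beta, n_iters=5):
    """Iterated Conditional Modes for a binary T-F mask.
    llr[t][f] is a per-bin speech-vs-noise log-likelihood score
    (enhancement term, credited to label 1); beta weights the Ising
    smoothing term, which rewards agreement with the 4-neighbourhood."""
    T, F = len(llr), len(llr[0])
    mask = [[1 if llr[t][f] > 0 else 0 for f in range(F)] for t in range(T)]
    for _ in range(n_iters):
        for t in range(T):
            for f in range(F):
                score = {0: 0.0, 1: llr[t][f]}
                for dt, df in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    tt, ff = t + dt, f + df
                    if 0 <= tt < T and 0 <= ff < F:
                        for label in (0, 1):
                            score[label] += beta if mask[tt][ff] == label else -beta
                mask[t][f] = 1 if score[1] > score[0] else 0
    return mask
```

An isolated "speech" bin surrounded by confident "noise" bins is flipped by the smoothing term, which is the mechanism by which musical noise (scattered spurious mask bins) is suppressed.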
Abstract:
We propose a molecular mechanism for the intra-cellular measurement of the ratio of the number of X chromosomes to the number of sets of autosomes, a process central to both sex determination and dosage compensation in Drosophila melanogaster. In addition to the two loci, da and Sxl, which have been shown by Cline (Genetics, 90, 683, 1978) and others to be involved in these processes, we postulate two other loci, one autosomal (ω) and the other X-linked (π). The product of the autosomal locus da stimulates ω and initiates synthesis of a limited quantity of repressor. Sxl and π, both of which are X-linked, compete for this repressor as well as for RNA polymerase. It is assumed that Sxl has lower affinity than π for repressor as well as polymerase, and that the binding of polymerase to one of these sites modulates the binding affinity of the other site for the enzyme. It can be shown that, as a result of these postulated interactions, transcription from the Sxl site is proportional to the X/A ratio, such that the levels of Sxl+ product are low in males, high in females and intermediate in the intersexes. If, as proposed by Cline, the Sxl+ product is an inhibitor of X chromosome activity, this would result in dosage compensation. The model leads to the conclusion that high levels of Sxl+ product promote a female phenotype and low levels, a male phenotype. One interesting consequence of the assumptions on which the model is based is that the level of Sxl+ product in the cell, when examined as a function of increasing repressor concentration, first goes up and then decreases, yielding a bell-shaped curve. This feature of the model provides an explanation for some of the remarkable interactions among mutants at the Sxl, da and mle loci and leads to several predictions. The proposed mechanism may also have relevance to certain other problems, such as size regulation during development, which seem to involve measurement of ratios at the cellular level.
Abstract:
The transfer matrix method is known to be well suited for the complete analysis of a lumped or distributed element, one-dimensional, linear dynamical system with a marked chain topology. However, general subroutines of the type available for classical matrix methods are not available in the current literature on transfer matrix methods. In the present article, general expressions for various aspects of analysis, viz., the natural frequency equation, modal vectors, forced response and filter performance, have been evaluated in terms of a single parameter, referred to as the velocity ratio. Subprograms have been developed for use with the transfer matrix method for the evaluation of the velocity ratio and related parameters. It is shown that a given system, branched or straight-through, can be completely analysed in terms of these basic subprograms on a stored-program digital computer. It is observed that the transfer matrix method with the velocity ratio approach has certain advantages over existing general matrix methods in the analysis of one-dimensional systems.
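The mechanics of the method can be sketched for a lumped spring-mass chain: each element contributes a 2x2 transfer matrix acting on the state [displacement, force], and a natural frequency is a frequency at which the appropriate entry of the overall chain matrix vanishes. This toy sketch uses the standard lumped-element matrices and omits the article's velocity-ratio formulation.

```python
def mass_matrix(m, omega):
    # Point matrix of a lumped mass; state vector is [displacement, force].
    return [[1.0, 0.0], [-m * omega**2, 1.0]]

def spring_matrix(k):
    # Field matrix of a massless spring of stiffness k.
    return [[1.0, 1.0 / k], [0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

def chain_matrix(elements, omega):
    # Overall transfer matrix of a straight-through chain: later elements
    # premultiply, so the product maps the input-end state to the output end.
    T = [[1.0, 0.0], [0.0, 1.0]]
    for kind, value in elements:
        M = mass_matrix(value, omega) if kind == "mass" else spring_matrix(value)
        T = mat_mul(M, T)
    return T
```

For a spring (fixed at a wall) followed by a free mass, the input state is [0, F] and the free-end force must vanish, so the frequency equation is T[1][1] = 1 - m*omega^2/k = 0, i.e. omega_n = sqrt(k/m).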
Abstract:
The ratio of diffusion coefficient to mobility (D/μ) for electrons has been measured in SF6-air and freon-nitrogen mixtures for various concentrations of SF6 and freon in the mixtures, over the range 140 ≤ E/p ≤ 220 V·cm⁻¹·torr⁻¹. In SF6-air mixtures, the values of D/μ were always observed to lie intermediate between the values for the pure gases. However, in freon-nitrogen mixtures with a small concentration (10 percent) of freon, the values of D/μ are found to lie above the boundaries determined by the pure gases. In this mixture, over the lower E/p range (140 to 190) the electrons appear to lose a large fraction of their energy through excitation of the complex freon molecules, while at higher E/p values (200 to 240) the excitation and subsequent de-excitation of nitrogen molecules and their metastables seem to cause an increased rate of ionization of freon molecules.
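For context, the thermal-equilibrium floor of D/μ is set by the Einstein relation, D/μ = kT/e (in volts); swarm values well above this indicate electrons far from thermal equilibrium with the gas. A quick check of that reference value, with room temperature assumed:

```python
# Einstein relation: at thermal equilibrium D/mu = kT/e, expressed in volts.
K_BOLTZMANN = 1.380649e-23   # J/K
E_CHARGE = 1.602176634e-19   # C

def thermal_d_over_mu(temp_kelvin):
    return K_BOLTZMANN * temp_kelvin / E_CHARGE

# At ~293 K this gives about 0.025 V, the characteristic energy of a
# thermalised electron swarm.
```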
Abstract:
The paper deals with a method for the evaluation of exhaust mufflers with mean flow. A new pair of variables, convective pressure and convective mass velocity, has been defined to replace the acoustic variables. An expression for the attenuation (insertion loss) of a muffler has been proposed in terms of convective terminal impedances and a velocity ratio, along the lines of the one existing for acoustic filters. In order to evaluate the velocity ratio in terms of convective variables, transfer matrices for various muffler elements have been derived from the basic relations of energy, mass and momentum. Finally, the velocity-ratio-cum-transfer-matrix method is illustrated for a typical straight-through muffler.
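For reference, the classical zero-mean-flow counterpart of this kind of muffler evaluation is the textbook transmission loss of a simple expansion chamber; the paper's convective, mean-flow analysis generalises such results. The area ratio and kL values below are illustrative.

```python
from math import sin, log10, pi

def expansion_chamber_tl(area_ratio, k, length):
    """Classical transmission loss of a simple expansion chamber with
    zero mean flow: TL = 10*log10(1 + 0.25*(m - 1/m)^2 * sin^2(kL)),
    where m is the chamber-to-pipe area ratio and k the wavenumber."""
    m = area_ratio
    return 10.0 * log10(1.0 + 0.25 * (m - 1.0 / m) ** 2 * sin(k * length) ** 2)
```

Note the characteristic pass frequencies: whenever kL is a multiple of pi the chamber is acoustically transparent (TL = 0), a feature any transfer-matrix evaluation must reproduce.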
Abstract:
"Extended Clifford algebras" are introduced as a means to obtain low ML decoding complexity space-time block codes. Using left regular matrix representations of two specific classes of extended Clifford algebras, two systematic algebraic constructions of full diversity Distributed Space-Time Codes (DSTCs) are provided for any power of two number of relays. The left regular matrix representation has been shown to naturally result in space-time codes meeting the additional constraints required for DSTCs. The DSTCs so constructed have the salient feature of reduced Maximum Likelihood (ML) decoding complexity. In particular, the ML decoding of these codes can be performed by applying the lattice decoder algorithm on a lattice of four times lesser dimension than what is required in general. Moreover these codes have a uniform distribution of power among the relays and in time, thus leading to a low Peak to Average Power Ratio at the relays.