918 results for cross likelihood ratio


Relevance: 80.00%

Abstract:

In the thesis we consider inference for cointegration in vector autoregressive (VAR) models. The thesis consists of an introduction and four papers. The first paper proposes a new test for cointegration in VAR models that is directly based on the eigenvalues of the least squares (LS) estimate of the autoregressive matrix. In the second paper we compare a small sample correction for the likelihood ratio (LR) test of cointegrating rank and the bootstrap. The simulation experiments show that the bootstrap works very well in practice and dominates the correction factor. The tests are applied to international stock price data, and the finite sample performance of the tests is investigated by simulating the data. The third paper studies the demand for money in Sweden 1970–2000 using the I(2) model. In the fourth paper we re-examine the evidence of cointegration between international stock prices. The paper shows that some of the previous empirical results can be explained by the small-sample bias and size distortion of Johansen's LR tests for cointegration. In all papers we work with two data sets. The first data set is a Swedish money demand data set with observations on the money stock, the consumer price index, gross domestic product (GDP), the short-term interest rate and the long-term interest rate. The data are quarterly and the sample period is 1970(1)–2000(1). The second data set consists of month-end stock market index observations for Finland, France, Germany, Sweden, the United Kingdom and the United States from 1980(1) to 1997(2). Both data sets are typical of the sample sizes encountered in economic data, and the applications illustrate the usefulness of the models and tests discussed in the thesis.
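
As a rough illustration of the eigenvalue-based idea in the first paper, the following Python sketch (simulated data and an illustrative VAR(1); not the thesis's actual test) fits the autoregressive matrix by least squares and reports the moduli of its eigenvalues; values close to one point to common stochastic trends and hence a lower cointegration rank.

# Minimal sketch, assuming a VAR(1) without deterministic terms; data are simulated.
import numpy as np

rng = np.random.default_rng(0)
T, k = 200, 3

# Simulate a 3-dimensional VAR(1) with one unit root (true eigenvalues 1.0, 0.5, 0.4).
A = np.array([[1.0, 0.0, 0.0],
              [0.2, 0.5, 0.0],
              [0.0, 0.3, 0.4]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=k)

# Least squares estimate of the autoregressive matrix: regress y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Moduli of the eigenvalues of the LS estimate; values near 1 suggest unit roots.
moduli = np.sort(np.abs(np.linalg.eigvals(A_hat)))[::-1]
print("eigenvalue moduli of the LS estimate:", np.round(moduli, 3))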

Relevance: 80.00%

Abstract:

The likelihood ratio test of cointegration rank is the most widely used test for cointegration. Many studies have shown that its finite sample distribution is not well approximated by the limiting distribution. The article introduces bootstrap and fast double bootstrap (FDB) algorithms for the likelihood ratio test and evaluates them by Monte Carlo simulation experiments. It finds that the performance of the bootstrap test is very good. The more sophisticated FDB produces a further improvement in cases where the performance of the asymptotic test is very unsatisfactory and the ordinary bootstrap does not work as well as it might. Furthermore, the Monte Carlo simulations provide a number of guidelines on when the bootstrap and FDB tests can be expected to work well. Finally, the tests are applied to US interest rate and international stock price series. It is found that the asymptotic test tends to overestimate the cointegration rank, while the bootstrap and FDB tests choose the correct cointegration rank.
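
A schematic version of the bootstrap algorithm for the trace (LR) statistic is sketched below. It is a deliberate simplification, not the article's exact procedure: the null hypothesis is cointegration rank zero, bootstrap samples are built by cumulating resampled centred first differences (so short-run dynamics are ignored), and the trace statistic is taken from statsmodels' coint_johansen; the fast double bootstrap is omitted.

import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def trace_stat_rank0(y):
    # Johansen trace statistic for H0: rank = 0 (no deterministic terms,
    # one lagged difference in the test regression).
    return coint_johansen(y, det_order=-1, k_ar_diff=1).lr1[0]

def bootstrap_pvalue(y, n_boot=199, seed=0):
    rng = np.random.default_rng(seed)
    stat = trace_stat_rank0(y)
    d = np.diff(y, axis=0)
    d = d - d.mean(axis=0)          # centred first differences stand in for null residuals
    count = 0
    for _ in range(n_boot):
        idx = rng.integers(0, len(d), size=len(d))
        y_star = np.vstack([y[:1], y[0] + np.cumsum(d[idx], axis=0)])
        count += trace_stat_rank0(y_star) >= stat
    return (1 + count) / (1 + n_boot)

# Example on two simulated independent random walks, where the rank-0 null is true.
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=(200, 2)), axis=0)
print("bootstrap p-value:", bootstrap_pvalue(y))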

Relevance: 80.00%

Abstract:

Bootstrap likelihood ratio tests of cointegration rank are commonly used because they tend to have rejection probabilities that are closer to the nominal level than the rejection probabilities of the corresponding asymptotic tests. The effect of bootstrapping the test on its power is largely unknown. We show that a new computationally inexpensive procedure can be applied to the estimation of the power function of the bootstrap test of cointegration rank. The bootstrap test is found to have a power function close to that of the level-adjusted asymptotic test. The bootstrap test estimates the level-adjusted power of the asymptotic test highly accurately. The bootstrap test may have low power to reject the null hypothesis of cointegration rank zero, or underestimate the cointegration rank. An empirical application to Euribor interest rates is provided as an illustration of the findings.

Relevance: 80.00%

Abstract:

Many economic events involve initial observations that substantially deviate from the long-run steady state. Initial conditions of this type have been found to affect the power of univariate unit root tests in diverse ways, whereas their impact on multivariate tests is largely unknown. This paper investigates the impact of the initial condition on tests for cointegration rank. We compare the local power of the widely used likelihood ratio (LR) test with the local power of a test based on the eigenvalues of the companion matrix. We find that the power of the LR test is increasing in the magnitude of the initial condition, whereas the power of the other test is decreasing. The behaviour of the tests is investigated in an application to price convergence.
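
For concreteness, the sketch below (simulated random walks; not the paper's test) shows how the companion matrix of an estimated VAR(2) is formed and how its eigenvalue moduli are inspected; the companion-matrix test referred to above is built from such eigenvalues.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=(300, 2)), axis=0)    # two independent random walks

res = sm.tsa.VAR(y).fit(2)                          # VAR(2) fitted by least squares
A1, A2 = res.coefs                                  # lag-1 and lag-2 coefficient matrices

# Companion form: stacking [A1 A2; I 0] turns the VAR(2) into a VAR(1).
k = y.shape[1]
companion = np.block([[A1, A2],
                      [np.eye(k), np.zeros((k, k))]])
moduli = np.sort(np.abs(np.linalg.eigvals(companion)))[::-1]
print("companion eigenvalue moduli:", np.round(moduli, 3))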

Relevance: 80.00%

Abstract:

Merton's model views equity as a call option on the assets of the firm; thus the asset value is partially observed through the equity. Using nonlinear filtering, an explicit expression for the likelihood ratio for the underlying parameters is obtained in terms of the nonlinear filter. As the evolution of the filter itself depends on the parameters in question, this does not permit direct maximum likelihood estimation, but it does pave the way for the Expectation-Maximization (EM) method for estimating the parameters.
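
The option-pricing relationship behind this setup is standard: equity is a European call on the firm's asset value with strike equal to the face value of debt. The sketch below evaluates that Black-Scholes-Merton call with illustrative numbers; it is not the paper's filtering or EM machinery.

import numpy as np
from scipy.stats import norm

def merton_equity(V, D, r, sigma, T):
    """Equity value as a European call on firm assets (Black-Scholes-Merton)."""
    d1 = (np.log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)

# Illustrative assumptions: asset value 120, debt 100 due in 1 year, 5% rate, 30% asset volatility.
print(round(merton_equity(V=120.0, D=100.0, r=0.05, sigma=0.30, T=1.0), 2))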

Relevance: 80.00%

Abstract:

This paper considers the problem of spectrum sensing, i.e., the detection by a cognitive radio of whether or not a primary user is transmitting data. The Bayesian framework is adopted, with the performance measure being the probability of detection error. A decentralized setup is considered, in which N sensors use M observations each to arrive at individual decisions that are combined at a fusion center to form the overall decision. The unknown fading channel between the primary sensor and the cognitive radios makes the individual decision rule computationally complex; hence, a generalized likelihood ratio test (GLRT)-based approach is adopted. Analysis of the probabilities of false alarm and missed detection of the proposed method reveals that the error exponent with respect to M is zero. Also, the fusion of N individual decisions offers a diversity advantage, similar to diversity reception in communication systems, and a tight bound on the error exponent is presented. Through an analysis in the low-power regime, the number of observations needed to achieve a given probability of error is determined as a function of the received power. Monte Carlo simulations confirm the accuracy of the analysis.
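
A much-simplified stand-in for the decentralized scheme is sketched below: each sensor applies a plain energy detector to its M observations (rather than the paper's GLRT) and sends a hard decision to a fusion center that uses a majority rule. The fading model, SNR and threshold are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 50            # sensors, observations per sensor
snr = 0.5               # primary-signal power relative to unit noise power (assumed)
threshold = 1.3         # per-sensor energy threshold (assumed, not optimized)

def sense(primary_on):
    decisions = []
    for _ in range(N):
        h = rng.rayleigh(scale=1.0) if primary_on else 0.0   # unknown fading gain
        x = h * np.sqrt(snr) * rng.normal(size=M) + rng.normal(size=M)
        decisions.append(np.mean(x ** 2) > threshold)        # local energy detector
    return sum(decisions) > N // 2                           # majority-rule fusion

trials = 2000
p_fa = np.mean([sense(False) for _ in range(trials)])
p_md = 1.0 - np.mean([sense(True) for _ in range(trials)])
print(f"P_false_alarm ~ {p_fa:.3f}, P_missed_detection ~ {p_md:.3f}")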

Relevance: 80.00%

Abstract:

Low-density parity-check (LDPC) codes are a class of linear block codes that are decoded by running the belief propagation (BP) algorithm or log-likelihood ratio belief propagation (LLR-BP) over the factor graph of the code. One of the disadvantages of LDPC codes is the onset of an error floor at high values of signal-to-noise ratio, caused by trapping sets. In this paper, we propose a two-stage decoder to deal with different types of trapping sets. Oscillating trapping sets are taken care of by the first stage of the decoder, and the elementary trapping sets are handled by the second stage. Simulation results on the regular PEG (504,252,3,6) code and the irregular PEG (1024,518,15,8) code show that the proposed two-stage decoder performs significantly better than the standard decoder.
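
For reference, a minimal min-sum variant of LLR belief-propagation decoding is sketched below on a toy parity-check matrix; it is the standard single-stage decoder, not the proposed two-stage scheme, and the PEG codes mentioned above are not reproduced.

import numpy as np

# Toy parity-check matrix (row weight 3, column weight 2); the all-zero word is a codeword.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def min_sum_decode(llr, H, max_iter=20):
    m, n = H.shape
    msg = np.zeros((m, n))                        # check-to-variable messages
    for _ in range(max_iter):
        total = llr + msg.sum(axis=0)             # posterior LLR per variable
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):            # stop once all parity checks hold
            return hard
        v2c = H * (total - msg)                   # extrinsic variable-to-check messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            vals = v2c[i, idx]
            for j_pos, j in enumerate(idx):
                others = np.delete(vals, j_pos)
                msg[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return (llr + msg.sum(axis=0) < 0).astype(int)

# Channel LLRs for a transmitted all-zero codeword with one unreliable bit (index 1).
llr = np.array([2.0, -0.5, 1.5, 2.5, 1.8, 2.2])
print("decoded:", min_sum_decode(llr, H))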

Relevance: 80.00%

Abstract:

This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood ratio based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to the presence of navigation data bits and to frequency, phase and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance offered by the proposed detector in the presence of model uncertainties.
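
The conventional PDI baseline referred to above can be sketched as follows: the correlator output is split into short coherent blocks (so that unknown data-bit transitions and residual frequency error cost less), and the squared magnitudes of the block sums are accumulated non-coherently. The amplitude, block length and bit pattern below are illustrative assumptions, not the paper's settings.

import numpy as np

rng = np.random.default_rng(0)
n_blocks, block_len = 20, 50      # 20 coherent blocks of 50 samples each (assumed)
amp = 0.15                        # weak-signal amplitude (assumed)

bits = rng.choice([-1.0, 1.0], size=n_blocks)        # unknown navigation data bits,
signal = np.repeat(bits, block_len) * amp            # one bit per coherent block here
noise = (rng.normal(size=n_blocks * block_len)
         + 1j * rng.normal(size=n_blocks * block_len)) / np.sqrt(2)
x = signal + noise

# Coherent integration within each block, then non-coherent (squared) accumulation.
blocks = x.reshape(n_blocks, block_len).sum(axis=1)
pdi_statistic = np.sum(np.abs(blocks) ** 2)
print("PDI test statistic:", round(pdi_statistic, 1))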

Relevance: 80.00%

Abstract:

This paper presents the formulation and performance analysis of four techniques for detection of a narrowband acoustic source in a shallow range-independent ocean using an acoustic vector sensor (AVS) array. The array signal vector is not known due to the unknown location of the source. Hence all detectors are based on a generalized likelihood ratio test (GLRT) which involves estimation of the array signal vector. One non-parametric and three parametric (model-based) signal estimators are presented. It is shown that there is a strong correlation between the detector performance and the mean-square signal estimation error. Theoretical expressions for probability of false alarm and probability of detection are derived for all the detectors, and the theoretical predictions are compared with simulation results. It is shown that the detection performance of an AVS array with a certain number of sensors is equal to or slightly better than that of a conventional acoustic pressure sensor array with thrice as many sensors.
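
As a schematic GLRT of the kind described (far simpler than the shallow-ocean AVS model): the array signal vector is unknown, the noise is white Gaussian with known variance, and the GLRT replaces the signal vector by its maximum likelihood estimate, the snapshot mean, yielding an energy statistic on that estimate. All values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_snapshots, sigma2 = 8, 64, 1.0

def glrt_statistic(snapshots):
    s_hat = snapshots.mean(axis=1)              # ML estimate of the unknown signal vector
    return n_snapshots * np.sum(np.abs(s_hat) ** 2) / sigma2

noise = (rng.normal(size=(n_sensors, n_snapshots))
         + 1j * rng.normal(size=(n_sensors, n_snapshots))) / np.sqrt(2)
s = 0.3 * np.exp(1j * rng.uniform(0, 2 * np.pi, size=n_sensors))   # unknown signal vector
print("statistic under noise only:", round(glrt_statistic(noise), 1))
print("statistic with signal present:", round(glrt_statistic(s[:, None] + noise), 1))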

Relevance: 80.00%

Abstract:

Selection of relevant features is an open problem in brain-computer interface (BCI) research. Sometimes the features extracted from brain signals are high-dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves the performance of the classifier and reduces the computational cost of the system. In this study, we have used a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG) based BCI dataset. Here, we have employed the Discrete Wavelet Transform to obtain a high-dimensional feature set and classified it with the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
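
A minimal sketch of DWT-based feature extraction of the sort used above is given below with PyWavelets on a synthetic single-channel signal; the wavelet, decomposition level and summary statistics are assumptions, not the study's settings, and the BFO/Learning Automata selector is not reproduced.

import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 512)
signal = np.sin(2 * np.pi * 10 * t) + rng.normal(size=512)   # synthetic one-channel "EEG"

coeffs = pywt.wavedec(signal, 'db4', level=4)     # [cA4, cD4, cD3, cD2, cD1]
features = np.array([f(c) for c in coeffs for f in (np.mean, np.std)])
print("feature vector length:", features.size)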

Relevance: 80.00%

Abstract:

Speech enhancement in stationary noise is addressed using the ideal channel selection framework. In order to estimate the binary mask, we propose to classify each time-frequency (T-F) bin of the noisy signal as speech or noise using Discriminative Random Fields (DRF). The DRF function contains two terms: an enhancement function and a smoothing term. On each T-F bin, we propose to use an enhancement function based on a likelihood ratio test for speech presence, while an Ising model is used as the smoothing function for spectro-temporal continuity in the estimated binary mask. The effect of the smoothing function over successive iterations is found to reduce musical noise compared with using only the enhancement function. The binary mask is inferred from the noisy signal using the Iterated Conditional Modes (ICM) algorithm. Sentences from the NOIZEUS corpus are evaluated from 0 dB to 15 dB Signal-to-Noise Ratio (SNR) in four kinds of additive noise: white Gaussian noise, car noise, street noise and pink noise. The reconstructed speech using the proposed technique is evaluated in terms of average segmental SNR, Perceptual Evaluation of Speech Quality (PESQ) and Mean Opinion Score (MOS).
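
A schematic sketch of the mask inference step is given below: a per-bin score stands in for the likelihood-ratio (enhancement) term, an Ising-style term rewards agreement with the four neighbours, and the labels are updated by a parallel (synchronous) simplification of ICM. The evidence values and smoothing weight are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
evidence = rng.normal(size=(64, 100))   # stand-in for the per-bin log likelihood ratio
beta = 0.8                              # Ising smoothing weight (assumed)

mask = (evidence > 0).astype(int)       # initial mask from the local term only
for _ in range(10):                     # parallel (synchronous) ICM-style sweeps
    padded = np.pad(mask, 1)            # out-of-range bins treated as non-speech
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    score_speech = evidence + beta * neighbours      # local evidence + agreement reward
    score_noise = beta * (4 - neighbours)
    mask = (score_speech > score_noise).astype(int)
print("fraction of bins labelled speech:", round(mask.mean(), 3))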

Relevance: 80.00%

Abstract:

This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.

In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.

The second chapter characterizes a decision maker with sticky beliefs. That is, a decision maker who does not update enough in response to information, where enough means as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
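
In notation assumed here for concreteness (not taken verbatim from the thesis), the sticky-belief representation can be written, for an observed event $E$, a prior $\mu$ and a stickiness parameter $\lambda \in [0,1]$, as

\[
\mu_E(\cdot) \;=\; \lambda\,\mu(\cdot) \;+\; (1-\lambda)\,\frac{\mu(\cdot \cap E)}{\mu(E)},
\]

so that $\lambda = 0$ recovers Bayesian updating and larger $\lambda$ corresponds to greater belief stickiness (conservatism).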

The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.

Relevance: 80.00%

Abstract:

Fish growth is commonly estimated from length-at-age data obtained from otoliths. There are several techniques for estimating length-at-age from otoliths, including 1) directly observed counts of annual increments; 2) age adjustment based on a categorization of otolith margins; 3) age adjustment based on known periods of spawning and annuli formation; 4) back-calculation to all annuli; and 5) back-calculation to the last annulus only. In this study we compared growth estimates (von Bertalanffy growth functions) obtained from the above five methods for estimating length-at-age from otoliths for two large scombrids: narrow-barred Spanish mackerel (Scomberomorus commerson) and broad-barred king mackerel (Scomberomorus semifasciatus). Likelihood ratio tests revealed that the largest differences in growth occurred between the back-calculation methods and the observed and adjusted methods for both species of mackerel. The pattern, however, was more pronounced for S. commerson than for S. semifasciatus, because of the pronounced effect of gear selectivity demonstrated for S. commerson. We propose a method of substituting length-at-age data from observed or adjusted methods with back-calculated length-at-age data to provide more appropriate estimates of population growth than those obtained with the individual methods alone, particularly when faster-growing young fish are disproportionately selected for. Substitution of observed or adjusted length-at-age data with back-calculated length-at-age data provided more realistic estimates of length at younger ages than the observed or adjusted methods alone, as well as more realistic estimates of mean maximum length than those derived from the back-calculation methods alone.
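
For concreteness, the sketch below fits a von Bertalanffy growth function, L(t) = Linf * (1 - exp(-k * (t - t0))), to simulated length-at-age data with SciPy; the data and starting values are illustrative assumptions, not the mackerel data.

import numpy as np
from scipy.optimize import curve_fit

def vbgf(age, linf, k, t0):
    # von Bertalanffy growth function: L(t) = Linf * (1 - exp(-k * (t - t0)))
    return linf * (1.0 - np.exp(-k * (age - t0)))

rng = np.random.default_rng(0)
ages = rng.uniform(0.5, 12.0, size=150)
lengths = vbgf(ages, 120.0, 0.3, -0.5) + rng.normal(scale=5.0, size=150)

params, _ = curve_fit(vbgf, ages, lengths, p0=[100.0, 0.2, 0.0])
print("Linf, k, t0 =", np.round(params, 3))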

Relevance: 80.00%

Abstract:

Growth of a temperate reef-associated fish, the purple wrasse (Notolabrus fucicola), was examined at two sites on the east coast of Tasmania by using age- and length-based models. Models based on the von Bertalanffy growth function, in the standard and a reparameterized form, were constructed by using otolith-derived age estimates. Growth trajectories from tag-recaptures were used to construct length-based growth models derived from the GROTAG model, in turn a reparameterization of the Fabens model. Likelihood ratio tests (LRTs) determined the optimal parameterization of the GROTAG model, including estimators of individual growth variability, seasonal growth, measurement error, and outliers for each data set. Growth models and parameter estimates were compared by bootstrap confidence intervals, LRTs, randomization tests, and plots of bootstrap parameter estimates. The relative merit of these methods for comparing models and parameters was evaluated; LRTs combined with bootstrapping and randomization tests provided the most insight into the relationships between parameter estimates. Significant differences in growth of purple wrasse were found between sites in both length- and age-based models. A significant difference in the peak growth season was found between sites, and a large difference in growth rate between sexes was found at one site with the use of length-based models.
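
The Fabens form that GROTAG reparameterizes models the expected growth increment of a tagged fish as (Linf - L1) * (1 - exp(-k * dt)). The sketch below fits that basic form to simulated tag-recapture data; the individual variability, seasonality and measurement-error components that GROTAG adds are omitted, and the data are not the purple wrasse data.

import numpy as np
from scipy.optimize import curve_fit

def fabens(X, linf, k):
    # Expected growth increment for a fish of length L1 at liberty for dt years.
    length_at_release, dt = X
    return (linf - length_at_release) * (1.0 - np.exp(-k * dt))

rng = np.random.default_rng(0)
L1 = rng.uniform(100.0, 250.0, size=120)          # length at release (mm)
dt = rng.uniform(0.3, 3.0, size=120)              # years at liberty
dL = fabens((L1, dt), 300.0, 0.25) + rng.normal(scale=6.0, size=120)

params, _ = curve_fit(fabens, (L1, dt), dL, p0=[280.0, 0.2])
print("Linf, k =", np.round(params, 3))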