949 results for Linear quadratic Gaussian
Abstract:
In this paper we propose exact likelihood-based mean-variance efficiency tests of the market portfolio in the context of the Capital Asset Pricing Model (CAPM), allowing for a wide class of error distributions which includes normality as a special case. These tests are developed in the framework of multivariate linear regressions (MLR). It is well known, however, that despite their simple statistical structure, standard asymptotically justified MLR-based tests are unreliable. In financial econometrics, exact tests have been proposed for a few specific hypotheses [Jobson and Korkie (Journal of Financial Economics, 1982), MacKinlay (Journal of Financial Economics, 1987), Gibbons, Ross and Shanken (Econometrica, 1989), Zhou (Journal of Finance, 1993)], most of which depend on normality. For the Gaussian model, our tests correspond to Gibbons, Ross and Shanken's mean-variance efficiency tests. In non-Gaussian contexts, we reconsider mean-variance efficiency tests allowing for multivariate Student-t and Gaussian mixture errors. Our framework allows us to shed more light on whether the normality assumption is too restrictive when testing the CAPM. We also propose exact multivariate diagnostic checks (including tests for multivariate GARCH and a multivariate generalization of the well-known variance ratio tests) and goodness-of-fit tests, as well as a set estimate for the intervening nuisance parameters. Our results [over five-year subperiods] show the following: (i) multivariate normality is rejected in most subperiods, (ii) residual checks reveal no significant departures from the multivariate i.i.d. assumption, and (iii) mean-variance efficiency of the market portfolio is not rejected as frequently once we allow for the possibility of non-normal errors.
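For the Gaussian case, these tests reduce to the Gibbons–Ross–Shanken (GRS) statistic. A minimal sketch of that textbook single-factor form may help fix ideas (variable names are illustrative, not the authors' code):

```python
import numpy as np
from scipy import stats

def grs_test(excess_returns, market_excess):
    """GRS F-test of H0: all intercepts are zero in r_it = a_i + b_i*r_mt + e_it."""
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), market_excess])        # intercept + market factor
    B, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)  # OLS, equation by equation
    resid = excess_returns - X @ B
    Sigma = resid.T @ resid / T                             # MLE residual covariance
    alpha = B[0]                                            # estimated intercepts
    sharpe2 = market_excess.mean() ** 2 / market_excess.var()  # squared market Sharpe (MLE)
    F = (T - N - 1) / N * (alpha @ np.linalg.solve(Sigma, alpha)) / (1 + sharpe2)
    return F, stats.f.sf(F, N, T - N - 1)                   # statistic and p-value
```

Under normal errors this statistic follows an exact F(N, T−1−N) distribution; the likelihood-based tests proposed above extend this exactness to Student-t and Gaussian-mixture errors.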
Abstract:
In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR) with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the Monte Carlo (MC) test procedure to avoid Bonferroni-type bounds. Since non-Gaussian-based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test's significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
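The MC test idea on which the combination strategy rests can be sketched in a few lines (illustrative names; `simulate_stat` must draw the test statistic under the null hypothesis):

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, rng=None):
    """Exact Monte Carlo p-value: for a pivotal statistic, rejecting when
    p <= alpha has size exactly alpha whenever alpha*(n_rep + 1) is an integer."""
    rng = np.random.default_rng(rng)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (np.sum(sims >= stat_obs) + 1) / (n_rep + 1)
```

The MMC variant maximizes this p-value over the nuisance parameters (e.g., the Student t degrees of freedom) before comparing it with the significance level.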
Abstract:
The paper summarizes the design and implementation of a quadratic edge-detection filter, based on the Volterra series, for enhancing calcifications in mammograms. The proposed filter can account for much of the polynomial nonlinearity inherent in the input mammogram image and can replace conventional edge detectors such as the Laplacian, Gaussian, etc. The filter gives rise to improved visualization and earlier detection of microcalcifications, which, if left undetected, can lead to breast cancer. The performance of the filter is analyzed and found superior to conventional spatial edge detectors.
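The abstract does not give the filter's kernels; as a stand-in illustration (not a reproduction of the proposed filter), one classical quadratic second-order Volterra operator used for edge and energy enhancement is a 2-D Teager-like filter:

```python
import numpy as np

def teager_2d(x):
    """2-D Teager-like quadratic Volterra operator:
    y[m,n] = 2*x[m,n]^2 - x[m,n-1]*x[m,n+1] - x[m-1,n]*x[m+1,n]."""
    x = x.astype(float)
    y = np.zeros_like(x)
    y[1:-1, 1:-1] = (2 * x[1:-1, 1:-1] ** 2
                     - x[1:-1, :-2] * x[1:-1, 2:]    # horizontal cross term
                     - x[:-2, 1:-1] * x[2:, 1:-1])   # vertical cross term
    return y
```

Like the filter described, this operator responds to local intensity curvature rather than first differences alone, which is what makes quadratic filters attractive for small, low-contrast structures such as microcalcifications.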
Abstract:
The basic concepts of digital signal processing are taught to students in engineering and science, with the focus on linear, time-invariant systems. The question of what happens when the system is governed by a quadratic or cubic equation remains unanswered in the vast majority of the signal processing literature. Light was shed on this problem when John V. Mathews and Giovanni L. Sicuranza published the book Polynomial Signal Processing. This book opened up an unseen vista of polynomial systems for signal and image processing, presenting the theory and implementations of both adaptive and non-adaptive FIR and IIR quadratic systems, which offer improved performance over conventional linear systems. The theory of quadratic systems is a largely unexplored area of research involving computationally intensive work. Once the area of research was selected, the next issue was the choice of software tool to carry out the work. Conventional languages like C and C++ were eliminated early, as they are not interpreted and lack good-quality plotting libraries. MATLAB proved to be very slow, as did SCILAB and Octave. The search for a scientific computing language that was as fast as C, but with a good-quality plotting library, ended with Python, a distant relative of LISP, which proved ideal for scientific computing. An account of the use of Python, its scientific computing package scipy and the plotting library pylab is given in the appendix. Initially, the work focused on designing predictors that exploit the polynomial nonlinearities inherent in speech generation mechanisms. Soon the work shifted to medical image processing, which offered more potential for quadratic methods. The major focus in this area is on quadratic edge-detection methods for retinal images and fingerprints, as well as de-noising raw MRI signals.
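A minimal least-squares version of the quadratic predictors mentioned, in the Python/numpy toolchain the thesis adopts (the lag order and names are illustrative):

```python
import numpy as np

def quadratic_fir_predict(x, p=2):
    """One-step quadratic FIR (second-order Volterra) predictor fitted by
    least squares: x[n] ~ sum_i a_i x[n-i] + sum_{i<=j} b_ij x[n-i] x[n-j]."""
    N = len(x)
    lags = np.column_stack([x[p - i:N - i] for i in range(1, p + 1)])  # linear terms
    quad = np.column_stack([lags[:, i] * lags[:, j]
                            for i in range(p) for j in range(i, p)])   # quadratic terms
    Phi = np.column_stack([lags, quad])
    theta, *_ = np.linalg.lstsq(Phi, x[p:], rcond=None)
    return Phi @ theta, theta   # in-sample predictions and coefficients
```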
Abstract:
This work presents the Bayes invariant quadratic unbiased estimator, for short BAIQUE. A Bayesian approach is used here to estimate the covariance functions of the regionalized variables which appear in the spatial covariance structure of a mixed linear model. First, a brief review of spatial processes, variance-covariance component structures and Bayesian inference is given, since this project deals with these concepts. Then the linear equation system corresponding to BAIQUE in the general case is formulated. The Bayes estimator of variance components with many unknown parameters is too complicated to solve analytically. Hence, in order to make the system tractable, the BAIQUE of a spatial covariance model with two parameters is considered. The Bayesian estimate arises as the solution of a linear equation system, which requires the covariance functions to be linear in the parameters. The availability of prior information on the parameters is assumed. This information includes prior distribution functions, which make it possible to find the first and second moment matrices. The Bayesian estimation suggested here depends only on the second moment of the prior distribution. The estimator takes the form of a quadratic form y'Ay, where y is the vector of filtered data observations. This quadratic estimator is used to estimate a linear function of the unknown variance components. The matrix A of BAIQUE plays an important role: if such a symmetric matrix exists, the Bayes risk becomes minimal and the unbiasedness conditions are fulfilled. Therefore, the symmetry of this matrix is elaborated in this work. By dealing with an infinite series of matrices, a representation of the matrix A is obtained which shows the symmetry of A. In this context, the largest singular value of the decomposed matrix of the infinite series is used to handle the convergence condition, and it is also connected with Gershgorin discs and the Poincaré theorem. Then the BAIQUE model for some experimental designs is computed and compared. The comparison deals with different aspects, such as the influence of the position of the design points in a fixed interval. The designs considered are those with their points distributed in the interval [0, 1]. These experimental structures are compared with respect to the Bayes risk and to norms of the matrices corresponding to distances, covariance structures and matrices which have to satisfy the convergence condition. Different types of regression functions and distance measures are also handled. The influence of scaling on the design points is studied; moreover, the influence of the covariance structure on the best design is investigated, and different covariance structures are considered. Finally, BAIQUE is applied to real data, and the corresponding outcomes are compared with the results of other methods on the same data. The special BAIQUE that estimates the overall variance of the data achieves results very close to the classical empirical variance.
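As a concrete anchor for the quadratic form y'Ay, note that the classical empirical variance (the special case mentioned at the end) is itself such an estimator, with A the standard centring construction rather than the BAIQUE solution:

```python
import numpy as np

def quad_form(y, A):
    """Evaluate the quadratic-form estimator y' A y (A symmetrized first,
    since only the symmetric part contributes)."""
    A = 0.5 * (A + A.T)
    return y @ A @ y

y = np.random.default_rng(0).normal(size=50)
n = len(y)
A = (np.eye(n) - np.ones((n, n)) / n) / (n - 1)    # centring matrix / (n - 1)
assert np.isclose(quad_form(y, A), y.var(ddof=1))  # equals the empirical variance
```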
Abstract:
The relationship between minimum variance and minimum expected quadratic loss feedback controllers for linear univariate discrete-time stochastic systems is reviewed by taking the approach used by Caines. It is shown how the two methods can be regarded as providing identical control actions as long as a noise-free measurement state-space model is employed.
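A minimal scalar illustration of the reviewed equivalence (parameters illustrative): for x[k+1] = a·x[k] + b·u[k] + e[k+1] with white noise e, the minimum-variance law u[k] = −(a/b)·x[k] also minimizes the expected quadratic loss E[x[k+1]²], and the closed-loop output reduces to the noise itself:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 0.9, 0.5, 10_000
e = rng.normal(size=n)                  # unit-variance white noise
x = np.zeros(n)
for k in range(n - 1):
    u = -(a / b) * x[k]                 # minimum-variance feedback
    x[k + 1] = a * x[k] + b * u + e[k + 1]
print(np.var(x[1:]), np.var(e))         # both close to 1: output variance = noise variance
```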
Abstract:
Radial basis function networks can be trained quickly using linear optimisation once centres and other associated parameters have been initialised. The authors propose a small adjustment to a well-accepted initialisation algorithm, which improves the network accuracy over a range of problems. The algorithm is described and results are presented.
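The abstract does not detail the algorithm; a hedged sketch of the standard pipeline it refers to (k-means centre initialisation, one shared width set by a conventional heuristic, then linear least squares for the output weights):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def train_rbf(X, y, n_centres=10):
    """Gaussian RBF network: initialise centres by k-means, set a shared
    width from the maximum centre spacing (a common heuristic), then solve
    the output weights by linear least squares."""
    centres, _ = kmeans2(X, n_centres, minit='++')
    d_max = np.linalg.norm(centres[:, None] - centres[None], axis=-1).max()
    sigma = d_max / np.sqrt(2 * n_centres)
    Phi = np.exp(-np.linalg.norm(X[:, None] - centres[None], axis=-1) ** 2
                 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, sigma, w
```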
Abstract:
This paper addresses the problem of tracking line segments in on-line handwriting obtained through a digitizer tablet. The approach is based on Kalman filtering to model linear portions of on-line handwriting, particularly handwritten numerals, and to detect abrupt changes in writing direction underlying a model change. This approach uses a Kalman filter framework constrained by a normalized line equation, where quadratic terms are linearized through a first-order Taylor expansion. The modeling is then carried out under the assumption that the state is deterministic and time-invariant, while the detection relies on a double-thresholding mechanism which tests for violations of this assumption. The first threshold is based on the kinetics of the pen layout; the second takes into account the jump in angle between the previously observed direction of the layout and its current direction. The proposed method enables real-time processing. To illustrate the methodology, some results obtained on handwritten numerals are presented.
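A sketch consistent with the constrained formulation described (state = normalized line parameters (a, b, c), the quadratic unit-norm constraint linearized by a first-order Taylor expansion, no process noise); the noise levels here are illustrative, not the authors' exact equations:

```python
import numpy as np

def line_kf_update(theta, P, point, r=1.0):
    """One Kalman update of line parameters theta = (a, b, c).
    Pseudo-measurements: the pen sample (x, y) lies on the line
    (a*x + b*y + c = 0) and a^2 + b^2 = 1 (linearized). The state is
    treated as deterministic and time-invariant (no process noise)."""
    x, y = point
    h = np.array([theta[0] * x + theta[1] * y + theta[2],   # point-on-line residual
                  theta[0] ** 2 + theta[1] ** 2 - 1.0])     # norm-constraint residual
    H = np.array([[x, y, 1.0],
                  [2 * theta[0], 2 * theta[1], 0.0]])       # first-order Taylor Jacobian
    S = H @ P @ H.T + np.diag([r, 1e-4])
    K = P @ H.T @ np.linalg.inv(S)
    theta = theta - K @ h                                   # innovation is (0 - h)
    P = (np.eye(3) - K @ H) @ P
    return theta, P
```

A persistently large innovation h then signals a violation of the constant-state assumption, i.e., the direction change the double-thresholding mechanism is designed to detect.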
Abstract:
Feedback design for a second-order control system leads to an eigenstructure assignment problem for a quadratic matrix polynomial. It is desirable that the feedback controller not only assigns specified eigenvalues to the second-order closed loop system but also that the system is robust, or insensitive to perturbations. We derive here new sensitivity measures, or condition numbers, for the eigenvalues of the quadratic matrix polynomial and define a measure of the robustness of the corresponding system. We then show that the robustness of the quadratic inverse eigenvalue problem can be achieved by solving a generalized linear eigenvalue assignment problem subject to structured perturbations. Numerically reliable methods for solving the structured generalized linear problem are developed that take advantage of the special properties of the system in order to minimize the computational work required. In this part of the work we treat the case where the leading coefficient matrix in the quadratic polynomial is nonsingular, which ensures that the polynomial is regular. In a second part, we will examine the case where the open loop matrix polynomial is not necessarily regular.
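For context, the textbook companion linearization that turns the quadratic problem into a generalized linear eigenvalue problem (assuming, as in this part of the work, that the leading coefficient matrix M is nonsingular):

```python
import numpy as np
from scipy.linalg import eig

def quadeig(M, C, K):
    """Eigenvalues of Q(s) = s^2*M + s*C + K via companion linearization:
    with z = [x; s*x], the pencil A z = s B z reproduces Q(s) x = 0."""
    n = M.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    A = np.block([[Z, I],
                  [-K, -C]])
    B = np.block([[I, Z],
                  [Z, M]])
    return eig(A, B, right=False)
```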
Abstract:
A new incremental four-dimensional variational (4D-Var) data assimilation algorithm is introduced. The algorithm does not require the computationally expensive integrations with the nonlinear model in the outer loops. Nonlinearity is accounted for by modifying the linearization trajectory of the observation operator based on integrations with the tangent linear (TL) model. This allows us to update the linearization trajectory of the observation operator in the inner loops at negligible computational cost. As a result the distinction between inner and outer loops is no longer necessary. The key idea on which the proposed 4D-Var method is based is that by using Gaussian quadrature it is possible to get an exact correspondence between the nonlinear time evolution of perturbations and the time evolution in the TL model. It is shown that J-point Gaussian quadrature can be used to derive the exact adjoint-based observation impact equations and furthermore that it is straightforward to account for the effect of multiple outer loops in these equations if the proposed 4D-Var method is used. The method is illustrated using a three-level quasi-geostrophic model and the Lorenz (1996) model.
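The quadrature idea can be checked in one dimension (a toy, not the quasi-geostrophic setting): for a quadratic nonlinearity, the integral form of the perturbation evolution is integrated exactly by one-point Gaussian quadrature, so the TL model evaluated on the midpoint trajectory reproduces the nonlinear difference exactly.

```python
import numpy as np

# For quadratic f, f(x + d) - f(x) = \int_0^1 J_f(x + s*d) * d ds, and
# 1-point Gauss-Legendre quadrature (node s = 1/2) integrates this exactly.
f = lambda x: x ** 2
J = lambda x: 2 * x              # tangent-linear (Jacobian) of f
x, d = 1.3, 0.7
exact = f(x + d) - f(x)          # nonlinear evolution of the perturbation
tl_mid = J(x + 0.5 * d) * d      # TL on the quadrature (midpoint) trajectory
assert np.isclose(exact, tl_mid)
```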
Abstract:
Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are being used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. The article concludes with a discussion of the appropriateness of these measures of observation impact for different situations.
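A scalar Gaussian benchmark for the three measures (standard formulas, not the paper's Gaussian-mixture derivation) makes the reported behaviour visible: the posterior-mean sensitivity equals the Kalman gain (proportional to the posterior variance), relative entropy depends on the observed value y, and mutual information does not.

```python
import numpy as np

def gaussian_obs_impact(mu_b, s2_b, y, s2_o):
    """Scalar Gaussian case: prior N(mu_b, s2_b), observation error N(0, s2_o)."""
    k = s2_b / (s2_b + s2_o)                  # Kalman gain = d(mu_a)/dy = s2_a/s2_o
    mu_a = mu_b + k * (y - mu_b)
    s2_a = (1 - k) * s2_b
    mutual_info = 0.5 * np.log(s2_b / s2_a)   # independent of y
    rel_entropy = 0.5 * (np.log(s2_b / s2_a) + s2_a / s2_b
                         + (mu_a - mu_b) ** 2 / s2_b - 1)   # KL(posterior || prior)
    return mu_a, k, mutual_info, rel_entropy
```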
Abstract:
We examine differential equations where nonlinearity is a result of the advection part of the total derivative or the use of quadratic algebraic constraints between state variables (such as the ideal gas law). We show that these types of nonlinearity can be accounted for in the tangent linear model by a suitable choice of the linearization trajectory. Using this optimal linearization trajectory, we show that the tangent linear model can be used to reproduce the exact nonlinear error growth of perturbations for more than 200 days in a quasi-geostrophic model and more than (the equivalent of) 150 days in the Lorenz 96 model. We introduce an iterative method, purely based on tangent linear integrations, that converges to this optimal linearization trajectory. The main conclusion from this article is that this iterative method can be used to account for nonlinearity in estimation problems without using the nonlinear model. We demonstrate this by performing forecast sensitivity experiments in the Lorenz 96 model and show that we are able to estimate analysis increments that improve the two-day forecast using only four backward integrations with the tangent linear model.
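The midpoint version of this optimal trajectory can be verified directly at the tendency level in the Lorenz 96 model, whose only nonlinearity is quadratic advection (a sketch; the article's results concern full time integrations):

```python
import numpy as np

def l96_tend(x, F=8.0):
    """Lorenz 96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def l96_tl(x, dx):
    """Tangent linear of l96_tend about the state x."""
    return ((np.roll(dx, -1) - np.roll(dx, 2)) * np.roll(x, 1)
            + (np.roll(x, -1) - np.roll(x, 2)) * np.roll(dx, 1) - dx)

rng = np.random.default_rng(2)
x = rng.normal(size=40) + 8.0
d = 0.1 * rng.normal(size=40)
# TL about the midpoint trajectory reproduces the nonlinear difference exactly:
assert np.allclose(l96_tend(x + d) - l96_tend(x), l96_tl(x + 0.5 * d, d))
```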
Abstract:
The validity of approximating radiative heating rates in the middle atmosphere by a local linear relaxation to a reference temperature state (i.e., "Newtonian cooling") is investigated. Using radiative heating rate and temperature output from a chemistry–climate model with realistic spatiotemporal variability and realistic chemical and radiative parameterizations, it is found that a linear regression model can capture more than 80% of the variance in longwave heating rates throughout most of the stratosphere and mesosphere, provided that the damping rate is allowed to vary with height, latitude, and season. The linear model describes departures from the climatological mean, not from radiative equilibrium. Photochemical damping rates in the upper stratosphere are similarly diagnosed. Three important exceptions, however, are found. The approximation of linearity breaks down near the edges of the polar vortices in both hemispheres; this nonlinearity can be well captured by including a quadratic term. The use of a scale-independent damping rate is not well justified in the lower tropical stratosphere because of the presence of a broad spectrum of vertical scales. The local assumption fails entirely during the breakup of the Antarctic vortex, where large fluctuations in temperature near the top of the vortex influence longwave heating rates within the quiescent region below. These results are relevant for mechanistic modeling studies of the middle atmosphere, particularly those investigating the final Antarctic warming.
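A sketch of the kind of pointwise regression described (illustrative; the study fits damping rates varying with height, latitude, and season): regress longwave heating-rate anomalies on temperature anomalies, with the optional quadratic term that captures the vortex-edge nonlinearity.

```python
import numpy as np

def newtonian_cooling_fit(T_anom, Q_anom):
    """Fit Q' = -alpha*T' + c*T'^2 at one grid point, where primes denote
    departures from the climatological mean (not radiative equilibrium)."""
    X = np.column_stack([T_anom, T_anom ** 2])
    coef, *_ = np.linalg.lstsq(X, Q_anom, rcond=None)
    alpha = -coef[0]                                      # linear damping rate
    r2 = 1 - np.var(Q_anom - X @ coef) / np.var(Q_anom)   # variance captured
    return alpha, coef[1], r2
```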