998 results for SPECKLE MODEL ESTIMATOR


Relevance:

30.00%

Publisher:

Abstract:

Many modern stock assessment methods provide the machinery for determining the status of a stock in relation to certain reference points and for estimating how quickly a stock can be rebuilt. However, these methods typically require catch data, which are not always available. We introduce a model-based framework for estimating reference points, stock status, and recovery times in situations where catch data and other measures of absolute abundance are unavailable. The specific estimator developed is essentially an age-structured production model recast in terms relative to pre-exploitation levels. A Bayesian estimation scheme is adopted to allow the incorporation of pertinent auxiliary information such as might be obtained from meta-analyses of similar stocks or anecdotal observations. The approach is applied to the population of goliath grouper (Epinephelus itajara) off southern Florida, for which there are three indices of relative abundance but no reliable catch data. The results confirm anecdotal accounts of a marked decline in abundance during the 1980s followed by a substantial increase after the harvest of goliath grouper was banned in 1990. The ban appears to have reduced fishing pressure to between 10% and 50% of the levels observed during the 1980s. Nevertheless, the predicted fishing mortality rate under the ban appears to remain substantial, perhaps owing to illegal harvest and depth-related release mortality. As a result, the base model predicts that there is less than a 40% chance that the spawning biomass will recover to a level that would produce a 50% spawning potential ratio.
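The core idea, a production model expressed relative to pre-exploitation levels so that absolute catches are not needed, lends itself to a compact sketch. The following is a minimal illustration under assumed logistic dynamics and hypothetical parameter values, not the authors' age-structured Bayesian estimator:

```python
# Minimal sketch: surplus-production dynamics expressed relative to
# pre-exploitation biomass, so no absolute catch data are required.
def relative_depletion(r, F, d0=1.0):
    """d[t] = biomass / pre-exploitation biomass; F[t] = relative fishing mortality."""
    d = [d0]
    for Ft in F:
        growth = r * d[-1] * (1.0 - d[-1])   # logistic surplus production
        removals = Ft * d[-1]                # removals in relative terms
        d.append(max(d[-1] + growth - removals, 1e-9))
    return d

# Example: heavy fishing pressure followed by a harvest ban (values illustrative).
trajectory = relative_depletion(r=0.3, F=[0.4] * 10 + [0.1] * 15)
```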

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model using an extended non-negative sparse coding (NNSC) algorithm that we proposed. This algorithm converges to feature basis vectors that are localized and oriented in both the spatial and frequency domains. Here, we demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, by exploiting an NIG-based maximum a posteriori (MAP) estimator of an image corrupted by additive Gaussian noise, the noise can be reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated by values of the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. We also compare the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage, and the Wiener filter. The simulation results show that our method outperforms the three denoising approaches mentioned above.
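As a rough sketch of the MAP shrinkage step: a coefficient observed in additive Gaussian noise can be shrunk by maximizing the NIG log-prior plus the Gaussian log-likelihood. The parameter values and the grid search below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np
from scipy.special import k1  # modified Bessel function of the second kind, order 1

def nig_logpdf(x, alpha, beta, mu, delta):
    """Log-density of the normal inverse Gaussian (NIG) distribution."""
    gamma = np.sqrt(alpha**2 - beta**2)
    s = np.sqrt(delta**2 + (x - mu)**2)
    return (np.log(alpha * delta / np.pi) - np.log(s)
            + np.log(k1(alpha * s)) + delta * gamma + beta * (x - mu))

def map_shrink(y, sigma, alpha=2.0, beta=0.0, mu=0.0, delta=1.0):
    """MAP shrinkage of a noisy coefficient y = x + N(0, sigma^2) under an
    NIG prior on x, located by a coarse grid search (illustrative only)."""
    xs = np.linspace(y - 5 * sigma, y + 5 * sigma, 2001)
    log_post = nig_logpdf(xs, alpha, beta, mu, delta) - (y - xs)**2 / (2 * sigma**2)
    return xs[np.argmax(log_post)]
```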

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2013

Relevance:

30.00%

Publisher:

Abstract:

This paper studies seemingly unrelated linear models with integrated regressors and stationary errors. By adding leads and lags of the first differences of the regressors and estimating this augmented dynamic regression model by feasible generalized least squares using the long-run covariance matrix, we obtain an efficient estimator of the cointegrating vector that has a limiting mixed normal distribution. Simulation results suggest that this new estimator compares favorably with others already proposed in the literature. We apply the new estimator to the testing of purchasing power parity (PPP) among the G-7 countries. The test based on the efficient estimates rejects the PPP hypothesis for most countries.
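A minimal sketch of the lead-and-lag augmentation for a single regressor, with plain OLS standing in for the paper's feasible GLS step (the function name and this simplification are my assumptions):

```python
import numpy as np

def dynamic_design(y, x, k):
    """Augment y_t = beta * x_t + u_t with leads and lags of dx_t = x_t - x_{t-1},
    trimming the sample so every lead and lag exists."""
    dx = np.diff(x)                               # dx[t-1] = x_t - x_{t-1}
    rows = range(k + 1, len(y) - k)
    X = np.array([[x[t]] + [dx[t + j - 1] for j in range(-k, k + 1)] for t in rows])
    return np.asarray(y)[list(rows)], X

# OLS on the augmented design; the first coefficient estimates the
# cointegrating parameter (FGLS with a long-run covariance would refine this).
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=300))               # integrated regressor
y = 2.0 * x + rng.normal(size=300)                # cointegrated with beta = 2
y_trim, X = dynamic_design(y, x, k=2)
beta_hat = np.linalg.lstsq(X, y_trim, rcond=None)[0][0]
```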

Relevance:

30.00%

Publisher:

Abstract:

This paper considers various asymptotic approximations in the near-integrated first-order autoregressive model with a non-zero initial condition. We first extend the work of Knight and Satchell (1993), who considered the random walk case with a zero initial condition, to derive the expansion of the relevant joint moment generating function in this more general framework. We also consider, as alternative approximations, the stochastic expansion of Phillips (1987c) and the continuous time approximation of Perron (1991). We assess whether these alternative methods provide an adequate approximation to the finite-sample distribution of the least-squares estimator in a first-order autoregressive model. The results show that, when the initial condition is non-zero, Perron's (1991) continuous time approximation performs very well, while the others offer improvements only when the initial condition is zero.
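The finite-sample distribution being approximated can be simulated directly; a small Monte Carlo sketch for the near-integrated AR(1) with a non-zero initial condition (all parameter values illustrative):

```python
import numpy as np

def ls_ar1_draws(T=100, c=-5.0, y0=5.0, reps=5000, seed=0):
    """Draws of the least-squares estimator of rho in y_t = rho * y_{t-1} + e_t,
    with rho = 1 + c/T (near-integrated) and initial condition y0."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T
    draws = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(T)
        y = np.empty(T + 1)
        y[0] = y0
        for t in range(T):
            y[t + 1] = rho * y[t] + e[t]
        draws[r] = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
    return draws  # histogram these against the competing asymptotic approximations
```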

Relevance:

30.00%

Publisher:

Abstract:

The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ɛ. Here µ(x) is the mean response at the predictor value X = x, and ɛ = Y - µ(X) is the error. In classical regression analysis both X and Y are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model, one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural and medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ɛ, X = Z + η, where η and ɛ are random errors with E(ɛ) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random variable.
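The distinction between the two error models is easy to simulate; a short sketch with arbitrary distributions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Berkson: the nominal value Z is fixed first; the true X scatters around it.
Z = rng.uniform(0.0, 10.0, n)
X = Z + rng.normal(0.0, 1.0, n)              # X = Z + eta
Y = 2.0 + 3.0 * X + rng.normal(0.0, 1.0, n)  # response under mu(x) = 2 + 3x
# Classical: the true value X0 comes first; W is a noisy measurement of it.
X0 = rng.uniform(0.0, 10.0, n)
W = X0 + rng.normal(0.0, 1.0, n)             # W = X + error
```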

Relevance:

30.00%

Publisher:

Abstract:

This work presents the Bayes invariant quadratic unbiased estimator, BAIQUE for short. A Bayesian approach is used to estimate the covariance functions of the regionalized variables that appear in the spatial covariance structure of a mixed linear model. First, a brief review of spatial processes, variance-covariance component structures, and Bayesian inference is given, since this project deals with these concepts. Then the system of linear equations corresponding to BAIQUE in the general case is formulated. This Bayes estimator of variance components with many unknown parameters is too complicated to solve analytically. Hence, to make the system tractable, the BAIQUE of a spatial covariance model with two parameters is considered. The Bayesian estimate arises as the solution of a system of linear equations, which requires the covariance functions to be linear in the parameters. The availability of prior information on the parameters is assumed; this information includes prior distribution functions from which the first- and second-moment matrices can be obtained. The Bayesian estimation suggested here depends only on the second moment of the prior distribution. The estimate takes the form of a quadratic form y'Ay, where y is the vector of filtered data observations. This quadratic estimator is used to estimate a linear function of the unknown variance components. The matrix A of BAIQUE plays an important role: if such a symmetric matrix exists, then the Bayes risk becomes minimal and the unbiasedness conditions are fulfilled. Therefore, the symmetry of this matrix is elaborated in this work. By dealing with an infinite series of matrices, a representation of the matrix A is obtained which shows the symmetry of A. In this context, the largest singular value of the decomposed matrix of the infinite series is used to handle the convergence condition, and it is also connected with Gerschgorin discs and the Poincaré theorem. The BAIQUE model is then computed and compared for several experimental designs. The comparison deals with different aspects, such as the influence of the position of the design points in a fixed interval; the designs considered are those with points distributed in the interval [0, 1]. These experimental structures are compared with respect to the Bayes risk and to norms of the matrices corresponding to distances, covariance structures, and the matrices which have to satisfy the convergence condition. Different types of regression functions and distance measures are also handled. The influence of scaling on the design points is studied; moreover, the influence of the covariance structure on the best design is investigated, and different covariance structures are considered. Finally, BAIQUE is applied to real data, and the outcomes are compared with the results of other methods on the same data. The special BAIQUE that estimates the overall variance of the data achieves a result very close to the classical empirical variance.
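The estimator's quadratic form y'Ay is easy to illustrate. The sketch below evaluates it for a given symmetric A and, as a sanity check, recovers the classical empirical variance, the benchmark the special BAIQUE is compared against; the matrix shown is the centering matrix, not a BAIQUE solution:

```python
import numpy as np

def quadratic_estimate(y, A):
    """Evaluate the quadratic-form estimator y'Ay; the symmetry of A is what
    the unbiasedness and minimal-Bayes-risk conditions rely on."""
    A = 0.5 * (A + A.T)       # enforce symmetry
    return y @ A @ y

# With A = (I - 11'/n) / (n - 1), y'Ay is the empirical variance.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n = len(y)
A = (np.eye(n) - np.ones((n, n)) / n) / (n - 1)
print(quadratic_estimate(y, A))   # 2.5, the sample variance of y
```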

Relevance:

30.00%

Publisher:

Abstract:

A one-dimensional water column model using the Mellor and Yamada level 2.5 parameterization of vertical turbulent fluxes is presented. The model equations are discretized with a mixed finite element scheme. Details of the finite element discrete equations are given, and adaptive mesh refinement strategies are presented. The refinement criterion is an "a posteriori" error estimator based on stratification, shear, and distance to the surface. The model performance is assessed by studying the stress-driven penetration of a turbulent layer into a stratified fluid. This example illustrates the ability of the presented model to follow some internal structures of the flow and paves the way for truly generalized vertical coordinates.
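A heuristic sketch of an error indicator combining the three ingredients named above (stratification, shear, distance to the surface); the weights and functional form are my illustrative assumptions, not the paper's estimator:

```python
def refinement_indicator(dbdz, dudz, depth, w=(1.0, 1.0, 0.5)):
    """Per-element indicator: a large buoyancy gradient (stratification), large
    velocity shear, or proximity to the surface all call for refinement."""
    return w[0] * abs(dbdz) + w[1] * abs(dudz) + w[2] / (1.0 + depth)

# Elements whose indicator exceeds a threshold would be split; coarsening
# applies where the indicator is small.
```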

Relevance:

30.00%

Publisher:

Abstract:

None of the current surveillance streams monitoring the presence of scrapie in Great Britain provides a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlapping between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still within the capture–recapture methodology, relying on repeated entries in one single list have been suggested for these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indicating heterogeneity as well as a new understanding of the Zelterman and Chao lower bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates (in the case studied here, the holding size and country of origin). Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appears to inform the model significantly.
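Both lower-bound estimators named above have closed forms built from the singleton and doubleton frequencies; a short sketch, where `counts` holds the per-holding numbers of notifications for holdings observed at least once:

```python
import math

def zelterman(counts):
    """Zelterman estimator: lambda is estimated from singletons f1 and
    doubletons f2 only, then the unseen zero class is imputed."""
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    lam = 2.0 * f2 / f1                    # MLE under the locally truncated Poisson
    return len(counts) / (1.0 - math.exp(-lam))

def chao_lower_bound(counts):
    """Chao's lower bound for the total population size."""
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return len(counts) + f1 * f1 / (2.0 * f2)
```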

Relevance:

30.00%

Publisher:

Abstract:

Diebold and Lamb (1997) argue that since the long-run elasticity of supply derived from the Nerlovian model entails a ratio of random variables, it is without moments. They propose minimum expected loss estimation to correct this problem, but in so doing ignore the fact that a non-white-noise error is implicit in the model. We show that, as a consequence, the estimator is biased, and demonstrate that Bayesian estimation, which fully accounts for the error structure, is preferable.
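The no-moments point is easy to see by simulation: a ratio of (approximately) normal estimators has heavy tails whenever the denominator has mass near zero. A toy illustration with arbitrary values:

```python
import numpy as np

rng = np.random.default_rng(0)
b1 = rng.normal(1.0, 0.5, 100_000)   # numerator coefficient estimate
b2 = rng.normal(0.1, 0.5, 100_000)   # denominator estimate, mass near zero
ratio = b1 / b2                      # the "long-run elasticity" analogue
print(np.percentile(ratio, [1, 50, 99]))  # extreme tails dominate the draws
```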

Relevance:

30.00%

Publisher:

Abstract:

A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
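The weight computation can be sketched compactly: for a quadratic objective with an elementwise nonnegative design matrix, a multiplicative nonnegative quadratic programming update keeps the kernel weights nonnegative by construction. This is a sketch under my assumptions, not the authors' exact formulation:

```python
import numpy as np

def mnqp_weights(B, v, iters=200, eps=1e-12):
    """Minimise 0.5 * w'Bw - w'v subject to w >= 0, for elementwise
    nonnegative B and v, via the multiplicative update w <- w * v / (Bw)."""
    w = np.full(len(v), 1.0 / len(v))
    for _ in range(iters):
        w *= v / np.maximum(B @ w, eps)
    return w / w.sum()   # normalise so the kernel density integrates to one
```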

Relevance:

30.00%

Publisher:

Abstract:

In this work we consider the rendering equation derived from the Cook-Torrance illumination model. A Monte Carlo (MC) estimator for the numerical treatment of this equation, which is a Fredholm integral equation of the second kind, is constructed and studied.
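A generic Monte Carlo estimator for a Fredholm equation of the second kind, u(x) = f(x) + ∫ k(x, y) u(y) dy, can be sketched as a random walk that accumulates products of kernel weights along a truncated Neumann series. Everything below is a generic illustration, not the paper's Cook-Torrance-specific construction:

```python
import random

def mc_second_kind(x0, f, k, sample_next, n_paths=10_000, max_depth=8):
    """Estimate u(x0) for u(x) = f(x) + integral of k(x, y) u(y) dy by
    averaging truncated Neumann-series random walks."""
    total = 0.0
    for _ in range(n_paths):
        x, weight, contrib = x0, 1.0, 0.0
        for _ in range(max_depth):
            contrib += weight * f(x)
            y, pdf = sample_next(x)        # next point and its sampling density
            weight *= k(x, y) / pdf
            x = y
        total += contrib
    return total / n_paths

# Example: k(x, y) = 0.5 on [0, 1], f(x) = 1, uniform transitions; exact u = 2.
est = mc_second_kind(0.3, lambda x: 1.0, lambda x, y: 0.5,
                     lambda x: (random.random(), 1.0))
```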

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new adaptive nonlinear equalizer relying on a radial basis function (RBF) model, which is designed based on the minimum bit error rate (MBER) criterion, in the setting of an intersymbol interference channel plus co-channel interference. Our proposed algorithm is referred to as the on-line mixture of Gaussians estimator aided MBER (OMG-MBER) equalizer. Specifically, a mixture of Gaussians based probability density function (PDF) estimator is used to model the PDF of the decision variable, for which a novel on-line PDF update algorithm is derived to track the incoming data. With the aid of this novel on-line, sample-by-sample updated mixture of Gaussians PDF estimator, our adaptive nonlinear equalizer is capable of updating its parameters sample by sample so as to directly minimize the RBF nonlinear equalizer's achievable bit error rate (BER). The proposed OMG-MBER equalizer significantly outperforms the existing on-line nonlinear MBER equalizer, known as the least bit error rate equalizer, in terms of both convergence speed and achievable BER, as confirmed in our simulation study.
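The role of the PDF estimate in the MBER criterion can be sketched as follows: given a fitted mixture for the decision variable, the BER is the probability mass on the wrong side of the decision threshold. The parameter values below are hypothetical placeholders, not outputs of the paper's on-line update:

```python
import numpy as np

def mog_pdf(y, means, variances, weights):
    """Mixture-of-Gaussians estimate of the decision variable's PDF."""
    comps = np.exp(-(y - means) ** 2 / (2 * variances)) / np.sqrt(2 * np.pi * variances)
    return comps @ weights

means = np.array([0.4, 1.0, 1.8])        # hypothetical fitted components
variances = np.array([0.04, 0.04, 0.04])
weights = np.array([0.3, 0.4, 0.3])

# For a transmitted +1 symbol, the error probability is the mass below 0;
# minimizing this quantity over the equalizer parameters is the MBER idea.
grid = np.linspace(-4.0, 4.0, 4001)
pdf = mog_pdf(grid[:, None], means, variances, weights)
ber = np.trapz(pdf[grid < 0], grid[grid < 0])
```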

Relevance:

30.00%

Publisher:

Abstract:

The Lincoln–Petersen estimator is one of the most popular estimators used in capture–recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined whether a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source, and by neither of the two sources, respectively. However, f00 is unobserved, so the 2 × 2 table is incomplete, and the Lincoln–Petersen estimator provides an estimate of f00. In this paper, we consider a generalization of this situation in which one source provides not only a binary identification outcome but also a count of how many times a unit has been identified. Using a truncated Poisson count model, truncating multiple identifications larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln–Petersen's, in terms of bias and efficiency. It is also possible to test the homogeneity assumption, which is not testable in the Lincoln–Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey.
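Both the classical estimator and the count-based extension described above admit short closed forms; a sketch following the logic in the abstract, with my own function names:

```python
import math

def lincoln_petersen(f11, f10, f01):
    """Classical Lincoln-Petersen: impute the unobserved cell f00 and sum."""
    f00_hat = f10 * f01 / f11
    return f11 + f10 + f01 + f00_hat

def truncated_poisson_size(f1, f2, n_observed):
    """Counts from one source, truncated above two: under a Poisson model
    restricted to {1, 2}, f2 / f1 estimates lambda / 2, and the population
    size follows from the implied zero-class probability."""
    lam = 2.0 * f2 / f1
    return n_observed / (1.0 - math.exp(-lam))
```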