153 results for "A posteriori error estimation"


Relevance: 30.00%

Abstract:

This paper provides a systematic approach to the problem of non-data-aided symbol-timing estimation for linear modulations. The study is performed under the unconditional maximum likelihood framework, where the carrier-frequency error is included as a nuisance parameter in the mathematical derivation. The second-order moments of the received signal are found to be the sufficient statistics for the problem at hand, and they provide robust performance in the presence of carrier-frequency uncertainty. We focus in particular on exploiting the cyclostationarity of linear modulations, which yields simple, closed-form symbol-timing estimators based on the well-known square timing recovery method of Oerder and Meyr (OM). Finally, we generalize the OM method to the case of linear modulations with offset formats. In this case, the square-law nonlinearity is found to provide not only the symbol timing but also the carrier-phase error.
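As a generic illustration of the square timing recovery idea the estimators build on — a square-law nonlinearity followed by the Fourier coefficient at the symbol rate — here is a minimal Python sketch. The function name and the synthetic test signal are ours; the authors' closed-form estimators and the offset-format extension are not reproduced.

```python
import numpy as np

def oerder_meyr_timing(x, sps):
    """Estimate the fractional symbol-timing offset (in symbol periods)
    from oversampled baseband samples x, with sps samples per symbol.

    Square-law nonlinearity: |x|^2 regenerates a spectral line at the
    symbol rate; the phase of that line encodes the timing offset."""
    m = np.abs(x) ** 2
    n = np.arange(len(m))
    # Fourier coefficient of |x|^2 at the symbol-rate frequency 1/sps.
    c = np.sum(m * np.exp(-2j * np.pi * n / sps))
    return -np.angle(c) / (2 * np.pi)

# Toy check: a signal whose squared envelope is 1 + cos(2*pi*(n/sps - tau))
# carries a clean symbol-rate tone whose phase is -2*pi*tau.
sps, tau = 8, 0.25
n = np.arange(800)
signal = np.sqrt(1.0 + np.cos(2.0 * np.pi * (n / sps - tau)))
tau_hat = oerder_meyr_timing(signal, sps)
```

On real pulse-shaped signals the spectral line is noisy and the estimate is formed over a long observation window, but the principle is unchanged.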

Relevance: 30.00%

Abstract:

The objective of this paper is to introduce a fourth-order cost function of the displaced frame difference (DFD) capable of estimating motion even for small regions or blocks. Using higher than second-order statistics is appropriate when the image sequence is severely corrupted by additive Gaussian noise. Results are presented and compared with those obtained from the mean kurtosis and the mean square error of the DFD.
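To make the idea concrete, here is a toy full-search block matcher that uses the fourth-order central moment of the DFD as its matching cost. The helper names and the search setup are our assumptions; the paper's exact cost function and its comparison metrics are not reproduced.

```python
import numpy as np

def fourth_order_cost(dfd):
    # Fourth-order central moment of the displaced frame difference.
    d = dfd - dfd.mean()
    return np.mean(d ** 4)

def block_match(prev, curr, top, left, size, search):
    """Exhaustively test displacements in [-search, search]^2 and return
    the (dy, dx) minimising the fourth-order DFD cost."""
    block = curr[top:top + size, left:left + size].astype(float)
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > prev.shape[0] or x + size > prev.shape[1]:
                continue  # candidate block falls outside the frame
            ref = prev[y:y + size, x:x + size].astype(float)
            cost = fourth_order_cost(block - ref)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```

For a block whose content is an exact shifted copy of the previous frame, the cost is zero at the true displacement and the matcher recovers it.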

Relevance: 30.00%

Abstract:

This work provides a general framework for the design of second-order blind estimators without adopting any approximation about the observation statistics or the a priori distribution of the parameters. The proposed solution is obtained by minimizing the estimator variance subject to constraints on the estimator bias. The resulting optimal estimator is found to depend on the fourth-order moments of the observations, which can be calculated analytically from the known signal model. Unfortunately, in most cases the performance of this estimator is severely limited by the residual bias inherent to nonlinear estimation problems. To overcome this limitation, the second-order minimum variance unbiased estimator is deduced from the general solution by assuming accurate prior information on the vector of parameters. This small-error approximation is adopted to design iterative estimators or trackers. It is shown that the associated variance constitutes the lower bound for the variance of any unbiased estimator based on the sample covariance matrix. The formulation is then applied to track the angle-of-arrival (AoA) of multiple digitally modulated sources by means of a uniform linear array. The optimal second-order tracker is compared with the classical maximum likelihood (ML) blind methods, which are shown to be quadratic in the observed data as well. Simulations confirm that the discrete nature of the transmitted symbols can be exploited to considerably improve the discrimination of near sources in medium-to-high SNR scenarios.

Relevance: 30.00%

Abstract:

In this letter, we obtain the maximum likelihood estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach that copes with signal fading and mitigates multipath and jamming interference. In addition, the concept of Position-based Synchronization is introduced, which states that the synchronization parameters can be recovered from a user position estimate. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The root mean square error performance of the proposed algorithm is compared with that achieved by state-of-the-art synchronization techniques. A sequential Monte Carlo method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
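For contrast with the letter's direct approach, the second stage of the conventional two-step procedure — solving for position and clock bias from already-estimated pseudoranges — can be sketched as a Gauss-Newton least-squares iteration. This is the classical baseline, not the authors' ML estimator; the function name and satellite geometry below are illustrative.

```python
import numpy as np

def position_lsq(sat_pos, pseudoranges, x0, iters=10):
    """Gauss-Newton solution of the GNSS pseudorange equations for the
    state [x, y, z, clock_bias] (all quantities in metres)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x[:3] - sat_pos                 # (nsat, 3) satellite-to-receiver
        rho = np.linalg.norm(diff, axis=1)     # geometric ranges
        predicted = rho + x[3]                 # predicted pseudoranges
        # Jacobian: unit line-of-sight vectors plus a ones column for the bias.
        H = np.hstack([diff / rho[:, None], np.ones((len(rho), 1))])
        x = x + np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)[0]
    return x
```

Started anywhere near the Earth's surface, the iteration converges in a handful of steps for a reasonable satellite geometry; the letter's point is that this stage discards information that a direct position-domain ML estimator retains.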

Relevance: 30.00%

Abstract:

The optimization of most pesticide and fertilizer applications is based on overall grove conditions. Recently, Wei [9, 10] used a terrestrial LIDAR to measure tree height, width and volume, developing a set of experiments to evaluate the repeatability and accuracy of the measurements and obtaining a coefficient of variation of 5.4% and a relative error of 4.4% in the estimation of the volume, but without real-time capabilities. In this work we propose a measurement system based on a ground laser scanner to estimate the volume of the trees and then extrapolate their foliage surface in real time. Tests with pear trees demonstrated that the relation between the volume and the foliage can be interpreted as linear, with a coefficient of correlation (R) of 0.81, and that the foliar surface can be estimated with an average error of less than 5%.

Relevance: 30.00%

Abstract:

Electroencephalographic (EEG) recordings are frequently corrupted by spurious artifacts, which should be rejected or cleaned by the practitioner. As human screening of scalp EEG is error-prone, automatic artifact detection is an issue of capital importance for ensuring objective and reliable results. In this paper we propose a new approach for discriminating muscular activity in the human scalp quantitative EEG (QEEG), based on time-frequency shape analysis. The impact of muscular activity on the EEG can be evaluated with this methodology. We present an application of this scoring as a preprocessing step for EEG signal analysis, in order to evaluate the amount of muscular activity in two sets of EEG recordings: dementia patients in the early stage of Alzheimer's disease and age-matched control subjects.
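As a simplified stand-in for the scoring described above (the paper's time-frequency shape analysis is more elaborate), an epoch can be scored by the fraction of its spectral power in the EMG band. The 30 Hz cut-off and the function name are our assumptions.

```python
import numpy as np

def muscle_score(epoch, fs):
    """Fraction of windowed spectral power at or above 30 Hz, a crude
    proxy for muscular (EMG) contamination of one EEG epoch."""
    windowed = epoch * np.hanning(len(epoch))
    power = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    return power[freqs >= 30.0].sum() / power.sum()

# Toy check: a clean 10 Hz rhythm versus the same rhythm plus a 60 Hz
# component standing in for muscle activity.
fs = 256
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 10 * t)
contaminated = clean + np.sin(2 * np.pi * 60 * t)
```

Epochs with a high score can then be flagged or rejected before any quantitative EEG analysis.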

Relevance: 20.00%

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
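The kernel-smoothing step can be illustrated with a Nadaraya-Watson estimate of a conditional moment computed from a long simulation. Bandwidth selection and the subsequent method-of-moments step are omitted; the Gaussian kernel and all names are our assumptions.

```python
import numpy as np

def kernel_conditional_moment(x_sim, y_sim, x_eval, h):
    """Nadaraya-Watson estimate of E[y | x] at each point of x_eval,
    from simulated draws (x_sim, y_sim), Gaussian kernel, bandwidth h."""
    u = (x_eval[:, None] - x_sim[None, :]) / h
    w = np.exp(-0.5 * u ** 2)                    # kernel weights
    return (w * y_sim[None, :]).sum(axis=1) / w.sum(axis=1)

# Toy check: simulate y = 2x + noise and recover the conditional mean
# at a couple of evaluation points.
rng = np.random.default_rng(0)
x_sim = rng.uniform(-2.0, 2.0, 20_000)
y_sim = 2.0 * x_sim + rng.normal(0.0, 0.1, 20_000)
moments = kernel_conditional_moment(x_sim, y_sim, np.array([0.0, 1.0]), h=0.1)
```

Because the conditioning is done by smoothing over the simulated draws, the simulation never has to be restarted at particular conditioning values — which is what makes the approach workable for general dynamic latent variable models.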

Relevance: 20.00%

Abstract:

The properties of GMM estimators for panel data, which have become very popular in the empirical economic growth literature, are not well known when the number of individuals is small. This paper uses Monte Carlo simulations to analyse the properties of various GMM and other estimators when the number of individuals is that typically available in country growth studies. It is found that, provided some persistence is present in the series, the system GMM estimator has lower bias and higher efficiency than all the other estimators analysed, including the standard first-differences GMM estimator.

Relevance: 20.00%

Abstract:

Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.

Relevance: 20.00%

Abstract:

This paper is the first to examine the implications of switching to part-time (PT) work for women's subsequent earnings trajectories, distinguishing by their type of contract: permanent or fixed-term. Using a rich longitudinal Spanish data set from Social Security records of over 76,000 prime-aged women strongly attached to the Spanish labor market, we find that PT work aggravates the segmentation of the labor market insofar as there is a PT pay penalty, and this penalty is larger and more persistent for women with fixed-term contracts. The paper discusses problems arising in the empirical estimation (including one not discussed in the literature up to now: the differential measurement error of the left-hand-side variable by PT status) and how to address them. It concludes with policy implications relevant for Continental Europe and its dual structure of employment protection.

Relevance: 20.00%

Abstract:

This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods, implemented in the statistical package R. The parametric analysis is limited to the estimation of normal and lognormal distributions for each of the two claim types. The non-parametric analysis involves kernel density estimation, and we illustrate the benefits of applying transformations to the data prior to employing kernel-based methods. We use a log-transformation and an optimal transformation, chosen from a class of transformations, that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, the Appendix of this paper includes all the R code used in the analysis, so that readers, both students and educators, can fully explore the techniques described.
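The paper's code is in R; as a language-neutral illustration of the log-transformation idea (fit a kernel density on the log scale, then map back with the 1/x Jacobian), here is a small Python sketch. The bandwidth, grid and names are illustrative choices, not the paper's.

```python
import numpy as np

def log_kde(claims, grid, h):
    """Gaussian KDE of positive claim severities fitted on the log scale
    and back-transformed: f_X(x) = f_Z(log x) / x, where Z = log X."""
    z = np.log(claims)
    u = (np.log(grid)[:, None] - z[None, :]) / h
    f_z = np.exp(-0.5 * u ** 2).sum(axis=1) / (len(z) * h * np.sqrt(2 * np.pi))
    return f_z / grid   # change-of-variables Jacobian

# Toy check: lognormal severities; the estimate should integrate to ~1
# over a grid covering essentially all of the support.
rng = np.random.default_rng(0)
claims = rng.lognormal(mean=0.0, sigma=0.5, size=5_000)
grid = np.linspace(0.05, 8.0, 800)
density = log_kde(claims, grid, h=0.2)
total = float(np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(grid)))
```

Fitting on the log scale symmetrises right-skewed severity data, so a fixed-bandwidth kernel behaves far better than it would on the raw scale.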

Relevance: 20.00%

Abstract:

This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR we conduct a sensitivity analysis. We examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
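A minimal sketch of the Internal Model idea — Monte Carlo aggregation of two lines of business joined by a copula, with the SCR read off the simulated aggregate — is given below. The Gaussian copula, lognormal margins and parameter values are illustrative assumptions, not the paper's calibration to Spanish market data.

```python
import numpy as np

def scr_gaussian_copula(rho, n=100_000, seed=0):
    """One-year SCR sketch: VaR_99.5% of the aggregate loss minus its
    mean, for two lognormal lines under a Gaussian copula with
    correlation rho."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    # Correlated standard normals pushed through lognormal margins.
    losses = np.exp(1.0 + 0.5 * z[:, 0]) + np.exp(0.5 + 0.8 * z[:, 1])
    return np.quantile(losses, 0.995) - losses.mean()
```

Raising the dependence between the lines fattens the aggregate tail and hence the capital requirement, which is exactly the sensitivity to correlation and copula assumptions the paper investigates.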

Relevance: 20.00%

Abstract:

Report on the scientific sojourn at the Philipps-Universität Marburg, Germany, from September to December 2007. In the first work, we employed Energy-Decomposition Analysis (EDA) to investigate aromaticity in Fischer carbenes, as it relates to all the reaction mechanisms studied in my PhD thesis. This powerful tool, compared with other well-known aromaticity indices in the literature such as NICS, is useful not only for quantitative results but also for measuring the degree of conjugation or hyperconjugation in molecules. Our results showed, for the annelated benzenoid systems studied here, that electron density is more concentrated on the outer rings than on the central one. Strain-induced bond localization plays a major role as a driving force in keeping the more substituted ring the less aromatic one. The discussion presented in this work was contrasted at different levels of theory to calibrate the method and ensure the consistency of our results. We think these conclusions can also be extended to arene chemistry to explain the aromaticity and regioselectivity of reactions found in those systems. In the second work, we employed the Turbomole program package and the best-performing density functionals in the state of the art to explore reaction mechanisms in noble-gas chemistry. In particular, we were interested in compounds of the form H–Ng–Ng–F (where Ng (noble gas) = Ar, Kr and Xe) and investigated the relative stability of these species. Our quantum chemical calculations predict that the dixenon compound HXeXeF has an activation barrier for decomposition of 11 kcal/mol, which should be large enough to identify the molecule in a low-temperature matrix. The other noble gases present lower activation barriers and are therefore more labile and harder to observe experimentally.

Relevance: 20.00%

Abstract:

This paper examines why a financial entity's solvency capital might be underestimated if the total amount required is obtained directly from a risk measurement. Using Monte Carlo simulation we show that, in some instances, a common risk measure such as Value-at-Risk is not subadditive when certain dependence structures are considered. Higher risk evaluations are obtained for independence between random variables than in the case of comonotonicity. The paper therefore stresses the relationship between dependence structures and capital estimation.
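The subadditivity failure can be reproduced with a short Monte Carlo experiment: for sufficiently heavy-tailed losses, the VaR of a sum of independent copies exceeds the comonotonic VaR. The Pareto tail index and confidence level below are illustrative choices, not the paper's.

```python
import numpy as np

def pareto_inv(u, alpha=0.7):
    # Inverse CDF of a Pareto law with P(X > x) = x**(-alpha), x >= 1.
    # A tail index below 1 means the loss has infinite mean.
    return (1.0 - u) ** (-1.0 / alpha)

rng = np.random.default_rng(1)
u1, u2 = rng.random(200_000), rng.random(200_000)
sum_independent = pareto_inv(u1) + pareto_inv(u2)   # independent lines
sum_comonotonic = pareto_inv(u1) + pareto_inv(u1)   # perfectly dependent lines
var_independent = np.quantile(sum_independent, 0.99)
var_comonotonic = np.quantile(sum_comonotonic, 0.99)
```

With such a heavy tail the independent aggregate carries the higher VaR, so summing stand-alone (comonotonic) requirements underestimates the capital actually needed — the effect the paper highlights.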

Relevance: 20.00%

Abstract:

Report on the scientific sojourn carried out at the University of California at Berkeley, from September to December 2007. Environmental niche modelling (ENM) techniques are powerful tools for predicting species' potential distributions. In the last ten years, a plethora of novel methodological approaches and modelling techniques have been developed. During three months, I stayed at the University of California, Berkeley, working under the supervision of Dr. David R. Vieites. The aim of our work was to quantify the error committed by these techniques and also to test how an increase in sample size affects the resulting predictions. Using the MaxEnt software we generated predictive distribution maps, from different sample sizes, for the Eurasian quail (Coturnix coturnix) in the Iberian Peninsula. The quail is a generalist species from a climatic point of view, but a habitat specialist. The resulting distribution maps were compared with the real distribution of the species, obtained from recent bird atlases of Spain and Portugal. The results show that ENM techniques can incur important errors when predicting the distribution of generalist species. Moreover, an increase in sample size is not necessarily related to better model performance. We conclude that deep knowledge of the species' biology and of the variables affecting their distribution is crucial for optimal modelling; the lack of such knowledge can lead to wrong conclusions.