939 results for robust estimation statistics
Abstract:
The objective of this work was to develop a procedure to estimate soybean crop areas in Rio Grande do Sul State, Brazil. Estimations were made based on the temporal profiles of the enhanced vegetation index (EVI) calculated from Moderate Resolution Imaging Spectroradiometer (MODIS) images. The methodology developed for soybean classification was named the MODIS crop detection algorithm (MCDA). The MCDA provides soybean area estimates in December (first forecast), using images from the sowing period, and in March (second forecast), using images from the sowing and maximum crop development periods. The results obtained by the MCDA were compared with the official estimates of soybean area from the Instituto Brasileiro de Geografia e Estatística. The coefficients of determination ranged from 0.91 to 0.95, indicating good agreement between the estimates. For the 2000/2001 crop year, the MCDA soybean crop map was evaluated against a soybean crop map derived from Landsat images; the overall map accuracy was approximately 82%, with similar commission and omission errors. The MCDA was able to estimate soybean crop areas in Rio Grande do Sul State and to generate an annual thematic map with the geographic positions of the soybean fields. The soybean crop area estimates by the MCDA are in good agreement with the official agricultural statistics.
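The MCDA classifies soybean from temporal EVI profiles. For reference, MODIS EVI is computed per pixel from near-infrared, red, and blue surface reflectances with the standard coefficients G=2.5, C1=6, C2=7.5, L=1; a minimal sketch (the reflectance values below are made up for illustration):

import numpy as np

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced vegetation index from surface reflectances in [0, 1]."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Hypothetical reflectances for one pixel across three acquisition dates.
nir = np.array([0.30, 0.45, 0.60])
red = np.array([0.10, 0.08, 0.05])
blue = np.array([0.05, 0.04, 0.03])
print(evi(nir, red, blue))  # EVI rises as the crop develops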
Abstract:
The problem of robust beamformer design for mobile communications applications in the presence of moving co-channel sources is addressed. A generalization of the optimum beamformer based on a statistical model accounting for source movement is proposed. The new method is easily implemented and is shown to offer dramatic improvements over conventional optimum beamforming for moving sources under a variety of operating conditions.
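The paper's statistical motion model is not reproduced here, but a widely used way to robustify an optimum (MVDR) beamformer against moving interferers is to widen its nulls by tapering the sample covariance matrix. The sketch below uses that well-known covariance-tapering idea with hypothetical parameters; it is not the authors' exact method:

import numpy as np

def mvdr_weights(R, a):
    """Minimum-variance distortionless-response weights for steering vector a."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

M = 8                                   # sensors, half-wavelength spacing
theta = np.deg2rad(20.0)                # look direction
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

X = (np.random.randn(M, 200) + 1j * np.random.randn(M, 200)) / np.sqrt(2)
R = X @ X.conj().T / 200                # sample covariance (noise-only here)

# Covariance taper: broadens the nulls so they keep covering sources that
# move slightly between updates (delta is a hypothetical motion spread).
delta = 0.05
n = np.arange(M)
T = np.sinc(delta * (n[:, None] - n[None, :]))
w = mvdr_weights(R * T, a)
print(np.abs(w.conj() @ a))             # distortionless: ~1 toward look direction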
Abstract:
This work provides a general framework for the design of second-order blind estimators without adopting any approximation about the observation statistics or the a priori distribution of the parameters. The proposed solution is obtained by minimizing the estimator variance subject to some constraints on the estimator bias. The resulting optimal estimator is found to depend on the observation fourth-order moments, which can be calculated analytically from the known signal model. Unfortunately, in most cases, the performance of this estimator is severely limited by the residual bias inherent to nonlinear estimation problems. To overcome this limitation, the second-order minimum variance unbiased estimator is deduced from the general solution by assuming accurate prior information on the vector of parameters. This small-error approximation is adopted to design iterative estimators or trackers. It is shown that the associated variance constitutes the lower bound for the variance of any unbiased estimator based on the sample covariance matrix. The paper's formulation is then applied to track the angle of arrival (AoA) of multiple digitally modulated sources by means of a uniform linear array. The optimal second-order tracker is compared with the classical maximum likelihood (ML) blind methods, which are shown to be quadratic in the observed data as well. Simulations have confirmed that the discrete nature of the transmitted symbols can be exploited to considerably improve the discrimination of near sources in medium-to-high SNR scenarios.
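The optimal tracker relies on the paper's derivation, but its ingredients, a uniform-linear-array steering vector and the sample covariance matrix, are standard. A minimal sketch of a second-order (quadratic-in-the-data) AoA estimate, here just a conventional beamforming scan rather than the optimal tracker:

import numpy as np

def steering(theta, M):
    """ULA steering vector, half-wavelength element spacing."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

M, N = 8, 500
true_theta = np.deg2rad(10.0)
s = np.sign(np.random.randn(N))          # BPSK symbols (discrete alphabet)
noise = (np.random.randn(M, N) + 1j * np.random.randn(M, N)) * 0.1
X = np.outer(steering(true_theta, M), s) + noise

R = X @ X.conj().T / N                    # sample covariance matrix
grid = np.deg2rad(np.linspace(-90, 90, 721))
spectrum = [np.real(steering(t, M).conj() @ R @ steering(t, M)) for t in grid]
print(np.rad2deg(grid[int(np.argmax(spectrum))]))  # close to 10 degrees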
Abstract:
This correspondence addresses the problem of nondata-aided waveform estimation for digital communications. Based on the unconditional maximum likelihood criterion, the main contribution of this correspondence is the derivation of a closed-form solution to the waveform estimation problem in the low signal-to-noise ratio regime. The proposed estimation method is based on the second-order statistics of the received signal, and a clear link is established between maximum likelihood estimation and correlation matching techniques. Compression with the signal subspace is also proposed to improve the robustness against the noise and to mitigate the impact of anomalous observations or outliers.
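A minimal illustration of the second-order/signal-subspace idea, not the paper's closed-form ML solution: with a single waveform in white noise, the sample covariance concentrates the signal energy in its leading eigenvector, which recovers the waveform up to a scalar.

import numpy as np

rng = np.random.default_rng(0)
L, N = 16, 400
h = rng.standard_normal(L)               # unknown waveform (hypothetical)
h /= np.linalg.norm(h)

# N received snapshots: random symbol times waveform plus noise.
symbols = rng.choice([-1.0, 1.0], size=N)
X = np.outer(h, symbols) + 0.3 * rng.standard_normal((L, N))

R = X @ X.T / N                          # second-order statistics only
eigvals, eigvecs = np.linalg.eigh(R)
h_hat = eigvecs[:, -1]                   # principal eigenvector = signal subspace
print(abs(h_hat @ h))                    # ~1: waveform recovered up to sign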
Abstract:
Statistics has become an indispensable tool in biomedical research. Thanks in particular to computer science, researchers have easy access to elementary "classical" procedures. These are often of a "confirmatory" nature: their aim is to test hypotheses (for example, the efficacy of a treatment) formulated prior to experimentation. However, doctors often use them in situations more complex than foreseen, to discover interesting data structures and to formulate hypotheses. This inverse process may lead to misuse, which increases the number of "statistically proven" results in medical publications. The help of a professional statistician thus becomes necessary. Moreover, good, simple "exploratory" techniques are now available. In addition, medical data contain quite a high percentage of outliers (data that deviate from the majority). With classical methods it is often very difficult (even for a statistician!) to detect them, and the reliability of results becomes questionable. New, reliable ("robust") procedures have been the subject of research for the past two decades. Their practical introduction is one of the activities of the Statistics and Data Processing Department of the University of Social and Preventive Medicine, Lausanne.
Abstract:
We consider robust parametric procedures for univariate discrete distributions, focusing on the negative binomial model. The procedures are based on three steps: first, a very robust, but possibly inefficient, estimate of the model parameters is computed; second, this initial model is used to identify outliers, which are then removed from the sample; third, a corrected maximum likelihood estimator is computed with the remaining observations. The final estimate inherits the breakdown point (bdp) of the initial one, and its efficiency can be significantly higher. Analogous procedures were proposed in [1], [2], [5] for the continuous case. A comparison of the asymptotic bias of various estimates under point contamination points out the minimum Neyman's chi-squared disparity estimate as a good choice for the initial step. Various minimum disparity estimators were explored by Lindsay [4], who showed that the minimum Neyman's chi-squared estimate has a 50% bdp under point contamination; in addition, it is asymptotically fully efficient at the model. However, the finite sample efficiency of this estimate under the uncontaminated negative binomial model is usually much lower than 100%, and the bias can be strong. We show that its performance can then be greatly improved using the three-step procedure outlined above. In addition, we compare the final estimate with the procedure described in
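A sketch of the three-step procedure for the negative binomial case. The initial estimate below is a crude moments-based stand-in (the abstract advocates the minimum Neyman's chi-squared disparity estimate for this step), and the outlier cutoff is a hypothetical choice:

import numpy as np
from scipy import stats, optimize

def nb_mle(x, start):
    """Maximum likelihood fit of nbinom(n, p) by direct optimization."""
    def nll(params):
        n, p = params
        if n <= 0 or not 0 < p < 1:
            return np.inf
        return -stats.nbinom.logpmf(x, n, p).sum()
    return optimize.minimize(nll, start, method="Nelder-Mead").x

x = np.concatenate([stats.nbinom.rvs(5, 0.5, size=200), [40, 45, 50]])  # contaminated

# Step 1: very robust but inefficient initial estimate
# (median/IQR moment matching as a stand-in for a high-bdp estimator).
m, v = np.median(x), stats.iqr(x) ** 2
p0 = min(max(m / max(v, m + 1e-9), 0.05), 0.95)
n0 = max(m * p0 / (1 - p0), 0.1)

# Step 2: discard observations the initial model finds too unlikely.
keep = stats.nbinom.pmf(x, n0, p0) > 1e-4   # hypothetical cutoff

# Step 3: corrected ML estimate on the retained sample.
print(nb_mle(x[keep], start=[n0, p0]))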
Abstract:
In mathematical modeling, the estimation of the model parameters is one of the most common problems. The goal is to seek parameters that fit the measurements as well as possible. There is always error in the measurements, which implies uncertainty in the model estimates. In Bayesian statistics, all the unknown quantities are presented as probability distributions. If there is knowledge about the parameters beforehand, it can be formulated as a prior distribution, and Bayes' rule combines the prior and the measurements into a posterior distribution. Mathematical models are typically nonlinear, so producing statistics for them requires efficient sampling algorithms. In this thesis, the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling are introduced, along with different ways to present prior distributions. The main issue is the measurement error estimation and how to obtain prior knowledge for the variance or covariance. Variance and covariance sampling is combined with the algorithms above. The examples of the hyperprior models are applied to the estimation of model parameters and error in an outlier case.
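For reference, the core of the Metropolis-Hastings algorithm mentioned above fits in a few lines. This minimal random-walk sketch samples from a generic log-posterior; the target density and step size are placeholders:

import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose theta' = theta + step * N(0, I),
    accept with probability min(1, post(theta') / post(theta))."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    chain = np.empty((n_iter, theta.size))
    lp = log_post(theta)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Placeholder target: standard normal posterior in two dimensions.
chain = metropolis_hastings(lambda t: -0.5 * t @ t, [0.0, 0.0])
print(chain.mean(axis=0), chain.std(axis=0))   # ~[0, 0] and ~[1, 1]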
Abstract:
This paper analyses the impact of using different correlation assumptions between lines of business when estimating the risk-based capital reserve, the Solvency Capital Requirement (SCR), under Solvency II regulations. A case study is presented and the SCR is calculated according to the Standard Model approach. Alternatively, the requirement is then calculated using an Internal Model based on a Monte Carlo simulation of the net underwriting result at a one-year horizon, with copulas being used to model the dependence between lines of business. To address the impact of these model assumptions on the SCR, we conduct a sensitivity analysis: we examine changes in the correlation matrix between lines of business and address the choice of copulas. Drawing on aggregate historical data from the Spanish non-life insurance market between 2000 and 2009, we conclude that modifications of the correlation and dependence assumptions have a significant impact on SCR estimation.
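A minimal sketch of the Internal Model idea: simulate dependent one-year results across lines of business with a Gaussian copula and read the capital requirement off the simulated distribution. The marginals, the correlation matrix, and the 99.5% value-at-risk convention below are illustrative assumptions, not the paper's calibration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.3], [0.3, 1.0]])        # hypothetical two lines of business
n_sim = 100_000

# Gaussian copula: correlated normals -> uniforms -> chosen marginals.
z = rng.multivariate_normal(np.zeros(2), corr, size=n_sim)
u = stats.norm.cdf(z)
losses = np.column_stack([
    stats.lognorm.ppf(u[:, 0], s=0.4, scale=100.0),  # line 1 annual loss
    stats.lognorm.ppf(u[:, 1], s=0.6, scale=60.0),   # line 2 annual loss
]).sum(axis=1)

# Capital requirement as the 99.5% VaR of the aggregate loss over its mean.
scr = np.quantile(losses, 0.995) - losses.mean()
print(round(scr, 1))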
Abstract:
Vehicle operations in underwater environments are often compromised by poor visibility conditions. For instance, the perception range of optical devices is heavily constrained in turbid waters, thus complicating navigation and mapping tasks in environments such as harbors, bays, or rivers. A new generation of high-definition forward-looking sonars providing acoustic imagery at high frame rates has recently emerged as a promising alternative for working under these challenging conditions. However, the characteristics of the sonar data introduce difficulties in image registration, a key step in mosaicing and motion estimation applications. In this work, we propose the use of a Fourier-based registration technique capable of handling the low resolution, noise, and artifacts associated with sonar image formation. When compared to a state-of-the-art region-based technique, our approach shows superior performance in the alignment of both consecutive and nonconsecutive views, as well as higher robustness in featureless environments. The method is used to compute pose constraints between sonar frames that, integrated inside a global alignment framework, enable the rendering of consistent acoustic mosaics with high detail and increased resolution. An extensive experimental section is reported showing results in relevant field applications, such as ship hull inspection and harbor mapping.
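The best-known Fourier-based registration technique is phase correlation, which recovers the translation between two views from the normalized cross-power spectrum; the sketch below illustrates that idea on synthetic data (the paper's method addresses sonar-specific noise and artifacts beyond this minimal version):

import numpy as np

def phase_correlation(img_a, img_b, eps=1e-9):
    """Cyclic translation (dy, dx) by which img_b is shifted relative to img_a."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks past the midpoint to negative shifts.
    if dy > img_a.shape[0] // 2: dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2: dx -= img_a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(5, -9), axis=(0, 1))   # known translation
print(phase_correlation(a, b))               # (5, -9)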
Abstract:
In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two simulation scenarios were generated: one of equal population means and another of population mean differences. In each scenario, 33 experimental conditions were used as a function of sample size, standard deviation, and asymmetry. For each condition, 5000 replications per group were generated. The results show an adequate Type I error rate but not a high power for the confidence intervals. In general, for the two scenarios studied (population mean differences and no population mean differences) under the different conditions analysed, the Mann-Whitney U-test demonstrated strong performance, with the Yuen-Welch t-test performing slightly worse.
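A scaled-down sketch of such a simulation under the null of identical populations, estimating the empirical Type I error rate of the two tests compared in the study. SciPy's ttest_ind with trim=0.2 performs Yuen's trimmed t-test (an assumption: this parameter requires SciPy >= 1.7):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_rep, n, alpha = 2000, 30, 0.05
rej_mw = rej_yuen = 0
for _ in range(n_rep):
    # Skewed populations with identical distributions (null is true).
    a = rng.lognormal(0.0, 1.0, n) - np.exp(0.5)
    b = rng.lognormal(0.0, 1.0, n) - np.exp(0.5)
    rej_mw += stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha
    rej_yuen += stats.ttest_ind(a, b, equal_var=False, trim=0.2).pvalue < alpha

print("Mann-Whitney U:", rej_mw / n_rep)   # empirical Type I error rate
print("Yuen-Welch t:  ", rej_yuen / n_rep)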
Abstract:
This study aimed to evaluate the interference of the tuberculin test on the gamma-interferon (INFg) assay, to estimate the sensitivity and specificity of the INFg assay under Brazilian conditions, and to simulate multiple testing using the comparative tuberculin test and the INFg assay. Three hundred and fifty cattle from two TB-free and two TB-infected herds were submitted to the comparative tuberculin test and the INFg assay. The comparative tuberculin test was performed using avian and bovine PPD. The INFg assay was performed with the Bovigam™ kit (CSL Veterinary, Australia), according to the manufacturer's specifications. Sensitivity and specificity of the INFg assay were assessed by a Bayesian latent class model. These diagnostic parameters were also estimated for multiple testing. The results of the INFg assay on D0 and D3 after the comparative tuberculin test were compared by McNemar's test and kappa statistics. Mean optical densities from the INFg assay on both days were similar. Sensitivity and specificity of the INFg assay varied (95% confidence intervals) from 72 to 100% and from 74 to 100%, respectively. Sensitivity of parallel testing was over 97.5%, while specificity of serial testing was over 99.7%. The INFg assay proved to be a very useful diagnostic method.
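The parallel- and serial-testing figures follow from the standard combination formulas, assuming conditional independence of the two tests: in parallel interpretation an animal is positive if either test is positive; in series, only if both are. A worked sketch with illustrative values (the sensitivities and specificities below are hypothetical, chosen within the reported intervals):

def parallel(se1, sp1, se2, sp2):
    # Positive if either test is positive.
    return 1 - (1 - se1) * (1 - se2), sp1 * sp2

def serial(se1, sp1, se2, sp2):
    # Positive only if both tests are positive.
    return se1 * se2, 1 - (1 - sp1) * (1 - sp2)

se_tub, sp_tub = 0.85, 0.90      # comparative tuberculin test (hypothetical)
se_ifn, sp_ifn = 0.85, 0.90      # INFg assay (hypothetical)

print(parallel(se_tub, sp_tub, se_ifn, sp_ifn))  # sensitivity rises: (0.9775, 0.81)
print(serial(se_tub, sp_tub, se_ifn, sp_ifn))    # specificity rises: (0.7225, 0.99)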
Abstract:
The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in high kilo-ampere currents. An alternative for increasing the power levels without raising the voltage level is provided by multiphase machines. Multiphase machines are used for instance in ship propulsion systems, aerospace applications, electric vehicles, and in other high-power applications including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases. Mutual couplings degrade the drive performance unless they are properly considered. In certain multiphase machines there is also a problem of high current harmonics, which are easily generated because of the small current path impedance of the harmonic components. However, multiphase machines provide special characteristics compared with the three-phase counterparts: Multiphase machines have a better fault tolerance, and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased and the harmonic frequency of the torque ripple increased by an appropriate multiphase configuration. By increasing the number of phases it is also possible to obtain more torque per RMS ampere for the same volume, and thus, increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on the inductance matrix diagonalization. The double-star machine is a special type of multiphase machines. Its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is considered a parameter. The diagonalization of the inductance matrix results in a simplified model structure, in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame, in which they can be easily controlled. The work also presents methods to determine the machine inductances by a finite-element analysis and by voltage-source inverters on-site. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine having the sets displaced by 30 electrical degrees. The derived transformation, and consequently, the decoupled d–q machine model, are shown to model the behavior of an actual machine with an acceptable accuracy. Thus, the proposed model is suitable to be used for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
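The core algebraic step, diagonalizing a symmetric inductance matrix so the mutual couplings vanish in the transformed coordinates, can be illustrated numerically. The 6x6 matrix below uses made-up inductance values for a double-star winding, not the thesis's measured machine:

import numpy as np

# Hypothetical 6x6 inductance matrix of a double-star winding (two
# three-phase sets): self inductances on the diagonal, mutual couplings
# within each set (Mi) and between the sets (Mx) off the diagonal.
Ls, Mi, Mx = 10e-3, -3e-3, 2e-3
L = np.full((6, 6), Mx)
for s in (slice(0, 3), slice(3, 6)):
    L[s, s] = Mi
np.fill_diagonal(L, Ls)

# Symmetric matrix: an orthogonal eigenvector matrix T diagonalizes it.
eigvals, T = np.linalg.eigh(L)
L_dec = T.T @ L @ T                           # decoupled inductances
print(np.round(eigvals * 1e3, 3))             # mH, one per decoupled axis
print(np.allclose(L_dec, np.diag(eigvals)))   # True: mutual couplings eliminated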
Abstract:
Accelerated life testing (ALT) is widely used to obtain reliability information about a product within a limited time frame. The Cox proportional hazards (PH) model is often utilized for reliability prediction. My master's thesis research focuses on designing accelerated life testing experiments for reliability estimation. We consider multiple step-stress ALT plans with censoring. The optimal stress levels and the times of changing the stress levels are investigated. We discuss the optimal designs under three optimality criteria: D-, A-, and Q-optimality. We note that the classical designs are optimal only if the assumed model is correct. Due to the nature of prediction made from ALT experimental data, attained under stress levels higher than the normal condition, extrapolation is encountered. In such a case, the assumed model cannot be tested. Therefore, to account for possible imprecision in the assumed PH model, the method of construction for robust designs is also explored.
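For reference, the D- and A-optimality criteria mentioned above score a design through its Fisher information matrix M: D-optimal designs maximize det(M) (minimizing the volume of the parameter confidence ellipsoid), while A-optimal designs minimize trace(M^-1) (the average parameter variance). A toy comparison of two hypothetical designs:

import numpy as np

def d_criterion(M):
    return np.linalg.det(M)             # maximize for D-optimality

def a_criterion(M):
    return np.trace(np.linalg.inv(M))   # minimize for A-optimality

# Hypothetical information matrices for two candidate step-stress plans.
M1 = np.array([[4.0, 1.0], [1.0, 2.0]])
M2 = np.array([[3.0, 0.2], [0.2, 3.0]])

for name, M in [("plan 1", M1), ("plan 2", M2)]:
    print(name, "det:", round(d_criterion(M), 2),
          "trace(inv):", round(a_criterion(M), 3))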
Abstract:
In this paper, we use identification-robust methods to assess the empirical adequacy of a New Keynesian Phillips Curve (NKPC) equation. We focus on Gali and Gertler's (1999) specification, on both U.S. and Canadian data. Two variants of the model are studied: one based on a rational-expectations assumption, and a modification of the latter which consists in using survey data on inflation expectations. The results based on these two specifications exhibit sharp differences concerning: (i) identification difficulties, (ii) backward-looking behavior, and (iii) the frequency of price adjustments. Overall, we find that there is some support for the hybrid NKPC for the U.S., whereas the model is not suited to Canada. Our findings underscore the need for employing identification-robust inference methods in the estimation of expectations-based dynamic macroeconomic relations.
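One workhorse identification-robust procedure (whether or not it is the exact one used in the paper) is the Anderson-Rubin test, whose statistic remains valid under weak identification. A minimal sketch for testing a hypothesized coefficient vector beta0 in y = X*beta + u with instruments Z (all data below are simulated placeholders):

import numpy as np

def anderson_rubin(y, X, Z, beta0):
    """AR statistic for H0: beta = beta0; approximately F(k, n-k) under H0."""
    n, k = Z.shape
    u0 = y - X @ beta0
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)      # projection onto instruments
    rss_fit = u0 @ Pz @ u0
    rss_res = u0 @ u0 - rss_fit
    return (rss_fit / k) / (rss_res / (n - k))

rng = np.random.default_rng(0)
n = 500
Z = rng.standard_normal((n, 3))                  # instruments
X = (Z @ np.array([0.6, 0.2, 0.1]) + rng.standard_normal(n))[:, None]
y = X @ np.array([1.0]) + rng.standard_normal(n)
print(anderson_rubin(y, X, Z, np.array([1.0])))  # small under the true beta0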