152 results for Measurement Error Estimation
Abstract:
In this paper we obtain the linear minimum mean square estimator (LMMSE) for discrete-time linear systems subject to state and measurement multiplicative noises and Markov jumps on the parameters. It is assumed that the Markov chain is not available. Using geometric arguments, we obtain a Kalman-type filter that is conveniently implementable in recursive form. The stationary case is also studied, and we prove that the error covariance matrix of the LMMSE converges to a stationary value under the assumptions of mean square stability of the system and ergodicity of the associated Markov chain. It is shown that there exists a unique positive semi-definite solution for the stationary Riccati-like filter equation and, moreover, that this solution is the limit of the error covariance matrix of the LMMSE. The advantage of this scheme is that it is very easy to implement and all calculations can be performed offline. (c) 2011 Elsevier Ltd. All rights reserved.
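The recursive, Riccati-like structure of such a filter can be pictured with a standard Kalman predict/update step. The sketch below is a minimal illustration of the covariance recursion whose limit the paper characterizes; it omits the multiplicative noises and Markov jump structure the paper actually handles, so the function and matrix names are illustrative only.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of a standard Kalman-type recursion.

    x, P : prior state estimate and error covariance
    y    : new measurement
    A, C : state and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update (Riccati-like covariance recursion)
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Iterating this step drives the error covariance P toward the unique fixed point of the Riccati-like equation, mirroring the convergence result stated in the abstract.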
Abstract:
The zero-inflated negative binomial model is used to account for overdispersion detected in data that are initially analyzed under the zero-inflated Poisson model. A frequentist analysis, a jackknife estimator and a non-parametric bootstrap for parameter estimation of zero-inflated negative binomial regression models are considered. In addition, an EM-type algorithm is developed for performing maximum likelihood estimation. Then the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and some ways to perform global influence analysis are derived. In order to study departures from the error assumption as well as the presence of outliers, residual analysis based on the standardized Pearson residuals is discussed. The relevance of the approach is illustrated with a real data set, where it is shown that zero-inflated negative binomial regression models seem to fit the data better than the Poisson counterpart. (C) 2010 Elsevier B.V. All rights reserved.
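As a concrete picture of the zero-inflated negative binomial model, the sketch below evaluates its probability mass function: a point mass at zero mixed with an ordinary negative binomial. This is an illustration only (it assumes an integer size parameter r for simplicity); the parameter names are ours, not the paper's.

```python
import math

def zinb_pmf(k, pi, r, p):
    """P(K = k) under a zero-inflated negative binomial.

    pi   : zero-inflation probability (structural zeros)
    r, p : NB size (assumed integer here) and success probability
    """
    # Ordinary negative binomial mass at k
    nb = math.comb(k + r - 1, k) * (p ** r) * ((1 - p) ** k)
    # Mix a point mass at zero with the NB component
    return pi * (k == 0) + (1 - pi) * nb
```

The extra mass at zero is what lets the model absorb the excess zeros that overdisperse a plain Poisson or NB fit.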
Abstract:
The DSSAT/CANEGRO model was parameterized and its predictions evaluated using data from five sugarcane (Saccharum spp.) experiments conducted in southern Brazil. The data used are from two of the most important Brazilian cultivars. Some parameters whose values were either directly measured or considered to be well known were not adjusted. Ten of the 20 parameters were optimized using a Generalized Likelihood Uncertainty Estimation (GLUE) algorithm with the leave-one-out cross-validation technique. Model predictions were evaluated using measured data of leaf area index (LAI), stalk and aerial dry mass, sucrose content, and soil water content, using bias, root mean squared error (RMSE), modeling efficiency (Eff), correlation coefficient, and agreement index. The Decision Support System for Agrotechnology Transfer (DSSAT)/CANEGRO model simulated the sugarcane crop in southern Brazil well, using the parameterization reported here. The soil water content predictions were better for the rainfed (mean RMSE = 0.122 mm) than for the irrigated treatment (mean RMSE = 0.214 mm). Predictions were best for aerial dry mass (Eff = 0.850), followed by stalk dry mass (Eff = 0.765) and then sucrose mass (Eff = 0.170). Number of green leaves showed the worst fit (Eff = -2.300). The cross-validation technique permits using multiple datasets that would otherwise have limited use if used independently, because of the heterogeneity of measures and measurement strategies.
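The modeling efficiency (Eff) quoted above is the Nash-Sutcliffe efficiency: 1 is a perfect fit, while a negative value (as for green leaves, Eff = -2.300) means the model predicts worse than simply using the observed mean. A minimal sketch of Eff and RMSE:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def efficiency(obs, sim):
    """Modeling efficiency (Nash-Sutcliffe): 1 is perfect; <= 0 means the
    simulation is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((sim - obs) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```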
Abstract:
The aim of this study was to use DSC and X-ray diffraction measurements to determine the pore size and pore wall thickness of highly ordered SBA-15 materials. The DSC curves showed two endothermic events during the heating cycle. These events were due to the presence of water inside and outside of mesopores. The results of pore radius, wall thickness and pore volume measurements were in good agreement with the results obtained by nitrogen adsorption measurement, XRD and transmission electron microscopy.
Abstract:
Background: Food portion size estimation involves a complex mental process that may influence food consumption evaluation. Knowing the variables that influence this process can improve the accuracy of dietary assessment. The present study aimed to evaluate the ability of nutrition students to estimate food portions in usual meals and to relate food energy content with errors in food portion size estimation. Methods: Seventy-eight nutrition students, who had already studied food energy content, participated in this cross-sectional study on the estimation of food portions, organised into four meals. The participants estimated the quantity of each food, in grams or millilitres, with the food in view. Estimation errors were quantified, and their magnitudes were evaluated. Estimated quantities (EQ) lower than 90% and higher than 110% of the weighed quantity (WQ) were considered to represent underestimation and overestimation, respectively. The correlation between food energy content and estimation error was analysed by the Spearman correlation, and the comparison between the mean EQ and WQ was accomplished by means of the Wilcoxon signed rank test (P < 0.05). Results: A low percentage of estimates (18.5%) were considered accurate (+/- 10% of the actual weight). The most frequently underestimated food items were cauliflower, lettuce, apple and papaya; the most often overestimated items were milk, margarine and sugar. A significant positive correlation between food energy density and estimation error was found (r = 0.8166; P = 0.0002). Conclusions: The results obtained in the present study revealed a low percentage of acceptable estimations of food portion size by nutrition students, with trends toward overestimation of high-energy food items and underestimation of low-energy items.
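The ±10% acceptability band used in the study is easy to state in code; the sketch below classifies a single estimate against its weighed quantity (the function name is ours, not the paper's):

```python
def classify_estimate(estimated, weighed):
    """Classify a portion-size estimate against the weighed quantity.

    The acceptable band is +/- 10% of the weighed quantity, as in the study:
    below 90% counts as underestimation, above 110% as overestimation.
    """
    ratio = estimated / weighed
    if ratio < 0.90:
        return "underestimation"
    if ratio > 1.10:
        return "overestimation"
    return "acceptable"
```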
Abstract:
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewness and heavy tails, as in the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that it has a nice hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case-deletion influence diagnostics based on the Kullback-Leibler divergence. Further, some discussion of model selection criteria is given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models. (C) 2010 Elsevier B.V. All rights reserved.
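The hierarchical representation mentioned above can be illustrated for the simplest member of the class, the skew-normal: conditioning on a latent half-normal variable reduces it to a normal, which is what makes Gibbs-style MCMC convenient. A hedged sketch of sampling via that representation (parameter names are ours):

```python
import numpy as np

def rskew_normal(n, mu=0.0, sigma=1.0, alpha=2.0, rng=None):
    """Draw skew-normal variates via the stochastic (hierarchical)
    representation X = mu + sigma*(delta*|Z0| + sqrt(1-delta^2)*Z1),
    with delta = alpha / sqrt(1 + alpha^2). Conditioning on the latent
    half-normal |Z0| leaves a plain normal, enabling data augmentation."""
    if rng is None:
        rng = np.random.default_rng(0)
    delta = alpha / np.sqrt(1.0 + alpha ** 2)
    z0 = np.abs(rng.standard_normal(n))   # latent half-normal variable
    z1 = rng.standard_normal(n)           # independent standard normal
    return mu + sigma * (delta * z0 + np.sqrt(1.0 - delta ** 2) * z1)
```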
Abstract:
Two indigenous ceramic fragments, one from Lagoa Queimada (LQ) and another from Barra dos Negros (BN), both sites located in Bahia state (Brazil), were dated by the thermoluminescence (TL) method. Each fragment was physically prepared and divided into two fractions; one was used for TL measurement and the other for annual dose determination. The TL fraction was chemically treated, divided into subsamples and irradiated with several doses. Extrapolation of the plot of TL intensity as a function of radiation dose enabled determination of the accumulated dose (D_ac): 3.99 Gy and 1.88 Gy for LQ and BN, respectively. The annual dose was obtained through uranium, thorium and potassium determination by ICP-MS. The annual doses (D_an) obtained were 2.86 and 2.26 mGy/year. The estimated ages were approximately 1375 and 709 years for the BN and LQ ceramics, respectively. The ages agreed with the archaeologists' estimates for the Aratu and Tupi tradition periods, respectively.
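Behind TL dating is a simple ratio: age = accumulated dose / annual dose rate. A minimal sketch with the unit conversion (Gy and mGy/year to years); the pairing of the reported doses with each sherd follows the original paper, so no specific values are assumed here:

```python
def tl_age(accumulated_dose_gy, annual_dose_mgy_per_year):
    """Thermoluminescence age estimate: accumulated dose D_ac divided by
    the annual dose rate D_an (units: Gy and mGy/year, giving years)."""
    return accumulated_dose_gy / (annual_dose_mgy_per_year / 1000.0)
```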
Abstract:
In this article, we present the EM algorithm for performing maximum likelihood estimation of an asymmetric linear calibration model under the assumption of skew-normally distributed errors. A simulation study is conducted to evaluate the performance of the calibration estimator in interpolation and extrapolation situations. As an application to a real data set, we fitted the model to a dimensional measurement method used for calculating testicular volume with a caliper, calibrated against ultrasonography as the standard method. With this methodology, we do not need to transform the variables to obtain symmetrical errors. Another interesting aspect of the approach is that the transformation developed to make the information matrix nonsingular when the skewness parameter is near zero leaves the parameter of interest unchanged. Model fitting is implemented, and the best choice between the usual calibration model and the model proposed in this article was evaluated using the Akaike information criterion, Schwarz's Bayesian information criterion and the Hannan-Quinn criterion.
Abstract:
We consider a Bayesian approach for the nonlinear regression model, replacing the normal distribution on the error term by skewed distributions that account for both skewness and heavy tails, or for skewness alone. The type of data considered in this paper concerns repeated measurements taken over time on a set of individuals. Such multiple observations on the same individual generally produce serially correlated outcomes; thus, our model additionally allows for correlation between observations made on the same individual. We illustrate the procedure using a data set on the growth curves of a clinical measurement in a group of pregnant women from an obstetrics clinic in Santiago, Chile. Parameter estimation and prediction were carried out using appropriate posterior simulation schemes based on Markov chain Monte Carlo methods. Besides the deviance information criterion (DIC) and the conditional predictive ordinate (CPO), we suggest the use of proper scoring rules based on the posterior predictive distribution for comparing models. For our data set, all these criteria chose the skew-t model as the best model for the errors. The DIC and CPO criteria are also validated, for the model proposed here, through a simulation study. A conclusion of this study is that the DIC criterion is not trustworthy for this kind of complex model.
Abstract:
Grubbs' measurement model is frequently used to compare several measuring devices. It is common to assume that the random terms have a normal distribution. However, such an assumption makes the inference vulnerable to outlying observations, whereas scale mixtures of normal distributions are an interesting alternative for producing robust estimates while keeping the elegance and simplicity of maximum likelihood theory. The aim of this paper is to develop an EM-type algorithm for parameter estimation and to use the local influence method to assess the robustness of these parameter estimates under some usual perturbation schemes. In order to identify outliers and to criticize the model building, we use the local influence procedure in a study comparing the precision of several thermocouples. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The aim of this article is to discuss the estimation of systematic risk in capital asset pricing models with heavy-tailed error distributions used to explain asset returns. Diagnostic methods for assessing departures from the model assumptions, as well as the influence of observations on the parameter estimates, are also presented. It may be shown that outlying observations are downweighted in the maximum likelihood equations of linear models with heavy-tailed error distributions, such as the Student-t, power exponential and logistic-II. This robustness aspect may also be extended to influential observations. An application in which the systematic risk estimate of Microsoft is compared under normal and heavy-tailed errors is presented for illustration.
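The downweighting mentioned above is explicit in the Student-t score equations: each observation enters the ML estimating equations with weight w_i = (nu + 1) / (nu + r_i^2), which shrinks toward zero as the scaled residual r_i grows. A minimal sketch:

```python
import numpy as np

def t_weights(residuals, scale, nu):
    """Observation weights implied by Student-t maximum likelihood:
    w_i = (nu + 1) / (nu + r_i^2), with r_i the scaled residual.
    Large residuals get small weights, i.e. outliers are downweighted;
    as nu -> infinity the weights tend to 1 (the normal case)."""
    r = np.asarray(residuals, float) / scale
    return (nu + 1.0) / (nu + r ** 2)
```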
Abstract:
We analyse the finite-sample behaviour of two second-order bias-corrected alternatives to the maximum likelihood estimator of the parameters in a multivariate normal regression model with general parametrization proposed by Patriota and Lemonte [A. G. Patriota and A. J. Lemonte, Bias correction in a multivariate regression model with general parameterization, Stat. Prob. Lett. 79 (2009), pp. 1655-1662]. The two finite-sample corrections we consider are the conventional second-order bias-corrected estimator and the bootstrap bias correction. We present numerical results comparing the performance of these estimators. Our results reveal that the analytical bias correction outperforms the numerical bias corrections obtained from bootstrapping schemes.
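The bootstrap bias correction compared above works by estimating the bias as mean(bootstrap estimates) minus the original estimate and subtracting it, giving 2*theta_hat - mean(bootstrap estimates). The sketch below is the generic scalar scheme, not the authors' specific multivariate setup:

```python
import numpy as np

def bootstrap_bias_corrected(data, estimator, n_boot=2000, rng=None):
    """Bootstrap bias correction: corrected = 2*theta_hat - mean(boot).

    `estimator` maps a sample to a scalar estimate; the bootstrap mean
    estimates E[theta_hat], so its excess over theta_hat estimates the bias."""
    if rng is None:
        rng = np.random.default_rng(0)
    data = np.asarray(data, float)
    theta_hat = estimator(data)
    boot = np.array([estimator(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    return 2.0 * theta_hat - boot.mean()
```

For example, applied to `np.var` (which divides by n and is therefore downward biased), the correction pushes the estimate upward, toward the unbiased value.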
Abstract:
The purpose of the present study was to assess the association between overbite and craniofacial growth pattern. The sample comprised eighty-six cephalograms obtained during the orthodontic pretreatment phase and analyzed using the Radiocef program to identify craniofacial landmarks and perform orthodontic measurements. The variables utilized were overbite, the Jarabak percentage and the Vert index, as well as the classifications resulting from the interpretation of these measurements. In all statistical tests, a significance level of 5% was adopted. Measurement reliability was checked by calculating the method error. Weighted kappa analysis showed that agreement between the facial types defined by the Vert index and the direction of growth trend established by the Jarabak percentage was not satisfactory. Owing to this lack of equivalence, a potential association between overbite and craniofacial growth pattern was evaluated using the chi-square test, considering the two methods separately. No relationship of dependence between overbite and craniofacial growth pattern was revealed by the results obtained. Therefore, it can be concluded that the classification of facial growth pattern will not be the same when considering the Jarabak and Ricketts analyses, and that increased overbite cannot be associated with a brachyfacial growth pattern, nor can open bite be associated with a dolichofacial growth pattern.
Abstract:
Two experiments evaluated an operant procedure for establishing stimulus control using auditory and electrical stimuli as a baseline for measuring the electrical current threshold of electrodes implanted in the cochlea. Twenty-one prelingually deaf children, users of cochlear implants, learned a Go/No Go auditory discrimination task (i.e., pressing a button in the presence of the stimulus but not in its absence). When the simple discrimination baseline became stable, the electrical current was manipulated in descending and ascending series according to an adapted staircase method. Thresholds were determined for three electrodes, one in each location in the cochlea (basal, medial, and apical). Stimulus control was maintained within a certain range of decreasing electrical current but was eventually disrupted. Increasing the current recovered stimulus control, thus allowing the determination of a range of electrical currents that could be defined as the threshold. The present study demonstrated the feasibility of the operant procedure combined with a psychophysical method for threshold assessment, thus contributing to the routine fitting and maintenance of cochlear implants within the limitations of a hospital setting.
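The descending/ascending series described above follow an up-down (staircase) logic: the level steps down while responding is maintained and up after it is lost, and the reversal points bracket the threshold. The sketch below is a deliberate simplification of the adapted staircase method, with names and the averaging rule chosen by us:

```python
def staircase(responds, start, step, floor=0.0, max_trials=50):
    """Simple up-down staircase: decrease the stimulus level while the
    participant responds, increase it after a miss; the threshold is
    estimated as the mean of the reversal points.

    `responds(level)` returns True if stimulus control is maintained."""
    level, reversals, going_down = start, [], True
    for _ in range(max_trials):
        hit = responds(level)
        if hit != going_down:          # direction change -> reversal point
            reversals.append(level)
            going_down = hit
        level = max(floor, level - step) if hit else level + step
    return sum(reversals) / len(reversals) if reversals else level
```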
Abstract:
The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two plots of 100 m², a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights under field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weight. The tree data were divided into two random samples: one sample was used for the development of the regression equations, and the other for validation. The models were developed using simple linear regression analysis, where the dependent variable was dry mass (DW), and the independent variables were height (h), diameter (d) and d²h. Validation was carried out using the Pearson correlation coefficient, the paired Student's t-test and the standard error of estimation. The best equations for estimating aboveground phytomass were: ln DW = -3.068 + 2.522 ln d (r² = 0.91; s_y/x = 0.67) and ln DW = -3.676 + 0.951 ln(d²h) (r² = 0.94; s_y/x = 0.56).
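The two fitted equations can be applied directly by exponentiating. The sketch below uses the reported coefficients but hedges on units (diameter and height in whatever units the original study used) and ignores any back-transformation bias correction for log-scale regression, which the abstract does not discuss:

```python
import math

def aboveground_dry_mass(d, h=None):
    """Predict aboveground dry mass DW from the fitted allometric equations:

        ln DW = -3.068 + 2.522 * ln d          (diameter only)
        ln DW = -3.676 + 0.951 * ln(d^2 * h)   (diameter and height)

    Uses the d^2*h equation when h is given, otherwise the diameter-only one.
    Units follow the original study."""
    if h is None:
        return math.exp(-3.068 + 2.522 * math.log(d))
    return math.exp(-3.676 + 0.951 * math.log(d * d * h))
```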