941 results for Mean squared error method


Relevance:

100.00%

Publisher:

Abstract:

Little information is available on the degree of within-field variability of the potential production of tall wheatgrass (Thinopyrum ponticum) forage under unirrigated conditions. The aim of this study was to characterize the spatial variability of accumulated biomass (AB) without nutritional limitations through vegetation indexes, and then use this information to determine potential management zones. A 27-×-27-m grid cell size was chosen and 84 biomass sampling areas (BSA), each 2 m² in size, were georeferenced. Nitrogen and phosphorus fertilizers were applied after an initial cut at 3 cm height. At 500 °C·day of accumulated thermal time, the AB from each sampling area was collected and evaluated. The spatial variability of AB was estimated more accurately using the Normalized Difference Vegetation Index (NDVI) calculated from LANDSAT 8 images obtained on 24 November 2014 (NDVInov) and 10 December 2014 (NDVIdec), because the potential AB was highly associated with NDVInov and NDVIdec (r² = 0.85 and 0.83, respectively). The models relating potential AB to NDVI were evaluated by root mean squared error (RMSE) and relative root mean squared error (RRMSE); the latter was 12 % and 15 % for NDVInov and NDVIdec, respectively. The spatial correlation of potential AB and NDVI was quantified with semivariograms. The spatial dependence of AB was low. Six classes of NDVI were analyzed for comparison, and two management zones (MZ) were established from them. To evaluate whether the NDVI method allows delimiting MZ with different attainable yields, the AB estimates for these MZ were compared through an ANOVA test. Potential AB differed significantly among MZ. Based on these findings, it can be concluded that NDVI obtained from LANDSAT 8 images can be reliably used for creating MZ in soils under permanent pastures dominated by tall wheatgrass.
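As an illustration of how RMSE and its relative form (RRMSE, expressed as a percentage of the mean observed value) are typically computed from paired observed and predicted values; the data below are hypothetical placeholders, not the study's measurements:

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean squared error between observed and predicted values."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def rrmse(observed, predicted):
    """Relative RMSE, as a percentage of the mean observed value."""
    return 100.0 * rmse(observed, predicted) / np.mean(observed)

# hypothetical accumulated biomass (kg DM/ha) at four sampling areas
# versus NDVI-based predictions
obs = np.array([2100.0, 1850.0, 2400.0, 1950.0])
pred = np.array([2000.0, 1900.0, 2300.0, 2050.0])
print(rmse(obs, pred), rrmse(obs, pred))
```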

Relevance:

100.00%

Publisher:

Abstract:

The DSSAT/CANEGRO model was parameterized and its predictions evaluated using data from five sugarcane (Saccharum spp.) experiments conducted in southern Brazil. The data used are from two of the most important Brazilian cultivars. Some parameters whose values were either directly measured or considered to be well known were not adjusted. Ten of the 20 parameters were optimized with a Generalized Likelihood Uncertainty Estimation (GLUE) algorithm using the leave-one-out cross-validation technique. Model predictions were evaluated against measured data of leaf area index (LAI), stalk and aerial dry mass, sucrose content, and soil water content, using bias, root mean squared error (RMSE), modeling efficiency (Eff), correlation coefficient, and agreement index. The Decision Support System for Agrotechnology Transfer (DSSAT)/CANEGRO model simulated the sugarcane crop in southern Brazil well, using the parameterization reported here. The soil water content predictions were better for the rainfed (mean RMSE = 0.122 mm) than for the irrigated treatment (mean RMSE = 0.214 mm). Predictions were best for aerial dry mass (Eff = 0.850), followed by stalk dry mass (Eff = 0.765) and then sucrose mass (Eff = 0.170). Number of green leaves showed the worst fit (Eff = -2.300). The cross-validation technique makes it possible to use multiple datasets that, because of the heterogeneity of measurements and measurement strategies, would be of limited use individually.
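For reference, the evaluation statistics named above are standard goodness-of-fit measures; a minimal sketch of the usual definitions, assuming that modeling efficiency refers to the Nash-Sutcliffe efficiency and that the agreement index refers to Willmott's d (conventions common in DSSAT-related evaluations, not details stated in the abstract):

```python
import numpy as np

def bias(obs, sim):
    """Mean difference between simulated and observed values."""
    return float(np.mean(np.asarray(sim, float) - np.asarray(obs, float)))

def nash_sutcliffe(obs, sim):
    """Modeling efficiency: 1 is a perfect fit; values below 0 mean the
    model predicts worse than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def willmott_d(obs, sim):
    """Willmott's index of agreement, bounded between 0 and 1."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - np.sum((obs - sim) ** 2) / denom

# hypothetical measured vs. simulated stalk dry mass (t/ha)
obs = np.array([10.2, 12.5, 15.1, 18.0])
sim = np.array([9.8, 13.0, 14.5, 18.6])
print(bias(obs, sim), nash_sutcliffe(obs, sim), willmott_d(obs, sim))
```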

Relevance:

100.00%

Publisher:

Abstract:

This article deals with the efficiency of fractional integration parameter estimators. The study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the open interval (-1, 1). The evaluated estimation methods were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on whether the series is stationary or non-stationary and persistent or anti-persistent. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; and the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
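Not the article's ARFIMA setup, but as a sketch of how such a comparison is usually organized, the snippet below ranks two estimators by their empirical mean squared error over Monte Carlo replications (the simulated process and the estimators are hypothetical placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_mse(estimator, true_value, simulate, n_reps=1000):
    """Empirical mean squared error of an estimator across Monte Carlo replications."""
    estimates = np.array([estimator(simulate(rng)) for _ in range(n_reps)])
    return float(np.mean((estimates - true_value) ** 2))

# hypothetical illustration: the sample mean vs. the sample median as
# estimators of the center of a heavy-tailed (Student-t, 3 df) distribution
simulate = lambda rng: rng.standard_t(df=3, size=200)
print("mean:  ", empirical_mse(np.mean, 0.0, simulate))
print("median:", empirical_mse(np.median, 0.0, simulate))
```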

Relevance:

100.00%

Publisher:

Abstract:

The attached document is the post-print version (the version corrected by the publisher).

Relevance:

100.00%

Publisher:

Abstract:

Nonlinear regression problems can often be reduced to linearity by transforming the response variable (e.g., using the Box-Cox family of transformations). The classical estimates of the parameter defining the transformation, as well as of the regression coefficients, are based on the maximum likelihood criterion, assuming homoscedastic normal errors for the transformed response. These estimates are non-robust in the presence of outliers and can be inconsistent when the errors are non-normal or heteroscedastic. This article proposes new robust estimates that are consistent and asymptotically normal for any unimodal and homoscedastic error distribution. For this purpose, a robust version of conditional expectation is introduced in which the prediction mean squared error is replaced with an M-scale. This concept is then used to develop a nonparametric criterion to estimate the transformation parameter as well as the regression coefficients. A finite-sample estimate of this criterion based on a robust version of smearing is also proposed. Monte Carlo experiments show that the new estimates compare favorably with the available competitors.
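For context, the classical maximum-likelihood approach that the article improves on can be sketched as follows with SciPy; this is the non-robust baseline on hypothetical data, not the article's M-scale-based robust criterion:

```python
import numpy as np
from scipy import stats

# hypothetical positive response with a nonlinear relationship to x
rng = np.random.default_rng(1)
x = rng.uniform(1.0, 5.0, size=200)
y = np.exp(0.5 + 0.8 * x + rng.normal(scale=0.3, size=200))

# maximum-likelihood Box-Cox transformation of the response
y_transformed, lam = stats.boxcox(y)

# ordinary least squares fit on the transformed response
slope, intercept = np.polyfit(x, y_transformed, 1)
print("lambda:", lam, "slope:", slope, "intercept:", intercept)
```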

Relevance:

100.00%

Publisher:

Abstract:

We compare a set of empirical Bayes and composite estimators of the population means of the districts (small areas) of a country, and show that the natural modelling strategy of searching for a well-fitting empirical Bayes model and using it for estimation of the area-level means can be inefficient.

Relevance:

100.00%

Publisher:

Abstract:

A national survey designed to estimate a specific population quantity is sometimes also used to estimate this quantity for a small area, such as a province. Budget constraints do not allow a larger sample size for the small area, so other means of improving estimation have to be devised. We investigate such methods and assess them in a Monte Carlo study. We explore how a complementary survey can be exploited in small area estimation, using the context of the Spanish Labour Force Survey (EPA) and the Barometer in Spain for our study.

Relevance:

100.00%

Publisher:

Abstract:

We evaluated the accuracy of skinfold thicknesses, BMI and waist circumference for the prediction of percentage body fat (PBF) in a representative sample of 372 Swiss children aged 6-13 years. PBF was measured using dual-energy X-ray absorptiometry. On the basis of a preliminary bootstrap selection of predictors, seven regression models were evaluated. All models included sex, age and pubertal stage plus one of the following predictors: (1) log-transformed triceps skinfold (logTSF); (2) logTSF and waist circumference; (3) log-transformed sum of triceps and subscapular skinfolds (logSF2); (4) log-transformed sum of triceps, biceps, subscapular and supra-iliac skinfolds (logSF4); (5) BMI; (6) waist circumference; (7) BMI and waist circumference. The adjusted coefficient of determination (R²adj) and the root mean squared error (RMSE; kg) were calculated for each model. LogSF4 (R²adj 0.85; RMSE 2.35) and logSF2 (R²adj 0.82; RMSE 2.54) were similarly accurate at predicting PBF and superior to logTSF (R²adj 0.75; RMSE 3.02), logTSF combined with waist circumference (R²adj 0.78; RMSE 2.85), BMI (R²adj 0.62; RMSE 3.73), waist circumference (R²adj 0.58; RMSE 3.89), and BMI combined with waist circumference (R²adj 0.63; RMSE 3.66) (P < 0.001 for all values of R²adj). The finding that logSF4 was only modestly superior to logSF2, and that logTSF was better than BMI and waist circumference at predicting PBF, has important implications for paediatric epidemiological studies aimed at disentangling the effect of body fat on health outcomes.
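As an illustration of how models like these are scored, the sketch below fits a linear model on hypothetical predictors (sex, age, pubertal stage and a log-transformed triceps skinfold) and reports the adjusted R² and RMSE; the data are simulated placeholders, not the Swiss sample:

```python
import numpy as np

def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted coefficient of determination."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)

rng = np.random.default_rng(2)
n = 372
X = np.column_stack([
    np.ones(n),                     # intercept
    rng.integers(0, 2, n),          # sex (coded 0/1)
    rng.uniform(6, 13, n),          # age (years)
    rng.integers(1, 4, n),          # pubertal stage (coded)
    np.log(rng.uniform(5, 25, n)),  # log triceps skinfold (mm)
])
y = X @ np.array([2.0, 1.5, 0.3, 0.5, 6.0]) + rng.normal(scale=2.5, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(adjusted_r2(y, y_hat, n_predictors=4), rmse)
```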

Relevance:

100.00%

Publisher:

Abstract:

Interest in applying long-memory models, above all ARFIMA models, to economic variables has grown considerably in recent years. The method most widely used to estimate these models in economic analysis is undoubtedly the one proposed by Geweke and Porter-Hudak (GPH), even though recent work has shown that, in certain cases, this estimator exhibits a very substantial bias. We therefore propose an extension of this estimator, based on the exponential model proposed by Bloomfield, that corrects this bias. We then analyse and compare the behaviour of both estimators in moderately sized samples and verify that the proposed estimator has a smaller mean squared error than the GPH estimator.
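A minimal sketch of the standard GPH log-periodogram regression (the baseline estimator, not the bias-corrected Bloomfield-based extension proposed here); the bandwidth choice m = n^0.5 is a common convention and an assumption, not a detail taken from the abstract:

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke and Porter-Hudak (GPH) log-periodogram estimate of the
    fractional integration order d."""
    x = np.asarray(x, float)
    n = x.size
    m = int(n ** power)                      # number of low frequencies used
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())
    periodogram = np.abs(dft[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    regressor = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope, _ = np.polyfit(regressor, np.log(periodogram), 1)
    return -slope                            # d enters the regression with a negative sign

# sanity check on white noise, for which d should be close to 0
rng = np.random.default_rng(3)
print(gph_estimate(rng.normal(size=2000)))
```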

Relevance:

100.00%

Publisher:

Abstract:

This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data analyzed come from a real paper machine and were collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are used independently to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the root mean squared error, gives a tool for separating significant signal from noise.
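A minimal sketch of that last step using statsmodels: fit an ARMA model to a hypothetical basis-weight profile and apply the Ljung-Box test and RMSE to its residuals; the model order and lag choice are illustrative assumptions:

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.arima.model import ARIMA

# hypothetical basis-weight profile: an AR(1) signal plus measurement noise
rng = np.random.default_rng(4)
e = rng.normal(scale=0.1, size=500)
signal = np.empty(500)
signal[0] = e[0]
for t in range(1, 500):
    signal[t] = 0.7 * signal[t - 1] + e[t]

# fit an ARMA(2, 1) model and test whether the residuals look like white noise
fit = ARIMA(signal, order=(2, 0, 1)).fit()
lb = acorr_ljungbox(fit.resid, lags=[10])      # Ljung-Box Q-statistic and p-value
rmse = np.sqrt(np.mean(fit.resid ** 2))
print(lb)
print("residual RMSE:", rmse)
```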

Relevance:

100.00%

Publisher:

Abstract:

We propose a new kernel estimation of the cumulative distribution function based on transformation and on bias-reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that our proposed kernel estimation improves on alternative approaches when the variable has an extreme-value distribution with a heavy tail and the sample size is small.
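For orientation, the plain (untransformed) Gaussian-kernel CDF estimator that such proposals build on looks like this; the n^(-1/3) rule-of-thumb bandwidth is an assumption for illustration, not the optimal bandwidth derived in the paper:

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x_eval, sample, bandwidth):
    """Gaussian-kernel estimate of the CDF: the average of the smooth
    kernels Phi((x - X_i) / h) over the sample points X_i."""
    x_eval = np.atleast_1d(np.asarray(x_eval, float))
    sample = np.asarray(sample, float)
    return norm.cdf((x_eval[:, None] - sample[None, :]) / bandwidth).mean(axis=1)

rng = np.random.default_rng(5)
data = rng.gumbel(size=50)                        # small sample from an extreme-value law
h = 1.06 * data.std() * data.size ** (-1 / 3)     # rule-of-thumb bandwidth (assumption)
print(kernel_cdf([0.0, 1.0, 2.0], data, h))
```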

Relevance:

100.00%

Publisher:

Abstract:

Many unit root and cointegration tests require an estimate of the spectral density function at frequency zero of some process. Kernel estimators based on weighted sums of autocovariances constructed from estimated residuals of an AR(1) regression are commonly used. However, it is known that with substantially correlated errors, the OLS estimate of the AR(1) parameter is severely biased. In this paper, we first show that this least squares bias induces a significant increase in the bias and mean squared error of kernel-based estimators.
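A minimal sketch of the kind of estimator being discussed: a Bartlett-kernel estimate of the long-run variance (2π times the spectral density at frequency zero) applied to the residuals of an OLS AR(1) regression and then rescaled; the bandwidth and the AR(1) example process are illustrative assumptions:

```python
import numpy as np

def long_run_variance(u, bandwidth):
    """Bartlett-kernel estimate of the long-run variance (2*pi times the
    spectral density at frequency zero) of a series u."""
    u = np.asarray(u, float) - np.mean(u)
    n = u.size
    lrv = np.dot(u, u) / n
    for k in range(1, bandwidth + 1):
        gamma_k = np.dot(u[k:], u[:-k]) / n
        lrv += 2.0 * (1.0 - k / (bandwidth + 1)) * gamma_k
    return lrv

def prewhitened_lrv(x, bandwidth):
    """AR(1) prewhitening: estimate the AR(1) parameter by OLS (the step whose
    bias is at issue), apply the kernel to the residuals, then recolour."""
    x = np.asarray(x, float)
    rho = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    resid = x[1:] - rho * x[:-1]
    return long_run_variance(resid, bandwidth) / (1.0 - rho) ** 2

# AR(1) example with rho = 0.8: the true long-run variance is 1 / (1 - 0.8)^2 = 25
rng = np.random.default_rng(7)
x = np.empty(400)
x[0] = 0.0
for t in range(1, 400):
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(prewhitened_lrv(x, bandwidth=10))
```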

Relevance:

100.00%

Publisher:

Abstract:

This work concerns variance estimation in the case of partial nonresponse treated by an imputation procedure. Treating the imputed values as if they had been observed can lead to a substantial underestimation of the variance of point estimators. The usual variance estimators rely on the availability of second-order inclusion probabilities, which are sometimes difficult (or even impossible) to compute. We propose to examine the properties of variance estimators obtained using approximations of the second-order inclusion probabilities. These approximations are expressed as a function of the first-order inclusion probabilities and are generally valid for high-entropy designs. The results of a simulation study evaluating the properties of the proposed variance estimators in terms of bias and mean squared error will be presented.
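One commonly used approximation of this kind is the Hájek-type formula pi_ij ≈ pi_i * pi_j * [1 - (1 - pi_i)(1 - pi_j)/d] with d = sum of pi_k(1 - pi_k); the sketch below implements it as an illustration of building joint inclusion probabilities from first-order ones (the abstract does not say which approximation is actually used):

```python
import numpy as np

def hajek_joint_inclusion(pi):
    """Hájek-type approximation to second-order inclusion probabilities,
    built only from the first-order probabilities (high-entropy designs)."""
    pi = np.asarray(pi, float)
    d = np.sum(pi * (1.0 - pi))
    pij = np.outer(pi, pi) * (1.0 - np.outer(1.0 - pi, 1.0 - pi) / d)
    np.fill_diagonal(pij, pi)   # by convention, pi_ii = pi_i
    return pij

print(hajek_joint_inclusion(np.array([0.2, 0.5, 0.3, 0.4])))
```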

Relevance:

100.00%

Publisher:

Abstract:

In our study we use a kernel-based regression technique, Support Vector Machine Regression (SVR), for predicting the melting point of drug-like compounds in terms of topological descriptors, topological charge indices, connectivity indices and 2D autocorrelations. The machine learning model was designed, trained and tested using a dataset of 100 compounds, and it was found that an SVMReg model with an RBF kernel could predict the melting point with a mean absolute error of 15.5854 and a root mean squared error of 19.7576.
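A minimal sketch of such a pipeline with scikit-learn, using an RBF-kernel SVR on simulated stand-in descriptors (the descriptor matrix, train/test split and hyperparameters below are illustrative assumptions, not the study's settings):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# hypothetical stand-in for a 100-compound descriptor matrix and melting points (°C)
rng = np.random.default_rng(6)
X = rng.normal(size=(100, 20))
y = 150 + 10 * X[:, :5].sum(axis=1) + rng.normal(scale=15, size=100)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE: ", mean_absolute_error(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
```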