945 results for Mean squared error
Abstract:
The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m−2 and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m−2 at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
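The three error statistics quoted above (bias, root-mean-squared error, and relative deviation) can be computed as in this minimal sketch; the function name and the sample values are illustrative, not the paper's data:

```python
import numpy as np

def retrieval_error_stats(retrieved, reference):
    """Compare retrieved effective radii against a reference set.

    Returns the mean bias, the root-mean-squared error, and the
    relative deviation (RMSE as a fraction of the mean reference value).
    """
    retrieved = np.asarray(retrieved, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = retrieved - reference
    bias = diff.mean()                 # negative => retrieval underestimates
    rmse = np.sqrt((diff ** 2).mean())
    rel_dev = rmse / reference.mean()  # e.g. 0.13 for a 13% relative deviation
    return bias, rmse, rel_dev

# Illustrative effective radii in micrometres (not the paper's data):
bias, rmse, rel = retrieval_error_stats([7.2, 8.9, 10.1], [8.0, 9.5, 10.0])
```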
Abstract:
This paper uses appropriately modified information criteria to select models from the GARCH family, which are subsequently used for predicting US dollar exchange rate return volatility. The out-of-sample forecast accuracy of models chosen in this manner compares favourably on mean absolute error grounds, although less favourably on mean squared error grounds, with those generated by the commonly used GARCH(1, 1) model. An examination of the orders of models selected by the criteria reveals that (1, 1) models are typically selected less than 20% of the time.
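Selection by information criteria, as used above, amounts to computing a penalized likelihood for each candidate order and keeping the minimizer; a generic sketch of the standard (unmodified) AIC and BIC, not the paper's adjusted criteria:

```python
import numpy as np

def information_criteria(loglik, k, T):
    """Standard AIC and BIC for a fitted volatility model with k
    parameters, log-likelihood loglik, and T observations; the model
    with the smallest criterion value is selected."""
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * np.log(T)
    return aic, bic

# Hypothetical fits: a GARCH(1, 1) with 4 parameters vs. a richer model
aic_small, bic_small = information_criteria(-1402.3, 4, 1000)
aic_big, bic_big = information_criteria(-1399.8, 7, 1000)
# BIC penalizes the extra parameters more heavily than AIC does.
```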
Abstract:
This paper forecasts daily Sterling exchange rate returns using various naive, linear and non-linear univariate time-series models. The accuracy of the forecasts is evaluated using mean squared error and sign prediction criteria. These show only a very modest improvement over forecasts generated by a random walk model. The Pesaran–Timmermann test and a comparison with forecasts generated artificially show that even the best models exhibit no evidence of market timing ability.
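The two evaluation criteria named above can be sketched as follows; the function and the return series are illustrative assumptions, not the paper's data:

```python
import numpy as np

def forecast_scores(forecasts, actuals):
    """Score a return forecast by mean squared error and by the
    fraction of correctly predicted signs (direction-of-change)."""
    f = np.asarray(forecasts, dtype=float)
    a = np.asarray(actuals, dtype=float)
    mse = ((f - a) ** 2).mean()
    sign_hit_rate = (np.sign(f) == np.sign(a)).mean()
    return mse, sign_hit_rate

# A driftless random-walk benchmark predicts a zero return every day:
actual = np.array([0.4, -0.2, 0.1, -0.5])
mse_rw, _ = forecast_scores(np.zeros_like(actual), actual)
```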
Abstract:
We apply the Coexistence Approach (CoA) to reconstruct mean annual precipitation (MAP), mean annual temperature (MAT), mean temperature of the warmest month (MTWA) and mean temperature of the coldest month (MTCO) at 44 pollen sites on the Qinghai–Tibetan Plateau. The modern climate ranges of the taxa are obtained (1) from county-level presence/absence data and (2) from data on the optimum and range of each taxon from Lu et al. (2011). The CoA based on the optimum and range data yields better predictions of observed climate parameters at the pollen sites than that based on the county-level data. The presence of arboreal pollen, most of which is derived from outside the region, distorts the reconstructions. More reliable reconstructions are obtained using only the non-arboreal component of the pollen assemblages. The root-mean-squared error (RMSE) of the MAP reconstructions is smaller than the RMSE of MAT, MTWA and MTCO, suggesting that precipitation gradients are the most important control of vegetation distribution on the Qinghai–Tibetan Plateau. Our results show that CoA could be used to reconstruct past climates in this region, although in areas characterized by open vegetation the most reliable estimates will be obtained by excluding possible arboreal contaminants.
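The core of the Coexistence Approach is an interval intersection: the reconstructed value of a climate parameter is the range shared by the tolerances of all taxa present. A minimal sketch, with hypothetical tolerance intervals:

```python
def coexistence_interval(taxon_ranges):
    """Intersect the climate tolerance intervals (lo, hi) of all taxa
    present in a pollen assemblage.  The overlap is the reconstructed
    range for that climate parameter; None means the taxa cannot
    coexist under any value of the parameter."""
    lo = max(r[0] for r in taxon_ranges)
    hi = min(r[1] for r in taxon_ranges)
    return (lo, hi) if lo <= hi else None

# Hypothetical MAT tolerances (degrees C) for three taxa at one site:
mat = coexistence_interval([(-2.0, 14.0), (1.5, 18.0), (0.0, 9.5)])
# mat is the interval of temperatures compatible with all three taxa
```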
Abstract:
We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by using a convenient set of basis functions in which to expand our variables. This requires approximating the spike train four-point functions by combinations of two-point functions, through relations that would hold exactly for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution to stimulus reconstruction of the second-order kernels, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to a 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
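Before any basis-function expansion, a second-order Volterra reconstruction from a single binary spike train has a simple direct form; this naive sketch (hypothetical kernels, one neuron) shows the zeroth-, first- and second-order terms:

```python
import numpy as np

def volterra_reconstruction(spikes, h0, h1, h2):
    """Second-order Volterra reconstruction of a stimulus from a binary
    spike train: a constant offset h0, a first-order kernel h1 summed
    over recent spikes, and a second-order kernel h2 summed over pairs
    of recent spikes."""
    T = len(spikes)
    L = len(h1)
    s = np.full(T, h0, dtype=float)
    for t in range(T):
        for i in range(L):
            if t - i >= 0 and spikes[t - i]:
                s[t] += h1[i]
                for j in range(L):
                    if t - j >= 0 and spikes[t - j]:
                        s[t] += h2[i, j]
    return s
```

The matrices the abstract refers to arise when the kernels h1 and h2 are themselves estimated by least squares; their size grows quickly with the kernel length and the number of neurons.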
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method-of-moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and, when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
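The flavour of such predictors can be seen in the textbook mixed-model shrinkage form, which pulls a cluster's sample mean toward the grand mean by a reliability weight; this is a generic sketch, not the finite population predictor of the paper:

```python
def blup_cluster_mean(cluster_mean, grand_mean, n, sigma2_between, sigma2_within):
    """Textbook shrinkage predictor of a realized cluster mean under a
    two-level mixed model with known variance components: the weight w
    approaches 1 (no shrinkage) as the cluster sample size n grows or as
    the between-cluster variance dominates."""
    w = sigma2_between / (sigma2_between + sigma2_within / n)
    return grand_mean + w * (cluster_mean - grand_mean)

# With equal variance components and a single observation, the predictor
# sits halfway between the cluster sample mean and the grand mean:
pred = blup_cluster_mean(cluster_mean=2.0, grand_mean=0.0,
                         n=1, sigma2_between=1.0, sigma2_within=1.0)
```

The empirical predictors discussed in the abstract replace the known variance components by method-of-moments estimates.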
Abstract:
Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model.
Abstract:
Objectives: To find variables correlated to improvement with intraduodenal levodopa/carbidopa infusion (Duodopa) in order to identify potential candidates for this treatment. Two clinical studies comparing Duodopa with oral treatments in patients with advanced Parkinson's disease have shown significant improvement in percent on-time on a global treatment response scale (TRS) based on hourly and half-hourly clinical ratings and in median UPDRS scores. Methods: Data from study 1, comparing infusion with Sinemet CR (12 patients, Nyholm et al, Clin Neuropharmacol 2003; 26(3): 156-163), and study 2, comparing infusion with individually optimised conventional combination therapies (18 patients, Nyholm et al, Neurology, in press), were used. Measures of severity were defined as the total UPDRS score and the scores for sections II and III, percent functional on-time and mean squared error of ratings on the TRS, and the mean of diary questions about mobility and satisfaction (study 2 only). Absolute improvement was defined as the difference in severity, and relative improvement was defined as percent absolute improvement/severity on oral treatment. Pearson correlation coefficients between measures of improvement and other variables were calculated. Results: Correlations (r2>0.28, p<0.05) between severity during oral treatment and absolute improvement on infusion were found for: total UPDRS, UPDRS III and TRS ratings (studies 1 and 2) and for diary question 1 (mobility) and UPDRS II (study 2). Correlation to relative improvement was found for total UPDRS (study 2, r2=0.47). Figure 1 illustrates absolute improvement in total UPDRS vs. total UPDRS during oral treatment (study 2). Conclusion: Correlating different measures of severity and improvement revealed that patients with more severe symptoms improved the most and that the relation between severity and improvement was linear within the studied groups.
The result, which was reproducible between the two clinical studies, could be useful when selecting candidates for the treatment.
Abstract:
Objective: We present a new evaluation of levodopa plasma concentrations and clinical effects during duodenal infusion of a levodopa/carbidopa gel (Duodopa) in 12 patients with advanced Parkinson's disease (PD), from a study reported previously (Nyholm et al, Clin Neuropharmacol 2003; 26(3): 156-163). One objective was to investigate in which state of PD we can see the greatest benefits of infusion compared with the corresponding oral treatment (Sinemet CR). Another objective was to identify fluctuating response to levodopa and correlate it to variables related to disease progression. Methods: We computed the mean absolute error (MAE) and mean squared error (MSE) of the clinical rating, scored from -3 (severe parkinsonism) to +3 (severe dyskinesia), as measures of the clinical state over the treatment periods of the study. The standard deviation (SD) of the rating was used as a measure of response fluctuations. Linear regression and visual inspection of graphs were used to estimate relationships between these measures and variables related to disease progression such as years on levodopa (YLD) or unified PD rating scale part II (UPDRS II). Results: We found that MAE for infusion had a strong linear correlation to YLD (r2=0.80), while the corresponding relation for oral treatment looked more sigmoid, particularly for the more advanced patients (YLD>18).
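Because the target clinical state is the midpoint 0 of the -3 to +3 scale, the MAE and MSE described above are taken about zero, and the SD captures fluctuation around the patient's own mean; a minimal sketch with hypothetical ratings:

```python
import numpy as np

def rating_state_measures(ratings):
    """Summarise a series of clinical ratings on the -3 (severe
    parkinsonism) to +3 (severe dyskinesia) scale.  The target state is
    0, so MAE and MSE are computed about zero; the sample standard
    deviation serves as a measure of response fluctuation."""
    r = np.asarray(ratings, dtype=float)
    mae = np.abs(r).mean()
    mse = (r ** 2).mean()
    sd = r.std(ddof=1)
    return mae, mse, sd

# Hypothetical half-hourly ratings over one treatment period:
mae, mse, sd = rating_state_measures([-1, 0, 0, 1, 2, 0, -1])
```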
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
Abstract:
In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero-limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte-Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.
Abstract:
Convex combinations of long memory estimates using the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing the results of the standard methods with their combined versions, the latter achieve a lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
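The combination step described above is straightforward once the lower-sampling-rate estimate has been bias-corrected; a minimal sketch with an assumed known bias and weight (in practice both would be estimated):

```python
import numpy as np

def combine_and_rmse(est_high, est_low, bias_low, w, d_true):
    """Convex combination of long-memory parameter estimates from the
    original sampling rate (est_high) and a lower sampling rate
    (est_low), after subtracting the known bias of the latter.
    Returns the combined estimates and their RMSE against the true d."""
    a = np.asarray(est_high, dtype=float)
    b = np.asarray(est_low, dtype=float) - bias_low   # bias correction first
    combined = w * a + (1.0 - w) * b
    rmse = np.sqrt(((combined - d_true) ** 2).mean())
    return combined, rmse
```

The RMSE reported by the simulations folds both sources of error together, since RMSE^2 = bias^2 + variance.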
Abstract:
Our focus is on information in expectation surveys that can now be built on thousands (or millions) of respondents on an almost continuous-time basis (big data) and in continuous macroeconomic surveys with a limited number of respondents. We show that, under standard microeconomic and econometric techniques, survey forecasts are an affine function of the conditional expectation of the target variable. This is true whether or not the survey respondent knows the data-generating process (DGP) of the target variable or the econometrician knows the respondent's individual loss function. If the econometrician has a mean-squared-error risk function, we show that asymptotically efficient forecasts of the target variable can be built using Hansen's (Econometrica, 1982) generalized method of moments in a panel-data context, when N and T diverge or when T diverges with N fixed. Sequential asymptotic results are obtained using Phillips and Moon's (Econometrica, 1999) framework. Possible extensions are also discussed.
Abstract:
The increase in ultraviolet radiation (UV) at the surface, the high incidence of non-melanoma skin cancer (NMSC) on the coast of Northeast Brazil (NEB) and the reduction of total ozone were the motivation for the present study. The overall objective was to identify and understand the variability of UV, or the Ultraviolet Index (UV Index), in the capitals of the east coast of the NEB and to adjust stochastic models to the UV index time series in order to make predictions (interpolations) and forecasts/projections (extrapolations), followed by trend analysis. The methodology consisted of applying multivariate analysis (principal component analysis and cluster analysis), the Predictive Mean Matching method for filling gaps in the data, autoregressive distributed lag (ADL) models and the Mann-Kendall test. The modelling via the ADL consisted of parameter estimation, diagnostics, residual analysis and evaluation of the quality of the predictions and forecasts via the mean squared error and the Pearson correlation coefficient. The research results indicated that the annual variability of UV in the capital of Rio Grande do Norte (Natal) has a feature in the months of September and October consisting of a stabilization/reduction of the UV index because of the greater annual concentration of total ozone; the increased amount of aerosol during this period contributes with lesser intensity to this event. The application of cluster analysis on the east coast of the NEB showed that this event also occurs in the capitals of Paraíba (João Pessoa) and Pernambuco (Recife). Extreme UV events in the NEB were analyzed from the city of Natal and were associated with an absence of cloud cover and levels of total ozone below the annual average; they do not occur in the entire region because of the uneven spatial distribution of these variables.
The ADL(4, 1) model, adjusted with data on the UV index and total ozone for the period 2001-2012, made a projection/extrapolation for the next 30 years (2013-2043), indicating at the end of that period an increase in the UV index of approximately one unit, should total ozone maintain the downward trend observed in the study period.
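An ADL(p, q) fit of the kind mentioned above can be sketched as an ordinary least-squares regression of the target on its own lags and lags of the explanatory series; variable names and data here are illustrative only (in the study, y would be the UV index and x total ozone):

```python
import numpy as np

def fit_adl(y, x, p=4, q=1):
    """Fit an ADL(p, q) model
        y_t = c + sum_{i=1..p} a_i * y_{t-i} + sum_{j=1..q} b_j * x_{t-j}
    by ordinary least squares; returns the coefficient vector
    [c, a_1..a_p, b_1..b_q]."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    m = max(p, q)
    rows = []
    for t in range(m, len(y)):
        rows.append([1.0]
                    + [y[t - i] for i in range(1, p + 1)]
                    + [x[t - j] for j in range(1, q + 1)])
    X = np.array(rows)
    coef, *_ = np.linalg.lstsq(X, y[m:], rcond=None)
    return coef
```

Extrapolation then iterates the fitted equation forward, feeding each prediction back in as a lag, together with an assumed path for x.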