899 results for Uncertainty of forecasts
Abstract:
A novel approach is presented for the evaluation of circulation type classifications (CTCs) in terms of their capability to predict surface climate variations. The approach is analogous to that for probabilistic meteorological forecasts and is based on the Brier skill score. This score is shown to take a particularly simple form in the context of CTCs and to quantify the resolution of a climate variable by the classifications. The sampling uncertainty of the skill can be estimated by means of nonparametric bootstrap resampling. The evaluation approach is applied for a systematic intercomparison of 71 CTCs (objective and manual, from COST Action 733) with respect to their ability to resolve daily precipitation in the Alpine region. For essentially all CTCs, the Brier skill score is found to be higher for weak and moderate compared to intense precipitation, for winter compared to summer, and over the north and west of the Alps compared to the south and east. Moreover, CTCs with a higher number of types exhibit better skill than CTCs with few types. Among CTCs with comparable type number, the best automatic classifications are found to outperform the best manual classifications. It is not possible to single out one ‘best’ classification for Alpine precipitation, but there is a small group showing particularly high skill.
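The score described above can be made concrete in a few lines: for a CTC, a natural probabilistic "forecast" of a binary precipitation event is the climatological event frequency within each circulation type, and the Brier skill score measures the improvement of these in-type frequencies over the overall climatology. The following sketch follows that reading of the abstract; the function names and the bootstrap design are illustrative, not the paper's exact estimator.

import numpy as np

def brier_skill_score(types, event):
    # forecast probability on each day = event frequency within that day's
    # circulation type; reference forecast = overall climatological frequency
    types = np.asarray(types)
    event = np.asarray(event, dtype=float)
    p_clim = event.mean()
    p = np.array([event[types == t].mean() for t in types])
    bs = np.mean((p - event) ** 2)
    bs_ref = np.mean((p_clim - event) ** 2)
    return 1.0 - bs / bs_ref

def bootstrap_skill(types, event, n_boot=1000, seed=0):
    # nonparametric bootstrap of the skill: resample days with replacement
    rng = np.random.default_rng(seed)
    types = np.asarray(types)
    event = np.asarray(event, dtype=float)
    idx = rng.integers(0, len(event), size=(n_boot, len(event)))
    return np.array([brier_skill_score(types[i], event[i]) for i in idx])

In this form the score directly quantifies resolution: it is zero when every type has the same event frequency as climatology and grows as the types separate wet days from dry days.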
Abstract:
As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called "double penalty" effect, incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters and discuss the effect of the permutation restriction.
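As one concrete reading of the proposal, restricting the permutation so that each forecast value may be displaced by at most a window of w time steps turns the minimisation into a banded assignment problem, which off-the-shelf solvers handle directly. The sketch below is that reading with illustrative names and an absolute-error cost (p = 1 by default); it is not the authors' implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_error(forecast, actual, window=3, p=1):
    f = np.asarray(forecast, dtype=float)
    a = np.asarray(actual, dtype=float)
    n = len(f)
    # cost[i, j]: error if forecast value j is matched to observation time i
    cost = np.abs(a[:, None] - f[None, :]) ** p
    shift = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    cost[shift > window] = 1e12            # forbid displacements beyond the window
    rows, cols = linear_sum_assignment(cost)  # optimal restricted permutation
    return cost[rows, cols].mean() ** (1 / p)

With window=0 the measure collapses to the ordinary point-wise error, and widening the window progressively forgives small time displacements, which is exactly how the restriction mitigates the double penalty.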
Abstract:
Hydrological ensemble prediction systems (HEPS) have in recent years been increasingly used by European hydrometeorological agencies for the operational forecasting of floods. The most obvious advantage of HEPS is that more of the uncertainty in the modelling system can be assessed. In addition, ensemble prediction systems generally have better skill than deterministic systems, both in terms of mean forecast performance and in the potential forecasting of extreme events. Research efforts have so far mostly been devoted to improving the physical and technical aspects of the model systems, such as increased resolution in time and space and better description of physical processes. Developments like these are certainly needed; however, in this paper we argue that there are other areas of HEPS that need urgent attention. This was also the result of a group exercise and a survey of operational forecasters within the European Flood Awareness System (EFAS), conducted to identify the top priorities for improving their own system. These priorities turned out to span a range of areas, the most popular being to include verification and assessment of past forecast performance, a multi-model approach to hydrological modelling, increased forecast skill in the medium range (>3 days), and more focus on education and training in the interpretation of forecasts. In light of limited resources, we suggest a simple model that classifies the identified priorities in terms of their cost and complexity in order to decide in which order to tackle them. This model is then used to create an action plan of short-, medium- and long-term research priorities, with the ultimate goal of optimally improving EFAS in particular and spurring the development of operational HEPS in general.
Abstract:
A comparison of the point forecasts and the probability distributions of inflation and output growth made by individual respondents to the US Survey of Professional Forecasters indicates that the two sets of forecasts are sometimes inconsistent. We evaluate a number of possible explanations, and find that not all forecasters update their histogram forecasts as new information arrives. This is supported by the finding that the point forecasts are more accurate than the histograms in terms of first-moment prediction.
Abstract:
In recent years several methodologies have been developed to combine and interpret ensembles of climate models with the aim of quantifying uncertainties in climate projections. Constrained climate model forecasts have been generated by combining various choices of metrics used to weight individual ensemble members with diverse approaches to sampling the ensemble. The forecasts obtained are often significantly different, even when based on the same model output. Therefore, a climate model forecast classification system can serve two roles: to provide a way for forecast producers to self-classify their forecasts, and to provide information on the methodological assumptions underlying forecast generation and its uncertainty when forecasts are used for impact studies. In this review we propose a possible classification system based on choices of metrics and sampling strategies. We illustrate the impact of some of the possible choices on the uncertainty quantification of large-scale projections of temperature and precipitation changes, and briefly discuss possible connections between climate forecast uncertainty quantification and decision-making approaches in the climate change context.
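To make the classification tangible, here is a minimal sketch of one cell of such a system: members weighted by a performance metric (an exponential weighting with an arbitrary sharpness parameter kappa, both assumptions of this sketch) and the weighted ensemble resampled to produce an uncertainty range for a projected change.

import numpy as np

def weighted_forecast(changes, metric_errors, kappa=1.0, seed=0):
    # one possible metric choice: smaller historical error => larger weight
    changes = np.asarray(changes, dtype=float)
    w = np.exp(-kappa * np.asarray(metric_errors, dtype=float))
    w /= w.sum()
    # one possible sampling strategy: resample members in proportion to weight
    rng = np.random.default_rng(seed)
    sample = rng.choice(changes, size=10_000, p=w)
    return np.sum(w * changes), np.percentile(sample, [5, 95])

# e.g. projected warming (deg C) of five members and their metric scores
mean, (lo, hi) = weighted_forecast([2.1, 2.8, 3.4, 2.5, 4.0],
                                   [0.3, 0.5, 0.9, 0.4, 1.2])

Changing kappa or the resampling rule moves the forecast between cells of the classification, which is precisely why two groups can publish significantly different forecasts from the same model output.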
Abstract:
Existing empirical evidence has frequently observed that professional forecasters are conservative and display herding behaviour. Whilst a large number of papers have considered equities as well as macroeconomic series, few have considered the accuracy of forecasts in alternative asset classes such as real estate. We consider the accuracy of forecasts for the UK commercial real estate market over the period 1999-2011. The results illustrate that forecasters display a tendency to under-estimate growth rates during strong market conditions and to over-estimate them when the market is performing poorly. This conservatism not only results in smoothed estimates but also implies that forecasters display herding behaviour. There is also a marked difference in the relative accuracy of capital and total returns versus rental figures. Whilst rental growth forecasts are relatively accurate, considerable inaccuracy is observed with respect to capital values and total returns.
Abstract:
This study presents an approach for combining the uncertainties of hydrological model outputs predicted by a number of machine learning models. The machine learning based uncertainty prediction approach is very useful for estimating a hydrological model's uncertainty in a particular hydro-meteorological situation in real-time applications [1]. In this approach the hydrological model realizations from Monte Carlo simulations are used to build different machine learning uncertainty models that predict the uncertainty (quantiles of the pdf) of the deterministic output from the hydrological model. The uncertainty models are trained using antecedent precipitation and streamflows as inputs. The trained models are then employed to predict the model output uncertainty specific to the new input data. We used three machine learning models, namely artificial neural networks, model trees, and locally weighted regression, to predict output uncertainties. These three models produce similar verification results, which can be improved by merging their outputs dynamically. We propose an approach to form a committee of the three models to combine their outputs. The approach is applied to estimate the uncertainty of streamflow simulations from a conceptual hydrological model in the Brue catchment in the UK and the Bagmati catchment in Nepal. The verification results show that the merged output is better than any individual model output. [1] D. L. Shrestha, N. Kayastha, D. P. Solomatine, and R. Price. Encapsulation of parametric uncertainty statistics by various predictive machine learning models: MLUE method, Journal of Hydroinformatics, in press, 2013.
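A minimal sketch of the committee idea follows, using scikit-learn stand-ins (an MLP for the neural network, a depth-limited decision tree for the model tree, and k-nearest-neighbours regression as a rough proxy for locally weighted regression) and a simple inverse-validation-error weighting in place of the paper's dynamic merging scheme; all data and names are synthetic.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))                 # stand-in: antecedent rainfall and flows
y = X @ np.array([0.5, -0.2, 0.1, 0.3]) + rng.normal(0, 0.1, 600)  # stand-in: a pdf quantile
X_tr, X_val, y_tr, y_val = X[:400], X[400:], y[:400], y[400:]

models = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0),
          DecisionTreeRegressor(max_depth=5, random_state=0),
          KNeighborsRegressor(n_neighbors=10)]
for m in models:
    m.fit(X_tr, y_tr)

# committee: weight each model by the inverse of its validation RMSE
rmse = np.array([np.sqrt(np.mean((m.predict(X_val) - y_val) ** 2)) for m in models])
w = (1 / rmse) / (1 / rmse).sum()
merged = np.column_stack([m.predict(X_val) for m in models]) @ w

Recomputing the weights over a sliding window of recent errors is one way to make the merge "dynamic" in the sense the abstract describes.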
Abstract:
A procedure for characterizing the global uncertainty of a rainfall-runoff simulation model based on grey numbers is presented. With the grey numbers technique the uncertainty is characterized by an interval: once the parameters of the rainfall-runoff model have been properly defined as grey numbers, grey mathematics and functions make it possible to obtain simulated discharges in the form of grey numbers whose envelope defines a band representing the vagueness/uncertainty associated with the simulated variable. The grey numbers representing the model parameters are estimated so that the band obtained from the envelope of simulated grey discharges includes an assigned percentage of observed discharge values while remaining as narrow as possible. The approach is applied to a real case study, highlighting that a rigorous application of the procedure for direct simulation through the rainfall-runoff model with grey parameters involves long computational times. However, these times can be significantly reduced by using a simplified computing procedure with minimal approximations in the quantification of the grey numbers representing the simulated discharges. Relying on this simplified procedure, the conceptual rainfall-runoff grey model is calibrated, and the uncertainty bands obtained downstream of both the calibration and validation processes are compared with those obtained using a well-established approach for characterizing uncertainty, GLUE. The results of the comparison show that the proposed approach may represent a valid tool for characterizing the global uncertainty associated with the output of a rainfall-runoff simulation model.
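The core mechanics of grey numbers are just interval arithmetic, which a few lines make concrete. The sketch below propagates an illustrative grey storage coefficient through one step of a linear-reservoir discharge relation Q = k * S; the class and the values are assumptions of this sketch, not the paper's model.

class Grey:
    # minimal interval ("grey number") arithmetic
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Grey(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Grey(min(prods), max(prods))
    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"

# one step of a linear-reservoir runoff relation Q = k * S with a grey k
k = Grey(0.18, 0.25)    # grey storage coefficient (illustrative values)
S = Grey(12.0, 12.0)    # crisp storage treated as a degenerate interval
print(k * S)            # grey discharge: [2.160, 3.000]

Composing such operations over a full simulation yields the grey discharge band whose width is then traded off against its coverage of the observations during calibration.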
Abstract:
Categorical data cannot be interpolated directly because they are outcomes of discrete random variables. Categorical variables are therefore transformed into indicator functions that can be handled by interpolation methods, and the interpolated indicator values are then back-transformed to the original categories. However, aspects such as the variability and uncertainty of interpolated values of categorical data have not previously been considered. In this paper we show that the interpolation variance can be used to map an uncertainty zone around boundaries between categories. Moreover, it is shown that the interpolation variance is a component of the total variance of the categorical variables, as measured by the coefficient of unalikeability.
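Both devices used above are compact enough to state directly: the indicator transform turns a categorical sample into interpolable 0/1 columns, and the coefficient of unalikeability u = 1 - sum(p_k^2) is the probability that two random draws differ. The category names in the sketch below are hypothetical.

import numpy as np

def indicators(categories):
    # transform a categorical sample into 0/1 indicator columns, one per
    # category, so it can be handled by numerical interpolation methods
    cats = np.unique(categories)
    return cats, (np.asarray(categories)[:, None] == cats[None, :]).astype(float)

def unalikeability(p):
    # coefficient of unalikeability for category proportions p:
    # the chance that two random draws differ, u = 1 - sum(p_k^2)
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

cats, ind = indicators(["sand", "clay", "sand", "silt"])
print(unalikeability(ind.mean(axis=0)))   # 0.625 for proportions (0.25, 0.5, 0.25)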
Abstract:
To check the effectiveness of campaigns preventing drug abuse, or to detect local effects of efforts against drug trafficking, it is beneficial to know the consumed amounts of substances at high spatial and temporal resolution. The analysis of drugs of abuse in wastewater (WW) has the potential to provide this information. In this study, the reliability of WW drug consumption estimates is assessed and a novel method presented to calculate the total uncertainty in observed WW cocaine (COC) and benzoylecgonine (BE) loads. Specifically, uncertainties resulting from discharge measurements, chemical analysis and the applied sampling scheme were addressed through three approaches: (i) a generic model-based procedure to investigate the influence of the sampling scheme on the uncertainty of observed or expected drug loads; (ii) a comparative analysis of two analytical methods (high performance liquid chromatography-tandem mass spectrometry and gas chromatography-mass spectrometry), including an extended cross-validation by influent profiling over several days; and (iii) monitoring of COC and BE concentrations in the WW of the largest Swiss sewage treatment plants. In addition, the COC and BE loads observed in the sewage treatment plant of the city of Berne were used to back-calculate COC consumption. The estimated mean daily consumed amount was 107 ± 21 g of pure COC, corresponding to 321 g of street-grade COC.
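For the back-calculation step, the generic formula scales the measured BE load by the molecular-weight ratio of COC to BE and divides by the fraction of a dose excreted as BE. The sketch below uses that textbook form with an assumed excretion fraction and an illustrative load; the paper's exact correction factors may differ.

def cocaine_consumption(be_load_g_day, excreted_as_be=0.45,
                        mw_coc=303.4, mw_be=289.3):
    # consumed pure COC = BE load * (MW_COC / MW_BE) / excretion fraction
    # (0.45 is an assumed literature value; reported fractions vary)
    return be_load_g_day * (mw_coc / mw_be) / excreted_as_be

print(round(cocaine_consumption(20.0), 1))  # ~46.6 g/day pure COC for a 20 g/day BE load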
Abstract:
Researchers in ecology commonly use multivariate analyses (e.g. redundancy analysis, canonical correspondence analysis, Mantel correlation, multivariate analysis of variance) to interpret patterns in biological data and to relate these patterns to environmental predictors. There has been, however, little recognition of the errors associated with biological data and the influence that these may have on predictions derived from ecological hypotheses. We present a permutational method that assesses the effects of taxonomic uncertainty on the multivariate analyses typically used in the analysis of ecological data. The procedure is based on iterative randomizations that randomly re-assign unidentified species in each site to any of the other species found in the remaining sites. After each re-assignment of species identities, the multivariate method in question is run and a parameter of interest is calculated. Consequently, one can estimate a range of plausible values for the parameter of interest under different scenarios of re-assigned species identities. We demonstrate the use of our approach in the calculation of two parameters with an example involving tropical tree species from western Amazonia: 1) the Mantel correlation between compositional similarity and environmental distances between pairs of sites; and 2) the variance explained by environmental predictors in redundancy analysis (RDA). We also investigated the effects of increasing taxonomic uncertainty (i.e. the number of unidentified species) and of the taxonomic resolution at which morphospecies are determined (genus resolution, family resolution, or fully undetermined species) on the uncertainty range of these parameters. To achieve this, we performed simulations on a tree dataset from southern Mexico by randomly selecting a portion of the species contained in the dataset and classifying them as unidentified at each level of decreasing taxonomic resolution. An analysis of covariance showed that both taxonomic uncertainty and resolution significantly influence the uncertainty range of the resulting parameters. Increasing taxonomic uncertainty widens the uncertainty range of the parameters estimated in both the Mantel test and the RDA. The effects of increasing taxonomic resolution, however, are not as evident. The method presented in this study improves on traditional approaches to studying compositional change in ecological communities by accounting for some of the uncertainty inherent in biological data. We hope that this approach can be routinely used to estimate any parameter of interest obtained from compositional data tables when faced with taxonomic uncertainty.
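A minimal sketch of the randomization loop is given below: each unidentified morphospecies (a column of the site-by-species table) is merged into a randomly chosen identified species, the statistic of interest is recomputed, and the accumulated values give a plausible range. The statistic callable is a placeholder for, e.g., a Mantel correlation or the RDA explained variance from any ecology package.

import numpy as np

def taxonomic_uncertainty_range(comm, unknown_cols, statistic, n_iter=999, seed=0):
    # comm: site-by-species abundance array; unknown_cols: indices of
    # unidentified morphospecies; statistic: callable mapping a community
    # table to a scalar parameter of interest
    rng = np.random.default_rng(seed)
    known = [j for j in range(comm.shape[1]) if j not in set(unknown_cols)]
    vals = []
    for _ in range(n_iter):
        c = comm.copy()
        for j in unknown_cols:
            target = rng.choice(known)
            c[:, target] += c[:, j]   # re-assign the unknown to a random species
            c[:, j] = 0
        vals.append(statistic(c))
    return np.percentile(vals, [2.5, 97.5])  # plausible range for the parameter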
Abstract:
The verification of compliance with a design specification in manufacturing requires the use of metrological instruments to check whether the magnitude associated with the design specification is within the tolerance range. Such instrumentation, and its use during the measurement process, has an associated measurement uncertainty whose value must be related to the tolerance being tested. Most papers dealing jointly with tolerances and measurement uncertainties focus mainly on establishing an uncertainty-tolerance relationship, without paying much attention to the impact from the standpoint of process cost. This paper analyzes the cost of measurement uncertainty, considering uncertainty as a productive factor in the process outcome. This is done starting from a cost-tolerance model associated with the process. By means of this model, the presence of measurement uncertainty is expressed in quantitative terms of cost and its impact on the process is analyzed.
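The interaction between the two models can be shown with a toy calculation: given a cost-tolerance curve C(T), treating measurement uncertainty U as a guard band shrinks the usable tolerance to T - k*U, and the resulting cost increase is the price of that uncertainty. Both the reciprocal cost model and the guard-band factor k below are assumptions of this sketch, not the paper's model.

def tolerance_cost(T, a=2.0, b=1.5):
    # illustrative reciprocal cost-tolerance model C(T) = a + b / T
    return a + b / T

def cost_of_uncertainty(T, U, k=2.0):
    # extra process cost when guard-banding shrinks the usable
    # tolerance from T to T - k*U (k: assumed guard-band factor)
    return tolerance_cost(T - k * U) - tolerance_cost(T)

print(round(cost_of_uncertainty(T=0.10, U=0.01), 2))  # 3.75: penalty for U = 10% of T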
Abstract:
An analytical method for evaluating the uncertainty of the performance of active antenna arrays across the whole spatial spectrum is presented. Since array processing algorithms based on spatial reference are widely used to track moving targets, it is essential to be aware of the impact of the uncertainty sources on the antenna response. Furthermore, the estimation of the direction of arrival (DOA) depends on the array uncertainty. The aim of the uncertainty analysis is to provide an exhaustive characterization of the behavior of the active antenna array with respect to its main uncertainty sources. The result of this analysis helps to select the proper calibration technique to implement. An illustrative example for a triangular antenna array used for satellite tracking is presented, showing the suitability of the proposed method for carrying out an efficient characterization of an active antenna array.
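Although the paper's method is analytical, the quantity it characterizes is easy to exhibit with a Monte Carlo stand-in: perturb the per-element gain and phase and record the spread of the array factor across the spatial spectrum. The uniform linear array and all parameter values below are illustrative assumptions, not the paper's triangular-array setup.

import numpy as np

def array_factor_uncertainty(n_elem=8, d=0.5, gain_sigma=0.05,
                             phase_sigma=np.deg2rad(3.0), trials=2000, seed=0):
    # perturb per-element complex weights and collect the array-factor
    # magnitude over angle; d is element spacing in wavelengths
    rng = np.random.default_rng(seed)
    theta = np.linspace(-np.pi / 2, np.pi / 2, 361)
    k = 2 * np.pi * d * np.sin(theta)        # phase progression per element index
    n = np.arange(n_elem)
    af = []
    for _ in range(trials):
        g = 1 + gain_sigma * rng.standard_normal(n_elem)
        ph = phase_sigma * rng.standard_normal(n_elem)
        w = g * np.exp(1j * ph)
        af.append(np.abs(w @ np.exp(1j * np.outer(n, k))))
    af = np.array(af)
    return theta, af.mean(axis=0), af.std(axis=0)  # mean pattern and its uncertainty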
Abstract:
Purpose: This paper aims to describe an investigation into how company performance can be improved by integrating internal and external customers and technology. The approach was developed, implemented and evaluated in the operations of the building components industry. The research was carried out in the precast concrete division of a Singapore company. Design/methodology/approach: For the purpose of undertaking the investigation an exploratory case study approach was used, divided into conceptual and action research stages. The action research was also used to implement the changes in the company. Questionnaire surveys were carried out among company employees and external customers to assess the effect of these changes. Results of the investigation were derived using content and statistical analysis, with triangulation between three sources used to validate the data. Findings: The exploratory case study strategy resulted in rich research data, which provided evidence of the changes taking place and integration happening, leading to improved performance. The action research approach proved a powerful tool where the uncertainty of outcomes makes it nearly impossible to make accurate forecasts. Another output of the research was the development of an "integrated customer orientation" (ICO) model. Research limitations/implications: The research in this paper used a single-site action research investigation, so it should be interpreted within the specific company and industry context. There are implications for theory and practice in a number of areas of production and marketing, as well as contributions to understanding of productivity improvement and organisational development. The investigation also fulfils the dual objectives of action research by contributing to both knowledge and practice. Originality/value: The paper describes a unique approach towards improving productivity, quality and service through the use of action research to implement changes, as well as providing the research evidence to evaluate both the process of implementation and the results achieved.