34 results for Telephone Reliability
Abstract:
Reliability analysis of probabilistic forecasts, in particular through the rank histogram or Talagrand diagram, is revisited. Two shortcomings are pointed out. Firstly, a uniform rank histogram is but a necessary condition for reliability. Secondly, if the forecast is assumed to be reliable, an indication is needed of how far a histogram is expected to deviate from uniformity merely due to randomness. Concerning the first shortcoming, it is suggested that forecasts be grouped or stratified along suitable criteria, and that reliability be analyzed individually for each forecast stratum. A reliable forecast should have uniform histograms for all individual forecast strata, not only for all forecasts as a whole. As to the second shortcoming, instead of the observed frequencies, the probability of the observed frequency is plotted, providing an indication of the likelihood of the result under the hypothesis that the forecast is reliable. Furthermore, a goodness-of-fit statistic is discussed which is essentially the reliability term of the Ignorance score. The discussed tools are applied to medium-range forecasts of 2-m temperature anomalies at several locations and lead times. The forecasts are stratified along the expected ranked probability score. Those forecasts which feature a high expected score turn out to be particularly unreliable.
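As a minimal sketch of the rank-histogram machinery discussed above (our illustration with synthetic data; the arrays `ens` and `obs` are hypothetical, and a simple Pearson chi-squared statistic stands in for the paper's Ignorance-based goodness-of-fit term):

```python
import numpy as np

rng = np.random.default_rng(0)
n_fc, m = 1000, 10
ens = rng.normal(size=(n_fc, m))   # stand-in for a reliable m-member ensemble
obs = rng.normal(size=n_fc)        # observations drawn from the same distribution

# Rank of each observation among its ensemble members (0..m).
ranks = (ens < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=m + 1)

# Under reliability each of the m+1 ranks is equally likely; a chi-squared
# statistic indicates how far the histogram deviates from uniformity
# merely due to randomness.
expected = n_fc / (m + 1)
chi2 = ((hist - expected) ** 2 / expected).sum()
print(hist, chi2)
```

Stratifying as the paper suggests would amount to splitting the forecast archive by some criterion (e.g. expected score) and producing one such histogram per stratum.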
Abstract:
Scoring rules are an important tool for evaluating the performance of probabilistic forecasting schemes. A scoring rule is called strictly proper if its expectation is optimal if and only if the forecast probability represents the true distribution of the target. In the binary case, strictly proper scoring rules allow for a decomposition into terms related to the resolution and the reliability of a forecast. This fact is particularly well known for the Brier score. In this article, this result is extended to forecasts for finite-valued targets. Both resolution and reliability are shown to have a positive effect on the score. It is demonstrated that resolution and reliability are directly related to forecast attributes that are desirable on grounds independent of the notion of scores. This finding can be considered an epistemological justification of measuring forecast quality by proper scoring rules. A link is provided to the original work of DeGroot and Fienberg, extending their concepts of sufficiency and refinement. The relation to the conjectured sharpness principle of Gneiting et al. is elucidated.
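For reference, the binary-case decomposition the abstract builds on is the classical Murphy decomposition of the Brier score (standard notation, not taken from this article): with the forecast taking K distinct probability values $f_k$, each issued $n_k$ times out of $N$, $\bar{o}_k$ the observed relative frequency given forecast $f_k$, and $\bar{o}$ the overall base rate,

```latex
\mathrm{BS}
  = \underbrace{\frac{1}{N}\sum_{k=1}^{K} n_k\,(f_k - \bar{o}_k)^2}_{\text{reliability}}
  \;-\; \underbrace{\frac{1}{N}\sum_{k=1}^{K} n_k\,(\bar{o}_k - \bar{o})^2}_{\text{resolution}}
  \;+\; \underbrace{\bar{o}\,(1 - \bar{o})}_{\text{uncertainty}}
```

Resolution enters with a negative sign, so higher resolution and a smaller reliability penalty both lower the expected score, which is the sense in which both attributes "have a positive effect on the score".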
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation value of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample average) score over an archive of forecast–observation pairs. This empirical decomposition, though, provides an overly optimistic estimate of the potential score (i.e. the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
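A sketch of the empirical (sample-average) decomposition in question, for the Brier score with binned forecast probabilities (our illustration; the function name and fixed-width binning scheme are our own, and these naive within-bin estimators are precisely the kind the abstract warns are too optimistic):

```python
import numpy as np

def brier_decomposition(p, y, n_bins=10):
    """p: forecast probabilities in [0, 1]; y: binary outcomes (0/1)."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    bins = np.minimum((p * n_bins).astype(int), n_bins - 1)
    ybar = y.mean()
    rel = res = 0.0
    for k in range(n_bins):
        mask = bins == k
        if not mask.any():
            continue
        w = mask.mean()                    # fraction of cases in bin k
        fk, ok = p[mask].mean(), y[mask].mean()
        rel += w * (fk - ok) ** 2          # empirical reliability term
        res += w * (ok - ybar) ** 2        # empirical resolution term
    unc = ybar * (1.0 - ybar)              # uncertainty term
    return rel, res, unc                   # sample Brier score ~= rel - res + unc
```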
Abstract:
A necessary condition for a good probabilistic forecast is that the forecast system is shown to be reliable: forecast probabilities should equal observed probabilities verified over a large number of cases. As climate change trends are now emerging from the natural variability, we can apply this concept to climate predictions and compute the reliability of simulated local and regional temperature and precipitation trends (1950–2011) in a recent multi-model ensemble of climate model simulations prepared for the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5). With only a single verification time, the verification is over the spatial dimension. The local temperature trends appear to be reliable. However, when the global mean climate response is factored out, the ensemble is overconfident: the observed trend is outside the range of modelled trends in many more regions than would be expected by the model estimate of natural variability and model spread. Precipitation trends are overconfident for all trend definitions. This implies that for near-term local climate forecasts the CMIP5 ensemble cannot simply be used as a reliable probabilistic forecast.
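A conceptual sketch of verifying over the spatial dimension with a single verification time (synthetic numbers, not CMIP5 data; the overconfidence is built in by giving the observations a wider spread than the ensemble):

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_models = 500, 40
model_trends = rng.normal(0.2, 0.10, size=(n_regions, n_models))
obs_trend = rng.normal(0.2, 0.15, size=n_regions)  # wider spread: overconfident ensemble

# Rank each observed regional trend within the modelled trends; for a
# reliable ensemble the observation falls outside the model range in
# only about 2/(n_models+1) of regions.
ranks = (model_trends < obs_trend[:, None]).sum(axis=1)
outside = np.mean((ranks == 0) | (ranks == n_models))
print(f"fraction of regions with observation outside model range: {outside:.2f}")
```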
Abstract:
Modern transaction cost economics (TCE) thinking has developed into a key intellectual foundation of international business (IB) research, but the Williamsonian version has faced substantial criticism for adopting the behavioral assumption of opportunism. In this paper we assess both the opportunism concept and existing alternatives such as trust within the context of IB research, especially work on multinational enterprise (MNE) governance. Case analyses of nine global MNEs illustrate an alternative to the opportunism assumption that captures more fully the mechanisms underlying failed commitments inside the MNE. As a substitute for the often-criticized assumption of opportunism, we propose the envelope concept of bounded reliability (BRel), an assumption that represents more accurately and more completely the reasons for failed commitments, without invalidating the other critical assumption in conventional TCE (and internalization theory) thinking, namely the widely accepted envelope concept of bounded rationality (BRat). Bounded reliability as an envelope concept includes two main components, within the context of global MNE management: opportunism as intentional deceit, and benevolent preference reversal. The implications for IB research of adopting the bounded reliability concept are far-reaching, as this concept may increase the legitimacy of comparative institutional analysis in the social sciences.
Abstract:
Useful probabilistic climate forecasts on decadal timescales should be reliable (i.e. forecast probabilities match the observed relative frequencies), but this is seldom examined. This paper assesses a necessary condition for reliability, namely that the ratio of ensemble spread to forecast error be close to one, for seasonal to decadal sea surface temperature retrospective forecasts from the Met Office Decadal Prediction System (DePreSys). Factors which may affect reliability are diagnosed by comparing this spread-error ratio for an initial condition ensemble and two perturbed physics ensembles for initialized and uninitialized predictions. At lead times of less than 2 years, the initialized ensembles tend to be under-dispersed, producing overconfident and hence unreliable forecasts. For longer lead times, all three ensembles are predominantly over-dispersed. Such over-dispersion is primarily related to excessive inter-annual variability in the climate model. These findings highlight the need to carefully evaluate simulated variability in seasonal and decadal prediction systems.
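The necessary condition being assessed reduces to a short computation; the sketch below (our own, with hypothetical array shapes, not DePreSys output) compares the mean intra-ensemble spread with the RMSE of the ensemble mean, where values below one indicate under-dispersion (overconfidence) and values above one over-dispersion:

```python
import numpy as np

def spread_error_ratio(ens, obs):
    """ens: (n_forecasts, n_members) hindcasts; obs: (n_forecasts,) verifications."""
    ens_mean = ens.mean(axis=1)
    rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))          # ensemble-mean error
    spread = np.sqrt(np.mean(ens.var(axis=1, ddof=1)))      # mean ensemble spread
    return spread / rmse
```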
Abstract:
Low-power medium access control (MAC) protocols used for communication between energy-constrained wireless embedded devices do not cope well with situations where transmission channels are highly erroneous. Existing MAC protocols discard corrupted messages, which leads to costly retransmissions. To improve transmission performance, it is possible to include an error correction scheme and transmit/receive diversity: redundant information can be added to transmitted packets in order to recover data from corrupted packets, and transmit/receive diversity via multiple antennas can improve the error resiliency of transmissions. Both schemes may be used in conjunction to further improve performance. In this study, the authors show how an error correction scheme and transmit/receive diversity can be integrated in low-power MAC protocols. Furthermore, the authors investigate the achievable performance gains of both methods. This is important as both methods have associated costs (processing requirements; additional antennas and power), and for a given communication situation it must be decided which methods should be employed. The authors' results show that, in many practical situations, error control coding outperforms transmission diversity; however, if very high reliability is required, it is useful to employ both schemes together.
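A back-of-the-envelope model of the trade-off investigated here (our illustration, not the authors' analysis): independent bit errors at rate `ber`, a block code correcting up to `t` errors per `n`-bit codeword, and a deliberately generous two-branch diversity model in which the effective bit error rate drops from p to p²:

```python
from math import comb

def p_uncoded(ber, n_bits):
    """Probability that an unprotected packet of n_bits arrives intact."""
    return (1 - ber) ** n_bits

def p_coded(ber, n, t, n_blocks):
    """Packet of n_blocks codewords, n bits each, each correcting up to t bit errors."""
    p_cw = sum(comb(n, i) * ber**i * (1 - ber) ** (n - i) for i in range(t + 1))
    return p_cw ** n_blocks

ber = 0.01                                                     # raw channel bit error rate
print(f"uncoded:            {p_uncoded(ber, 256):.3f}")
print(f"Hamming(7,4) coded: {p_coded(ber, 7, 1, 64):.3f}")     # 256 info bits -> 64 codewords
print(f"2-branch diversity: {p_uncoded(ber**2, 256):.3f}")
print(f"coding + diversity: {p_coded(ber**2, 7, 1, 64):.3f}")
```

Which option wins under this crude model depends strongly on the code strength, the channel, and the idealized diversity assumption; the real costs weighed by the authors (processing, antennas, power) are not captured here.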
Abstract:
Embedded computer systems equipped with wireless communication transceivers are nowadays used in a vast number of application scenarios. Energy consumption is important in many of these scenarios, as systems are battery operated and long maintenance-free operation is required. To achieve this goal, embedded systems employ low-power communication transceivers and protocols. However, currently used protocols cannot operate efficiently when communication channels are highly erroneous. In this study, we show how average diversity combining (ADC) can be used in state-of-the-art low-power communication protocols. This novel approach improves transmission reliability and, in consequence, energy consumption and transmission latency in the presence of erroneous channels. Using a testbed, we show that highly erroneous channels are indeed a common occurrence in situations where low-power systems are used, and we demonstrate that ADC improves low-power communication dramatically.
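A conceptual sketch of what average diversity combining does, as we read the abstract (BPSK-style soft values over a synthetic noisy channel; this is not the protocol integration described in the study): rather than discarding corrupted receptions, the receiver averages the soft values of several copies before deciding each bit, so the noise averages out:

```python
import numpy as np

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=512)
tx = 2.0 * bits - 1.0                      # map {0,1} -> {-1,+1} soft symbols

def receive(snr_linear):
    """One noisy reception of the whole packet."""
    noise = rng.normal(scale=1.0 / np.sqrt(snr_linear), size=tx.shape)
    return tx + noise

copies = [receive(0.5) for _ in range(3)]  # three corrupted receptions

single = (copies[0] > 0).astype(int)                    # decide from one copy
combined = (np.mean(copies, axis=0) > 0).astype(int)    # decide after averaging

print("BER single copy:", np.mean(single != bits))
print("BER after ADC: ", np.mean(combined != bits))
```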