2 results for Probabilistic logic
in eResearch Archive - Queensland Department of Agriculture
Abstract:
Many statistical forecast systems are available to interested users. In order to be useful for decision-making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanisms and their statistical manifestations have been firmly established, the forecasts must also provide some quantitative evidence of ‘quality’. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what ‘quality’ entails and how to measure it, leading to confusion and misinformation. Here we present a generic framework to quantify aspects of forecast quality using an inferential approach to calculate nominal significance levels (p-values) that can be obtained either by directly applying non-parametric statistical tests such as Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS) or by using Monte Carlo methods (in the case of forecast skill scores). Once converted to p-values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. Our analysis demonstrates the importance of providing p-values rather than adopting some arbitrarily chosen significance level such as p < 0.05 or p < 0.01, which is still common practice. This is illustrated by applying non-parametric tests (such as KW and KS) and skill scoring methods (LEPS and RPSS) to the 5-phase Southern Oscillation Index classification system using historical rainfall data from Australia, the Republic of South Africa and India. The selection of quality measures is based solely on their common use and does not constitute endorsement. We found that non-parametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS. The framework can be implemented anywhere, regardless of dataset, forecast system or quality measure. Eventually, such inferential evidence should be complemented by descriptive statistical methods in order to fully assist in operational risk management.
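The two routes to a p-value described above can be illustrated with a minimal sketch (not the authors' code): seasonal rainfall totals are grouped by the five SOI phases, a Kruskal-Wallis test returns a nominal p-value directly, and a Monte Carlo permutation test converts an arbitrary skill score into a p-value. The rainfall data, phase labels and the placeholder skill metric are all hypothetical.

```python
# Sketch of the inferential approach: direct non-parametric test p-values
# and Monte Carlo p-values for a skill score. All data are synthetic.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)

# Hypothetical seasonal rainfall totals (mm) and their SOI phase labels (1..5).
rainfall = rng.gamma(shape=2.0, scale=50.0, size=100)
phases = rng.integers(1, 6, size=100)

# 1) Direct p-value from a non-parametric test across the 5 phase groups.
groups = [rainfall[phases == p] for p in range(1, 6)]
kw_stat, kw_p = kruskal(*groups)
print(f"Kruskal-Wallis: H = {kw_stat:.2f}, p = {kw_p:.4f}")

# 2) Monte Carlo p-value for a skill score: shuffle the phase labels many
#    times and count how often the shuffled score matches or beats the
#    observed score.
def skill_score(values, labels):
    # Placeholder "skill" metric: spread of the phase-conditional medians
    # (stands in for LEPS or RPSS, which need full forecast distributions).
    medians = [np.median(values[labels == p]) for p in range(1, 6)]
    return max(medians) - min(medians)

observed = skill_score(rainfall, phases)
n_sim = 10_000
exceed = sum(
    skill_score(rainfall, rng.permutation(phases)) >= observed
    for _ in range(n_sim)
)
mc_p = (exceed + 1) / (n_sim + 1)
print(f"Monte Carlo p-value for the skill score: {mc_p:.4f}")
```

Either route yields a p-value on a common scale, which is what allows quality to be compared across locations, datasets and forecast systems rather than being reported against a single arbitrary cut-off.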
Abstract:
The amount and timing of early wet-season rainfall are important for the management of many agricultural industries in north Australia. With this in mind, a wet-season onset date is defined based on the accumulation of rainfall to a predefined threshold, starting from 1 September, for each square of a 1° gridded analysis of daily rainfall across the region. Consistent with earlier studies, the interannual variability of the onset dates is shown to be well related to the immediately preceding July-August Southern Oscillation index (SOI). Based on this relationship, a forecast method using logistic regression is developed to predict the probability that onset will occur later than the climatological mean date. This method is expanded to also predict the probabilities that onset will be later than any of a range of threshold dates around the climatological mean. When assessed using cross-validated hindcasts, the skill of the predictions exceeds that of climatological forecasts in the majority of locations in north Australia, especially in the Top End region, Cape York, and central Queensland. At times of strong anomalies in the July-August SOI, the forecasts are reliably emphatic. Furthermore, predictions using tropical Pacific sea surface temperatures (SSTs) as the predictor are also tested. While short-lead (July-August predictor) forecasts are more skillful using the SOI, long-lead (May-June predictor) forecasts are more skillful using Pacific SSTs, indicative of the longer-term memory present in the ocean.
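The forecast construction described above can be sketched in a few lines (assumed code, not the paper's implementation): an onset date is the number of days after 1 September needed to accumulate rainfall to a threshold, and a logistic regression relates the July-August SOI to the probability that onset falls later than the climatological mean. The daily rainfall series, SOI values and the 50 mm threshold are hypothetical placeholders.

```python
# Sketch: wet-season onset date from accumulated rainfall, then a logistic
# regression of "onset later than the climatological mean" on July-August SOI.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

ONSET_THRESHOLD_MM = 50.0  # accumulation defining onset (hypothetical value)

def onset_day(daily_rain):
    """Days after 1 September until accumulated rainfall reaches the threshold."""
    cumulative = np.cumsum(daily_rain)
    return int(np.argmax(cumulative >= ONSET_THRESHOLD_MM))

# Hypothetical 30-year record for one 1-degree grid square: 180 days of daily
# rainfall per wet season, plus the preceding July-August SOI for each year.
years = 30
onsets = np.array([onset_day(rng.gamma(0.3, 8.0, size=180)) for _ in range(years)])
soi_jul_aug = rng.normal(0.0, 10.0, size=years)

# Binary predictand: onset later than the climatological mean onset date.
late = (onsets > onsets.mean()).astype(int)

model = LogisticRegression().fit(soi_jul_aug.reshape(-1, 1), late)
p_late = model.predict_proba(np.array([[-15.0]]))[0, 1]  # strongly negative SOI
print(f"P(onset later than climatological mean | SOI = -15): {p_late:.2f}")
```

Extending the scheme to a range of threshold dates around the climatological mean simply means fitting one such regression per threshold; skill assessment would then use cross-validated hindcasts, as the abstract describes.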