Abstract:
This paper highlights some communicative and institutional challenges to using ensemble prediction systems (EPS) in operational flood forecasting, warning, and civil protection. Focusing in particular on the Swedish experience of applying EPS to operational flood forecasting as part of the PREVIEW FP6 project, the paper draws on a wider set of site visits, interviews, and participant observation with flood forecasting centres and civil protection authorities (CPAs) in Sweden and 15 other European states to reflect on the comparative success of Sweden in enabling CPAs to make operational use of EPS for flood risk management. From that experience, the paper identifies four broader lessons for other countries interested in developing the operational capacity to make, communicate, and use EPS for flood forecasting and civil protection. We conclude that effective training and clear communication of EPS, while clearly necessary, are by no means sufficient to ensure effective use of EPS. Attention must also be given to overcoming the institutional obstacles to their use and to identifying operational choices for which EPS is seen to add value, rather than uncertainty, to operational decision making by CPAs.
Abstract:
We consider whether survey respondents' probability distributions, reported as histograms, provide reliable and coherent point predictions when viewed through the lens of a Bayesian learning model. We argue that a role remains for eliciting directly reported point predictions in surveys of professional forecasters.
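As a concrete illustration (not taken from the paper), here is a minimal sketch of how a point prediction can be read off a respondent's reported histogram, assuming probability mass sits at bin midpoints; the bin edges, probabilities, and the directly reported point value are all hypothetical.

```python
import numpy as np

# Hypothetical survey histogram: probability mass assigned to outcome bins
# (bin edges in percentage points of, say, expected inflation).
bin_edges = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
bin_probs = np.array([0.05, 0.10, 0.30, 0.35, 0.15, 0.05])  # sums to 1

# Implied point prediction under the common "mass at bin midpoints" assumption.
midpoints = 0.5 * (bin_edges[:-1] + bin_edges[1:])
implied_mean = float(np.dot(bin_probs, midpoints))

# Implied median: first bin where the CDF crosses 0.5, interpolated linearly.
cdf = np.cumsum(bin_probs)
i = int(np.searchsorted(cdf, 0.5))
prev_cdf = cdf[i - 1] if i > 0 else 0.0
frac = (0.5 - prev_cdf) / bin_probs[i]
implied_median = bin_edges[i] + frac * (bin_edges[i + 1] - bin_edges[i])

reported_point = 1.8  # directly elicited point prediction (hypothetical)
print(f"implied mean   : {implied_mean:.2f}")
print(f"implied median : {implied_median:.2f}")
print(f"reported point : {reported_point:.2f}")
```

Comparing the directly reported point with the histogram-implied mean or median is one way to check the coherence the abstract refers to.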
The EAP teacher: prophet of doom or eternal optimist? EAP teachers' predictions of students' success
Abstract:
In the 1960s North Atlantic sea surface temperatures (SST) cooled rapidly. The magnitude of the cooling was largest in the North Atlantic subpolar gyre (SPG), and was coincident with a rapid freshening of the SPG. Here we analyze hindcasts of the 1960s North Atlantic cooling made with the UK Met Office’s decadal prediction system (DePreSys), which is initialised using observations. It is shown that DePreSys captures—with a lead time of several years—the observed cooling and freshening of the North Atlantic SPG. DePreSys also captures changes in SST over the wider North Atlantic and surface climate impacts over the wider region, such as changes in atmospheric circulation in winter and sea ice extent. We show that initialisation of an anomalously weak Atlantic Meridional Overturning Circulation (AMOC), and hence weak northward heat transport, is crucial for DePreSys to predict the magnitude of the observed cooling. Such an anomalously weak AMOC is not captured when ocean observations are not assimilated (i.e. it is not a forced response in this model). The freshening of the SPG is also dominated by ocean salt transport changes in DePreSys; in particular, the simulation of advective freshwater anomalies analogous to the Great Salinity Anomaly was key. Therefore, DePreSys suggests that ocean dynamics played an important role in the cooling of the North Atlantic in the 1960s, and that this event was predictable.
Abstract:
Decadal climate predictions exhibit large biases, which are often subtracted and forgotten. However, understanding the causes of bias is essential to guide efforts to improve prediction systems, and may offer additional benefits. Here the origins of biases in decadal predictions are investigated, including whether analysis of these biases might provide useful information. The focus is especially on the lead-time-dependent bias tendency. A “toy” model of a prediction system is initially developed and used to show that there are several distinct contributions to bias tendency. Contributions from sampling of internal variability and a start-time-dependent forcing bias can be estimated and removed to obtain a much improved estimate of the true bias tendency, which can provide information about errors in the underlying model and/or errors in the specification of forcings. It is argued that the true bias tendency, not the total bias tendency, should be used to adjust decadal forecasts. The methods developed are applied to decadal hindcasts of global mean temperature made using the Hadley Centre Coupled Model, version 3 (HadCM3), climate model, and it is found that this model exhibits a small positive bias tendency in the ensemble mean. When considering different model versions, it is shown that the true bias tendency is very highly correlated with both the transient climate response (TCR) and non–greenhouse gas forcing trends, and can therefore be used to obtain observationally constrained estimates of these relevant physical quantities.
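The following toy sketch (synthetic numbers, not the paper's HadCM3 analysis) illustrates the core idea: averaging hindcast bias over many start dates removes the sampling contribution from internal variability, and a fit against lead time then estimates the bias tendency.

```python
import numpy as np

rng = np.random.default_rng(0)
n_starts, n_leads = 30, 10          # hindcast start dates, lead times (years)

# Toy "truth": a forced trend plus internal variability.
trend = 0.02
truth = trend * np.arange(n_starts + n_leads) \
        + 0.1 * rng.standard_normal(n_starts + n_leads)

# Toy "model": drifts away from truth with a fixed bias tendency plus noise.
true_bias_tendency = 0.01           # per year of lead time (to be recovered)
leads = np.arange(1, n_leads + 1)
bias = np.empty((n_starts, n_leads))
for s in range(n_starts):
    forecast = truth[s] + trend * leads + true_bias_tendency * leads \
               + 0.1 * rng.standard_normal(n_leads)
    bias[s] = forecast - truth[s + 1 : s + 1 + n_leads]

# Averaging over start dates removes the sampling (internal-variability)
# contribution; a linear fit against lead time estimates the bias tendency.
mean_bias = bias.mean(axis=0)
est_tendency = np.polyfit(leads, mean_bias, 1)[0]
print(f"estimated bias tendency: {est_tendency:.4f} (true: {true_bias_tendency})")
```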
Abstract:
Although over a hundred thermal indices can be used for assessing thermal health hazards, many ignore the human heat budget, physiology and clothing. The Universal Thermal Climate Index (UTCI) addresses these shortcomings by using an advanced thermo-physiological model. This paper assesses the potential of using the UTCI for forecasting thermal health hazards. Traditionally, such hazard forecasting has had two further limitations: it has been narrowly focused on a particular region or nation and has relied on the use of single ‘deterministic’ forecasts. Here, the UTCI is computed on a global scale, which is essential for international health-hazard warnings and disaster preparedness, and it is provided as a probabilistic forecast. It is shown that probabilistic UTCI forecasts are superior in skill to deterministic forecasts and that, despite global variations, the UTCI forecast is skilful for lead times up to 10 days. The paper also demonstrates the utility of probabilistic UTCI forecasts using the example of the 2010 heat wave in Russia.
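A minimal sketch (synthetic data, hypothetical heat-stress threshold) of why a probabilistic ensemble forecast tends to beat a single deterministic one on a threshold-exceedance hazard; skill is measured here with the Brier score, where lower is better.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_members = 500, 20
threshold = 32.0                      # hypothetical UTCI heat-stress level (deg C)

# Synthetic verification set: observed UTCI and an ensemble scattered around it.
obs = 26.0 + 6.0 * rng.standard_normal(n_days)
ens = obs[:, None] + 3.0 * rng.standard_normal((n_days, n_members))

event = (obs > threshold).astype(float)
p_ens = (ens > threshold).mean(axis=1)          # probabilistic forecast
p_det = (ens[:, 0] > threshold).astype(float)   # single 'deterministic' member

def brier(p):
    return float(np.mean((p - event) ** 2))

print(f"Brier score, ensemble     : {brier(p_ens):.3f}")
print(f"Brier score, deterministic: {brier(p_det):.3f}")  # typically worse
```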
Abstract:
We report on the first real-time ionospheric predictions network and its capabilities to ingest a global database and forecast F-layer characteristics and "in situ" electron densities along the track of an orbiting spacecraft. A global network of ionosonde stations reported around-the-clock observations of F-region heights and densities, and an on-line library of models provided forecasting capabilities. Each model was tested against the incoming data; relative accuracies were intercompared to determine the best overall fit to the prevailing conditions; and the best-fit model was used to predict ionospheric conditions on an orbit-to-orbit basis for the 12-hour period following a twice-daily model test and validation procedure. It was found that the best-fit model often provided averaged (i.e., climatologically based) accuracies better than 5% in predicting the heights and critical frequencies of the F-region peaks in the latitudinal domain of the TSS-1R flight path. There was a sharp contrast, however, in model-measurement comparisons involving predictions of actual, unaveraged, along-track densities at the 295 km orbital altitude of TSS-1R. In this case, extrema in the first-principle models varied by as much as an order of magnitude in density predictions, and the best-fit models were found to disagree with the "in situ" observations of Ne by as much as 140%. The discrepancies are interpreted as a manifestation of difficulties in accurately and self-consistently modeling the external controls of solar and magnetospheric inputs and the spatial and temporal variabilities in electric fields, thermospheric winds, plasmaspheric fluxes, and chemistry.
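The twice-daily test-and-validation procedure amounts to a model-selection loop. A minimal sketch, with hypothetical stand-in models and observations (the real library models and foF2 data differ):

```python
import numpy as np

def select_best_model(models, obs_times, obs_fof2):
    """Score each candidate model against the latest ionosonde observations
    and return the best-fitting one (lowest mean relative error)."""
    scores = {}
    for name, model in models.items():
        pred = np.array([model(t) for t in obs_times])
        scores[name] = float(np.mean(np.abs(pred - obs_fof2) / obs_fof2))
    return min(scores, key=scores.get), scores

# Hypothetical stand-ins for library models predicting foF2 (MHz) vs UT hour.
models = {
    "climatology": lambda t: 8.0 + 2.0 * np.sin(2 * np.pi * (t - 14) / 24),
    "storm-time":  lambda t: 6.5 + 1.5 * np.sin(2 * np.pi * (t - 15) / 24),
}
obs_times = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
obs_fof2  = np.array([6.1, 5.4, 5.9, 7.2, 8.0])

best, scores = select_best_model(models, obs_times, obs_fof2)
print(f"best-fit model: {best}")
for name, err in scores.items():
    print(f"  {name}: mean relative error {err:.1%}")
# The winning model would then drive orbit-to-orbit predictions for the
# following 12 h, until the next twice-daily test-and-validation cycle.
```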
Abstract:
Numerical simulations are presented of the ion distribution functions seen by middle-altitude spacecraft in the low-latitude boundary layer (LLBL) and cusp regions when reconnection is, or has recently been, taking place at the equatorial magnetopause. From the evolution of the distribution function with time elapsed since the field line was opened, both the observed energy/observation-time and pitch-angle/energy dispersions are well reproduced. Distribution functions showing a mixture of magnetosheath and magnetospheric ions, often thought to be a signature of the LLBL, are found on newly opened field lines as a natural consequence of the magnetopause effects on the ions and their flight times. In addition, it is shown that the extent of the source region of the magnetosheath ions that are detected by a satellite is a function of the sensitivity of the ion instrument. If the instrument one-count level is high (and/or solar-wind densities are low), the cusp ion precipitation detected comes from a localised region of the mid-latitude magnetopause (around the magnetic cusp), even though the reconnection takes place at the equatorial magnetopause. However, if the instrument sensitivity is high enough, then ions injected from a large segment of the dayside magnetosphere (in the relevant hemisphere) will be detected in the cusp. Ion precipitation classed as LLBL is shown to arise from the low-latitude magnetopause, irrespective of the instrument sensitivity. Adoption of threshold flux definitions has the same effect as instrument sensitivity in artificially restricting the apparent source region.
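A rough sketch of the instrument-sensitivity effect described above, with entirely hypothetical flux values and a simple Gaussian flux profile: raising the one-count level shrinks the apparent source region toward the flux peak near the cusp.

```python
import numpy as np

# Hypothetical flux of magnetosheath ions reaching the spacecraft as a
# function of injection latitude along the dayside magnetopause (arbitrary
# units), peaking near the magnetic cusp.
lat = np.linspace(0, 80, 161)                   # injection latitude (deg)
flux = 1e6 * np.exp(-((lat - 65.0) / 10.0) ** 2)

for one_count_level in (1e5, 1e3):              # high vs low (sensitive) threshold
    detected = lat[flux > one_count_level]
    print(f"one-count level {one_count_level:.0e}: apparent source "
          f"{detected.min():.0f}-{detected.max():.0f} deg latitude")
```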
Abstract:
The recent identification of non-thermal plasmas using EISCAT data has been made possible by their occurrence during large, short-lived flow bursts. For steady, yet rapid, ion convection, the only available signature is the shape of the spectrum, which is unreliable because it is open to distortion by noise and sampling uncertainty and can be mimicked by other phenomena. Nevertheless, spectral shape does give an indication of the presence of non-thermal plasma, and the characteristic shape has been observed for long periods (of the order of an hour or more) in some experiments. To evaluate this type of event properly, one needs to compare it to what would be expected theoretically. Predictions have been made using the coupled thermosphere-ionosphere model developed at University College London and the University of Sheffield to show where and when non-Maxwellian plasmas would be expected in the auroral zone. Geometrical and other factors then govern whether these are detectable by radar. The results are applicable to any incoherent scatter radar in this area, but the work presented here concentrates on predictions with regard to experiments on the EISCAT facility.
Abstract:
The quantification of uncertainty is an increasingly popular topic, with clear importance for climate change policy. However, uncertainty assessments are open to a range of interpretations, each of which may lead to a different policy recommendation. In the EQUIP project researchers from the UK climate modelling, statistical modelling, and impacts communities worked together on ‘end-to-end’ uncertainty assessments of climate change and its impacts. Here, we use an experiment in peer review amongst project members to assess variation in the assessment of uncertainties between EQUIP researchers. We find overall agreement on key sources of uncertainty but a large variation in the assessment of the methods used for uncertainty assessment. Results show that communication aimed at specialists makes the methods used harder to assess. There is also evidence of individual bias, which is partially attributable to disciplinary backgrounds. However, varying views on the methods used to quantify uncertainty did not preclude consensus on the consequential results produced using those methods. Based on our analysis, we make recommendations for developing and presenting statements on climate and its impacts. These include the use of a common uncertainty reporting format in order to make assumptions clear; presentation of results in terms of processes and trade-offs rather than only numerical ranges; and reporting multiple assessments of uncertainty in order to elucidate a more complete picture of impacts and their uncertainties. This in turn implies research should be done by teams of people with a range of backgrounds and time for interaction and discussion, with fewer but more comprehensive outputs in which the range of opinions is recorded.
Abstract:
Incorporating a prediction into future planning and decision making is advisable only if we have judged the prediction’s credibility. This is notoriously difficult and controversial in the case of predictions of future climate. By reviewing epistemic arguments about climate model performance, we discuss how to make and justify judgments about the credibility of climate predictions. We propose a new bounding argument that justifies basing such judgments on the past performance of possibly dissimilar prediction problems. This encourages a more explicit use of data in making quantitative judgments about the credibility of future climate predictions, and in training users of climate predictions to become better judges of credibility. We illustrate the approach using decadal predictions of annual mean, global mean surface air temperature.
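A minimal sketch of the bounding argument, with hypothetical hindcast errors: if the new prediction problem is judged no harder than a past one, the past hit rate bounds the credibility assigned to the new prediction meeting the same tolerance.

```python
import numpy as np

# Hypothetical past performance: absolute errors (deg C) of decadal hindcasts
# of annual-mean, global-mean surface air temperature at a fixed lead time.
past_abs_errors = np.array([0.08, 0.12, 0.05, 0.20, 0.11, 0.09, 0.15, 0.07])

tolerance = 0.15   # user-defined accuracy requirement (deg C), hypothetical

# Empirical hit rate on the past prediction problem.
hit_rate = float(np.mean(past_abs_errors <= tolerance))

# Bounding argument (sketch): if the future problem is judged no harder than
# the past one, the past hit rate is a lower bound on the credibility we may
# assign to the new prediction meeting the tolerance.
print(f"past hit rate = {hit_rate:.2f} -> credibility >= {hit_rate:.2f}")
```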
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood warning system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient as it involves considerable non-predictable uncertainties and can lead to a high number of false or missed warnings. Weather forecasts using multiple NWPs from various weather centres, coupled with catchment hydrology, can provide significantly improved early flood warning. The availability of global ensemble weather prediction systems through the ‘THORPEX Interactive Grand Global Ensemble’ (TIGGE) offers a new opportunity for the development of state-of-the-art early flood forecasting systems. This paper presents a case study using the TIGGE database for flood warning on a meso-scale catchment (4062 km²) located in the Midlands region of England. For the first time, a research attempt is made to set up a coupled atmospheric-hydrologic-hydraulic cascade system driven by the TIGGE ensemble forecasts. A probabilistic discharge and flood inundation forecast is provided as the end product to study the potential benefits of using the TIGGE database. The study shows that precipitation input uncertainties dominate and propagate through the cascade chain. The current NWPs fall short of representing the spatial precipitation variability on such a comparatively small catchment, which indicates a need to improve NWP resolution and/or disaggregation techniques to narrow the spatial gap between meteorology and hydrology. The spread of discharge forecasts varies from centre to centre, but it is generally large and implies a significant level of uncertainty. Nevertheless, the results show the TIGGE database is a promising tool for forecasting flood inundation, comparable with that driven by rain gauge observations.
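A minimal sketch of such an ensemble cascade, with a crude linear-reservoir stand-in for the hydrologic and hydraulic models and synthetic ensemble precipitation (the actual study drives full hydrologic and hydraulic models with TIGGE members from several centres):

```python
import numpy as np

def linear_reservoir(precip, k=0.2, area_km2=4062.0):
    """Crude rainfall-runoff stand-in: a single linear reservoir converting
    catchment-average precipitation (mm/h) to discharge (m3/s)."""
    storage, q = 0.0, []
    to_m3s = area_km2 * 1e6 / (1000.0 * 3600.0)   # mm/h over the area -> m3/s
    for p in precip:
        storage += p
        out = k * storage
        storage -= out
        q.append(out * to_m3s)
    return np.array(q)

rng = np.random.default_rng(2)
n_members, n_hours = 51, 72
# Synthetic ensemble precipitation (mm/h); in practice, one series per
# TIGGE member from each contributing forecast centre.
precip_ens = rng.gamma(shape=0.5, scale=1.0, size=(n_members, n_hours))

discharge_ens = np.array([linear_reservoir(p) for p in precip_ens])

# Probabilistic end product: chance of exceeding a flood-warning discharge.
q_warning = 1200.0   # m3/s, hypothetical threshold
p_exceed = (discharge_ens.max(axis=1) > q_warning).mean()
print(f"P(peak discharge > {q_warning:.0f} m3/s) = {p_exceed:.2f}")
```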
Validation of a priori CME arrival predictions made using real-time heliospheric imager observations
Abstract:
Between December 2010 and March 2013, volunteers for the Solar Stormwatch (SSW) Citizen Science project identified and analyzed coronal mass ejections (CMEs) in the near real-time Solar Terrestrial Relations Observatory Heliospheric Imager observations, in order to make “Fearless Forecasts” of CME arrival times and speeds at Earth. Of the 60 predictions of Earth-directed CMEs, 20 resulted in an identifiable interplanetary CME (ICME) at Earth within 1.5–6 days, with an average error in predicted transit time of 22 h and an average transit time of 82.3 h. The average error in predicted arrival speed is 151 km s−1, with an average arrival speed of 425 km s−1. In the same time period, there were 44 CMEs for which there are no corresponding SSW predictions, and there were 600 days on which no CME was either predicted or observed. A number of metrics show that the SSW predictions do have useful forecast skill; however, there is still much room for improvement. We investigate potential improvements by using SSW inputs in three models of ICME propagation: two of constant acceleration and one of aerodynamic drag. We find that taking account of interplanetary acceleration can improve the average errors of transit time to 19 h and arrival speed to 77 km s−1.
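For the constant-acceleration models, the transit time follows from elementary kinematics. A minimal sketch with hypothetical inputs (the SSW analysis fits its parameters to the observed CME sample):

```python
import math

AU_KM = 1.496e8           # Sun-Earth distance in km

def cme_arrival(v0_km_s, a_km_s2, d_km=AU_KM):
    """Transit time (hours) and arrival speed (km/s) for a CME travelling a
    distance d under constant acceleration a, starting at speed v0."""
    if a_km_s2 == 0.0:
        t = d_km / v0_km_s
    else:
        # Solve d = v0*t + 0.5*a*t**2 for the positive root.
        t = (-v0_km_s + math.sqrt(v0_km_s**2 + 2.0 * a_km_s2 * d_km)) / a_km_s2
    return t / 3600.0, v0_km_s + a_km_s2 * t

# Hypothetical SSW inputs: initial speed 600 km/s, gentle deceleration toward
# the ~400 km/s ambient solar wind (fast CMEs decelerate, slow ones accelerate).
transit_h, v_arrival = cme_arrival(600.0, -2.0e-4)
print(f"transit time : {transit_h:.1f} h")     # ~72 h for these inputs
print(f"arrival speed: {v_arrival:.0f} km/s")  # ~548 km/s
```

The aerodynamic drag model replaces the constant acceleration with one proportional to the speed difference from the ambient solar wind, which requires numerical integration rather than a closed-form root.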
Abstract:
An ability to quantify the reliability of probabilistic flood inundation predictions is a requirement not only for guiding model development but also for their successful application. Probabilistic flood inundation predictions are usually produced by choosing a method of weighting the model parameter space, but previous work suggests that this choice leads to clear differences in inundation probabilities. This study aims to address the evaluation of the reliability of these probabilistic predictions. However, the lack of an adequate number of observations of flood inundation for a catchment limits the application of conventional methods of evaluating predictive reliability. Consequently, attempts have been made to assess the reliability of probabilistic predictions using multiple observations from a single flood event. Here, a LISFLOOD-FP hydraulic model of an extreme (>1 in 1000 years) flood event in Cockermouth, UK, is constructed and calibrated using multiple performance measures from both peak flood wrack mark data and aerial photography captured post-peak. These measures are used in weighting the parameter space to produce multiple probabilistic predictions for the event. Two methods of assessing the reliability of these probabilistic predictions using limited observations are utilized: an existing method assessing the binary pattern of flooding, and a method developed in this paper to assess predictions of water surface elevation. This study finds that the water surface elevation method has better diagnostic and discriminatory ability, but this result is likely to be sensitive to the unknown uncertainties in the upstream boundary condition.
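A minimal sketch of a water-surface-elevation reliability check, with synthetic elevations and weights standing in for the LISFLOOD-FP runs and performance-measure weights: for a reliable prediction, a weighted 90% predictive interval should contain the observation at about 90% of the surveyed points.

```python
import numpy as np

rng = np.random.default_rng(3)
n_points, n_sims = 40, 200    # wrack-mark observation points, behavioural runs

# Synthetic stand-ins: each model run predicts the water surface elevation (m)
# at every surveyed point; the observation is one draw from the same spread,
# so a reliable 90% interval should contain it roughly 90% of the time.
centre = 80.0 + rng.standard_normal(n_points)
wse_pred = centre[None, :] + 0.3 * rng.standard_normal((n_sims, n_points))
obs = centre + 0.3 * rng.standard_normal(n_points)
weights = rng.random(n_sims)
weights /= weights.sum()              # performance-measure weights

def weighted_quantile(values, q, w):
    order = np.argsort(values)
    cw = np.cumsum(w[order])
    return values[order][np.searchsorted(cw, q)]

inside = 0
for j in range(n_points):
    lo = weighted_quantile(wse_pred[:, j], 0.05, weights)
    hi = weighted_quantile(wse_pred[:, j], 0.95, weights)
    inside += lo <= obs[j] <= hi
print(f"empirical coverage of 90% interval: {inside / n_points:.2f}")
```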