Abstract:
The latest coupled configuration of the Met Office Unified Model (Global Coupled configuration 2, GC2) is presented. This paper documents the model components which make up the configuration (although the scientific description of these components is detailed elsewhere) and provides a description of the coupling between the components. The performance of GC2 in terms of its systematic errors is assessed using a variety of diagnostic techniques. The configuration is intended to be used by the Met Office and collaborating institutes across a range of timescales, with the seasonal forecast system (GloSea5) and climate projection system (HadGEM) being the initial users. In this paper GC2 is compared against the model currently used operationally in those two systems. Overall GC2 is shown to be an improvement on the configurations currently used, particularly in terms of modes of variability (e.g. mid-latitude and tropical cyclone intensities, the Madden–Julian Oscillation and El Niño–Southern Oscillation). A number of outstanding errors are identified, the most significant being a considerable warm bias over the Southern Ocean and a dry precipitation bias in the Indian and West African summer monsoons. Research to address these is ongoing.
Abstract:
The decision to close airspace in the event of a volcanic eruption is based on hazard maps of predicted ash extent. These are produced using output from volcanic ash transport and dispersion (VATD) models. In this paper an objective metric to evaluate the spatial accuracy of VATD simulations relative to satellite retrievals of volcanic ash is presented. The metric is based on the fractions skill score (FSS). This measure of skill provides more information than traditional point-by-point metrics, such as the success index and the Pearson correlation coefficient, as it takes into account the spatial scale over which skill is being assessed. The FSS determines the scale over which a simulation has skill and can differentiate between a "near miss" and a forecast that is badly misplaced. The idealised scenarios presented show that even simulations with considerable displacement errors have useful skill when evaluated over neighbourhood scales of 200–700 km². This method could be used to compare forecasts produced by different VATD models or using different model parameters, to assess the impact of assimilating satellite-retrieved ash data and to evaluate VATD forecasts over a long time period.
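The neighbourhood-based comparison behind the FSS can be sketched as follows. This is a generic illustration, not the authors' implementation: fields are binarised at an ash-concentration threshold, the fraction of exceeding cells is computed in a square neighbourhood around each grid cell, and the score compares those fraction fields.

```python
import numpy as np

def neighbourhood_fraction(binary, n):
    """Fraction of cells exceeding the threshold in an n x n (n odd)
    neighbourhood around each grid cell, with zero padding outside the domain.
    Uses an integral image so the cost is independent of n."""
    pad = n // 2
    b = np.pad(binary, pad)
    s = np.zeros((b.shape[0] + 1, b.shape[1] + 1))
    s[1:, 1:] = b.cumsum(axis=0).cumsum(axis=1)  # 2-D cumulative sum
    # Window sum for each cell from four corners of the integral image
    return (s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]) / n**2

def fss(model, obs, threshold, n):
    """Fractions skill score of a model field against observations,
    for a given exceedance threshold and neighbourhood size n (grid cells)."""
    mf = neighbourhood_fraction((model >= threshold).astype(float), n)
    of = neighbourhood_fraction((obs >= threshold).astype(float), n)
    mse = np.mean((mf - of) ** 2)
    mse_ref = np.mean(mf ** 2) + np.mean(of ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```

A perfect forecast gives FSS = 1 at every scale, while a displaced feature scores near zero at grid scale but recovers skill as the neighbourhood grows; this is how a "near miss" is distinguished from a badly misplaced forecast.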
Abstract:
Contemporary research in generative second language (L2) acquisition has attempted to address observable target-deviant aspects of L2 grammars within a UG-continuity framework (e.g. Lardiere 2000; Schwartz 2003; Sprouse 2004; Prévost & White 1999, 2000). With the aforementioned in mind, the independence of pragmatic and syntactic development, independently observed elsewhere (e.g. Grodzinsky & Reinhart 1993; Lust et al. 1986; Pacheco & Flynn 2005; Serratrice, Sorace & Paoli 2004), becomes particularly interesting. In what follows, I examine the resetting of the Null-Subject Parameter (NSP) for English learners of L2 Spanish. I argue that insensitivity to the associated discourse-pragmatic constraints on the discursive distribution of overt/null subjects accounts for what appear to be errors resulting from syntactic deficits. It is demonstrated that, despite target-deviant performance, the majority must have native-like syntactic competence, given their knowledge of the Overt Pronoun Constraint (Montalbetti 1984), a principle associated with the Spanish-type setting of the NSP.
Abstract:
There remains large disagreement between ice water path (IWP) estimates in observational data sets, largely because the sensors observe different parts of the ice particle size distribution. A detailed comparison of retrieved IWP from satellite observations in the Tropics (±30° latitude) in 2007 was made using collocated measurements. The radio detection and ranging (radar)/light detection and ranging (lidar) (DARDAR) IWP data set, based on combined radar/lidar measurements, is used as a reference because it provides arguably the best estimate of the total column IWP. For each data set, usable IWP dynamic ranges are inferred from this comparison. IWP retrievals based on solar reflectance measurements, in the moderate resolution imaging spectroradiometer (MODIS), advanced very high resolution radiometer-based Climate Monitoring Satellite Applications Facility (CMSAF), and Pathfinder Atmospheres-Extended (PATMOS-x) data sets, were found to be correlated with DARDAR over a large IWP range (~20–7000 g m⁻²). The random errors of the collocated data sets have a close to lognormal distribution, and the combined random error of MODIS and DARDAR is less than a factor of 2, which also sets the upper limit for MODIS alone. In the same way, the upper limit for the random error of all considered data sets is determined. Data sets based on passive microwave measurements, the microwave surface and precipitation products system (MSPPS), microwave integrated retrieval system (MiRS), and collocated microwave only (CMO), are largely correlated with DARDAR for IWP values larger than approximately 700 g m⁻². The combined uncertainty between these data sets and DARDAR in this range is slightly less than that between MODIS and DARDAR, but the systematic bias is nearly an order of magnitude.
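One way to read the "factor of 2" statement above: if the ratio of two collocated IWP estimates is lognormally distributed, their combined random error can be summarised as a multiplicative factor, ten raised to the standard deviation of the log10 ratios. A minimal sketch of that summary statistic (generic, not the authors' code; inputs are assumed to be collocated, strictly positive IWP values):

```python
import numpy as np

def combined_error_factor(iwp_a, iwp_b):
    """Combined random error of two collocated IWP data sets, expressed as
    a multiplicative factor, under the assumption that their ratio is
    lognormal. A value of 2 means the data sets agree to within roughly a
    factor of 2 (one standard deviation). Inputs must be strictly positive."""
    log_ratio = np.log10(np.asarray(iwp_a)) - np.log10(np.asarray(iwp_b))
    return 10.0 ** np.std(log_ratio)
```

Because the factor combines the random errors of both data sets, it is an upper bound on the random error of either one alone, which is how the bound for MODIS by itself follows.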
Abstract:
With the development of convection-permitting numerical weather prediction, the efficient use of high-resolution observations in data assimilation is becoming increasingly important. The operational assimilation of these observations, such as Doppler radar radial winds, is now common, though to avoid violating the assumption of uncorrelated observation errors the observation density is severely reduced. Improving the quantity of observations used, and the impact that they have on the forecast, will require the introduction of the full, potentially correlated, error statistics. In this work, observation error statistics are calculated for the Doppler radar radial winds that are assimilated into the Met Office high-resolution UK model, using a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. This is the first in-depth study using the diagnostic to estimate both horizontal and along-beam correlated observation errors. It is found that the Doppler radar radial wind error standard deviations are similar to those used operationally and increase with observation height. Surprisingly, the estimated observation error correlation length scales are longer than the operational thinning distance. They depend both on the height of the observation and on the distance of the observation from the radar. Further tests show that the long correlations cannot be attributed to the use of superobservations or to the background error covariance matrix used in the assimilation. The large horizontal correlation length scales are, however, in part a result of using a simplified observation operator.
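The diagnostic described, averaging products of observation-minus-analysis and observation-minus-background residuals, is of the Desroziers type: the observation error covariance is estimated as R̂ ≈ E[d_oa d_obᵀ]. A minimal sketch of the estimator itself (the residual arrays would come from the assimilation system; names and shapes here are illustrative):

```python
import numpy as np

def estimate_obs_error_cov(d_ob, d_oa):
    """Estimate the observation error covariance matrix from samples of
    observation-minus-background (d_ob) and observation-minus-analysis
    (d_oa) residuals, each of shape (n_samples, n_obs):
        R_hat = E[ d_oa d_ob^T ]
    The raw estimate is not guaranteed symmetric, so it is symmetrised."""
    d_ob = d_ob - d_ob.mean(axis=0)   # remove sample-mean (bias) component
    d_oa = d_oa - d_oa.mean(axis=0)
    r_hat = d_oa.T @ d_ob / d_ob.shape[0]
    return 0.5 * (r_hat + r_hat.T)
```

Error standard deviations are the square roots of the diagonal of R̂; correlation length scales follow from normalising the off-diagonal elements by the diagonal and binning them by observation separation (horizontally or along the beam).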
Enhanced long-range forecast skill in boreal winter following stratospheric strong vortex conditions
Abstract:
There has been a great deal of recent interest in producing weather forecasts on the 2–6 week sub-seasonal timescale, which bridges the gap between medium-range (0–10 day) and seasonal (3–6 month) forecasts. While much of this interest is focused on the potential applications of skilful forecasts on the sub-seasonal range, understanding the potential sources of sub-seasonal forecast skill is a challenging and interesting problem, particularly because of the likely state-dependence of this skill (Hudson et al 2011). One such potential source of state-dependent skill for the Northern Hemisphere in winter is the occurrence of stratospheric sudden warming (SSW) events (Sigmond et al 2013). Here we show, by analysing a set of sub-seasonal hindcasts, that there is enhanced predictability of surface circulation not only when the stratospheric vortex is anomalously weak following SSWs but also when the vortex is extremely strong. Sub-seasonal forecasts initialized during strong vortex events are able to successfully capture the associated surface temperature and circulation anomalies. This results in an enhancement of Northern annular mode forecast skill compared to forecasts initialized during the cases when the stratospheric state is close to climatology. We demonstrate that the enhancement of skill for forecasts initialized during periods of strong vortex conditions is comparable to that achieved for forecasts initialized during weak events. This result indicates that additional confidence can be placed in sub-seasonal forecasts when the stratospheric polar vortex is significantly disturbed from its normal state.
Abstract:
TIGGE was a major component of the THORPEX (The Observing System Research and Predictability Experiment) research program, whose aim is to accelerate improvements in forecasting high-impact weather. By providing ensemble prediction data from leading operational forecast centers, TIGGE has enhanced collaboration between the research and operational meteorological communities and enabled research studies on a wide range of topics. The paper covers the objective evaluation of the TIGGE data. For a range of forecast parameters, it is shown to be beneficial to combine ensembles from several data providers in a Multi-model Grand Ensemble. Alternative methods to correct systematic errors, including the use of reforecast data, are also discussed. TIGGE data have been used for a range of research studies on predictability and dynamical processes. Tropical cyclones are the most destructive weather systems in the world, and are a focus of multi-model ensemble research. Their extra-tropical transition also has a major impact on skill of mid-latitude forecasts. We also review how TIGGE has added to our understanding of the dynamics of extra-tropical cyclones and storm tracks. Although TIGGE is a research project, it has proved invaluable for the development of products for future operational forecasting. Examples include the forecasting of tropical cyclone tracks, heavy rainfall, strong winds, and flood prediction through coupling hydrological models to ensembles. Finally the paper considers the legacy of TIGGE. We discuss the priorities and key issues in predictability and ensemble forecasting, including the new opportunities of convective-scale ensembles, links with ensemble data assimilation methods, and extension of the range of useful forecast skill.
Abstract:
Inspired by the commercial desires of global brands and retailers to access the lucrative green consumer market, carbon is increasingly being counted and made knowable at the mundane sites of everyday production and consumption, from the carbon footprint of a plastic kitchen fork to that of an online bank account. Despite the challenges of counting and making commensurable the global warming impact of a myriad of biophysical and societal activities, this desire to communicate a product or service's carbon footprint has sparked complicated carbon calculative practices and enrolled actors at literally every node of multi-scaled and vastly complex global supply chains. Against this landscape, this paper critically analyzes the counting practices that create the ‘e’ in ‘CO2e’. It is shown that central to these practices is a series of tools, models and databases which, building upon previous work (Eden, 2012; Star and Griesemer, 1989), we conceptualize here as ‘boundary objects’. By enrolling everyday actors from farmers to consumers, these objects abstract and stabilize greenhouse gas emissions from their messy material and social contexts into units of CO2e which can then be translated along a product's supply chain, thereby establishing a new currency of ‘everyday supply chain carbon’. However, in making all greenhouse gas-related practices commensurable, and in enrolling and stabilizing the transfer of information between multiple actors, these objects oversee a process of simplification reliant upon, and subject to, a multiplicity of approximations, assumptions, errors, discrepancies and/or omissions. Further, the outcomes of these tools are subject to the politicized and commercial agendas of the worlds they attempt to link, with each boundary actor inscribing different meanings to a product's carbon footprint in accordance with their specific subjectivities, commercial desires and epistemic framings.
It is therefore shown that how a boundary object transforms greenhouse gas emissions into units of CO2e is the outcome of distinct ideologies regarding ‘what’ a product's carbon footprint is and how it should be made legible. These politicized decisions, in turn, inform specific reduction activities and ultimately advance distinct, specific and increasingly durable transition pathways to a low-carbon society.
Abstract:
Using an international, multi-model suite of historical forecasts from the World Climate Research Programme (WCRP) Climate-system Historical Forecast Project (CHFP), we compare the seasonal prediction skill in boreal wintertime between models that resolve the stratosphere and its dynamics ("high-top") and models that do not ("low-top"). We evaluate hindcasts that are initialized in November, and examine the model biases in the stratosphere and how they relate to boreal wintertime (Dec-Mar) seasonal forecast skill. We are unable to detect more skill in the high-top ensemble mean than in the low-top ensemble mean in forecasting the wintertime North Atlantic Oscillation, but model performance varies widely. Increasing the ensemble size clearly increases the skill for a given model. We then examine two major processes involving stratosphere-troposphere interactions (the El Niño-Southern Oscillation/ENSO and the Quasi-biennial Oscillation/QBO) and how they relate to predictive skill on intra-seasonal to seasonal timescales, particularly over the North Atlantic and Eurasia regions. High-top models tend to have a more realistic stratospheric response to El Niño and the QBO compared to low-top models. Enhanced conditional wintertime skill over high latitudes and the North Atlantic region during winters with El Niño conditions suggests a possible role for a stratospheric pathway.
Abstract:
The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable: L = -log10(1 - ρhv) is defined, which does have Gaussian error statistics, and a standard deviation depending only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) in the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application, otherwise there could be biases in retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, rain rate would be overestimated by up to 50% if a simple exponential DSD is assumed.
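The key practical point above, that confidence intervals should be constructed in L-space (where sampling errors are close to Gaussian) and then mapped back to ρhv, can be sketched as follows. This is a generic illustration: sigma_l, the standard deviation of L set by the number of independent radar pulses, is taken as given, since its exact form is derived in the paper.

```python
import numpy as np

def rhohv_confidence_interval(rho_hat, sigma_l, z=1.96):
    """Confidence interval for the co-polar correlation coefficient.
    Work in L = -log10(1 - rho_hv), where sampling errors are close to
    Gaussian, build a symmetric +/- z*sigma_l interval there, then map
    both endpoints back to rho_hv. Returns (lower, upper)."""
    l_hat = -np.log10(1.0 - rho_hat)
    l_lo, l_hi = l_hat - z * sigma_l, l_hat + z * sigma_l
    return 1.0 - 10.0 ** (-l_lo), 1.0 - 10.0 ** (-l_hi)
```

Mapped back to ρhv the interval is asymmetric, extending further below the estimate than above it, which is exactly the negative skewness that a Gaussian-in-ρhv assumption misses.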
Abstract:
In the event of a volcanic eruption the decision to close airspace is based on forecast ash maps, produced using volcanic ash transport and dispersion models. In this paper we quantitatively evaluate the spatial skill of volcanic ash simulations using satellite retrievals of ash from the Eyjafjallajökull eruption during the period from 7 to 16 May 2010. We find that at the start of this period, 7–10 May, the model (FLEXible PARTicle) has excellent skill and can predict the spatial distribution of the satellite-retrieved ash to within 0.5° × 0.5° latitude/longitude. However, on 10 May there is a decrease in the spatial accuracy of the model to 2.5° × 2.5° latitude/longitude, and between 11 and 12 May the simulated ash location errors grow rapidly. On 11 May ash is located close to a bifurcation point in the atmosphere, resulting in a rapid divergence between the modeled and satellite-retrieved ash locations. In general, the model skill reduces as the residence time of ash increases. However, the error growth is not always steady. Rapid increases in error growth are linked to key points in the ash trajectories. Ensemble modeling using perturbed meteorological data would help to represent this uncertainty, and assimilation of satellite ash data would help to reduce uncertainty in volcanic ash forecasts.
Abstract:
Probabilistic hydro-meteorological forecasts have over the last decades been used more frequently to communicate forecast uncertainty. This uncertainty is twofold, as it constitutes both an added value and a challenge for the forecaster and the user of the forecasts. Many authors have demonstrated the added (economic) value of probabilistic over deterministic forecasts across the water sector (e.g. flood protection, hydroelectric power management and navigation). However, the richness of the information is also a source of challenges for operational uses, due partly to the difficulty of transforming the probability of occurrence of an event into a binary decision. This paper presents the results of a risk-based decision-making game on the topic of flood protection mitigation, called “How much are you prepared to pay for a forecast?”. The game was played at several workshops in 2015, which were attended by operational forecasters and academics working in the field of hydrometeorology. The aim of this game was to better understand the role of probabilistic forecasts in decision-making processes and their perceived value by decision-makers. Based on the participants' willingness to pay for a forecast, the results of the game show that the value (or usefulness) of a forecast depends on several factors, including the way users perceive the quality of their forecasts and link it to the perception of their own performance as decision-makers.