128 results for Propagation prediction models


Relevance: 80.00%

Abstract:

There is a growing interest in using stochastic parametrizations in numerical weather and climate prediction models. Previously, Palmer (2001) outlined the issues that give rise to the need for a stochastic parametrization and the forms such a parametrization could take. In this article a method is presented that uses a comparison between a standard-resolution version and a high-resolution version of the same model to gain information relevant for a stochastic parametrization in that model. A correction term that could be used in a stochastic parametrization is derived from the thermodynamic equations of both models. The origin of the components of this term is discussed. It is found that the component related to unresolved wave-wave interactions is important and can act to compensate for large parametrized tendencies. The correction term is not proportional to the parametrized tendency. Finally, it is explained how the correction term could be used to give information about the shape of the random distribution to be used in a stochastic parametrization. Copyright © 2009 Royal Meteorological Society
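
The abstract does not give the explicit form of the correction term; as a loose illustration of the coarse-graining idea it describes (the notation below is an assumption on my part, not the article's equation), such a term for potential temperature could be written as the difference between the coarse-grained tendency of the high-resolution model and the total tendency (resolved dynamics D plus parametrization P) of the standard-resolution model:

```latex
% Illustrative only: notation assumed, not taken from the article.
% <.> denotes averaging of the high-resolution fields onto the coarse grid.
\Delta T_{\mathrm{corr}}
  = \left\langle \frac{\partial \theta}{\partial t} \right\rangle_{\mathrm{high\text{-}res}}
  - \left( D_{\theta}^{\mathrm{std}} + P_{\theta}^{\mathrm{std}} \right)
```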

Relevance: 80.00%

Abstract:

Cloud radar and lidar can be used to evaluate the skill of numerical weather prediction models in forecasting the timing and placement of clouds, but care must be taken in choosing the appropriate metric of skill to use due to the non-Gaussian nature of cloud-fraction distributions. We compare the properties of a number of different verification measures and conclude that, of existing measures, the Log of Odds Ratio is the most suitable for cloud fraction. We also propose a new measure, the Symmetric Extreme Dependency Score, which has very attractive properties, being equitable (for large samples), difficult to hedge and independent of the frequency of occurrence of the quantity being verified. We then use data from five European ground-based sites and seven forecast models, processed using the ‘Cloudnet’ analysis system, to investigate the dependence of forecast skill on cloud fraction threshold (for binary skill scores), height, horizontal scale and (for the Met Office and German Weather Service models) forecast lead time. The models are found to be least skilful at predicting the timing and placement of boundary-layer clouds and most skilful at predicting mid-level clouds, although in the latter case they tend to underestimate mean cloud fraction when cloud is present. It is found that skill decreases approximately inverse-exponentially with forecast lead time, enabling a forecast ‘half-life’ to be estimated. When considering the skill of instantaneous model snapshots, we find typical values ranging between 2.5 and 4.5 days. Copyright © 2009 Royal Meteorological Society
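
For reference, a minimal sketch of the two binary measures highlighted above, computed from a standard 2x2 contingency table with hits a, false alarms b, misses c and correct negatives d. The SEDS expression below follows my understanding of the authors' definition and should be treated as an assumption rather than a verbatim transcription.

```python
import numpy as np

def log_odds_ratio(a, b, c, d):
    """Log of Odds Ratio for a 2x2 contingency table.

    a: hits, b: false alarms, c: misses, d: correct negatives.
    """
    return np.log((a * d) / (b * c))

def seds(a, b, c, d):
    """Symmetric Extreme Dependency Score (as I understand its definition):
    SEDS = [ln(p_forecast) + ln(p_observed)] / ln(p_hit) - 1,
    with p_forecast = (a+b)/n, p_observed = (a+c)/n, p_hit = a/n.
    """
    n = a + b + c + d
    return (np.log((a + b) / n) + np.log((a + c) / n)) / np.log(a / n) - 1.0

# Hypothetical contingency table for cloud fraction above some threshold
a, b, c, d = 120, 40, 60, 780
print(log_odds_ratio(a, b, c, d), seds(a, b, c, d))
```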

Relevance: 80.00%

Abstract:

A Kriging interpolation method is combined with an object-based evaluation measure to assess the ability of the UK Met Office's dispersion and weather prediction models to predict the evolution of a plume of tracer as it was transported across Europe. The object-based evaluation method, SAL, considers aspects of the Structure, Amplitude and Location of the pollutant field. The SAL method is able to quantify errors in the predicted size and shape of the pollutant plume, through the structure component, the over- or under-prediction of the pollutant concentrations, through the amplitude component, and the position of the pollutant plume, through the location component. The quantitative results of the SAL evaluation are similar for both models and close to a subjective visual inspection of the predictions. A negative structure component for both models, throughout the entire 60-hour plume dispersion simulation, indicates that the modelled plumes are too small and/or too peaked compared to the observed plume at all times. The amplitude component for both models is strongly positive at the start of the simulation, indicating that surface concentrations are over-predicted by both models for the first 24 hours, but modelled concentrations are within a factor of 2 of the observations at later times. Finally, for both models, the location component is small for the first 48 hours after the start of the tracer release, indicating that the modelled plumes are situated close to the observed plume early on in the simulation, but this plume location error grows at later times. The SAL methodology has also been used to identify differences in the transport of pollution in the dispersion and weather prediction models. The convection scheme in the weather prediction model is found to transport more pollution vertically out of the boundary layer into the free troposphere than the dispersion model convection scheme, resulting in lower pollutant concentrations near the surface and hence a better forecast for this case study.
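
As an illustration of the simplest of the three components, the amplitude component of SAL is a normalized difference of domain-averaged fields (assuming the usual SAL definition); the structure and location components are omitted here because they require object identification. A sketch:

```python
import numpy as np

def sal_amplitude(model_field, obs_field):
    """Amplitude component of SAL: normalized difference of the
    domain-averaged fields, bounded between -2 and +2.
    Positive values indicate over-prediction of the mean concentration."""
    d_mod = np.mean(model_field)
    d_obs = np.mean(obs_field)
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

# Hypothetical 2-D tracer concentration fields on a common grid
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=1.0, size=(50, 50))
mod = 1.5 * obs  # model over-predicts by 50%
print(sal_amplitude(mod, obs))  # positive -> over-prediction
```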

Relevance: 80.00%

Abstract:

The atmospheric component of the United Kingdom’s new High-resolution Global Environmental Model (HiGEM) has been run with interactive aerosol schemes that include biomass burning and mineral dust. Dust emission, transport, and deposition are parameterized within the model using six particle size divisions, which are treated independently. The biomass is modeled in three nonindependent modes, and emissions are prescribed from an external dataset. The model is shown to produce realistic horizontal and vertical distributions of these aerosols for each season when compared with available satellite- and ground-based observations and with other models. Combined aerosol optical depths off the coast of North Africa exceed 0.5 both in boreal winter, when biomass is the main contributor, and also in summer, when the dust dominates. The model is capable of resolving smaller-scale features, such as dust storms emanating from the Bodélé and Saharan regions of North Africa and the wintertime Bodélé low-level jet. This is illustrated by February and July case studies, in which the diurnal cycles of model variables in relation to dust emission and transport are examined. The top-of-atmosphere annual mean radiative forcing of the dust is calculated and found to be globally quite small but locally very large, exceeding 20 W m−2 over the Sahara, where inclusion of dust aerosol is shown to improve the model radiative balance. This work extends previous aerosol studies by combining complexity with increased global resolution and represents a step toward the next generation of models to investigate aerosol–climate interactions.

Relevance: 80.00%

Abstract:

In most near-infrared atmospheric windows, absorption of solar radiation is dominated by the water vapor self-continuum, and yet there is a paucity of measurements in these windows. We report new laboratory measurements of the self-continuum absorption at temperatures between 293 and 472 K and pressures from 0.015 to 5 atm in four near-infrared windows between 1 and 4 μm (10000–2500 cm−1); the measurements are made over a wider range of wavenumbers, temperatures and pressures than any previous measurements. They show that the self-continuum in these windows is typically one order of magnitude stronger than given in representations of the continuum widely used in climate and weather prediction models. These results are also not consistent with current theories attributing the self-continuum within windows to the far wings of strong spectral lines in the nearby water vapor absorption bands; we suggest that they are more consistent with water dimers being the major contributor to the continuum. The calculated global-average clear-sky atmospheric absorption of solar radiation is increased by 0.75 W m−2 (about 1% of the total clear-sky absorption) by using these new measurements, as compared to calculations with the MT_CKD-2.5 self-continuum model.

Relevance: 80.00%

Abstract:

Sting jets are transient coherent mesoscale strong wind features that can cause damaging surface wind gusts in extratropical cyclones. Currently, we have only limited knowledge of their climatological characteristics. Numerical weather prediction models require enough resolution to represent slantwise motions with horizontal scales of tens of kilometres and vertical scales of just a few hundred metres in order to capture sting jets. Hence, the climatological characteristics of sting jets and the associated extratropical cyclones cannot be determined by searching for sting jets in low-resolution datasets such as reanalyses. A diagnostic is presented and evaluated for the detection, in low-resolution datasets, of atmospheric regions from which sting jets may originate. Previous studies have shown that conditional symmetric instability (CSI) is present in all storms studied with sting jets, while other rapidly developing storms of a similar character but without CSI do not develop sting jets. Therefore, we assume that the release of CSI is needed for sting jets to develop. While this instability will not be released in a physically realistic way in low-resolution models (and hence sting jets are unlikely to occur), it is hypothesized that the signature of this instability (combined with other criteria that restrict analysis to moist mid-tropospheric regions in the neighbourhood of a secondary cold front) can be used to identify cyclones in which sting jets occurred in reality. The diagnostic is evaluated, and appropriate parameter thresholds defined, by applying it to three case studies simulated using two resolutions (with CSI release resolved only in the higher-resolution simulation).

Relevance: 80.00%

Abstract:

In 2007 a General Observation Period (GOP) was performed within the German Priority Program on Quantitative Precipitation Forecasting (PQP). By optimizing the use of existing instrumentation, a large data set from in-situ and remote sensing instruments, with special focus on water-cycle variables, was gathered over the full annual cycle. The area of interest covered central Europe, with increasing focus towards the Black Forest, where the Convective and Orographically-induced Precipitation Study (COPS) took place from June to August 2007. The GOP thus includes a variety of precipitation systems, in order to relate the COPS results to a larger spatial scale. For timely use of the data, forecasts of the numerical weather prediction models COSMO-EU and COSMO-DE of the German Meteorological Service were tailored to match the observations and to perform model evaluation in a near-real-time environment. The ultimate goal is to identify and distinguish between different kinds of model deficits and to improve process understanding.

Relevance: 80.00%

Abstract:

Our group considered the desirability of including representations of uncertainty in the development of parameterizations. (By ‘uncertainty’ here we mean the deviation of sub-grid scale fluxes or tendencies in any given model grid box from truth.) We unanimously agreed that the ECMWF should attempt to provide a more physical basis for uncertainty estimates than the very effective but ad hoc methods being used at present. Our discussions identified several issues that will arise.

Relevance: 80.00%

Abstract:

In 2005, the ECMWF held a workshop on stochastic parameterisation, at which convection was seen as a key issue. That much is clear from the working group reports, and particularly the statement from working group 1 that “it is clear that a stochastic convection scheme is desirable”. The present note aims to consider our current status in comparison with some of the issues raised and hopes expressed in that working group report.

Relevance: 80.00%

Abstract:

For a long time, it has been believed that atmospheric absorption of radiation within wavelength regions of relatively high infrared transmittance (so-called ‘windows’) was dominated by the water vapour self-continuum, that is, spectrally smooth absorption caused by H2O−H2O pair interaction. Absorption due to the foreign continuum (i.e. caused mostly by H2O−N2 bimolecular absorption in the Earth's atmosphere) was considered to be negligible in the windows. We report new retrievals of the water vapour foreign continuum from high-resolution laboratory measurements at temperatures between 350 and 430 K in four near-infrared windows between 1.1 and 5 μm (9000–2000 cm−1). Our results indicate that the foreign continuum in these windows has a very weak temperature dependence and is typically between one and two orders of magnitude stronger than that given in representations of the continuum currently used in many climate and weather prediction models. This indicates that absorption owing to the foreign continuum may be comparable to the self-continuum under atmospheric conditions in the investigated windows. The calculated global-average clear-sky atmospheric absorption of solar radiation is increased by approximately 0.46 W m−2 (or 0.6% of the total clear-sky absorption) by using these new measurements when compared with calculations applying the widely used MTCKD (Mlawer–Tobin–Clough–Kneizys–Davies) foreign-continuum model.

Relevance: 80.00%

Abstract:

Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts, comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
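
The metrics themselves are specified in the paper; purely to make the two questions concrete, the sketch below computes a generic mean squared skill score of initialized hindcasts against a reference forecast, and a simple spread-to-error ratio for the ensemble. Both metrics, the synthetic data and all variable names are illustrative choices, not necessarily those adopted in the CMIP5 framework.

```python
import numpy as np

def msss(fc_init, fc_ref, obs):
    """Mean squared skill score of initialized hindcasts against a
    reference (e.g. uninitialized projections): 1 - MSE_init / MSE_ref."""
    mse_init = np.mean((fc_init - obs) ** 2)
    mse_ref = np.mean((fc_ref - obs) ** 2)
    return 1.0 - mse_init / mse_ref

def spread_error_ratio(ensemble, obs):
    """Ratio of mean ensemble spread to RMSE of the ensemble mean.
    Values near 1 suggest the spread is, on average, a reasonable
    representation of forecast uncertainty."""
    ens_mean = ensemble.mean(axis=0)
    spread = ensemble.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))
    return spread / rmse

# Hypothetical hindcast data: 10 members x 40 start dates
rng = np.random.default_rng(1)
obs = rng.normal(size=40)
ens = obs + rng.normal(scale=0.8, size=(10, 40))
print(msss(ens.mean(axis=0), np.zeros(40), obs))
print(spread_error_ratio(ens, obs))
```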

Relevance: 80.00%

Abstract:

Accurate decadal climate predictions could be used to inform adaptation actions to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparing with predictions from statistical models which are based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and the internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale, based on the gridded observation data set HadISST. For both statistical models, the trend related to radiative forcing is modelled using a linear regression of the SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and the western Pacific. This is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skillful predictions of the internal variability of SSTs in the subpolar gyre region of the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts by DePreSys, an initialised GCM. DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years; however, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and of methods to combine different predictions.
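
A minimal sketch of the first benchmark (forced trend plus AR1 residual) for a single grid box, assuming ordinary least squares for the regression step and a lag-1 autocorrelation estimate for the AR1 coefficient; the synthetic series and variable names are illustrative, not taken from HadISST or the paper.

```python
import numpy as np

def fit_trend_ar1(sst, co2):
    """Benchmark statistical model, sketched: linear regression of SST
    on equivalent CO2 concentration for the forced trend, plus an AR(1)
    model fitted to the residual internal variability."""
    # Forced trend: least-squares fit sst ~ alpha + beta * co2
    X = np.column_stack([np.ones_like(co2), co2])
    beta = np.linalg.lstsq(X, sst, rcond=None)[0]
    resid = sst - X @ beta
    # AR(1) coefficient from the lag-1 autocorrelation of the residuals
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    return beta, phi, resid

def forecast(beta, phi, last_resid, co2_future):
    """Forecast = regressed trend + AR(1) decay of the last residual."""
    trend = beta[0] + beta[1] * co2_future
    steps = np.arange(1, len(co2_future) + 1)
    return trend + last_resid * phi ** steps

# Hypothetical annual-mean series for one grid box, start years 1960-2005
rng = np.random.default_rng(2)
years = np.arange(1960, 2006)
co2 = 315 + 1.5 * (years - 1960)
sst = 0.005 * co2 + np.cumsum(rng.normal(scale=0.05, size=years.size))
beta, phi, resid = fit_trend_ar1(sst, co2)
print(forecast(beta, phi, resid[-1], co2[-1] + 1.5 * np.arange(1, 10)))
```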

Relevance: 80.00%

Abstract:

The surface drag force produced by trapped lee waves and upward propagating waves in non-hydrostatic stratified flow over a mountain ridge is explicitly calculated using linear theory for a two-layer atmosphere with piecewise-constant static stability and wind speed profiles. The behaviour of the drag normalized by its hydrostatic single-layer reference value is investigated as a function of the ratio of the Scorer parameters in the two layers l_2/l_1 and of the corresponding dimensionless interface height l_1 H, for selected values of the dimensionless ridge width l_1 a and ratio of wind speeds in the two layers. When l_2/l_1 → 1, the propagating wave drag approaches 1 in approximately hydrostatic conditions, and the trapped lee wave drag vanishes. As l_2/l_1 decreases, the propagating wave drag progressively displays an oscillatory behaviour with l_1 H, with maxima of increasing magnitude due to constructive interference of reflected waves in the lower layer. The trapped lee wave drag shows localized maxima associated with each resonant trapped lee wave mode, occurring for small l_2/l_1 and slightly higher values of l_1 H than the propagating wave drag maxima. As l_1 a decreases, i.e. the flow becomes more non-hydrostatic, the propagating wave drag decreases and the regions of non-zero trapped lee wave drag extend to higher l_2/l_1. These results are confirmed by numerical simulations for l_2/l_1 = 0.2. In parameter ranges of meteorological relevance, the trapped lee wave drag may have a magnitude comparable to that of propagating wave drag, and be larger than the reference single-layer drag. This may have implications for drag parametrization in global climate and weather-prediction models.
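
Because the results are organized around the ratio of Scorer parameters in the two layers, a small reminder of how the Scorer parameter is obtained from wind and stability profiles may help; this is the generic textbook computation l^2 = N^2/U^2 − U''/U, not code from the study, and the two-layer profile below is purely hypothetical.

```python
import numpy as np

def scorer_parameter_sq(N, U, dz):
    """Scorer parameter squared, l^2 = N^2/U^2 - U''/U, for vertical
    profiles of buoyancy frequency N(z) and wind speed U(z) on a
    uniform grid of spacing dz."""
    U_zz = np.gradient(np.gradient(U, dz), dz)  # second derivative of U
    return N**2 / U**2 - U_zz / U

# Hypothetical two-layer profile: stability drops by a factor of 5 aloft
z = np.arange(0.0, 10000.0, 100.0)            # height (m)
N = np.where(z < 3000.0, 0.02, 0.004)         # buoyancy frequency (s^-1)
U = np.full_like(z, 10.0)                     # constant wind (m s^-1)
l_sq = scorer_parameter_sq(N, U, 100.0)
# Layer-mean Scorer parameters; their ratio here is l_2/l_1 = 0.2
print(np.sqrt(l_sq[z < 3000].mean()), np.sqrt(l_sq[z >= 3000].mean()))
```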

Relevance: 80.00%

Abstract:

The very first numerical models, which were developed more than 20 years ago, were drastic simplifications of the real atmosphere and were mostly restricted to describing adiabatic processes. For predictions of a day or two of the mid-tropospheric flow these models often gave reasonable results, but the results deteriorated quickly when the prediction was extended further in time. The prediction of the surface flow was unsatisfactory even for short predictions. It was evident that both the energy-generating processes and the dissipative processes have to be included in numerical models in order to predict the weather patterns in the lower part of the atmosphere and to predict the atmosphere in general beyond a day or two. Present-day computers make it possible to attack the weather forecasting problem in a more comprehensive and complete way, and substantial efforts have been made, particularly during the last decade, to incorporate the non-adiabatic processes in numerical prediction models. The physics of radiative transfer, condensation of moisture, turbulent transfer of heat, momentum and moisture, and the dissipation of kinetic energy are the most important processes associated with the formation of energy sources and sinks in the atmosphere, and these have to be incorporated in numerical prediction models extending over more than a few days. The mechanisms of these processes are mainly related to small-scale disturbances in space and time, or even molecular processes. It is therefore one of the basic characteristics of numerical models that these small-scale disturbances cannot be included in an explicit way. One reason for this is the discretization of the model's atmosphere by a finite difference grid or the use of a Galerkin or spectral function representation. A second reason why we cannot explicitly introduce these processes into a numerical model is that some physical processes necessary to describe them (such as local buoyancy) are eliminated a priori by the constraint of hydrostatic adjustment. Even if this physical constraint can be relaxed by making the models non-hydrostatic, the scale problem is virtually impossible to solve, and for the foreseeable future we have to try to incorporate the ensemble or gross effect of these physical processes on the large-scale synoptic flow. The formulation of this ensemble effect in terms of grid-scale variables (the parameters of the large-scale flow) is called 'parameterization'. For short-range prediction of the synoptic flow at middle and high latitudes, very simple parameterization has proven to be rather successful.

Relevance: 80.00%

Abstract:

With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite difference equations on such a grid lattice, objective analysis is a three-dimensional (or, mostly, two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years. Weather services have thus based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified also for the conventional observations. We have a fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. If we have a 3-hour step in the analysis-forecasting cycle instead of the 12 hours applied most often, we may without any difficulty treat all observations as synoptic. No observation would thus be more than 90 minutes off time, and even during strong transient motion the observations would fall within a horizontal mesh of 500 km × 500 km.
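
As a concrete (and deliberately generic) illustration of objective analysis in this sense, the sketch below interpolates irregularly spaced observations onto a regular grid using simple inverse-distance weighting; it is not the analysis scheme discussed in the text, and the coordinates, search radius and data are hypothetical.

```python
import numpy as np

def objective_analysis(obs_x, obs_y, obs_val, grid_x, grid_y, radius=500.0):
    """Interpolate irregularly spaced observations to a regular grid using
    inverse-distance weighting within a search radius (same units as the
    coordinates, here km). A generic illustration, not a specific scheme."""
    analysis = np.full((grid_y.size, grid_x.size), np.nan)
    for j, gy in enumerate(grid_y):
        for i, gx in enumerate(grid_x):
            d = np.hypot(obs_x - gx, obs_y - gy)
            near = d < radius
            if near.any():
                w = 1.0 / np.maximum(d[near], 1e-6) ** 2
                analysis[j, i] = np.sum(w * obs_val[near]) / np.sum(w)
    return analysis

# Hypothetical observations (km coordinates) and a 500 km mesh
rng = np.random.default_rng(3)
ox, oy = rng.uniform(0, 2000, 200), rng.uniform(0, 2000, 200)
vals = np.sin(ox / 500.0) + np.cos(oy / 500.0)
gx = gy = np.arange(0.0, 2001.0, 500.0)
print(objective_analysis(ox, oy, vals, gx, gy))
```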