79 results for Operational variables
in CentAUR: Central Archive University of Reading - UK
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and intermittent sampling. In a second step the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year, as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at seasonal scale, but the models generally manage to reproduce well the observed seasonal variations in cloud occurrence.
Overall, the models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
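The frequency-of-occurrence diagnostic described above can be illustrated with a minimal sketch: given a boolean time-height cloud mask derived from radar/lidar profiles (a synthetic array here; the variable names and data are illustrative, not from the paper), occurrence at each level is the fraction of profiles in which a cloud is detected, and the same quantity computed from a model's cloud-fraction field provides the comparison.

```python
import numpy as np

# Synthetic stand-in for a radar/lidar cloud detection mask,
# shaped (time, height): True where a cloud is detected.
rng = np.random.default_rng(0)
cloud_mask = rng.random((1000, 60)) > 0.8  # 1000 profiles, 60 range gates

# Cloud frequency of occurrence at each height: the fraction of
# profiles in which a cloud is present at that level.
freq_of_occurrence = cloud_mask.mean(axis=0)

# Overall mean cloud fraction across the whole time-height domain.
mean_cloud_fraction = cloud_mask.mean()
```

A model-versus-observation bias at each level would then simply be the difference between the model's occurrence profile and `freq_of_occurrence`, which is the kind of per-level statistic compared in the paper.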
Abstract:
A reconstruction of the Atlantic Meridional Overturning Circulation (MOC) for the period 1959–2006 has been derived from the ECMWF operational ocean reanalysis. The reconstruction shows a wide range of time-variability, including a downward trend. At 26°N, both the MOC intensity and changes in its vertical structure are in good agreement with previous estimates based on trans-Atlantic surveys. At 50°N, the MOC and strength of the subpolar gyre are correlated at interannual time scales, but show opposite secular trends. Heat transport variability is highly correlated with the MOC but shows a smaller trend due to the warming of the upper ocean, which partially compensates for the weakening of the circulation. Results from sensitivity experiments show that although the time-varying upper boundary forcing provides useful MOC information, the sequential assimilation of ocean data further improves the MOC estimation by increasing both the mean and the time variability.
Abstract:
As part of its Data User Element programme, the European Space Agency funded the GlobMODEL project which aimed at investigating the scientific, technical, and organizational issues associated with the use and exploitation of remotely-sensed observations, particularly from new sounders. A pilot study was performed as a "demonstrator" of the GlobMODEL idea, based on the use of new data, with a strong European heritage, not yet assimilated operationally. Two parallel assimilation experiments were performed, using either total column ozone or ozone profiles retrieved at the Royal Netherlands Meteorological Institute (KNMI) from the Ozone Monitoring Instrument (OMI). In both cases, the impact of assimilating OMI data in addition to the total ozone columns from the SCanning Imaging Absorption spectroMeter for Atmospheric CartograpHY (SCIAMACHY) on the European Centre for Medium Range Weather Forecasts (ECMWF) ozone analyses was assessed by means of independent measurements. We found that the impact of OMI total columns is mainly limited to the region between 20 and 80 hPa, and is particularly important at high latitudes in the Southern hemisphere where the stratospheric ozone transport and chemical depletion are generally difficult to model with accuracy. Furthermore, the assimilation experiments carried out in this work suggest that OMI DOAS (Differential Optical Absorption Spectroscopy) total ozone columns are on average larger than SCIAMACHY total columns by up to 3 DU, while OMI total columns derived from OMI ozone profiles are on average about 8 DU larger than SCIAMACHY total columns. At the same time, the demonstrator brought to light a number of issues related to the assimilation of atmospheric composition profiles, such as the shortcomings arising when the vertical resolution of the instrument is not properly accounted for in the assimilation. 
The GlobMODEL demonstrator accelerated the scientific and operational utilization of new observations, and its results prompted ECMWF to start the operational assimilation of OMI total column ozone data.
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage, sampling with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and the analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys, one in which the design had four stages and was balanced and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper. (c) 2005 Elsevier Ltd. All rights reserved.
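The accumulation step described above can be sketched briefly: once the hierarchical analysis of variance has yielded a variance component for each sampling stage (and hence each lag), the rough variogram is obtained by summing the components from the shortest lag upward. The lag distances and component values below are illustrative placeholders, not data from the surveys in the paper.

```python
import numpy as np

# Illustrative variance components from a hierarchical ANOVA of a
# four-stage nested survey; distances increase in geometric progression.
lags = np.array([1.0, 10.0, 100.0, 1000.0])      # stage separating distances (m)
components = np.array([0.12, 0.30, 0.25, 0.08])  # estimated variance components

# Accumulating components starting from the shortest lag gives the
# rough variogram: the semivariance at lag d is the sum of all
# components for stages with separating distance <= d.
semivariance = np.cumsum(components)

for d, g in zip(lags, semivariance):
    print(f"lag {d:7.1f} m : semivariance = {g:.2f}")
```

By construction the estimate is monotone non-decreasing with lag (each component is a non-negative variance), which is why a few stages suffice for a first, rough picture of the spatial structure before a full variogram survey is committed to.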