Abstract:
The subgrid-scale spatial variability in cloud water content can be described by a parameter f called the fractional standard deviation, equal to the standard deviation of the cloud water content divided by the mean. This parameter is an input to schemes that calculate the impact of subgrid-scale cloud inhomogeneity on gridbox-mean radiative fluxes and microphysical process rates. A new regime-dependent parametrization of the spatial variability of cloud water content is derived from CloudSat observations of ice clouds. In addition to the dependencies on horizontal and vertical resolution and cloud fraction included in previous parametrizations, the new parametrization includes an explicit dependence on cloud type. The new parametrization is then implemented in the Global Atmosphere 6 (GA6) configuration of the Met Office Unified Model and used to model the effects of subgrid variability of both ice and liquid water content on radiative fluxes and on autoconversion and accretion rates in three 20-year atmosphere-only climate simulations. These simulations show the impact of the new regime-dependent parametrization in diagnostic radiation calculations, in interactive radiation calculations, and in both interactive radiation calculations and a new warm microphysics scheme. The control simulation uses a globally constant f value of 0.75 to model the effect of cloud water content variability on radiative fluxes. The use of the new regime-dependent parametrization in the model results in a global mean value of f which is higher than the control's fixed value and a global distribution of f which is closer to CloudSat observations. When the new regime-dependent parametrization is used in radiative transfer calculations only, the magnitudes of short-wave and long-wave top-of-atmosphere cloud radiative forcing are reduced, increasing the existing global mean biases in the control. When it is also applied in a new warm microphysics scheme, the short-wave global mean bias is reduced.
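As a minimal sketch of the quantity being parametrized (not the paper's scheme), the fractional standard deviation f can be computed from in-cloud water content samples as follows; the sample values are invented for illustration.

import numpy as np

def fractional_standard_deviation(cloud_water):
    # f = standard deviation of in-cloud water content divided by its mean
    q = np.asarray(cloud_water, dtype=float)
    q = q[q > 0.0]                      # keep cloudy samples only
    return q.std() / q.mean()

# Hypothetical in-cloud water contents (kg kg-1) within one gridbox
samples = [2.1e-4, 1.5e-4, 3.0e-4, 0.8e-4, 2.6e-4]
print(fractional_standard_deviation(samples))   # ~0.39, versus the control's fixed 0.75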
Abstract:
The high computational cost of calculating radiative heating rates in numerical weather prediction (NWP) and climate models requires that the calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the resulting cloud-radiation feedback. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (the optically thick terms). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme performs a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast-model configuration and find that a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
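A schematic of the ‘incremental time-stepping’ idea described above, not the Met Office implementation: full_radiation and thin_band_heating are hypothetical placeholders for the expensive broadband call and the cheap call using only the optically thin monochromatic terms.

def incremental_timestepping(steps, full_every, full_radiation, thin_band_heating, cloud_state):
    heating_rates = []
    for step in range(steps):
        clouds = cloud_state(step)
        if step % full_every == 0:
            # Infrequent full broadband calculation; also store the cheap estimate
            # made with the same cloud field for later differencing.
            hr_full = full_radiation(clouds)
            hr_thin_ref = thin_band_heating(clouds)
            hr = hr_full
        else:
            # Every other step: add the change in the optically thin terms
            # since the last full calculation as an increment.
            hr = hr_full + (thin_band_heating(clouds) - hr_thin_ref)
        heating_rates.append(hr)
    return heating_rates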
Abstract:
Spatial variability of liquid cloud water content and rainwater content is analysed from three different observational platforms: in situ measurements from research aircraft, land-based remote sensing techniques using radar and lidar, and spaceborne remote sensing from CloudSat. The variance is found to increase with spatial scale, but also depends strongly on the cloud or rain fraction regime, with overcast regions containing less variability than broken cloud fields. This variability is shown to lead to large biases, up to a factor of 4, in both the autoconversion and accretion rates estimated at a model grid scale of ≈40 km by a typical microphysical parametrization using in-cloud mean values. A parametrization for the subgrid variability of liquid cloud and rainwater content is developed, based on the observations, which varies with both the grid scale and cloud or rain fraction, and is applicable for all model grid scales. It is then shown that if this parametrization of the variability is analytically incorporated into the autoconversion and accretion rate calculations, the bias is significantly reduced.
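A minimal sketch of the bias mechanism, assuming for illustration a generic power-law autoconversion rate (the coefficient and exponent are nominal, not the paper's parametrization):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical in-cloud liquid water contents within one ~40 km gridbox,
# drawn from a lognormal distribution to mimic subgrid variability.
q_c = rng.lognormal(mean=np.log(2e-4), sigma=0.8, size=10_000)

def autoconversion(q, exponent=2.47, coefficient=1350.0):
    # Illustrative nonlinear rate proportional to q raised to a power > 1
    return coefficient * q**exponent

rate_from_mean = autoconversion(q_c.mean())   # rate evaluated at the in-cloud mean value
mean_of_rates = autoconversion(q_c).mean()    # true gridbox-mean rate

print(mean_of_rates / rate_from_mean)         # > 1: using the mean value biases the rate low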
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, although these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of the noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; as we show, this means that the noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with the errors due to the widely used assumption of maximum-random overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel, maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is sufficiently small.
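A toy illustration of the origin of McICA noise, not the MetUM scheme: each spectral term samples one randomly chosen subcolumn instead of averaging over all of them, giving a conditional random error about the full independent-column result.

import numpy as np

rng = np.random.default_rng(1)

n_subcolumns, n_k_terms = 20, 30
# Hypothetical per-subcolumn, per-spectral-term fluxes for one gridbox
fluxes = rng.uniform(100.0, 300.0, size=(n_subcolumns, n_k_terms))

# Full independent column approximation: average every term over all subcolumns
ica = fluxes.mean(axis=0).sum()

# McICA: each spectral term uses a single randomly sampled subcolumn
picks = rng.integers(0, n_subcolumns, size=n_k_terms)
mcica = fluxes[picks, np.arange(n_k_terms)].sum()

print(ica, mcica)   # mcica scatters randomly about ica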
Abstract:
The Met Office 1 km radar-derived precipitation-rate composite over 8 years (2006–2013) is examined to evaluate whether it provides an accurate representation of annual-average precipitation over Great Britain and Ireland over long periods of time. The annual-average precipitation from the radar composite is comparable with gauge measurements, with an average error of +23 mm yr−1 over Great Britain and Ireland, +29 mm yr−1 (3%) over the United Kingdom and –781 mm yr−1 (46%) over the Republic of Ireland. The radar-derived precipitation composite is useful over the United Kingdom including Northern Ireland, but not accurate over the Republic of Ireland, particularly in the south.
Abstract:
In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a high-dimensional (65 500 state variables) simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values and the effect of using different model error parameters in the truth compared with the ensemble model runs.
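A heavily simplified sketch of the first (relaxation) proposal step only, for a scalar state; the equivalent-weights step and the 65 500-dimensional model are not reproduced, and the model and noise amplitudes are invented.

import numpy as np

rng = np.random.default_rng(2)

def relaxation_proposal(particles, observation, tau, model_step, model_error_sd):
    # Propagate each particle with the model, then nudge it towards the
    # observation with relaxation strength tau in [0, 1], plus model noise.
    forecast = np.array([model_step(x) for x in particles])
    nudged = forecast + tau * (observation - forecast)
    return nudged + rng.normal(0.0, model_error_sd, size=len(particles))

# Toy usage with 32 particles and a hypothetical damped-persistence model
particles = rng.normal(0.0, 1.0, size=32)
proposal = relaxation_proposal(particles, observation=1.5, tau=0.4,
                               model_step=lambda x: 0.9 * x, model_error_sd=0.1)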
Abstract:
This study presents an evaluation of the size and strength of convective updraughts in high-resolution simulations by the UK Met Office Unified Model (UM). Updraught velocities have been estimated from range–height indicator (RHI) Doppler velocity measurements using the Chilbolton advanced meteorological radar, as part of the Dynamical and Microphysical Evolution of Convective Storms (DYMECS) project. Vertical velocities estimated from mass continuity, by vertically integrating the observed radial convergence, tend to be underestimated for convective clouds because the cross-radial convergence is not detected. Velocity fields from the UM at a resolution corresponding to the radar observations are used to scale such estimates to mitigate the inherent biases. The analysis of more than 100 observed and simulated storms indicates that the horizontal scale of updraughts in the simulations tends to decrease with grid length; the 200 m grid length agreed most closely with the observations. Typical updraught mass fluxes in the 500 m grid length simulations were up to an order of magnitude greater than observed, and greater still in the 1.5 km grid length simulations. The effect of increasing the mixing length in the sub-grid turbulence scheme depends on the grid length. For the 1.5 km simulations, updraughts were weakened, though their horizontal scale remained largely unchanged. For the sub-kilometre grid lengths, and progressively more so as the grid length decreased, updraughts were broadened and intensified; their horizontal scale was then determined by the mixing length rather than the grid length. In general, simulated updraughts were found to weaken too quickly with height. The findings were supported by the analysis of the widths of reflectivity patterns in both the simulations and the observations.
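A minimal sketch of the mass-continuity step assumed here: with incompressible continuity, w(z) is minus the vertical integral of the horizontal divergence, so omitting the undetected cross-radial part of the convergence underestimates the updraught. The profile below is invented.

import numpy as np

def vertical_velocity_from_convergence(divergence, dz, w_surface=0.0):
    # w(z) = w_surface - integral of (du/dx + dv/dy) with height,
    # with `divergence` given on levels spaced dz apart
    return w_surface - np.cumsum(np.asarray(divergence) * dz)

# Hypothetical profile: convergence of 1e-3 s-1 over the lowest 2 km,
# divergence aloft; the resulting updraught peaks near 2 km.
div = np.array([-1e-3] * 8 + [1e-3] * 8)     # 16 levels, dz = 250 m
print(vertical_velocity_from_convergence(div, dz=250.0))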
Abstract:
Weather and climate model simulations of the West African Monsoon (WAM) have generally poor representation of the rainfall distribution and monsoon circulation because key processes, such as clouds and convection, are poorly characterized. The vertical distribution of cloud and precipitation during the WAM is evaluated in Met Office Unified Model simulations against CloudSat observations. Simulations were run at 40-km and 12-km horizontal grid length using a convection parameterization scheme and at 12-km, 4-km, and 1.5-km grid length with the convection scheme effectively switched off, to study the impact of model resolution and the convection parameterization scheme on the organisation of tropical convection. Radar reflectivity is forward-modelled from the model cloud fields using the CloudSat simulator to provide a like-with-like comparison with the CloudSat radar observations. The representation of cloud and precipitation at 12-km horizontal grid length improves dramatically when the convection parameterization is switched off, primarily because of a reduction in daytime (moist) convection. Further improvement is obtained when reducing the model grid length to 4 km or 1.5 km, especially in the representation of thin anvil and mid-level cloud, but three issues remain in all model configurations. Firstly, all simulations underestimate the fraction of anvils with cloud top height above 12 km, which can be attributed to ice water contents in the model that are too low compared with satellite retrievals. Secondly, the model consistently detrains mid-level cloud too close to the freezing level, compared to higher altitudes in CloudSat observations. Finally, there is too much low-level cloud cover in all simulations, and this bias is not improved by adjusting the rainfall parameters in the microphysics scheme. To improve model simulations of the WAM, more detailed, in-situ observations of the dynamics and microphysics targeting these non-precipitating cloud types are required.
Abstract:
This study has investigated serial (temporal) clustering of extra-tropical cyclones simulated by 17 climate models that participated in CMIP5. Clustering was estimated by calculating the dispersion (ratio of variance to mean) of 30 December-February counts of Atlantic storm tracks passing near each grid point. Results from single historical simulations of 1975-2005 were compared to those from ERA40 reanalyses for 1958-2001 and from single future model projections of 2069-2099 under the RCP4.5 climate change scenario. Models were generally able to capture the broad features in the reanalyses reported previously: underdispersion/regularity (i.e. variance less than mean) in the western core of the Atlantic storm track surrounded by overdispersion/clustering (i.e. variance greater than mean) to the north and south and over western Europe. Regression of counts onto North Atlantic Oscillation (NAO) indices revealed that much of the overdispersion in the historical reanalyses and model simulations can be accounted for by NAO variability. Future changes in dispersion were generally found to be small and not consistent across models. The overdispersion statistic, for any 30-year sample, is prone to large amounts of sampling uncertainty that obscures the climate change signal. For example, the projected increase in dispersion for storm counts near London in the CNRM-CM5 model is 0.1, compared to a standard deviation of 0.25. Projected changes in the mean and variance of the NAO are insufficient to create changes in overdispersion that are discernible above natural sampling variations.
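A minimal sketch of the dispersion statistic as defined above (variance-to-mean ratio of seasonal counts); the 30 DJF counts below are invented for illustration.

import numpy as np

def dispersion(counts, ddof=1):
    # Variance-to-mean ratio: > 1 indicates overdispersion (clustering),
    # < 1 indicates underdispersion (regularity)
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=ddof) / counts.mean()

# Hypothetical 30 December-February storm-track counts at one grid point
djf_counts = [8, 12, 5, 14, 9, 7, 11, 16, 6, 10,
              13, 8, 9, 15, 7, 12, 10, 6, 11, 9,
              14, 8, 13, 7, 10, 12, 9, 11, 8, 10]
print(dispersion(djf_counts))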
Abstract:
Recent work has shown that both the amplitude of upper-level Rossby waves and the tropopause sharpness decrease with forecast lead time for several days in some operational weather forecast systems. In this contribution, the evolution of error growth in a case study of this forecast error type is diagnosed through analysis of operational forecasts and hindcast simulations. Potential vorticity (PV) on the 320-K isentropic surface is used to diagnose Rossby waves. The Rossby-wave forecast error in the operational ECMWF high-resolution forecast is shown to be associated with errors in the forecast of a warm conveyor belt (WCB), through trajectory analysis and an error metric for WCB outflows. The WCB forecast error is characterised by an overestimation of WCB amplitude, a location of the WCB outflow regions that is too far to the southeast, and a resulting underestimation of the magnitude of the negative PV anomaly in the outflow. Essentially the same forecast error development also occurred in all members of the ECMWF Ensemble Prediction System and the Met Office MOGREPS-15 ensemble, suggesting that in this case model error made an important contribution to the development of forecast error, in addition to initial condition error. Exploiting this robustness of the forecast error, a comparison was performed between the realised flow evolution, proxied by a sequence of short-range simulations, and a contemporaneous forecast. Both the proxy to the realised flow and the contemporaneous forecast were produced with the Met Office Unified Model enhanced with tracers of diabatic processes modifying potential temperature and PV. Clear differences were found in the way potential temperature and PV are modified in the WCB between the proxy and the forecast. These results demonstrate that differences in potential temperature and PV modification in the WCB can be responsible for forecast errors in Rossby waves.
Abstract:
Observations have been obtained within an intense (precipitation rates > 50 mm h−1) narrow cold-frontal rainband (NCFR) embedded within a broader region of stratiform precipitation. In situ data were obtained from an aircraft which flew near a steerable dual-polarisation Doppler radar. The observations were obtained to characterise the microphysical properties of cold-frontal clouds, with an emphasis on ice and precipitation formation and development. Primary ice nucleation near cloud top (−55 °C) appeared to be enhanced by convective features. However, ice multiplication led to the largest ice particle number concentrations being observed at relatively high temperatures (> −10 °C). The multiplication process (most likely rime splintering) occurs when stratiform precipitation interacts with supercooled water generated in the NCFR. Graupel was notably absent in the data obtained. Ice multiplication processes are known to have a strong impact in glaciating isolated convective clouds, but have rarely been studied within larger organised convective systems such as NCFRs. Secondary ice particles will impact on precipitation formation and cloud dynamics due to their relatively small size and high number density. Further modelling studies are required to quantify the effects of rime splintering on precipitation and dynamics in frontal rainbands. Available parametrizations used to diagnose the particle size distributions do not account for the influence of ice multiplication. This deficiency in parametrizations is likely to be important in some cases for modelling the evolution of cloud systems and the formation of precipitation. Ice multiplication also has a significant impact on artefact removal from in situ particle imaging probes.
Abstract:
Dual-polarisation radar measurements provide valuable information about the shapes and orientations of atmospheric ice particles. For quantitative interpretation of these data in the Rayleigh regime, common practice is to approximate the true ice crystal shape with that of a spheroid. Calculations using the discrete dipole approximation for a wide range of crystal aspect ratios demonstrate that approximating hexagonal plates as spheroids leads to significant errors in the predicted differential reflectivity, by as much as 1.5 dB. An empirical modification of the shape factors in Gans's spheroid theory was made using the numerical data. The resulting simple expressions, like Gans's theory, can be applied to crystals in any desired orientation, illuminated by an arbitrarily polarised wave, but are much more accurate for hexagonal particles. Calculations of the scattering from more complex branched and dendritic crystals indicate that these may be accurately modelled using the new expression, but with a reduced permittivity dependent on the volume of ice relative to an enclosing hexagonal prism.
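As context for the spheroid baseline being corrected, a sketch of textbook Rayleigh-Gans differential reflectivity for a horizontally aligned oblate spheroid (not the paper's modified shape factors); the axis ratio and permittivity below are nominal.

import numpy as np

def oblate_depolarization_factors(axis_ratio):
    # Shape (depolarization) factors for an oblate spheroid with
    # axis ratio r = c/a < 1, where c is the short symmetry axis.
    e2 = 1.0 - axis_ratio**2                 # squared eccentricity
    g = np.sqrt((1.0 - e2) / e2)
    L_a = g / (2.0 * e2) * (np.pi / 2.0 - np.arctan(g)) - g**2 / 2.0   # along a = b
    L_c = 1.0 - 2.0 * L_a                                              # along c
    return L_a, L_c

def zdr_db(axis_ratio, epsilon):
    # Differential reflectivity of a horizontally aligned Rayleigh spheroid:
    # Z_h/Z_v is the squared ratio of the polarizabilities along a and c.
    L_a, L_c = oblate_depolarization_factors(axis_ratio)
    alpha_h = (epsilon - 1.0) / (1.0 + L_a * (epsilon - 1.0))
    alpha_v = (epsilon - 1.0) / (1.0 + L_c * (epsilon - 1.0))
    return 20.0 * np.log10(abs(alpha_h / alpha_v))

# Solid-ice plate-like spheroid, axis ratio 0.2, microwave permittivity ~3.17
print(zdr_db(axis_ratio=0.2, epsilon=3.17))   # several dB, increasing as the spheroid flattens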
Abstract:
We use both Granger-causality and instrumental variables (IV) methods to examine the impact of index fund positions on price returns for the main US grains and oilseed futures markets. Our analysis supports earlier conclusions that Granger-causal impacts are generally not discernible. However, market microstructure theory suggests trading impacts should be instantaneous. IV-based tests for contemporaneous causality provide stronger evidence of price impact. We find even stronger evidence that changes in index positions can help predict future changes in aggregate commodity price indices. This result suggests that changes in index investment are in part driven by information which predicts commodity price changes over the coming months.
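A minimal sketch of a bivariate Granger-causality test of the kind referred to above, using statsmodels on invented series; it does not reproduce the paper's data or its IV-based contemporaneous tests.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)

# Invented weekly series: futures returns with a weak lagged link
# to index-fund position changes.
n = 200
position_changes = rng.normal(size=n)
returns = 0.05 * np.roll(position_changes, 1) + rng.normal(size=n)

# Tests whether the second column Granger-causes the first, for lags 1 to 4
data = np.column_stack([returns, position_changes])
results = grangercausalitytests(data, maxlag=4)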
Abstract:
Methods to explicitly represent uncertainties in weather and climate models have been developed and refined over the past decade, and have reduced biases and improved forecast skill when implemented in the atmospheric component of models. These methods have not yet been applied to the land surface component of models. Since the land surface is strongly coupled to the atmospheric state at certain times and in certain places (such as the European summer of 2003), improvements in the representation of land surface uncertainty may potentially lead to improvements in atmospheric forecasts for such events. Here we analyse seasonal retrospective forecasts for 1981–2012 performed with the European Centre for Medium-Range Weather Forecasts’ (ECMWF) coupled ensemble forecast model. We consider two methods of incorporating uncertainty into the land surface model (H-TESSEL): stochastic perturbation of tendencies, and static perturbation of key soil parameters. We find that the perturbed parameter approach considerably improves the forecast of extreme air temperature for summer 2003, through better representation of negative soil moisture anomalies and upward sensible heat flux. Averaged across all the reforecasts, the perturbed parameter experiment shows relatively little impact on the mean bias, suggesting that perturbations of at least this magnitude can be applied to the land surface without any degradation of the model climate. There is also little impact on skill averaged across all reforecasts, and some evidence of overdispersion for soil moisture. The stochastic tendency experiments show a large overdispersion for the soil temperature fields, indicating that the perturbation here is too strong. There is also some indication that the forecast of the 2003 warm event is improved for the stochastic experiments; however, the improvement is not as large as that observed for the perturbed parameter experiment.
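A schematic of the two perturbation strategies compared, with invented parameter names and noise amplitudes; H-TESSEL's actual parameters and tendencies are not reproduced here.

import numpy as np

rng = np.random.default_rng(4)

# (1) Static perturbed-parameter approach: each ensemble member carries its own
#     fixed multiplicative perturbation of a key soil parameter.
n_members = 25
nominal_conductivity = 1.0e-6                    # hypothetical soil hydraulic conductivity, m s-1
member_params = nominal_conductivity * rng.lognormal(0.0, 0.3, n_members)

# (2) Stochastic tendency approach: the land-surface tendency is multiplied by a
#     random factor at every time step of every member.
def stochastically_perturbed_tendency(tendency, amplitude=0.3):
    return tendency * (1.0 + rng.uniform(-amplitude, amplitude))

soil_temperature_tendency = 1.2e-5               # hypothetical tendency, K s-1
print(member_params[:3], stochastically_perturbed_tendency(soil_temperature_tendency))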
Abstract:
Using lessons from idealised predictability experiments, we discuss some issues and perspectives on the design of operational seasonal to inter-annual Arctic sea-ice prediction systems. We first review the opportunities to use a hierarchy of different types of experiment to learn about the predictability of Arctic climate. We also examine key issues for ensemble system design, such as measuring skill, the role of ensemble size, and the generation of ensemble members. When assessing the potential skill of a set of prediction experiments, using more than one metric is essential, as different choices can significantly alter conclusions about the presence or lack of skill. We find that increasing both the number of hindcasts and the ensemble size is important for reliably assessing the correlation and expected error in forecasts. For other metrics, such as dispersion, increasing the ensemble size is most important. Probabilistic measures of skill can also provide useful information about the reliability of forecasts. In addition, various methods for generating the different ensemble members are tested. The range of techniques can produce surprisingly different ensemble spread characteristics. The lessons learnt should help inform the design of future operational prediction systems.
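A minimal sketch of why more than one metric matters when assessing such a system, using an invented set of hindcasts: correlation, ensemble-mean error and ensemble spread measure different aspects of the same forecasts.

import numpy as np

rng = np.random.default_rng(5)

n_hindcasts, n_members = 30, 10
signal = rng.normal(size=n_hindcasts)                         # predictable component
truth = signal + rng.normal(0.0, 0.7, n_hindcasts)            # verifying anomaly
ensemble = signal[:, None] + rng.normal(0.0, 0.7, (n_hindcasts, n_members))

ens_mean = ensemble.mean(axis=1)
correlation = np.corrcoef(ens_mean, truth)[0, 1]              # skill of the ensemble mean
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))              # expected error
spread = np.mean(ensemble.std(axis=1, ddof=1))                # dispersion

print(correlation, rmse, spread)   # for a reliable ensemble, spread is close to rmse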