Abstract:
Simulations of the last 500 yr carried out using the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3) with anthropogenic and natural (solar and volcanic) forcings have been analyzed. Global-mean surface temperature change during the twentieth century is well reproduced. Simulated contributions to global-mean sea level rise during recent decades due to thermal expansion (the largest term) and to mass loss from glaciers and ice caps agree within uncertainties with observational estimates of these terms, but their sum falls short of the observed rate of sea level rise. This discrepancy has been discussed by previous authors; a completely satisfactory explanation of twentieth-century sea level rise is lacking. The model suggests that the apparent onset of sea level rise and glacier retreat during the first part of the nineteenth century was due to natural forcing. The rate of sea level rise was larger during the twentieth century than during the previous centuries because of anthropogenic forcing, but decreasing natural forcing during the second half of the twentieth century tended to offset the anthropogenic acceleration in the rate. Volcanic eruptions cause rapid falls in sea level, followed by recovery over several decades. The model shows substantially less decadal variability in sea level and its thermal expansion component than twentieth-century observations indicate, either because it does not generate sufficient ocean internal variability, or because the observational analyses overestimate the variability.
Abstract:
This paper investigates the impact of aerosol forcing uncertainty on the robustness of estimates of the twentieth-century warming attributable to anthropogenic greenhouse gas emissions. Attribution analyses on three coupled climate models with very different sensitivities and aerosol forcing are carried out. The Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), Parallel Climate Model (PCM), and GFDL R30 models all provide good simulations of twentieth-century global mean temperature changes when they include both anthropogenic and natural forcings. Such good agreement could result from a fortuitous cancellation of errors, for example, by balancing too much (or too little) greenhouse warming by too much (or too little) aerosol cooling. Despite a very large uncertainty for estimates of the possible range of sulfate aerosol forcing obtained from measurement campaigns, results show that the spatial and temporal nature of observed twentieth-century temperature change constrains the component of past warming attributable to anthropogenic greenhouse gases to be significantly greater (at the 5% level) than the observed warming over the twentieth century. The cooling effects of aerosols are detected in all three models. Both spatial and temporal aspects of observed temperature change are responsible for constraining the relative roles of greenhouse warming and sulfate cooling over the twentieth century. This is because there are distinctive temporal structures in differential warming rates between the hemispheres, between land and ocean, and between mid- and low latitudes. As a result, consistent estimates of warming attributable to greenhouse gas emissions are obtained from all three models, and predictions are relatively robust to the use of more or less sensitive models. The transient climate response following a 1% yr⁻¹ increase in CO2 is estimated to lie between 2.2 and 4 K century⁻¹ (5-95 percentiles).
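The attribution analyses summarized above regress observed temperature change onto model-simulated response patterns ("fingerprints") to estimate scaling factors for each forcing. A minimal least-squares sketch of that idea, using entirely synthetic fingerprints and observations (the actual studies use optimal fingerprinting with noise covariance estimated from control simulations, which is not reproduced here):

```python
import numpy as np

# Hypothetical annual global-mean anomalies (K): a toy stand-in for the
# detection-and-attribution regression, not the papers' actual data.
years = np.arange(1900, 2000)
ghg_fp = 0.006 * (years - 1900)                          # greenhouse-gas fingerprint: steady warming
aer_fp = -0.4 * np.exp(-((years - 1965) / 20.0) ** 2)    # sulfate fingerprint: mid-century cooling
rng = np.random.default_rng(1)
obs = 1.2 * ghg_fp + 0.8 * aer_fp + 0.05 * rng.standard_normal(years.size)

# Least-squares scaling factors: obs ~ beta_ghg * ghg_fp + beta_aer * aer_fp
X = np.column_stack([ghg_fp, aer_fp])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
beta_ghg, beta_aer = beta

# Century warming attributable to greenhouse gases (K): scaled fingerprint trend
attributable = beta_ghg * (ghg_fp[-1] - ghg_fp[0])
print(round(beta_ghg, 2), round(beta_aer, 2))
```

Because the two fingerprints have distinct temporal shapes, the regression can separate them; this mirrors the abstract's point that distinctive temporal structures in warming rates are what constrain the relative roles of greenhouse warming and sulfate cooling.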
Abstract:
The performance of boreal winter forecasts made with the European Centre for Medium-Range Weather Forecasts (ECMWF) System 2 Seasonal Forecasting System is investigated through analyses of ensemble hindcasts for the period 1987-2001. The predictability, or signal-to-noise ratio, associated with the forecasts, and the forecast skill are examined. On average, forecasts of 500 hPa geopotential height (GPH) have skill in most of the Tropics and in a few regions of the extratropics. There is broad, but not perfect, agreement between regions of high predictability and regions of high skill. However, model errors are also identified, in particular regions where the forecast ensemble spread appears too small. For individual winters the information provided by t-values, a simple measure of the forecast signal-to-noise ratio, is investigated. For 2 m surface air temperature (T2m), the highest t-values are found in the Tropics but there is considerable interannual variability, and in the tropical Atlantic and Indian basins this variability is not directly tied to the El Niño-Southern Oscillation. For GPH there is also large interannual variability in t-values, but these variations cannot easily be predicted from the strength of the tropical sea-surface-temperature anomalies. It is argued that the t-values for 500 hPa GPH can give valuable insight into the oceanic forcing of the atmosphere that generates predictable signals in the model. Consequently, t-values may be a useful tool for understanding, at a mechanistic level, forecast successes and failures. Lastly, the extent to which t-values are useful as a predictor of forecast skill is investigated. For T2m, t-values provide a useful predictor of forecast skill in both the Tropics and extratropics. Except in the equatorial east Pacific, most of the information in t-values is associated with interannual variability of the ensemble-mean forecast rather than interannual variability of the ensemble spread.
For GPH, however, t-values provide a useful predictor of forecast skill only in the tropical Pacific region.
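The abstract describes the t-value as a simple measure of the forecast signal-to-noise ratio. A minimal sketch, assuming the common convention of the ensemble-mean anomaly divided by the standard error of that mean (the abstract does not spell out the exact formula, so this is an illustration, not the paper's definition):

```python
import numpy as np

def t_value(ensemble_anomalies):
    """t-value of an ensemble forecast: ensemble-mean anomaly divided by the
    standard error of that mean (spread / sqrt(members)).
    Large |t| -> strong, reproducible signal; |t| near 1 -> signal lost in noise."""
    members = np.asarray(ensemble_anomalies, dtype=float)
    mean = members.mean()
    spread = members.std(ddof=1)          # inter-member standard deviation
    return mean / (spread / np.sqrt(members.size))

# A strongly forced tropical grid point vs. a noisy extratropical one (synthetic)
rng = np.random.default_rng(0)
tropical = 1.5 + 0.3 * rng.standard_normal(40)    # large signal, small spread
extratrop = 0.2 + 1.0 * rng.standard_normal(40)   # small signal, large spread
print(t_value(tropical), t_value(extratrop))
```

Under this convention, a large tropical SST-forced signal yields a t-value far above the extratropical one, consistent with the abstract's finding that the highest t-values occur in the Tropics.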
Abstract:
Using the Met Office large-eddy model (LEM) we simulate a mixed-phase altocumulus cloud that was observed from Chilbolton in southern England by a 94 GHz Doppler radar, a 905 nm lidar, a dual-wavelength microwave radiometer and also by four radiosondes. It is important to test and evaluate such simulations with observations, since there are significant differences between results from different cloud-resolving models for ice clouds. Simulating the Doppler radar and lidar data within the LEM allows us to compare observed and modelled quantities directly, and allows us to explore the relationships between observed and unobserved variables. For general-circulation models, which currently tend to give poor representations of mixed-phase clouds, the case shows the importance of using: (i) separate prognostic ice and liquid water, (ii) a vertical resolution that captures the thin layers of liquid water, and (iii) an accurate representation of the subgrid vertical velocities that allow liquid water to form. It is shown that large-scale ascents and descents are significant for this case, and so the horizontally averaged LEM profiles are relaxed towards observed profiles to account for these. The LEM simulation then gives a reasonable cloud, with an ice-water path approximately two thirds of that observed, and with liquid water at the cloud top, as observed. However, the liquid-water cells that form in the updraughts at cloud top in the LEM have liquid-water paths (LWPs) up to half those observed, and there are too few cells, giving a mean LWP five to ten times smaller than observed. In reality, ice nucleation and fallout may deplete ice-nuclei concentrations at the cloud top, allowing more liquid water to form there, but this process is not represented in the model. Decreasing the heterogeneous nucleation rate in the LEM increased the LWP, which supports this hypothesis.
The LEM captures the increase with height in the standard deviation of Doppler velocities (and so of vertical winds), but values are 1.5 to 4 times smaller than observed (although values are larger in an unforced model run, this only increases the modelled LWP by a factor of approximately two). The LEM data show that, for values larger than approximately 12 cm s⁻¹, the standard deviation of Doppler velocities provides an almost unbiased estimate of the standard deviation of vertical winds, but provides an overestimate for smaller values. Time-smoothing the observed Doppler velocities and modelled mass-squared-weighted fallspeeds shows that observed fallspeeds are approximately two-thirds of the modelled values. Decreasing the modelled fallspeeds to those observed increases the modelled ice water content, giving an ice-water path 1.6 times that observed.
Abstract:
Direct numerical simulations of turbulent flow over regular arrays of urban-like, cubical obstacles are reported. Results are analysed in terms of a formal spatial averaging procedure to enable interpretation of the flow within the arrays as a canopy flow, and of the flow above as a rough wall boundary layer. Spatial averages of the mean velocity, turbulent stresses and pressure drag are computed. The statistics compare very well with data from wind-tunnel experiments. Within the arrays the time-averaged flow structure gives rise to significant 'dispersive stress' whereas above the Reynolds stress dominates. The mean flow structure and turbulence statistics depend significantly on the layout of the cubes. Unsteady effects are important, especially in the lower canopy layer where turbulent fluctuations dominate over the mean flow.
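The spatial-averaging decomposition behind the dispersive stress can be sketched as follows: the time-mean velocity is split into a horizontal-spatial mean plus a spatial deviation, the dispersive stress is the spatial covariance of those deviations, and the Reynolds stress comes from the temporal fluctuations. A minimal illustration on a synthetic velocity field (not DNS output; the sinusoidal pattern stands in for channelling between the cubes):

```python
import numpy as np

# Toy velocity snapshots u(t, x, y) at one height: time axis 0, horizontal axes 1-2.
rng = np.random.default_rng(2)
nt, nx, ny = 200, 16, 16
base = 2.0 + 0.5 * np.sin(2 * np.pi * np.arange(nx) / nx)[:, None]  # steady spatial pattern
u = base + 0.3 * rng.standard_normal((nt, nx, ny))                  # streamwise velocity
w = 0.2 * rng.standard_normal((nt, nx, ny))                         # vertical velocity

u_time = u.mean(axis=0)                 # time average  u_bar(x, y)
w_time = w.mean(axis=0)
u_fluc = u - u_time                     # turbulent fluctuation u'
w_fluc = w - w_time

reynolds = (u_fluc * w_fluc).mean()     # <u'w'>: Reynolds stress (per unit density)
u_disp = u_time - u_time.mean()         # spatial deviation of the time mean
w_disp = w_time - w_time.mean()
dispersive = (u_disp * w_disp).mean()   # <u_disp w_disp>: dispersive stress
print(reynolds, dispersive)
```

In real canopy flow the time-averaged recirculations around the obstacles make the dispersive term significant within the array, which is the behaviour the abstract reports; in this random toy field both covariances are near zero.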
Abstract:
The modelled El Niño-mean state-seasonal cycle interactions in 23 coupled ocean-atmosphere GCMs, including the recent IPCC AR4 models, are assessed and compared to observations and theory. The models show a clear improvement over previous generations in simulating the tropical Pacific climatology. Systematic biases still include a too strong mean and seasonal cycle of the trade winds. El Niño amplitude is shown to be an inverse function of the mean trade winds, in agreement with the observed shift of 1976 and with theoretical studies. El Niño amplitude is further shown to be an inverse function of the relative strength of the seasonal cycle: when most of the energy is within the seasonal cycle, little is left for interannual signals, and vice versa. An interannual coupling strength (ICS) is defined and its relation with the modelled El Niño frequency is compared to that predicted by theoretical models. An assessment of the modelled El Niño in terms of SST mode (S-mode) or thermocline mode (T-mode) shows that most models are locked into an S-mode and that only a few models exhibit a hybrid mode, as in observations. It is concluded that several basic El Niño-mean state-seasonal cycle relationships proposed by either theory or analysis of observations seem to be reproduced by CGCMs. This is especially true for the amplitude of El Niño and is less clear for its frequency. Most of these relationships, first established for the pre-industrial control simulations, hold for the double and quadruple CO2 stabilized scenarios. The models that exhibit the largest El Niño amplitude change in these greenhouse gas (GHG) increase scenarios are those that exhibit a mode change towards a T-mode (either from S-mode to hybrid or from hybrid to T-mode). This follows the observed 1976 climate shift in the tropical Pacific, and supports the (still debated) finding of studies that associated this shift with increased GHGs.
In many respects, these models are also among those that best simulate the tropical Pacific climatology (ECHAM5/MPI-OM, GFDL-CM2.0, GFDL-CM2.1, MRI-CGCM2.3.2, UKMO-HadCM3). Results from this large subset of models suggest the likelihood of increased El Niño amplitude in a warmer climate, though there is considerable spread of El Niño behaviour among the models, and the changes in the subsurface thermocline properties that may be important for El Niño change could not be assessed. There are no clear indications of an El Niño frequency change with increased GHGs.
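The seasonal-cycle versus interannual energy trade-off described above can be illustrated by partitioning a monthly SST series into a climatological annual cycle and anomalies. A toy sketch on a synthetic series (the diagnostics in the actual study are more elaborate; the 48-month sine is only a crude ENSO-like stand-in):

```python
import numpy as np

def variance_partition(monthly_sst):
    """Split a monthly series (whole years) into a climatological seasonal cycle
    and anomalies; return (seasonal variance, anomaly/interannual variance)."""
    x = np.asarray(monthly_sst, dtype=float)
    x = x - x.mean()
    clim = x.reshape(-1, 12).mean(axis=0)       # mean annual cycle, one value per calendar month
    seasonal = np.tile(clim, x.size // 12)
    anom = x - seasonal                          # what is left for interannual signals
    return seasonal.var(), anom.var()

# Synthetic 50-yr series: strong seasonal cycle plus a weaker ENSO-like signal
t = np.arange(50 * 12)
rng = np.random.default_rng(3)
sst = (2.0 * np.sin(2 * np.pi * t / 12)          # seasonal cycle
       + 0.5 * np.sin(2 * np.pi * t / 48)        # interannual (ENSO-like) signal
       + 0.1 * rng.standard_normal(t.size))      # weather noise
s_var, a_var = variance_partition(sst)
print(s_var, a_var)   # here the seasonal cycle dominates the variance
```

This makes concrete the sense in which a model with a strong seasonal cycle leaves little variance for El Niño, the inverse relation the abstract reports across the CGCMs.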
Abstract:
The combination of radar and lidar in space offers the unique potential to retrieve vertical profiles of ice water content and particle size globally, and two algorithms developed recently claim to have overcome the principal difficulty with this approach: correcting the lidar signal for extinction. In this paper "blind tests" of these algorithms are carried out, using realistic 94-GHz radar and 355-nm lidar backscatter profiles simulated from aircraft-measured size spectra, and including the effects of molecular scattering, multiple scattering, and instrument noise. Radiation calculations are performed on the true and retrieved microphysical profiles to estimate the accuracy with which radiative flux profiles could be inferred remotely. It is found that the visible extinction profile can be retrieved independent of assumptions on the nature of the size distribution, the habit of the particles, the mean extinction-to-backscatter ratio, or errors in instrument calibration. Local errors in retrieved extinction can occur in proportion to local fluctuations in the extinction-to-backscatter ratio, but down to 400 m above the height of the lowest lidar return, optical depth is typically retrieved to better than 0.2. Retrieval uncertainties are greater at the far end of the profile, and errors in total optical depth can exceed 1, which changes the shortwave radiative effect of the cloud by around 20%. Longwave fluxes are much less sensitive to errors in total optical depth, and may generally be calculated to better than 2 W m⁻² throughout the profile. It is important for retrieval algorithms to account for the effects of lidar multiple scattering, because if this is neglected, then optical depth is underestimated by approximately 35%, resulting in cloud radiative effects being underestimated by around 30% in the shortwave and 15% in the longwave.
Unlike the extinction coefficient, the inferred ice water content and particle size can vary by 30%, depending on the assumed mass-size relationship (a problem common to all remote retrieval algorithms). However, radiative fluxes are almost completely determined by the extinction profile, and if this is correct, then errors in these other parameters have only a small effect in the shortwave (around 6%, compared to that of clear sky) and a negligible effect in the longwave.
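The core of the extinction correction discussed above can be sketched in one dimension: if a single extinction-to-backscatter (lidar) ratio S is assumed known, the attenuated backscatter can be inverted in closed form, because the two-way transmission exp(-2τ) decreases in proportion to the integrated attenuated backscatter. A toy forward model and inversion, ignoring molecular and multiple scattering (which the real algorithms treat explicitly), with an idealized cloud layer:

```python
import numpy as np

S = 30.0                                  # extinction-to-backscatter (lidar) ratio, sr: assumed known
dz = 10.0                                 # range-gate spacing, m
z = np.arange(0.0, 3000.0, dz)
true_ext = np.where((z > 1000) & (z < 2000), 1e-3, 0.0)   # cloud extinction profile, m^-1

# Forward model: attenuated backscatter beta' = (sigma / S) * exp(-2 * tau)
tau = np.cumsum(true_ext) * dz                            # optical depth along the beam
beta_att = (true_ext / S) * np.exp(-2.0 * tau)

# Inversion: exp(-2 tau(z)) = 1 - 2 S * cumulative integral of beta'
transmission2 = np.maximum(1.0 - 2.0 * S * np.cumsum(beta_att) * dz, 1e-12)
retrieved_tau = -0.5 * np.log(transmission2)
retrieved_ext = S * beta_att * np.exp(2.0 * retrieved_tau)
print(tau[-1], retrieved_tau[-1])   # retrieved optical depth close to the true value
```

If the lidar signal were additionally depleted by unmodelled multiple scattering, this inversion would attribute too little attenuation to the cloud and underestimate optical depth, which is the failure mode the abstract quantifies at roughly 35%.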