527 results for ensembles
Abstract:
In this paper, the predictability of climate arising from ocean heat content (OHC) anomalies is investigated in the HadCM3 coupled atmosphere–ocean model. An ensemble of simulations of the twentieth century is used to provide initial conditions for a case study. The case study consists of two ensembles started from initial conditions with large differences in regional OHC in the North Atlantic, the Southern Ocean and parts of the West Pacific. Surface temperatures and precipitation are on average not predictable beyond seasonal time scales, but for certain initial conditions there may be longer predictability. It is shown that, for the case study examined here, some aspects of tropical precipitation, European surface temperatures and North Atlantic sea-level pressure are potentially predictable 2 years ahead. Predictability also exists in other case studies, but the climate variables and regions that are potentially predictable differ. This work was done as part of the Grid for Coupled Ensemble Prediction (GCEP) eScience project.
Abstract:
We unfold a profound relationship between the dynamics of finite-size perturbations in spatially extended chaotic systems and the universality class of Kardar-Parisi-Zhang (KPZ). We show how this relationship can be exploited to obtain a complete theoretical description of bred-vector dynamics. The existence of characteristic length and time scales, the spatial extent of correlations and how it evolves in time, and the role of the breeding amplitude are all analyzed in the light of our theory. Implications for weather forecasting based on ensembles of initial conditions are also discussed.
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings presents a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcing rather than the few highly replicated ensembles more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing (the latter itself containing some forcing from anthropogenic and natural influences) have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. An interaction between these two anthropogenic effects was also found to exist in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, this shows that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model is suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
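The factorial estimation strategy described above can be illustrated with a toy example. The sketch below is not the paper's data or forcings: it fits a single linear model to a hypothetical 2×2 full factorial design in two anthropogenic forcings, recovering both main effects and their nonadditive interaction by least squares. All numbers are invented for illustration.

```python
import numpy as np

# Hypothetical 2^2 full factorial design: greenhouse gases (G) and the
# indirect aerosol effect (A), each switched off/on (coded -1/+1),
# plus a G*A interaction column. Every combination appears once.
levels = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
X = np.column_stack([np.ones(4), levels[:, 0], levels[:, 1],
                     levels[:, 0] * levels[:, 1]])

# Illustrative "simulated" global-mean temperature responses (K),
# generated from invented effects plus a small interaction.
true_beta = np.array([14.0, 0.4, -0.2, 0.05])
y = X @ true_beta

# Least squares recovers the main effects and the interaction; the
# factorial design makes the columns orthogonal, so the estimates are
# as precise as possible for the given number of integrations.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)
```

In a real analysis the responses would be noisy GCM output and each effect would be scaled by its global mean radiative forcing; the orthogonality of the design is what defeats the collinearity problem.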
Abstract:
Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
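As a minimal illustration of the geostatistical approach, the sketch below performs ordinary kriging at a single target point with an assumed exponential variogram. It deliberately omits the features the abstract stresses for real rain gauge data (climatological variograms, the non-Gaussian mixed distribution, treatment of rainy versus nonrainy areas); the gauge locations and values are invented.

```python
import numpy as np

def exp_variogram(h, sill=1.0, length=50.0, nugget=0.0):
    """Exponential variogram model gamma(h); parameters are assumed."""
    return nugget + sill * (1.0 - np.exp(-h / length))

def ordinary_kriging(xy_obs, z_obs, xy_tgt, **vario_kw):
    """Ordinary kriging estimate at one target point (minimal sketch)."""
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing that the
    # weights sum to one (the unbiasedness constraint).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, **vario_kw)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy_obs - xy_tgt, axis=-1), **vario_kw)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z_obs)

# Toy example: interpolate rainfall (mm) at the centre of a square of
# four gauges; by symmetry the weights are equal, giving the gauge mean.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([2.0, 4.0, 6.0, 8.0])
est = ordinary_kriging(xy, z, np.array([5.0, 5.0]))
print(est)  # 5.0
```

The same system, solved per grid cell, yields the kriging variance as a by-product, which is what enables the uncertainty estimates and conditional ensembles mentioned above.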
Abstract:
In addition to projected increases in global mean sea level over the 21st century, model simulations suggest there will also be changes in the regional distribution of sea level relative to the global mean. There is considerable spread in the projected patterns of these changes by current models, as shown by the recent Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment (AR4). This spread has not reduced from that given by the Third Assessment models. Comparison with projections by ensembles of models based on a single structure supports an earlier suggestion that models of similar formulation give more similar patterns of sea level change. Analysing an AR4 ensemble of model projections under a business-as-usual scenario shows that steric changes (associated with subsurface ocean density changes) largely dominate the sea level pattern changes. The relative importance of subsurface temperature or salinity changes in contributing to this differs from region to region and, to an extent, from model to model. In general, thermosteric changes dominate the spatial variations in the Southern Ocean, halosteric changes dominate in the Arctic, and strong compensation between thermosteric and halosteric changes characterises the Atlantic. The magnitude of sea level and component changes in the Atlantic appears to be linked to the amount of Atlantic meridional overturning circulation (MOC) weakening. When the MOC weakening is substantial, the Atlantic thermosteric patterns of change arise from a dominant role of ocean advective heat flux changes.
Abstract:
The new HadKPP atmosphere–ocean coupled model is described and then used to determine the effects of sub-daily air–sea coupling and fine near-surface ocean vertical resolution on the representation of the Northern Hemisphere summer intra-seasonal oscillation. HadKPP comprises the Hadley Centre atmospheric model coupled to the K Profile Parameterization ocean-boundary-layer model. Four 30-member ensembles were performed that varied in oceanic vertical resolution between 1 m and 10 m and in coupling frequency between 3 h and 24 h. The 10 m, 24 h ensemble exhibited roughly 60% of the observed 30–50 day variability in sea-surface temperatures and rainfall and very weak northward propagation. Enhancing either only the vertical resolution or only the coupling frequency produced modest improvements in variability and only a standing intra-seasonal oscillation. Only the 1 m, 3 h configuration generated organized, northward-propagating convection similar to observations. Sub-daily surface forcing produced stronger upper-ocean temperature anomalies in quadrature with anomalous convection, which likely affected lower-atmospheric stability ahead of the convection, causing propagation. Well-resolved air–sea coupling did not improve the eastward propagation of the boreal summer intra-seasonal oscillation in this model. Upper-ocean vertical mixing and diurnal variability in coupled models must be improved to accurately resolve and simulate tropical sub-seasonal variability. In HadKPP, the mere presence of air–sea coupling was not sufficient to generate an intra-seasonal oscillation resembling observations.
Abstract:
Using the recently developed mean–variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, an analysis is presented of the spatiotemporal dynamics of their perturbations, showing how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. In particular, a divide is seen between ensembles based on singular vectors or empirical orthogonal functions, and those based on bred vector, Ensemble Transform with Rescaling or Ensemble Kalman Filter techniques. Consideration is also given to the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. Finally, the use of the MVL technique to assist in selecting models for inclusion in a multi-model ensemble is discussed, and an experiment suggested to test its potential in this context.
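An MVL point is simply the spatial mean and variance of the logarithm of a perturbation field, so each forecast time contributes one point to the diagram. The sketch below illustrates the computation on synthetic lognormal "perturbation" fields (not TIGGE data; all parameters are invented): as perturbations grow and saturate, points typically move toward larger mean and smaller variance of the logarithms.

```python
import numpy as np

rng = np.random.default_rng(3)

def mvl_point(perturbation, eps=1e-300):
    """One point of the MVL diagram: spatial mean and variance of the
    log absolute perturbation (eps guards against exact zeros)."""
    logs = np.log(np.abs(perturbation) + eps)
    return logs.mean(), logs.var()

# Illustrative perturbation fields at two forecast times: an early,
# small-amplitude but spatially rough field, and a later, larger and
# smoother (more saturated) one.
early = rng.lognormal(mean=-6.0, sigma=2.0, size=1000)
late = rng.lognormal(mean=-1.0, sigma=0.5, size=1000)
for field in (early, late):
    m, v = mvl_point(field)
    print(m, v)
```

The shape of the curve traced by these points as lead time advances is what distinguishes the perturbation techniques discussed above.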
Abstract:
A simple and coherent framework for partitioning uncertainty in multi-model climate ensembles is presented. The analysis of variance (ANOVA) is used to decompose a measure of total variation additively into scenario uncertainty, model uncertainty and internal variability. This approach requires fewer assumptions than existing methods and can be easily used to quantify uncertainty related to model-scenario interaction: the contribution to model uncertainty arising from the variation across scenarios of model deviations from the ensemble mean. Uncertainty in global mean surface air temperature is quantified as a function of lead time for a subset of the Coupled Model Intercomparison Project phase 3 ensemble, and the results largely agree with those published by other authors: scenario uncertainty dominates beyond 2050 and internal variability remains approximately constant over the 21st century. Both elements of model uncertainty, due to scenario-independent and scenario-dependent deviations from the ensemble mean, are found to increase with time. Estimates of model deviations that arise as by-products of the framework reveal significant differences between models that could lead to a deeper understanding of the sources of uncertainty in multi-model ensembles. For example, three models are shown to exhibit diverging patterns of deviation over the 21st century, while another model exhibits an unusually large variation among its scenario-dependent deviations.
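The additive ANOVA decomposition described above can be sketched for a single lead time with a small, invented model-by-scenario array. Internal variability is omitted in this sketch, since estimating it would require initial-condition replicates within each model-scenario cell; the remaining terms are the scenario effect, the scenario-independent model deviations, and the model-scenario interaction.

```python
import numpy as np

# Hypothetical projected warming (K) at one lead time:
# rows = models, columns = scenarios (all values invented).
X = np.array([[1.5, 2.0, 3.0],
              [1.8, 2.4, 3.5],
              [1.2, 1.7, 2.6],
              [1.6, 2.2, 3.2]])
M, S = X.shape

grand = X.mean()
model_dev = X.mean(axis=1) - grand   # scenario-independent model deviations
scen_dev = X.mean(axis=0) - grand    # scenario effects
interact = X - grand - model_dev[:, None] - scen_dev[None, :]

# The total sum of squares decomposes additively: the cross terms
# vanish, so the three components account for all the variation.
ss_total = ((X - grand) ** 2).sum()
ss_model = S * (model_dev ** 2).sum()
ss_scen = M * (scen_dev ** 2).sum()
ss_inter = (interact ** 2).sum()
print(ss_total, ss_model + ss_scen + ss_inter)
```

The rows of `interact` are the per-model scenario-dependent deviations whose divergence the abstract highlights.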
Abstract:
New ways of combining observations with numerical models are discussed in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further ensuring 'almost equal weight' we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
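For context, the standard (bootstrap) particle filter that the abstract contrasts with can be sketched on a one-dimensional toy model: propagate, weight by the observation likelihood, resample. The proposal-density approach described above would instead steer each run toward the observations. The dynamics, noise levels and particle count below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(x):
    # Toy nonlinear stochastic dynamics standing in for a large
    # geophysical model (invented for this sketch).
    return 0.5 * x + 25.0 * x / (1.0 + x ** 2) + rng.normal(0.0, 1.0, x.shape)

n_particles, n_steps, obs_sigma = 500, 20, 1.0
truth = 0.1
particles = rng.normal(0.0, 1.0, n_particles)
for _ in range(n_steps):
    truth = model_step(np.array([truth]))[0]
    obs = truth + rng.normal(0.0, obs_sigma)
    # Pure Monte Carlo step: propagate every particle blindly, then
    # weight by the Gaussian observation likelihood and resample.
    particles = model_step(particles)
    logw = -0.5 * ((obs - particles) / obs_sigma) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]
print(abs(particles.mean() - truth))
```

In one dimension this works well, but in large-dimensional systems almost all blindly propagated particles receive negligible weight, which is the inefficiency the steered, equal-weight scheme is designed to avoid.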
Abstract:
For many climate forcings the dominant response of the extratropical circulation is a latitudinal shift of the tropospheric mid-latitude jets. The magnitude of this response appears to depend on climatological jet latitude in general circulation models (GCMs): lower-latitude jets exhibit a larger shift. The reason for this latitude dependence is investigated for a particular forcing, heating of the equatorial stratosphere, which shifts the jet poleward. Spin-up ensembles with a simplified GCM are used to examine the evolution of the response for five different jet structures. These differ in the latitude of the eddy-driven jet, but have similar sub-tropical zonal winds. It is found that lower-latitude jets exhibit a larger response due to stronger tropospheric eddy-mean flow feedbacks. A dominant feedback responsible for enhancing the poleward shift is an enhanced equatorward refraction of the eddies, resulting in an increased momentum flux, poleward of the low-latitude critical line. The sensitivity of feedback strength to jet structure is associated with differences in the coherence of this behaviour across the spectrum of eddy phase speeds. In the configurations used, the higher-latitude jets have a wider range of critical latitude locations. This reduces the coherence of the momentum flux anomalies associated with different phase speeds, with low phase speeds opposing the effect of high phase speeds. This suggests that, for a given sub-tropical zonal wind strength, the latitude of the eddy-driven jet affects the feedback through its influence on the width of the region of westerly winds and the range of critical latitudes on the equatorward flank of the jet.
Abstract:
Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
Abstract:
Under increasing greenhouse gas concentrations, ocean heat uptake moderates the rate of climate change, and thermal expansion makes a substantial contribution to sea level rise. In this paper we quantify the differences in projections among atmosphere-ocean general circulation models of the Coupled Model Intercomparison Project in terms of transient climate response, ocean heat uptake efficiency and expansion efficiency of heat. The CMIP3 and CMIP5 ensembles have statistically indistinguishable distributions in these parameters. The ocean heat uptake efficiency varies by a factor of two across the models, explaining about 50% of the spread in ocean heat uptake in CMIP5 models with CO2 increasing at 1%/year. It correlates with the ocean global-mean vertical profiles both of temperature and of temperature change, and comparison with observations suggests the models may overestimate ocean heat uptake and underestimate surface warming, because their stratification is too weak. The models agree on the location of maxima of shallow ocean heat uptake (above 700 m) in the Southern Ocean and the North Atlantic, and on deep ocean heat uptake (below 2000 m) in areas of the Southern Ocean, in some places amounting to 40% of the top-to-bottom integral in the CMIP3 SRES A1B scenario. The Southern Ocean dominates global ocean heat uptake; consequently the eddy-induced thickness diffusivity parameter, which is particularly influential in the Southern Ocean, correlates with the ocean heat uptake efficiency. The thermal expansion produced by ocean heat uptake is 0.12 m YJ⁻¹, with an uncertainty of about 10% (1 YJ = 10²⁴ J).
Abstract:
A set of random variables is exchangeable if its joint distribution function is invariant under permutation of the arguments. The concept of exchangeability is discussed, with a view towards potential application in evaluating ensemble forecasts. It is argued that the paradigm of ensembles being an independent draw from an underlying distribution function is probably too narrow; allowing ensemble members to be merely exchangeable might be a more versatile model. The question is discussed whether established methods of ensemble evaluation need alteration under this model, with reliability being given particular attention. It turns out that the standard methodology of rank histograms can still be applied. As a first application of the exchangeability concept, it is shown that the method of minimum spanning trees to evaluate the reliability of high-dimensional ensembles is mathematically sound.
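The rank histogram result can be illustrated numerically: if the K ensemble members and the verification are exchangeable, the verification's rank among them is uniform over the K + 1 possible positions, so the histogram is flat. The sketch below uses i.i.d. draws (a special case of exchangeability) with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def rank_histogram(ensembles, verifications):
    """Count of each verification rank within its K-member ensemble;
    for a reliable ensemble, ranks 0..K occur with equal frequency."""
    ranks = (ensembles < verifications[:, None]).sum(axis=1)
    K = ensembles.shape[1]
    return np.bincount(ranks, minlength=K + 1)

# Synthetic reliable ensemble: members and verification are i.i.d.
# draws from the same (continuous) distribution, hence exchangeable.
n_cases, K = 20000, 9
ens = rng.normal(0.0, 1.0, (n_cases, K))
verif = rng.normal(0.0, 1.0, n_cases)
counts = rank_histogram(ens, verif)
print(counts / n_cases)  # roughly flat at 1/(K + 1) = 0.1
```

Replacing the i.i.d. draws with, say, a common random shift applied to each case's members and verification would preserve exchangeability, and the histogram would remain flat.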
Abstract:
In a recent paper, Mason et al. propose a reliability test of ensemble forecasts for a continuous, scalar verification. As noted in the paper, the test relies on a very specific interpretation of ensembles, namely, that the ensemble members represent quantiles of some underlying distribution. This quantile interpretation is not the only interpretation of ensembles, another popular one being the Monte Carlo interpretation. Mason et al. suggest estimating the quantiles in this situation; however, this approach is fundamentally flawed. Errors in the quantile estimates are not independent of the exceedance events, and consequently the conditional exceedance probabilities (CEP) curves are not constant, which is a fundamental assumption of the test. The test would reject reliable forecasts with probability much higher than the test size.
Abstract:
An ensemble forecast is a collection of runs of a numerical dynamical model, initialized with perturbed initial conditions. In modern weather prediction for example, ensembles are used to retrieve probabilistic information about future weather conditions. In this contribution, we are concerned with ensemble forecasts of a scalar quantity (say, the temperature at a specific location). We consider the event that the verification is smaller than the smallest, or larger than the largest ensemble member. We call these events outliers. If a K-member ensemble accurately reflected the variability of the verification, outliers should occur with a base rate of 2/(K + 1). In operational forecast ensembles though, this frequency is often found to be higher. We study the predictability of outliers and find that, exploiting information available from the ensemble, forecast probabilities for outlier events can be calculated which are more skilful than the unconditional base rate. We prove this analytically for statistically consistent forecast ensembles. Further, the analytical results are compared to the predictability of outliers in an operational forecast ensemble by means of model output statistics. We find the analytical and empirical results to agree both qualitatively and quantitatively.
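The 2/(K + 1) base rate follows from statistical consistency: the verification is then equally likely to occupy any of the K + 1 rank positions among the ensemble members, and exactly two of these (below the minimum, above the maximum) are outlier events. A quick numerical check with a synthetic, statistically consistent ensemble (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Members and verification drawn from the same distribution, so the
# ensemble is statistically consistent and the outlier base rate
# should be 2/(K + 1): 1/(K + 1) each for falling below the minimum
# and above the maximum.
n_cases, K = 100000, 9
ens = rng.normal(0.0, 1.0, (n_cases, K))
verif = rng.normal(0.0, 1.0, n_cases)
outlier = (verif < ens.min(axis=1)) | (verif > ens.max(axis=1))
rate = outlier.mean()
print(rate, 2.0 / (K + 1))  # empirical rate close to the base rate 0.2
```

Conditioning on ensemble quantities (e.g. spread) can beat this unconditional base rate as a forecast of outlier events, which is the skill the abstract establishes.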