135 results for errors-in-variables model


Relevance: 100.00%

Publisher:

Abstract:

The persistence and decay of springtime total ozone anomalies over the entire extratropics (midlatitudes plus polar regions) are analysed using results from the Canadian Middle Atmosphere Model (CMAM), a comprehensive chemistry-climate model. As in the observations, interannual anomalies established through winter and spring persist with very high correlation coefficients (above 0.8) through summer until early autumn, while decaying in amplitude as a result of photochemical relaxation in the quiescent summertime stratosphere. The persistence and decay of the ozone anomalies in CMAM agree extremely well with observations, even in the southern hemisphere when the model is run without heterogeneous chemistry (in which case there is no ozone hole and the seasonal cycle of ozone is quite different from observations). However, in a version of CMAM with strong vertical diffusion, the northern hemisphere anomalies decay far too rapidly compared to observations. This shows that ozone anomaly persistence and decay do not depend on how the springtime anomalies are created or on their magnitude, but reflect the transport and photochemical decay in the model. The seasonality of the long-term trends over the entire extratropics is found to be explained by the persistence of the interannual anomalies, as in the observations, demonstrating that summertime ozone trends reflect winter/spring trends rather than any change in summertime ozone chemistry. However, this mechanism fails in the northern hemisphere midlatitudes because of the relatively large impact, compared to observations, of the CMAM polar anomalies. As in the southern hemisphere, the influence of polar ozone loss in CMAM increases the midlatitude summertime loss, leading to a relatively weak seasonal dependence of ozone loss in the northern hemisphere compared to the observations.

Relevance: 100.00%

Publisher:

Abstract:

Simulations of the stratosphere from thirteen coupled chemistry-climate models (CCMs) are evaluated to provide guidance for the interpretation of ozone predictions made by the same CCMs. The focus of the evaluation is on how well the fields and processes that are important for determining the ozone distribution are represented in the simulations of the recent past. The core period of the evaluation is from 1980 to 1999, but long-term trends are compared for an extended period (1960–2004). Comparisons of polar high-latitude temperatures show that most CCMs have only small biases in the Northern Hemisphere in winter and spring, but still have cold biases in the Southern Hemisphere spring below 10 hPa. Most CCMs display the correct stratospheric response of polar temperatures to wave forcing in the Northern Hemisphere, but not in the Southern Hemisphere. Global long-term stratospheric temperature trends are in reasonable agreement with satellite and radiosonde observations. Comparisons of simulations of methane, mean age of air, and propagation of the annual cycle in water vapor show a wide spread in the results, indicating differences in transport. However, for around half the models there is reasonable agreement with observations. In these models the mean age of air and the water vapor tape recorder signal are generally better than reported in previous model intercomparisons. Comparisons of the water vapor and inorganic chlorine (Cly) fields also show a large intermodel spread. Differences in tropical water vapor mixing ratios in the lower stratosphere are primarily related to biases in the simulated tropical tropopause temperatures and not to transport. The spread in Cly, which is largest in the polar lower stratosphere, appears to be primarily related to transport differences. In general, the amplitude and phase of the annual cycle in total ozone are well simulated, apart from the southern high latitudes. Most CCMs show reasonable agreement with observed total ozone trends and variability on a global scale, but show a greater spread in the ozone trends in polar regions in spring, especially in the Arctic. In conclusion, despite the wide range of skills in representing the different processes assessed here, there is sufficient agreement between the majority of the CCMs and the observations that some confidence can be placed in their predictions.

Relevance: 100.00%

Publisher:

Abstract:

Correlations between various chemical species simulated by the Canadian Middle Atmosphere Model, a general circulation model with fully interactive chemistry, are considered in order to investigate the general conditions under which compact correlations can be expected to form. At the same time, the analysis serves to validate the model. The results are compared to previous work on this subject, both from theoretical studies and from atmospheric measurements made from space and from aircraft. The results highlight the importance of having a data set with good spatial coverage when working with correlations and provide a background against which the compactness of correlations obtained from atmospheric measurements can be confirmed. It is shown that for long-lived species, distinct correlations are found in the model in the tropics, the extratropics, and the Antarctic winter vortex. Under these conditions, sparse sampling such as arises from occultation instruments is nevertheless suitable to define a chemical correlation within each region even from a single day of measurements, provided a sufficient range of mixing ratio values is sampled. In practice, this means a large vertical extent, though the requirements are less stringent at more poleward latitudes.

Relevance: 100.00%

Publisher:

Abstract:

Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several 1000-year-long, idealized 2× and 4×CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last millennium simulations are used to assess the models' historical carbon–climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.

Relevance: 100.00%

Publisher:

Abstract:

A statistical model is derived relating the diurnal variation of sea surface temperature (SST) to the net surface heat flux and surface wind speed from a numerical weather prediction (NWP) model. The model is derived using fluxes and winds from the European Centre for Medium-Range Weather Forecasts (ECMWF) NWP model and SSTs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). In the model, diurnal warming has a linear dependence on the net surface heat flux integrated since (approximately) dawn and an inverse quadratic dependence on the maximum of the surface wind speed in the same period. The model coefficients are found by matching, for a given integrated heat flux, the frequency distributions of the maximum wind speed and the observed warming. Diurnal cooling, where it occurs, is modelled as proportional to the integrated heat flux divided by the heat capacity of the seasonal mixed layer. The model reproduces the statistics (mean, standard deviation, and 95th percentile) of the diurnal variation of SST seen by SEVIRI and reproduces the geographical pattern of mean warming seen by the Advanced Microwave Scanning Radiometer (AMSR-E). We use the functional dependencies in the statistical model to test the behaviour of two physical models of diurnal warming that display contrasting systematic errors.
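
A minimal sketch of the functional form described above, in notation that is not taken from the paper (a and u0 are placeholder coefficients; the paper fits its own coefficients by matching frequency distributions of wind speed and warming):

    \Delta\mathrm{SST}_{\mathrm{warm}} \;\approx\; \frac{a\,\hat{Q}}{\left(u_{\max}+u_{0}\right)^{2}},
    \qquad
    \Delta\mathrm{SST}_{\mathrm{cool}} \;\approx\; \frac{\hat{Q}}{\rho\,c_{p}\,h},

where \hat{Q} is the net surface heat flux integrated since (approximately) dawn, u_{\max} the maximum surface wind speed over the same period, h the seasonal mixed-layer depth, \rho c_{p} the volumetric heat capacity of seawater, and u_{0} a regularizing offset introduced here only to keep the inverse-quadratic form finite at low wind speed.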

Relevance: 100.00%

Publisher:

Abstract:

The observation-error covariance matrix used in data assimilation contains contributions from instrument errors, representativity errors and errors introduced by the approximated observation operator. Forward model errors arise when the observation operator does not correctly model the observations or when the observations resolve spatial scales that the model cannot. Previous work to estimate the observation-error covariance matrix for particular observing instruments has shown that it contains significant correlations. In particular, correlations for humidity data are more significant than those for temperature. However, it is not known what proportion of these correlations can be attributed to representativity errors. In this article we apply an existing method for calculating representativity error, previously applied to an idealised system, to NWP data. We calculate horizontal errors of representativity for temperature and humidity using data from the Met Office high-resolution UK variable-resolution model. Our results show that errors of representativity are correlated and are more significant for specific humidity than for temperature. We also find that representativity error varies with height. This suggests that the assimilation may be improved if these errors are explicitly accounted for in the data assimilation scheme. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
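
Schematically, the error budget described above can be written as (notation mine, not from the article):

    \mathbf{y} - H(\mathbf{x}^{t}) \;=\; \boldsymbol{\varepsilon}_{o} \;=\; \boldsymbol{\varepsilon}_{I} + \boldsymbol{\varepsilon}_{H},
    \qquad
    \mathbf{R} \;=\; \mathbb{E}\!\left[\boldsymbol{\varepsilon}_{o}\,\boldsymbol{\varepsilon}_{o}^{\mathrm{T}}\right],

where \mathbf{y} is the observation vector, H the observation operator, \mathbf{x}^{t} the true state, \boldsymbol{\varepsilon}_{I} the instrument error, and \boldsymbol{\varepsilon}_{H} the forward-model error, which includes the representativity contribution whose horizontal correlations are estimated in the article.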

Relevance: 100.00%

Publisher:

Abstract:

A number of urban land-surface models have been developed in recent years to meet the growing requirement to represent urban surface–atmosphere interactions in weather and climate prediction. These models vary considerably in their complexity and the processes that they represent. Although the models have been evaluated, the observational datasets have typically been of short duration and so are not suitable for assessing performance over the seasonal cycle. The First International Urban Land-Surface Model Comparison used an observational dataset that spanned a period greater than a year, which enabled an analysis over the seasonal cycle, whilst the variety of models that took part in the comparison allowed the analysis to cover the full range of model complexity. The results show that, in general, urban models do capture the seasonal cycle for each of the surface fluxes, but have larger errors in the summer months than in the winter. The net all-wave radiation has the smallest errors at all times of the year but with a negative bias. The latent heat flux and the net storage heat flux are also underestimated, whereas the sensible heat flux generally has a positive bias throughout the seasonal cycle. A representation of vegetation is a necessary, but not sufficient, condition for modelling the latent heat flux and associated sensible heat flux at all times of the year. Models that include a temporal variation in anthropogenic heat flux show some increased skill in the sensible heat flux at night during the winter, although their daytime values are consistently overestimated at all times of the year. Models that use the net all-wave radiation to determine the net storage heat flux have the best agreement with observed values of this flux during the daytime in summer, but perform worse during the winter months. The latter could result from a bias towards summer periods in the observational datasets used to derive the relations with net all-wave radiation. Apart from these models, all of the other model categories considered in the analysis produce a mean net storage heat flux close to zero throughout the seasonal cycle, which is not seen in the observations. Models with a simple treatment of the physical processes generally perform at least as well as models with greater complexity.

Relevance: 100.00%

Publisher:

Abstract:

The global cycle of multicomponent aerosols including sulfate, black carbon (BC), organic matter (OM), mineral dust, and sea salt is simulated in the Laboratoire de Météorologie Dynamique general circulation model (LMDZT GCM). The seasonal open biomass burning emissions for the simulation years 2000–2001 are scaled from climatological emissions in proportion to satellite-detected fire counts. The emissions of dust and sea salt are parameterized online in the model. Comparison of model-predicted monthly mean aerosol optical depth (AOD) at 500 nm with the Aerosol Robotic Network (AERONET) shows good agreement, with a correlation coefficient of 0.57 (N = 1324) and 76% of data points falling within a factor of 2. The correlation coefficient for daily mean values drops to 0.49 (N = 23,680). The absorption AOD (τa at 670 nm) estimated in the model is poorly correlated with measurements (r = 0.27, N = 349) and is biased low by 24% compared to AERONET. The model reproduces the prominent features in the monthly mean AOD retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS). The agreement between the model and MODIS is better over source and outflow regions (i.e., within a factor of 2), but the model underestimates AOD by up to a factor of 3 to 5 over some remote oceans. The largest contribution to the global annual average AOD (0.12 at 550 nm) is from sulfate (0.043 or 35%), followed by sea salt (0.027 or 23%), dust (0.026 or 22%), OM (0.021 or 17%), and BC (0.004 or 3%). Atmospheric aerosol absorption is contributed predominantly by BC and amounts to about 3% of the total AOD. The globally and annually averaged shortwave (SW) direct aerosol radiative perturbation (DARP) in clear-sky conditions is −2.17 W m⁻², about a factor of 2 larger than in all-sky conditions (−1.04 W m⁻²). The net DARP (SW + LW) by all aerosols is −1.46 and −0.59 W m⁻² in clear- and all-sky conditions, respectively. Use of more realistic, less SW-absorbing optical properties for dust results in negative forcing over the dust-dominated regions.

Relevance: 100.00%

Publisher:

Abstract:

Decadal hindcast simulations of Arctic Ocean sea ice thickness made by a modern dynamic-thermodynamic sea ice model and forced independently by both the ERA-40 and NCEP/NCAR reanalysis data sets are compared for the first time. Using comprehensive data sets of observations made between 1979 and 2001 of sea ice thickness, draft, extent, and speeds, we find that it is possible to tune model parameters to give satisfactory agreement with observed data, thereby highlighting the skill of modern sea ice models, though the parameter values chosen differ according to the model forcing used. We find a consistent decreasing trend in Arctic Ocean sea ice thickness since 1979, and a steady decline in the Eastern Arctic Ocean over the full 40-year period of comparison that accelerated after 1980, but the predictions of Western Arctic Ocean sea ice thickness between 1962 and 1980 differ substantially. The origins of the differing thickness trends and variability were traced not to parameter differences but to differences in the forcing fields and in how they are applied. It is argued that uncertainty, differences and errors in sea ice model forcing sets complicate the use of models to determine the exact causes of the recently reported decline in Arctic sea ice thickness, but help in the determination of robust features if the models are tuned appropriately against observations.

Relevance: 100.00%

Publisher:

Abstract:

Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out by using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
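
The augmented draw mentioned above can be illustrated with a minimal sketch, assuming the standard Geweke-style scale-mixture conditional for the per-observation variance scales (hypothetical function and variable names; not necessarily the authors' exact sampler):

    import numpy as np

    def draw_variance_scales(resid, sigma2, nu, rng):
        """One Gibbs sub-step for per-observation variance scales lambda_i,
        where errors are modelled as N(0, sigma2 * lambda_i) and the prior on
        1/lambda_i is Gamma(nu/2, rate=nu/2).  The conditional posterior is
        1/lambda_i | e_i ~ Gamma((nu + 1)/2, rate=(nu + e_i**2 / sigma2)/2).
        """
        shape = 0.5 * (nu + 1.0)
        rate = 0.5 * (nu + resid**2 / sigma2)
        return 1.0 / rng.gamma(shape, 1.0 / rate)  # numpy's gamma takes scale = 1/rate

    # Example: rng = np.random.default_rng(0); lam = draw_variance_scales(e, s2, 5.0, rng)

In the full sampler this sub-step would sit alongside the usual conditional draws for the structural- and instrument-equation coefficients, with the residuals of each equation rescaled by their own variance scales.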

Relevance: 100.00%

Publisher:

Abstract:

Radiative forcing and climate sensitivity have been widely used as concepts to understand climate change. This work performs climate change experiments with an intermediate general circulation model (IGCM) to examine the robustness of the radiative forcing concept for carbon dioxide and solar constant changes. This IGCM has been specifically developed as a computationally fast model, but one that allows an interaction between physical processes and large-scale dynamics, so that many long integrations can be performed relatively quickly. It employs a fast and accurate radiative transfer scheme, as well as simple convection and surface schemes and a slab ocean, to model the effects of climate change mechanisms on atmospheric temperatures and dynamics with a reasonable degree of complexity. The climatology of the IGCM run at T21 resolution with 22 levels is compared to European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data. The response of the model to changes in carbon dioxide and solar output is examined when these changes are applied globally and when constrained geographically (e.g. over land only). The CO2 experiments have a roughly 17% higher climate sensitivity than the solar experiments. It is also found that a forcing at high latitudes causes a 40% higher climate sensitivity than a forcing applied only at low latitudes. It is found that, despite differences in the model feedbacks, climate sensitivity is roughly constant over a range of distributions of CO2 and solar forcings. Hence, in the IGCM at least, the radiative forcing concept is capable of predicting global surface temperature changes to within 30% for the perturbations described here. It is concluded that radiative forcing remains a useful tool for assessing the natural and anthropogenic impact of climate change mechanisms on surface temperature.
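
In the standard radiative-forcing framework being tested here (notation mine, not taken from the paper), the global-mean surface temperature response is related to the forcing by a single climate sensitivity parameter:

    \Delta T_{s} \;\approx\; \lambda\, F,

where F is the global-mean radiative forcing in W m⁻² and \lambda the climate sensitivity parameter in K (W m⁻²)⁻¹; the 17% and 40% differences quoted above are differences in \lambda between forcing agents and forcing locations, and the 30% figure is the accuracy with which a single \lambda predicts the surface temperature change across the forcing distributions considered.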

Relevance: 100.00%

Publisher:

Abstract:

Decadal climate predictions exhibit large biases, which are often subtracted and forgotten. However, understanding the causes of bias is essential to guide efforts to improve prediction systems, and may offer additional benefits. Here the origins of biases in decadal predictions are investigated, including whether analysis of these biases might provide useful information. The focus is especially on the lead-time-dependent bias tendency. A “toy” model of a prediction system is initially developed and used to show that there are several distinct contributions to bias tendency. Contributions from sampling of internal variability and a start-time-dependent forcing bias can be estimated and removed to obtain a much improved estimate of the true bias tendency, which can provide information about errors in the underlying model and/or errors in the specification of forcings. It is argued that the true bias tendency, not the total bias tendency, should be used to adjust decadal forecasts. The methods developed are applied to decadal hindcasts of global mean temperature made using the Hadley Centre Coupled Model, version 3 (HadCM3), climate model, and it is found that this model exhibits a small positive bias tendency in the ensemble mean. When considering different model versions, it is shown that the true bias tendency is very highly correlated with both the transient climate response (TCR) and non–greenhouse gas forcing trends, and can therefore be used to obtain observationally constrained estimates of these relevant physical quantities.
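
As a concrete reading of the lead-time-dependent bias discussed above (notation mine), for hindcasts launched from N start dates s and verified at lead time \tau:

    b(\tau) \;=\; \frac{1}{N}\sum_{s=1}^{N}\left[\hat{T}(s,\tau) - T_{\mathrm{obs}}(s+\tau)\right],

and the bias tendency is \mathrm{d}b/\mathrm{d}\tau. The toy model separates this total tendency into a contribution from sampling of internal variability, a start-time-dependent forcing-bias contribution, and the remaining "true" tendency attributable to errors in the underlying model and in the specified forcings.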

Relevance: 100.00%

Publisher:

Abstract:

The time discretization in weather and climate models introduces truncation errors that limit the accuracy of the simulations. Recent work has yielded a method for increasing the amplitude accuracy of leapfrog integrations from first order to fifth order. This improvement is achieved by replacing the Robert–Asselin filter with the RAW filter and using a linear combination of the unfiltered and filtered states to compute the tendency term. The purpose of the present paper is to apply the composite-tendency RAW-filtered leapfrog scheme to semi-implicit integrations. A theoretical analysis shows that the stability and accuracy are unaffected by the introduction of the implicitly treated mode. The scheme is tested in semi-implicit numerical integrations of both a simple nonlinear stiff system and a medium-complexity atmospheric general circulation model, and yields substantial improvements in both cases. We conclude that the composite-tendency RAW-filtered leapfrog scheme is suitable for use in semi-implicit integrations.
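
As a sketch of the time stepping described above (explicit form only; the semi-implicit treatment of the fast mode is omitted), the following fragment uses illustrative filter parameters nu, alpha and blending weight beta rather than the values derived in the paper, and the exact form of the composite tendency may differ from the blend assumed here:

    import numpy as np

    def integrate_raw_composite(f, x0, dt, nsteps, nu=0.2, alpha=0.53, beta=0.4):
        """Leapfrog integration of dx/dt = f(x) with the RAW filter and a
        composite (blended) tendency.  Schematic sketch only."""
        x = np.empty(nsteps + 1)
        x[0] = x0
        x[1] = x[0] + dt * f(x[0])           # forward Euler start-up step

        x_prev_filt = x[0]                   # fully filtered state at level n-1
        x_curr_pre = x[1]                    # level-n state before its RAW displacement
        x_curr_post = x[1]                   # level-n state after its RAW displacement

        for n in range(1, nsteps):
            # Composite tendency: blend of the pre- and post-filter states at level n.
            tend = f((1.0 - beta) * x_curr_post + beta * x_curr_pre)

            # Leapfrog update from the filtered state at level n-1.
            x_next = x_prev_filt + 2.0 * dt * tend

            # RAW filter: the displacement d is shared between levels n and n+1
            # so that the three-time-level mean is approximately conserved.
            d = 0.5 * nu * (x_prev_filt - 2.0 * x_curr_post + x_next)
            x_curr_filt = x_curr_post + alpha * d        # fully filtered level n
            x_next_pre = x_next                          # keep the pre-displacement copy
            x_next_post = x_next + (alpha - 1.0) * d     # level n+1 after its share

            x[n + 1] = x_next_post
            x_prev_filt, x_curr_pre, x_curr_post = x_curr_filt, x_next_pre, x_next_post

        return x

    # Example: damped oscillator dx/dt = -x, compared with the exact solution exp(-t).
    traj = integrate_raw_composite(lambda s: -s, x0=1.0, dt=0.01, nsteps=1000)
    print(traj[-1], np.exp(-10.0))

With beta = 0 and alpha = 1 the loop reduces to the classical Robert–Asselin-filtered leapfrog, which makes the role of the two modifications easy to isolate in tests.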

Relevance: 100.00%

Publisher:

Abstract:

The evidence for anthropogenic climate change continues to strengthen, and concerns about severe weather events are increasing. As a result, scientific interest is rapidly shifting from detection and attribution of global climate change to prediction of its impacts at the regional scale. However, nearly everything we have any confidence in when it comes to climate change is related to global patterns of surface temperature, which are primarily controlled by thermodynamics. In contrast, we have much less confidence in atmospheric circulation aspects of climate change, which are primarily controlled by dynamics and exert a strong control on regional climate. Model projections of circulation-related fields, including precipitation, show a wide range of possible outcomes, even on centennial timescales. Sources of uncertainty include low-frequency chaotic variability and the sensitivity to model error of the circulation response to climate forcing. As the circulation response to external forcing appears to project strongly onto existing patterns of variability, knowledge of errors in the dynamics of variability may provide some constraints on model projections. Nevertheless, higher scientific confidence in circulation-related aspects of climate change will be difficult to obtain. For effective decision-making, it is necessary to move to a more explicitly probabilistic, risk-based approach.

Relevance: 100.00%

Publisher:

Abstract:

Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at the global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20–45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur more frequently than once in 5 y across 5–30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at the local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty that needs to be taken into account in local adaptation studies.
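
As an illustration of the flood-hazard indicator used above, the return level for a single grid point could be estimated from a daily discharge series along the following lines; the abstract does not state the exact extreme-value method, so a GEV fit to annual maxima is assumed, and all function and variable names are mine:

    import numpy as np
    from scipy.stats import genextreme

    def return_level(daily_flow, years, window=5, return_period=30.0):
        """Estimate the `return_period`-year return level of `window`-day
        average peak flow from a daily discharge series (sketch only).
        daily_flow : 1-D array of daily discharge
        years      : array of the same length giving the calendar year of each day
        """
        # Running mean over the averaging window (e.g. 5-d average flow).
        smoothed = np.convolve(daily_flow, np.ones(window) / window, mode="valid")
        yr = np.asarray(years)[window - 1:]      # align years with the window end

        # Annual maxima of the smoothed series.
        annual_max = np.array([smoothed[yr == y].max() for y in np.unique(yr)])

        # Fit a GEV distribution and read off the level exceeded with
        # probability 1/return_period in any given year.
        c, loc, scale = genextreme.fit(annual_max)
        return genextreme.isf(1.0 / return_period, c, loc=loc, scale=scale)

The change in flood hazard would then be the difference between return levels computed for the control and RCP8.5 periods, or equivalently the change in how often the control-period 30-y level is exceeded.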