48 results for "Average model"
Abstract:
The impact of climate change on wind power generation potentials over Europe is investigated by considering ensemble projections from two regional climate models (RCMs) driven by a global climate model (GCM). Wind energy density and its interannual variability are estimated based on hourly near-surface wind speeds. Additionally, the possible impact of climatic changes on the energy output of a sample 2.5-MW turbine is discussed. GCM-driven RCM simulations capture the behavior and variability of current wind energy indices, even though some differences exist when compared with reanalysis-driven RCM simulations. Toward the end of the twenty-first century, projections show significant changes in annual-average energy density across Europe, with substantially stronger changes in individual seasons. The emergence time of these changes varies from region to region and season to season, but some long-term trends are already statistically significant by the middle of the twenty-first century. Over northern and central Europe, the wind energy potential is projected to increase, particularly in winter and autumn. In contrast, energy potential over southern Europe may experience a decrease in all seasons except for the Aegean Sea. Changes in wind energy output follow the same patterns but are of smaller magnitude. The GCM/RCM model chains project a significant intensification of both interannual and intra-annual variability of energy density over parts of western and central Europe, thus posing new challenges for a reliable pan-European energy supply in future decades.
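For reference, the wind power (energy) density behind indices like these is conventionally computed from near-surface wind speed v as 0.5·ρ·v³. The short Python sketch below shows how annual energy density and its interannual variability might be estimated from hourly wind speeds; the constant air density, the synthetic Weibull-distributed speeds and all names are illustrative assumptions, not details taken from the study.

    import numpy as np

    RHO = 1.225  # assumed constant air density (kg m^-3)

    def wind_energy_density(hourly_speeds):
        """Mean wind power density (W m^-2) from hourly wind speeds (m s^-1)."""
        v = np.asarray(hourly_speeds, dtype=float)
        return 0.5 * RHO * np.mean(v ** 3)

    # Illustrative use: one synthetic year of hourly speeds per year, then
    # interannual variability as the standard deviation of the annual values.
    rng = np.random.default_rng(1)
    years = {yr: rng.weibull(2.0, 8760) * 8.0 for yr in range(2071, 2081)}
    annual = {yr: wind_energy_density(v) for yr, v in years.items()}
    interannual_std = np.std(list(annual.values()))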
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored for either regional or global simulations. The models also varied in their methods for calculating wetland size and location, with some models simulating wetland area prognostically, while others relied on remotely sensed inundation datasets or used an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C, globally spatially uniform), the models on average decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9 %, globally spatially uniform), with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate for evaluating model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally-mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of the aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and aerosol ground-based observations. Because of differences in wet deposition rates, the GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosol, whereas the black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC. This difference is compensated by changes in first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
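Read as ratios of global-mean magnitudes (an assumption about what the quoted percentages refer to, not a statement from the abstract), these numbers would imply CLASSIC forcings of roughly −0.49/1.72 ≈ −0.28 W m−2 for the direct effect and −1.17/0.80 ≈ −1.46 W m−2 for the first indirect effect.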
Abstract:
The global cycle of multicomponent aerosols including sulfate, black carbon (BC), organic matter (OM), mineral dust, and sea salt is simulated in the Laboratoire de Météorologie Dynamique general circulation model (LMDZT GCM). The seasonal open biomass burning emissions for simulation years 2000–2001 are scaled from climatological emissions in proportion to satellite-detected fire counts. The emissions of dust and sea salt are parameterized online in the model. The comparison of model-predicted monthly mean aerosol optical depth (AOD) at 500 nm with Aerosol Robotic Network (AERONET) shows good agreement, with a correlation coefficient of 0.57 (N = 1324) and 76% of data points falling within a factor of 2 deviation. The correlation coefficient for daily mean values drops to 0.49 (N = 23,680). The absorption AOD (τa at 670 nm) estimated in the model is poorly correlated with measurements (r = 0.27, N = 349) and is biased low by 24% as compared to AERONET. The model reproduces the prominent features in the monthly mean AOD retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS). The agreement between the model and MODIS is better over source and outflow regions (i.e., within a factor of 2). There is an underestimation by the model of up to a factor of 3 to 5 over some remote oceans. The largest contribution to global annual average AOD (0.12 at 550 nm) is from sulfate (0.043 or 35%), followed by sea salt (0.027 or 23%), dust (0.026 or 22%), OM (0.021 or 17%), and BC (0.004 or 3%). The atmospheric aerosol absorption is predominantly contributed by BC and is about 3% of the total AOD. The globally and annually averaged shortwave (SW) direct aerosol radiative perturbation (DARP) in clear-sky conditions is −2.17 W m−2 and is about a factor of 2 larger than in all-sky conditions (−1.04 W m−2). The net DARP (SW + LW) by all aerosols is −1.46 and −0.59 W m−2 in clear- and all-sky conditions, respectively. Use of realistic dust optical properties that are less absorbing in the SW results in negative forcing over the dust-dominated regions.
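As a consistency check on the quoted AOD budget (an addition, not part of the original abstract): the component AODs sum to 0.043 + 0.027 + 0.026 + 0.021 + 0.004 = 0.121 ≈ 0.12, and each quoted percentage is simply a component divided by this total (e.g. 0.043/0.121 ≈ 36%, matching the quoted 35% to within rounding).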
Abstract:
A process-based fire regime model (SPITFIRE) has been developed, coupled with ecosystem dynamics in the LPJ Dynamic Global Vegetation Model, and used to explore fire regimes and the current impact of fire on the terrestrial carbon cycle and associated emissions of trace atmospheric constituents. The model estimates an average release of 2.24 Pg C yr−1 as CO2 from biomass burning during the 1980s and 1990s. Comparison with observed active fire counts shows that the model reproduces where fire occurs and can mimic broad geographic patterns in the peak fire season, although the predicted peak is 1–2 months late in some regions. Modelled fire season length is generally overestimated by about one month, but shows a realistic pattern of differences among biomes. Comparisons with remotely sensed burnt-area products indicate that the model reproduces broad geographic patterns of annual fractional burnt area over most regions, including the boreal forest, although interannual variability in the boreal zone is underestimated.
Abstract:
Climate change due to anthropogenic greenhouse gas emissions is expected to increase the frequency and intensity of precipitation events, which is likely to affect the probability of flooding into the future. In this paper we use river flow simulations from nine global hydrology and land surface models to explore uncertainties in the potential impacts of climate change on flood hazard at the global scale. As an indicator of flood hazard we looked at changes in the 30-y return level of 5-d average peak flows under representative concentration pathway RCP8.5 at the end of this century. Not everywhere does climate change result in an increase in flood hazard: decreases in the magnitude and frequency of the 30-y return level of river flow occur at roughly one-third (20–45%) of the global land grid points, particularly in areas where the hydrograph is dominated by the snowmelt flood peak in spring. In most model experiments, however, an increase in flooding frequency was found in more than half of the grid points. The current 30-y flood peak is projected to occur more frequently than once in 5 y across 5–30% of land grid points. The large-scale patterns of change are remarkably consistent among impact models and even the driving climate models, but at the local scale and in individual river basins there can be disagreement even on the sign of change, indicating large modeling uncertainty that needs to be taken into account in local adaptation studies.
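The flood-hazard indicator used here (the 30-y return level of 5-d average peak flows) can be illustrated with a minimal Python sketch; the GEV fit to annual maxima and the synthetic gamma-distributed flows are illustrative assumptions, not the authors' exact procedure.

    import numpy as np
    import pandas as pd
    from scipy.stats import genextreme

    def return_level_30y(daily_flow: pd.Series) -> float:
        """30-y return level of the 5-d average peak flow (illustrative sketch)."""
        five_day = daily_flow.rolling(5).mean()               # 5-d average flow
        annual_max = five_day.groupby(five_day.index.year).max().dropna()
        c, loc, scale = genextreme.fit(annual_max)            # GEV fit to annual maxima
        return genextreme.isf(1.0 / 30.0, c, loc, scale)      # exceeded on average once in 30 y

    # Illustrative use with synthetic daily flows:
    idx = pd.date_range("1971-01-01", "2000-12-31", freq="D")
    flow = pd.Series(np.random.gamma(2.0, 50.0, len(idx)), index=idx)
    print(return_level_30y(flow))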
Abstract:
Spatially dense observations of gust speeds are necessary for various applications, but their availability is limited in space and time. This work presents an approach to help overcome this problem. The main objective is the generation of synthetic wind gust velocities. With this aim, theoretical wind and gust distributions are estimated from 10 yr of hourly observations collected at 123 synoptic weather stations provided by the German Weather Service. As pre-processing, an exposure correction is applied to measurements of the mean wind velocity to reduce the influence of local urban and topographic effects. The wind gust model is built as a transfer function between distribution parameters of wind and gust velocities. The aim of this procedure is to estimate the gust parameters at stations where only wind speed data are available. These parameters can be used to generate synthetic gusts, which can improve the accuracy of return periods at test sites lacking observations. The second objective is to determine return periods much longer than the nominal length of the original time series by considering extreme value statistics. Estimates for both local maximum return periods and average return periods for single historical events are provided. The comparison of maximum and average return periods shows that even storms with short average return periods may lead to local wind gusts with return periods of several decades. Despite uncertainties caused by the short length of the observational records, the method leads to consistent results, enabling a wide range of possible applications.
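A minimal Python sketch of the idea of a transfer function between wind and gust distribution parameters is given below; the two-parameter Weibull form and the linear parameter mapping are assumptions chosen for illustration, not the authors' exact model.

    import numpy as np
    from scipy.stats import weibull_min

    def fit_weibull(speeds):
        """Shape and scale of a two-parameter Weibull fitted to observed speeds."""
        shape, _, scale = weibull_min.fit(speeds, floc=0.0)
        return shape, scale

    def fit_transfer(wind_params, gust_params):
        """Linear maps from wind to gust parameters, trained on stations with both."""
        wind_params, gust_params = np.asarray(wind_params), np.asarray(gust_params)
        return [np.polyfit(wind_params[:, i], gust_params[:, i], 1) for i in range(2)]

    def synthetic_gusts(wind_speeds, transfer, n=10000):
        """Sample synthetic gusts at a station where only wind speeds are observed."""
        k_w, a_w = fit_weibull(wind_speeds)
        k_g, a_g = np.polyval(transfer[0], k_w), np.polyval(transfer[1], a_w)
        return weibull_min.rvs(k_g, loc=0.0, scale=a_g, size=n)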
Abstract:
Traditionally, the cusp has been described in terms of a time-stationary feature of the magnetosphere which allows access of magnetosheath-like plasma to low altitudes. Statistical surveys of data from low-altitude spacecraft have shown the average characteristics and position of the cusp. Recently, however, it has been suggested that the ionospheric footprint of flux transfer events (FTEs) may be identified as variations of the “cusp” on timescales of a few minutes. In this model, the cusp can vary in form between a steady-state feature in one limit and a series of discrete ionospheric FTE signatures in the other limit. If this time-dependent cusp scenario is correct, then the signatures of the transient reconnection events must be able, on average, to reproduce the statistical cusp occurrence previously determined from the satellite observations. In this paper, we predict the precipitation signatures which are associated with transient magnetopause reconnection, following recent observations of the dependence of dayside ionospheric convection on the orientation of the IMF. We then employ a simple model of the longitudinal motion of FTE signatures to show how such events can easily reproduce the local time distribution of cusp occurrence probabilities, as observed by low-altitude satellites. This is true even in the limit where the cusp is a series of discrete events. Furthermore, we investigate the existence of double cusp patches predicted by the simple model and show how these events may be identified in the data.
Abstract:
The implications of polar cap expansions, contractions and movements for empirical models of high-latitude plasma convection are examined. Some of these models have been generated by directly averaging flow measurements from large numbers of satellite passes or radar scans; others have employed more complex means to combine data taken at different times into large-scale patterns of flow. In all cases, the models have implicitly adopted the assumption that the polar cap is in steady state: they have all characterized the ionospheric flow in terms of the prevailing conditions (e.g. the interplanetary magnetic field and/or some index of terrestrial magnetic activity) without allowance for their history. On long enough time scales, the polar cap is indeed in steady state but on time scales shorter than a few hours it is not and can oscillate in size and position. As a result, the method used to combine the data can influence the nature of the convection reversal boundary and the transpolar voltage in the derived model. This paper discusses a variety of effects due to time-dependence in relation to some ionospheric convection models which are widely applied. The effects are shown to be varied and to depend upon the procedure adopted to compile the model.
Abstract:
The transport of ionospheric ions from a source in the polar cleft ionosphere through the polar magnetosphere is investigated using a two-dimensional, kinetic, trajectory-based code. The transport model includes the effects of gravitation, longitudinal magnetic gradient force, convection electric fields, and parallel electric fields. Individual ion trajectories as well as distribution functions and resulting bulk parameters of density, parallel average energy, and parallel flux for a presumed cleft ionosphere source distribution are presented for various conditions to illustrate parametrically the dependences on source energies, convection electric field strengths, ion masses, and parallel electric field strengths. The essential features of the model are consistent with the concept of a cleft-based ion fountain supplying ionospheric ions to the polar magnetosphere, and the resulting plasma distributions and parameters are in general agreement with recent low-energy ion measurements from the DE 1 satellite.
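For reference, the bulk parameters quoted (density, parallel flux, parallel average energy) are the usual velocity-space moments of the ion distribution function f; these are standard definitions, not equations reproduced from the paper:

    n = \int f \, \mathrm{d}^3 v , \qquad
    F_\parallel = \int v_\parallel \, f \, \mathrm{d}^3 v , \qquad
    \langle E_\parallel \rangle = \frac{1}{n} \int \tfrac{1}{2} m v_\parallel^{2} \, f \, \mathrm{d}^3 v .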
Abstract:
Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) (and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP)) there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences of up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has the most impact on the metrics for NOx, as the net values for these quantities are the residual of the sum of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3 times larger using the ensemble-mean fields than using the individual models to calculate the RF. The source of this effect is largely due to the construction of the input ozone fields, which overestimates the true ensemble spread. Hence, while averages of multi-model fields are normally appropriate for calculating mean RF, GWP and GTP, they are not a reliable method for calculating the uncertainty in these metrics, and in general overestimate the uncertainty.
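A toy numerical illustration (not the HTAP methodology itself) of why the order of averaging barely affects the mean RF for a quasi-linear forcing operator, yet strongly inflates the spread when it is estimated from mean ± standard-deviation input fields; all numbers and the linear "forcing" function are invented for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_grid = 10, 500
    base = rng.normal(5.0, 1.0, n_grid)
    fields = base + rng.normal(0.0, 1.0, (n_models, n_grid))  # toy per-model ozone-change fields

    def rf(field):
        # Toy linear radiative-forcing operator: global mean times a constant efficiency.
        return 0.04 * field.mean()

    per_model = np.array([rf(f) for f in fields])
    mean_field = fields.mean(axis=0)
    grid_std = fields.std(axis=0)

    print(per_model.mean(), rf(mean_field))             # nearly identical means
    print(per_model.std())                              # true ensemble spread (small)
    print(rf(mean_field + grid_std) - rf(mean_field))   # spread from the mean+std field (much larger)

Because every grid point is perturbed coherently by one local standard deviation, the constructed field is more extreme than any individual ensemble member, which is the overestimation of spread described above.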
Abstract:
Climate controls fire regimes through its influence on the amount and types of fuel present and their dryness. CO2 concentration constrains primary production by limiting photosynthetic activity in plants. However, although fuel accumulation depends on biomass production, and hence on CO2 concentration, the quantitative relationship between atmospheric CO2 concentration and biomass burning is not well understood. Here a fire-enabled dynamic global vegetation model (the Land surface Processes and eXchanges model, LPX) is used to attribute glacial–interglacial changes in biomass burning to the increase in CO2 (which would be expected to increase primary production, and therefore fuel loads, even in the absence of climate change) versus the effects of climate change. Four general circulation models provided last glacial maximum (LGM) climate anomalies – that is, differences from the pre-industrial (PI) control climate – from the Palaeoclimate Modelling Intercomparison Project Phase 2, allowing the construction of four scenarios for LGM climate. Modelled carbon fluxes from biomass burning were corrected for the model's observed prediction biases in contemporary regional average values for biomes. With LGM climate and low CO2 (185 ppm) effects included, the modelled global flux at the LGM was in the range of 1.0–1.4 Pg C year−1, about a third less than that modelled for PI time. LGM climate with pre-industrial CO2 (280 ppm) yielded unrealistic results, with global biomass burning fluxes similar to or even greater than in the pre-industrial climate. It is inferred that a substantial part of the increase in biomass burning after the LGM must be attributed to the effect of increasing CO2 concentration on primary production and fuel load. Today, by analogy, both rising CO2 and global warming must be considered as risk factors for increasing biomass burning. Both effects need to be included in models to project future fire risks.
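One plausible reading of the bias correction described (a multiplicative, per-biome factor derived from contemporary regional averages) is sketched below in Python; the functional form and all names are assumptions, since the abstract does not specify them.

    def biome_bias_factors(obs_mean_by_biome, model_mean_by_biome):
        """Multiplicative correction factors from contemporary regional averages."""
        return {b: obs_mean_by_biome[b] / model_mean_by_biome[b]
                for b in obs_mean_by_biome}

    def correct_fluxes(flux_by_cell, biome_by_cell, factors):
        """Scale modelled burning fluxes cell by cell with the factor of each cell's biome."""
        return {cell: flux * factors[biome_by_cell[cell]]
                for cell, flux in flux_by_cell.items()}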
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve them. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must be at least no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
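The two correction steps described (local averaging along a waterline section, then constraining DEM heights between adjacent waterlines) can be sketched in a few lines of Python; the window length and the simple clipping between two scalar levels are illustrative simplifications, not the paper's exact algorithm.

    import numpy as np

    def smooth_waterline_heights(heights, half_window=5):
        """Replace each waterline pixel height with the mean over a short section,
        treating nearby waterline pixels as samples with a common population mean."""
        h = np.asarray(heights, dtype=float)
        out = np.empty_like(h)
        for i in range(len(h)):
            lo, hi = max(0, i - half_window), min(len(h), i + half_window + 1)
            out[i] = h[lo:hi].mean()
        return out

    def constrain_between_waterlines(dem_heights, lower_level, upper_level):
        """Heights lying between a lower and a higher corrected waterline should fall
        between the two corrected water levels (simplified here to scalar levels)."""
        return np.clip(np.asarray(dem_heights, dtype=float), lower_level, upper_level)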
Abstract:
The concentrations of sulfate, black carbon (BC) and other aerosols in the Arctic are characterized by high values in late winter and spring (so-called Arctic Haze) and low values in summer. Models have long been struggling to capture this seasonality and especially the high concentrations associated with Arctic Haze. In this study, we evaluate sulfate and BC concentrations from eleven different models driven with the same emission inventory against a comprehensive pan-Arctic measurement data set over a time period of 2 years (2008–2009). The set of models consisted of one Lagrangian particle dispersion model, four chemistry transport models (CTMs), one atmospheric chemistry-weather forecast model and five chemistry climate models (CCMs), of which two were nudged to meteorological analyses and three were running freely. The measurement data set consisted of surface measurements of equivalent BC (eBC) from five stations (Alert, Barrow, Pallas, Tiksi and Zeppelin), elemental carbon (EC) from Station Nord and Alert and aircraft measurements of refractory BC (rBC) from six different campaigns. We find that the models generally captured the measured eBC or rBC and sulfate concentrations quite well, compared to previous comparisons. However, the aerosol seasonality at the surface is still too weak in most models. Concentrations of eBC and sulfate averaged over three surface sites are underestimated in winter/spring in all but one model (model means for January–March underestimated by 59 and 37 % for BC and sulfate, respectively), whereas concentrations in summer are overestimated in the model mean (by 88 and 44 % for July–September), but with overestimates as well as underestimates present in individual models. The most pronounced eBC underestimates, not included in the above multi-site average, are found for the station Tiksi in Siberia where the measured annual mean eBC concentration is 3 times higher than the average annual mean for all other stations. This suggests an underestimate of BC sources in Russia in the emission inventory used. Based on the campaign data, biomass burning was identified as another cause of the modeling problems. For sulfate, very large differences were found in the model ensemble, with an apparent anti-correlation between modeled surface concentrations and total atmospheric columns. There is a strong correlation between observed sulfate and eBC concentrations with consistent sulfate/eBC slopes found for all Arctic stations, indicating that the sources contributing to sulfate and BC are similar throughout the Arctic and that the aerosols are internally mixed and undergo similar removal. However, only three models reproduced this finding, whereas sulfate and BC are weakly correlated in the other models. Overall, no class of models (e.g., CTMs, CCMs) performed better than the others and differences are independent of model resolution.
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little-observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble is produced with a nudged version of the IPSLCM5A model and compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. in the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely the subduction region of the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and can therefore not be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14 °C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
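The diagnostic highlighted above, the average temperature above the 14 °C isotherm, can be sketched as follows in Python for a single profile; the linear interpolation of the isotherm depth and the unweighted vertical mean are illustrative assumptions, not the paper's exact computation.

    import numpy as np

    def avg_temp_above_isotherm(temp, depth, t_iso=14.0):
        """Average temperature above the depth of the t_iso isotherm (illustrative)."""
        temp, depth = np.asarray(temp, float), np.asarray(depth, float)
        below = np.where(temp < t_iso)[0]
        if below.size == 0:
            return temp.mean()              # whole profile warmer than t_iso
        j = below[0]
        if j == 0:
            return float("nan")             # surface already colder than t_iso
        # Linear interpolation of the isotherm depth between levels j-1 and j.
        z_iso = np.interp(t_iso, temp[[j, j - 1]], depth[[j, j - 1]])
        return temp[depth <= z_iso].mean()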