152 results for Mass-consistent model
Abstract:
The Wetland and Wetland CH4 Intercomparison of Models Project (WETCHIMP) was created to evaluate our present ability to simulate large-scale wetland characteristics and corresponding methane (CH4) emissions. A multi-model comparison is essential to evaluate the key uncertainties in the mechanisms and parameters leading to methane emissions. Ten modelling groups joined WETCHIMP to run eight global and two regional models with a common experimental protocol using the same climate and atmospheric carbon dioxide (CO2) forcing datasets. We reported the main conclusions from the intercomparison effort in a companion paper (Melton et al., 2013). Here we provide technical details for the six experiments, which included an equilibrium, a transient, and an optimized run plus three sensitivity experiments (temperature, precipitation, and atmospheric CO2 concentration). The diversity of approaches used by the models is summarized through a series of conceptual figures, and is used to evaluate the wide range of wetland extent and CH4 fluxes predicted by the models in the equilibrium run. We discuss relationships among the various approaches and patterns in consistencies of these model predictions. Within this group of models, there are three broad classes of methods used to estimate wetland extent: prescribed based on wetland distribution maps, prognostic relationships between hydrological states based on satellite observations, and explicit hydrological mass balances. A larger variety of approaches was used to estimate the net CH4 fluxes from wetland systems. Even though modelling of wetland extent and CH4 emissions has progressed significantly over recent decades, large uncertainties still exist when estimating CH4 emissions: there is little consensus on model structure or complexity due to knowledge gaps, different aims of the models, and the range of temporal and spatial resolutions of the models.
Abstract:
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally-mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of the aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and aerosol ground-based observations. Because of differences in wet deposition rates, GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosols, whereas black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC.
This difference is compensated by changes in first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. Those findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
Abstract:
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds since most GCMs do not include ice nucleation effects, and none of the models explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (τa) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between τa and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect, mainly in terms of an autoconversion parameterisation, has to be revisited in the GCMs. A positive relationship between total cloud fraction (fcld) and τa, as found in the satellite data, is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld–τa relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between τa and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR–τa relationship show a strong positive correlation between τa and fcld.
The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of τa, and by parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing, inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships, yields a global annual mean value of −1.5 ± 0.5 W m−2. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic τa and satellite-retrieved Nd–τa regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of −0.4 ± 0.2 W m−2 and a cloudy-sky (aerosol indirect effect) estimate of −0.7 ± 0.5 W m−2, with a total estimate of −1.2 ± 0.4 W m−2.
Abstract:
Aerosol sources, transport, and sinks are simulated, and aerosol direct radiative effects are assessed over the Indian Ocean for the Indian Ocean Experiment (INDOEX) Intensive Field Phase during January to March 1999 using the Laboratoire de Météorologie Dynamique (LMDZT) general circulation model. The model reproduces the latitudinal gradient in aerosol mass concentration and optical depth (AOD). The model-predicted aerosol concentrations and AODs agree reasonably well with measurements but are systematically underestimated during high-pollution episodes, especially in the month of March. The largest aerosol loads are found over southwestern China, the Bay of Bengal, and the Indian subcontinent. Aerosol emissions from the Indian subcontinent are transported into the Indian Ocean through either the west coast or the east coast of India. Over the INDOEX region, carbonaceous aerosols are the largest contributor to the estimated AOD, followed by sulfate, dust, sea salt, and fly ash. During the northeast winter monsoon, natural and anthropogenic aerosols reduce the solar flux reaching the surface by 25 W m−2, leading to 10–15% less insolation at the surface. A doubling of black carbon (BC) emissions from Asia results in an aerosol single-scattering albedo that is much smaller than in situ measurements, reflecting the fact that BC emissions are not underestimated in proportion to other (mostly scattering) aerosol types. South Asia is the dominant contributor to sulfate aerosols over the INDOEX region and accounts for 60–70% of the AOD by sulfate. It is also an important but not the dominant contributor to carbonaceous aerosols over the INDOEX region, with a contribution of less than 40% to the AOD by this aerosol species. The presence of elevated plumes brings significant quantities of aerosols to the Indian Ocean that are generated over Africa and Southeast and East Asia.
Abstract:
During the Northern Hemisphere summer, absorbed solar radiation melts snow and the upper surface of Arctic sea ice to generate meltwater that accumulates in ponds. The melt ponds reduce the albedo of the sea ice cover during the melting season, with a significant impact on the heat and mass budget of the sea ice and the upper ocean. We have developed a model, designed to be suitable for inclusion into a global circulation model (GCM), which simulates the formation and evolution of the melt pond cover. In order to be compatible with existing GCM sea ice models, our melt pond model builds upon the existing theory of the evolution of the sea ice thickness distribution. Since this theory does not describe the topography of the ice cover, which is crucial to determining the location, extent, and depth of individual ponds, we have needed to introduce some assumptions. We describe our model, present calculations and a sensitivity analysis, and discuss our results.
Abstract:
This paper presents single-column model (SCM) simulations of a tropical squall-line case observed during the Coupled Ocean-Atmosphere Response Experiment of the Tropical Ocean/Global Atmosphere Programme. This case-study was part of an international model intercomparison project organized by Working Group 4 ‘Precipitating Convective Cloud Systems’ of the GEWEX (Global Energy and Water-cycle Experiment) Cloud System Study. Eight SCM groups using different deep-convection parametrizations participated in this project. The SCMs were forced by temperature and moisture tendencies that had been computed from a reference cloud-resolving model (CRM) simulation using open boundary conditions. The comparison of the SCM results with the reference CRM simulation provided insight into the ability of current convection and cloud schemes to represent organized convection. The CRM results enabled a detailed evaluation of the SCMs in terms of the thermodynamic structure and the convective mass flux of the system, the latter being closely related to the surface convective precipitation. It is shown that the SCMs could reproduce reasonably well the time evolution of the surface convective and stratiform precipitation, the convective mass flux, and the thermodynamic structure of the squall-line system. The thermodynamic structure simulated by the SCMs depended on how the models partitioned the precipitation between convective and stratiform. However, structural differences persisted in the thermodynamic profiles simulated by the SCMs and the CRM. These differences could be attributed to the fact that the total mass flux used to compute the SCM forcing differed from the convective mass flux. The SCMs could not adequately represent these organized mesoscale circulations and the microphysical/radiative forcing associated with the stratiform region. This issue is generally known as the ‘scale-interaction’ problem that can only be properly addressed in fully three-dimensional simulations.
Sensitivity simulations run by several groups showed that the time evolution of the surface convective precipitation was considerably smoothed when the convective closure was based on convective available potential energy instead of moisture convergence. Finally, additional SCM simulations without using a convection parametrization indicated that the impact of a convection parametrization in forced SCM runs was more visible in the moisture profiles than in the temperature profiles because convective transport was particularly important in the moisture budget.
Abstract:
During April and May 2010 the ash cloud from the eruption of the Icelandic volcano Eyjafjallajökull caused widespread disruption to aviation over northern Europe. The location and impact of the eruption meant that a wealth of observations of the ash cloud was obtained, which can be used to assess modelling of the long-range transport of ash in the troposphere. The UK FAAM (Facility for Airborne Atmospheric Measurements) BAe-146-301 research aircraft overflew the ash cloud on a number of days during May. The aircraft carries a downward-looking lidar which detected the ash layer through the backscatter of the laser light. In this study ash concentrations derived from the lidar are compared with simulations of the ash cloud made with NAME (Numerical Atmospheric-dispersion Modelling Environment), a general-purpose atmospheric transport and dispersion model. The simulated ash clouds are compared to the lidar data to determine how well NAME simulates the horizontal and vertical structure of the ash clouds. Comparison between the ash concentrations derived from the lidar and those from NAME is used to estimate the fraction of ash emitted in the eruption that is transported over long distances compared to the total emission of tephra. In making these comparisons, possible position errors in the simulated ash clouds are identified and accounted for. The ash layers seen by the lidar considered in this study were thin, with typical depths of 550–750 m. The vertical structure of the ash cloud simulated by NAME was generally consistent with the observed ash layers, although the layers in the simulated ash clouds that are identified with observed ash layers are about twice the depth of the observed layers. The structure of the simulated ash clouds was sensitive to the profile of ash emissions that was assumed.
In terms of horizontal and vertical structure, the best results were obtained by assuming that the emission occurred at the top of the eruption plume, consistent with the observed structure of eruption plumes. However, early in the period, when the intensity of the eruption was low, assuming that the emission of ash was uniform with height gave better guidance on the horizontal and vertical structure of the ash cloud. Comparison of the lidar concentrations with those from NAME shows that 2–5% of the total mass erupted by the volcano remained in the ash cloud over the United Kingdom.
Abstract:
A mathematical model incorporating many of the important processes at work in the crystallization of emulsions is presented. The model describes nucleation within the discontinuous domain of an emulsion, precipitation in the continuous domain, transport of monomers between the two domains, and formation and subsequent growth of crystals in both domains. The model is formulated as an autonomous system of nonlinear, coupled ordinary differential equations. The description of nucleation and precipitation is based upon the Becker–Döring equations of classical nucleation theory. A particular feature of the model is that the number of particles of all species present is explicitly conserved; this differs from work that employs Arrhenius descriptions of nucleation rate. Since the model includes many physical effects, it is analyzed in stages so that the role of each process may be understood. When precipitation occurs in the continuous domain, the concentration of monomers falls below the equilibrium concentration at the surface of the drops of the discontinuous domain. This leads to a transport of monomers from the drops into the continuous domain that are then incorporated into crystals and nuclei. Since the formation of crystals is irreversible and their subsequent growth inevitable, crystals forming in the continuous domain effectively act as a sink for monomers “sucking” monomers from the drops. In this case, numerical calculations are presented which are consistent with experimental observations. In the case in which critical crystal formation does not occur, the stationary solution is found and a linear stability analysis is performed. Bifurcation diagrams describing the loci of stationary solutions, which may be multiple, are numerically calculated.
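For reference, the Becker–Döring equations on which the nucleation and precipitation description is based take the standard form below, where \(c_r\) denotes the concentration of clusters of \(r\) monomers and \(a_r\), \(b_r\) are aggregation and fragmentation coefficients (this is the textbook form of the theory, not notation quoted from the paper):

```latex
\frac{\mathrm{d}c_r}{\mathrm{d}t} = J_{r-1} - J_r \quad (r \ge 2),
\qquad
J_r = a_r c_1 c_r - b_{r+1} c_{r+1},
\qquad
\frac{\mathrm{d}c_1}{\mathrm{d}t} = -J_1 - \sum_{r \ge 1} J_r .
```

The monomer equation is what makes the total particle number \(\sum_{r \ge 1} r\,c_r\) explicitly conserved, the property the abstract contrasts with Arrhenius-type descriptions of nucleation rate.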
Abstract:
A one-dimensional, thermodynamic, and radiative model of a melt pond on sea ice is presented that explicitly treats the melt pond as an extra phase. A two-stream radiation model, which allows albedo to be determined from bulk optical properties, and a parameterization of the summertime evolution of optical properties are used. Heat transport within the sea ice is described using an equation for heat transport in a mushy layer of a binary alloy (salt water). The model is tested by comparison of numerical simulations with SHEBA data and previous modeling. The presence of melt ponds on the sea ice surface is demonstrated to have a significant effect on the heat and mass balance. Sensitivity tests indicate that the maximum melt pond depth is highly sensitive to optical parameters and drainage. KEYWORDS: sea ice, melt pond, albedo, Arctic Ocean, radiation model, thermodynamic
Abstract:
As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F(2)) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
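The binary pattern measure discussed here is, in its standard form, F = A / (A + B + C), counting hits, false alarms, and misses over wet/dry cells. A minimal sketch (the function name and array representation are illustrative assumptions, not code from the paper):

```python
import numpy as np

def critical_success_index(model_wet, observed_wet):
    """Binary flood-extent fit measure F = A / (A + B + C), where
    A = cells wet in both model and observation (hits),
    B = cells wet in model only (false alarms),
    C = cells wet in observation only (misses)."""
    model_wet = np.asarray(model_wet, dtype=bool)
    observed_wet = np.asarray(observed_wet, dtype=bool)
    a = np.sum(model_wet & observed_wet)
    b = np.sum(model_wet & ~observed_wet)
    c = np.sum(~model_wet & observed_wet)
    return a / (a + b + c)

# Overprediction bias: with 4 observed wet cells, overpredicting by
# 2 cells scores 4/6 ≈ 0.67, while underpredicting by 2 scores 2/4 = 0.5,
# even though both models misclassify the same number of cells.
obs = [1, 1, 1, 1, 0, 0]
print(critical_success_index([1, 1, 1, 1, 1, 1], obs))  # overprediction
print(critical_success_index([1, 1, 0, 0, 0, 0], obs))  # underprediction
```

The asymmetry of the two printed scores is exactly the overprediction bias the paper describes.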
Abstract:
In its default configuration, the Hadley Centre climate model (GA2.0) simulates roughly one-half the observed level of Madden–Julian oscillation activity, with MJO events often lasting fewer than seven days. We use initialised, climate-resolution hindcasts to examine the sensitivity of the GA2.0 MJO to a range of changes in sub-grid parameterisations and model configurations. All 22 changes are tested for two cases during the Years of Tropical Convection. Improved skill comes only from (a) disabling vertical momentum transport by convection and (b) increasing mixing entrainment and detrainment for deep and mid-level convection. These changes are subsequently tested in a further 14 hindcast cases; only (b) consistently improves MJO skill, from 12 to 22 days. In a 20-year integration, (b) produces near-observed levels of MJO activity, but propagation through the Maritime Continent remains weak. With default settings, GA2.0 produces precipitation too readily, even in anomalously dry columns. Implementing (b) decreases the efficiency of convection, permitting instability to build during the suppressed MJO phase and producing a more favourable environment for the active phase. The distribution of daily rain rates is more consistent with satellite data; default entrainment produces 6–12 mm/day too frequently. These results are consistent with recent studies showing that greater sensitivity of convection to moisture improves the representation of the MJO.
Abstract:
Using an asymptotic expansion, a balance model is derived for the shallow-water equations (SWE) on the equatorial beta-plane that is valid for planetary-scale equatorial dynamics and includes Kelvin waves. In contrast to many theories of tropical dynamics, neither a strict balance between diabatic heating and vertical motion nor a small Froude number is required. Instead, the expansion is based on the smallness of the ratio of meridional to zonal length scales, which can also be interpreted as a separation in time scale. The leading-order model is characterized by a semigeostrophic balance between the zonal wind and meridional pressure gradient, while the meridional wind v vanishes; the model is thus asymptotically nondivergent, and the nonzero correction to v can be found at the next order. Importantly for applications, the diagnostic balance relations are linear for winds when inferring the wind field from mass observations and the winds can be diagnosed without direct observations of diabatic heating. The accuracy of the model is investigated through a set of numerical examples. These examples show that the diagnostic balance relations can remain valid even when the dynamics do not, and the balance dynamics can capture the slow behavior of a rapidly varying solution.
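In dimensional shallow-water notation (assumed here for illustration, not quoted from the paper), the leading-order semigeostrophic balance described above can be sketched as:

```latex
\beta y \, u = -g \, \frac{\partial h}{\partial y},
\qquad
v = 0 \quad \text{at leading order},
```

where \(u\), \(v\) are the zonal and meridional winds, \(h\) is the free-surface height, and \(\beta y\) is the equatorial beta-plane Coriolis parameter. Because this relation is linear in \(u\) for given \(h\), the wind field can be diagnosed directly from mass observations, as the abstract notes.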
Abstract:
Diabatic processes can alter Rossby wave structure; consequently, errors arising from model processes propagate downstream. However, the chaotic spread of forecasts from initial condition uncertainty makes it difficult to trace root-mean-square forecast errors back to model errors. Here diagnostics unaffected by phase errors are used, enabling investigation of systematic errors in Rossby waves in winter-season forecasts from three operational centers. Tropopause sharpness adjacent to ridges decreases with forecast lead time. It depends strongly on model resolution, even though models are examined on a common grid. Rossby wave amplitude reduces with lead time up to about five days, consistent with under-representation of diabatic modification and transport of air from the lower troposphere into upper-tropospheric ridges, and with too-weak humidity gradients across the tropopause. However, amplitude also decreases when resolution is decreased. Further work is necessary to isolate the contribution from errors in the representation of diabatic processes.
Abstract:
We study the solutions of the Smoluchowski coagulation equation with a regularization term which removes clusters from the system when their mass exceeds a specified cutoff size, M. We focus primarily on collision kernels which would exhibit an instantaneous gelation transition in the absence of any regularization. Numerical simulations demonstrate that for such kernels with monodisperse initial data, the regularized gelation time decreases as M increases, consistent with the expectation that the gelation time is zero in the unregularized system. This decrease appears to be a logarithmically slow function of M, indicating that instantaneously gelling kernels may still be justifiable as physical models despite the fact that they are highly singular in the absence of a cutoff. We also study the case when a source of monomers is introduced in the regularized system. In this case a stationary state is reached. We present a complete analytic description of this regularized stationary state for the model kernel K(m1, m2) = max{m1, m2}^ν, which gels instantaneously when M → ∞ if ν > 1. The stationary cluster size distribution decays as a stretched exponential for small cluster sizes and crosses over to a power-law decay with exponent ν for large cluster sizes. The total particle density in the stationary state slowly vanishes as [(ν − 1) log M]^{−1/2} when M → ∞. The approach to the stationary state is nontrivial: oscillations about the stationary state emerge from the interplay between the monomer injection and the cutoff M, and these decay very slowly when M is large. A quantitative analysis of these oscillations is provided for the addition model, which describes the situation in which clusters can only grow by absorbing monomers.
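As an illustration of the regularized system just described, here is a minimal explicit-Euler sketch of the truncated discrete Smoluchowski equation with the kernel K(i, j) = max{i, j}^ν; the function name, time-stepping scheme, and removal-by-truncation implementation are illustrative assumptions, not the paper's numerics:

```python
import numpy as np

def smoluchowski_step(c, nu, dt):
    """One explicit-Euler step of the discrete Smoluchowski equation
    with kernel K(i, j) = max(i, j)**nu, truncated at cutoff M = len(c).
    c[k-1] is the concentration of clusters of mass k; coagulation
    products with mass exceeding M are removed from the system."""
    M = len(c)
    masses = np.arange(1, M + 1)
    K = np.maximum.outer(masses, masses).astype(float) ** nu
    gain = np.zeros(M)
    for i in range(1, M + 1):
        for j in range(1, M - i + 1):  # only products i + j <= M are kept
            gain[i + j - 1] += 0.5 * K[i - 1, j - 1] * c[i - 1] * c[j - 1]
    loss = c * (K @ c)  # each cluster's total collision rate (incl. removal)
    return c + dt * (gain - loss)

# Monodisperse initial data: unit density of monomers, cutoff M = 10.
c = np.zeros(10)
c[0] = 1.0
c = smoluchowski_step(c, nu=1.5, dt=0.01)
```

While no coagulation product exceeds the cutoff, the step conserves total mass Σ k·c_k; mass is lost only once clusters are pushed past M, which is the removal mechanism studied in the paper.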
Abstract:
The ability of the HiGEM climate model to represent high-impact, regional, precipitation events is investigated in two ways. The first focusses on a case study of extreme regional accumulation of precipitation during the passage of a summer extra-tropical cyclone across southern England on 20 July 2007 that resulted in a national flooding emergency. The climate model is compared with a global Numerical Weather Prediction (NWP) model and higher resolution, nested limited area models. While the climate model does not simulate the timing and location of the cyclone and associated precipitation as accurately as the NWP simulations, the total accumulated precipitation in all models is similar to the rain gauge estimate across England and Wales. The regional accumulation over the event is insensitive to horizontal resolution for grid spacings ranging from 90 km to 4 km. Secondly, the free-running climate model reproduces the statistical distribution of daily precipitation accumulations observed in the England-Wales precipitation record. The model distribution diverges increasingly from the record for longer accumulation periods, with a consistent under-representation of more intense multi-day accumulations. This may indicate a lack of low-frequency variability associated with weather regime persistence. Despite this, the overall seasonal and annual precipitation totals from the model are still comparable to those from ERA-Interim.