936 results for Microscopic simulation models
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulated net primary production (NPP) is too high compared with independent measurements. The two DGVMs show little difference for most benchmarks (including the interannual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
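A minimal sketch of the scoring idea described above, not the paper's own code or its exact metric definitions: a normalised-mean-error style score for a simulated field is compared with the score of the observation mean and of a "random" model built by bootstrap resampling of the observations. All data below are synthetic.

import numpy as np

def normalised_mean_error(sim, obs):
    # Mean absolute error scaled by the mean absolute deviation of the observations;
    # a model that always predicts the observation mean scores exactly 1.
    return np.mean(np.abs(sim - obs)) / np.mean(np.abs(obs - obs.mean()))

def null_scores(obs, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    # (i) A model that always predicts the observed mean.
    mean_score = normalised_mean_error(np.full_like(obs, obs.mean()), obs)
    # (ii) "Random" models: resample the observed values with replacement and rescore.
    boot_scores = [normalised_mean_error(rng.choice(obs, size=obs.size), obs)
                   for _ in range(n_boot)]
    return mean_score, float(np.mean(boot_scores))

# Synthetic example: hypothetical annual NPP observations and a biased simulation.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=300.0, size=500)        # e.g. g C m-2 yr-1
sim = 1.2 * obs + rng.normal(0.0, 50.0, size=obs.size)   # model with a high bias
print(normalised_mean_error(sim, obs))                   # model score
print(null_scores(obs))                                  # mean-model and bootstrap scores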
Abstract:
Simulations of ozone loss rates using a three-dimensional chemical transport model and a box model during recent Antarctic and Arctic winters are compared with experimentally derived loss rates. The study focuses on the Antarctic winter of 2003, during which the first Antarctic Match campaign was organized, and on the Arctic winters 1999/2000 and 2002/2003. The maximum ozone loss rates retrieved by the Match technique for the winters and levels studied reached 6 ppbv/sunlit hour, and both types of simulation could generally reproduce the observations at the 2-sigma error bar level. In some cases, for example for the Arctic winter 2002/2003 at the 475 K level, excellent agreement within the 1-sigma standard deviation level was obtained. An overestimation was also found with the box model simulation at some isentropic levels for the Antarctic winter and the Arctic winter 1999/2000, indicating an overestimation of chlorine activation in the model. Loss rates in the Antarctic show signs of saturation in September, which must be taken into account in the comparison. Sensitivity tests were performed with the box model to assess the impact of the kinetic parameters of the ClO-Cl2O2 catalytic cycle and of the total bromine content on the ozone loss rate. These tests resulted in a maximum change in ozone loss rates of 1.2 ppbv/sunlit hour, generally under high solar zenith angle conditions. In some cases, better agreement was achieved with the fastest photolysis of Cl2O2 and an additional source of total inorganic bromine, but at the expense of overestimating the smaller ozone loss rates derived later in the winter.
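For orientation only, a textbook steady-state relation (not the paper's derivation) shows why the Cl2O2 photolysis rate matters most at high solar zenith angles. In the ClO-Cl2O2 cycle, each photolysis of the dimer ultimately destroys two ozone molecules, and the dimer abundance is set by the balance between its formation and its photolytic plus thermal loss:

\[
-\frac{d[\mathrm{O_3}]}{dt} \approx 2\,J_{\mathrm{Cl_2O_2}}\,[\mathrm{Cl_2O_2}],
\qquad
[\mathrm{Cl_2O_2}]_{\mathrm{ss}} \approx
\frac{k_{\mathrm{ClO+ClO}}\,[\mathrm{M}]\,[\mathrm{ClO}]^2}{J_{\mathrm{Cl_2O_2}} + k_{\mathrm{diss}}\,[\mathrm{M}]}.
\]

In full sunlight the loss rate is therefore limited by dimer formation (about \(2\,k_{\mathrm{ClO+ClO}}[\mathrm{M}][\mathrm{ClO}]^2\)), whereas at high solar zenith angles it becomes sensitive to the assumed value of \(J_{\mathrm{Cl_2O_2}}\), consistent with the sensitivity tests described above.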
Abstract:
A great explanatory gap lies between the molecular pharmacology of psychoactive agents and the neurophysiological changes they induce, as recorded by neuroimaging modalities. Causally relating the cellular actions of psychoactive compounds to their influence on population activity is experimentally challenging. Recent developments in the dynamical modelling of neural tissue have attempted to span this explanatory gap between microscopic targets and their macroscopic neurophysiological effects via a range of biologically plausible dynamical models of cortical tissue. Such theoretical models allow exploration of neural dynamics, in particular their modification by drug action. The ability to theoretically bridge scales is due to a biologically plausible averaging of cortical tissue properties. In the resulting macroscopic neural field, individual neurons need not be explicitly represented (as in neural networks). The following paper aims to provide a non-technical introduction to the mean field population modelling of drug action and its recent successes in modelling anaesthesia.
Abstract:
The development of global magnetospheric models, such as the Space Weather Modeling Framework (SWMF), that can accurately reproduce and track space weather processes has high practical utility. We present an interval on 5 June 1998 during which the location of the polar cap boundary, or open-closed field line boundary (OCB), can be determined in the ionosphere using a combination of instruments, covering a period that encompasses a sharp northward-to-southward interplanetary field turning. We present both point and time-varying comparisons of the observed and simulated boundaries in the ionosphere and find that, when using solely the coupled ideal magnetohydrodynamic magnetosphere-ionosphere model, the rate of change of the OCB in response to a southward turning of the interplanetary field is significantly faster than that computed from the observational data. However, when the inner magnetospheric module is incorporated, the modeling framework both qualitatively and often quantitatively reproduces many elements of the studied interval prior to an observed substorm onset. This result demonstrates that the physics of the inner magnetosphere is critical in shaping the boundary between open and closed field lines during periods of southward interplanetary magnetic field (IMF) and provides significant insight into the 3-D time-dependent behavior of the Earth's magnetosphere in response to a northward-southward IMF turning. We assert that, during periods that do not include the tens of minutes surrounding substorm expansion phase onset, the coupled SWMF model may provide a valuable and reliable tool for estimating both the OCB and the magnetic field topology over a wide range of latitudes and local times.
Abstract:
Earth system models are increasing in complexity and incorporating more processes than their predecessors, making them important tools for studying the global carbon cycle. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes, with coupled climate-carbon cycle models that represent land-use change simulating total land carbon stores by 2100 that vary by as much as 600 Pg C given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous model evaluation methodologies. Here we assess the state of the art with respect to evaluation of Earth system models, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeo data and (ii) metrics for evaluation, and discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute towards the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but it is also a challenge: more knowledge about data uncertainties is required to determine robust evaluation methodologies that move the field of ESM evaluation from a "beauty contest" toward the development of useful constraints on model behaviour.
Abstract:
Models of water transfer in the crop-soil system are key components of agro-hydrological models for irrigation, fertilizer and pesticide practices. Many of the hydrological models for water transfer in the crop-soil system are either too approximate, owing to oversimplified algorithms, or employ complex numerical schemes. In this paper we developed a simple and sufficiently accurate algorithm which can be easily adopted in agro-hydrological models for the simulation of water dynamics. We used the dual crop coefficient approach proposed by the FAO to estimate potential evaporation and transpiration, and a dynamic model to calculate the relative root length distribution on a daily basis. Using a small time step of 0.001 d, we implemented separate algorithms for actual evaporation, root water uptake and soil water redistribution by decoupling these processes. The Richards equation describing soil water movement was solved using an integration strategy over the soil layers instead of complex numerical schemes. This drastically simplified the modeling of soil water and led to much shorter computer code. The validity of the proposed model was tested against data from field experiments on two contrasting soils cropped with wheat. Good agreement was achieved between measured and simulated soil water contents at various depths, collected at intervals during crop growth. This indicates that the model is satisfactory for simulating water transfer in the crop-soil system and can therefore reliably be adopted in agro-hydrological models. Finally, we demonstrated how the developed model can be used to study the effect of environmental changes, such as a lowering of the groundwater table caused by the construction of a motorway, on crop transpiration.
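A minimal sketch, under assumed hydraulic functions and parameters (not the authors' code), of the decoupled sub-daily scheme described above: within each small sub-step, actual evaporation, root water uptake and a layer-integrated redistribution step are applied sequentially to a layered soil profile.

import numpy as np

DT = 0.001             # sub-step length (days), as in the abstract above
DZ = 0.1               # layer thickness (m), assumed
THETA_S, THETA_R = 0.45, 0.05   # assumed saturated and residual water contents
K_S = 0.1              # assumed saturated hydraulic conductivity (m/day)

def conductivity(theta):
    # Simple power-law K(theta): a stand-in for the real hydraulic functions.
    s = np.clip((theta - THETA_R) / (THETA_S - THETA_R), 0.0, 1.0)
    return K_S * s**4

def substep(theta, e_pot, t_pot, root_frac):
    # 1) Actual evaporation from the top layer, limited by extractable water.
    e_act = min(e_pot, (theta[0] - THETA_R) * DZ / DT)
    theta[0] -= e_act * DT / DZ
    # 2) Root water uptake, distributed over layers by relative root length.
    uptake = np.minimum(t_pot * root_frac, (theta - THETA_R) * DZ / DT)
    theta -= uptake * DT / DZ
    # 3) Redistribution: gravity-driven Darcy flux between adjacent layers,
    #    a crude layer-integrated stand-in for solving the Richards equation.
    flux = conductivity(0.5 * (theta[:-1] + theta[1:]))   # downward interface flux
    theta[:-1] -= flux * DT / DZ
    theta[1:] += flux * DT / DZ
    np.clip(theta, THETA_R, THETA_S, out=theta)
    return theta

theta = np.full(10, 0.30)                      # initial water-content profile
roots = np.linspace(0.3, 0.0, 10)
roots /= roots.sum()                           # relative root length per layer
for _ in range(int(1.0 / DT)):                 # one day of sub-steps
    theta = substep(theta, e_pot=0.002, t_pot=0.003, root_frac=roots)
print(theta.round(3))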
Abstract:
The Richards equation has been widely used for simulating soil water movement. However, the take-up of agro-hydrological models based on this theory of soil water flow for optimizing irrigation, fertilizer and pesticide practices is still low. This is partly due to the difficulty of obtaining accurate values of soil hydraulic properties at the field scale. Here, we use an inverse technique based on a robust micro genetic algorithm to deduce the effective soil hydraulic properties from measured changes in the distribution of soil water with depth in a fallow field over a long period subject to natural rainfall and evaporation. A new objective function was constructed from the soil water contents at different depths and the soil water content at field capacity. The deduced soil water retention curve was approximately parallel to, but higher than, that derived from published pedo-transfer functions for a given soil pressure head. The water contents calculated from the deduced soil hydraulic properties were in good agreement with the measured values. The reliability of the deduced soil hydraulic properties was tested by reproducing data measured in an independent experiment on the same soil cropped with leek. The calculation of root water uptake took account of both soil water potential and root density distribution. Results show that the predicted soil water contents at various depths agree fairly well with the measurements, indicating that the inverse analysis is an effective and reliable approach for estimating soil hydraulic properties and thus permits accurate simulation of soil water dynamics in both cropped and fallow field soils.
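A toy illustration of this inverse approach, not the authors' implementation: a small genetic algorithm searches van Genuchten retention parameters that minimise the misfit between simulated and measured soil water contents. Here the "forward model" is just the retention curve evaluated at hypothetical pressure heads, whereas the paper uses a full soil-water simulation; all data, bounds and algorithm settings below are assumptions.

import numpy as np

rng = np.random.default_rng(42)

def theta_vg(h, theta_r, theta_s, alpha, n):
    # van Genuchten retention curve; h is the (positive) pressure head in cm.
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h)**n) ** m

# Hypothetical measurements of water content at several pressure heads.
h_obs = np.array([10.0, 50.0, 100.0, 330.0, 1000.0, 15000.0])
theta_obs = np.array([0.42, 0.38, 0.34, 0.29, 0.22, 0.12])

# Assumed search bounds for theta_r, theta_s, alpha (1/cm) and n.
LOW = np.array([0.02, 0.35, 0.001, 1.1])
HIGH = np.array([0.15, 0.50, 0.10, 2.5])

def misfit(p):
    # Sum of squared differences between simulated and measured water contents.
    return float(np.sum((theta_vg(h_obs, *p) - theta_obs) ** 2))

def genetic_search(pop_size=8, generations=200):
    pop = rng.uniform(LOW, HIGH, size=(pop_size, 4))
    for _ in range(generations):
        scores = np.array([misfit(p) for p in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]    # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(4) < 0.5, a, b)        # uniform crossover
            child += rng.normal(0.0, 0.02, 4) * (HIGH - LOW)   # small mutation
            children.append(np.clip(child, LOW, HIGH))
        pop = np.vstack([parents] + children)
    return min(pop, key=misfit)

print(genetic_search())    # best-fitting (theta_r, theta_s, alpha, n)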
Abstract:
We describe the main differences in simulations of stratospheric climate and variability by models within the fifth Coupled Model Intercomparison Project (CMIP5) that have a model top above the stratopause and relatively fine stratospheric vertical resolution (high-top), and those that have a model top below the stratopause (low-top). Although the simulation of mean stratospheric climate by the two model ensembles is similar, the low-top model ensemble has very weak stratospheric variability on daily and interannual time scales. The frequency of major sudden stratospheric warming events is strongly underestimated by the low-top models with less than half the frequency of events observed in the reanalysis data and high-top models. The lack of stratospheric variability in the low-top models affects their stratosphere-troposphere coupling, resulting in short-lived anomalies in the Northern Annular Mode, which do not produce long-lasting tropospheric impacts, as seen in observations. The lack of stratospheric variability, however, does not appear to have any impact on the ability of the low-top models to reproduce past stratospheric temperature trends. We find little improvement in the simulation of decadal variability for the high-top models compared to the low-top, which is likely related to the fact that neither ensemble produces a realistic dynamical response to volcanic eruptions.
Abstract:
The results of coupled high-resolution global models (CGCMs) over South America are discussed. HiGEM1.2 and HadGEM1.2 simulations, with horizontal resolutions of ~90 and ~135 km, respectively, are compared. Precipitation estimates from CMAP (Climate Prediction Center—Merged Analysis of Precipitation), CPC (Climate Prediction Center) and GPCP (Global Precipitation Climatology Project) are used for validation. HiGEM1.2 and HadGEM1.2 simulated seasonal mean precipitation patterns similar to those of CMAP. The positioning and migration of the Intertropical Convergence Zone and of the Pacific and Atlantic subtropical highs are correctly simulated by the models. In HiGEM1.2 and HadGEM1.2, the intensity and location of the South Atlantic Convergence Zone are in agreement with the observed dataset. The simulated annual cycles are in phase with the rainfall estimates for most of the six regions considered. An important result is that HiGEM1.2 and HadGEM1.2 eliminate a common problem of coarse-resolution CGCMs, namely the simulation of a semiannual cycle of precipitation due to the semiannual solar forcing. Comparatively, the use of high resolution in HiGEM1.2 reduces the dry biases in the central part of Brazil during austral winter and spring and, during most of the year, over an oceanic box in eastern Uruguay.
Abstract:
The primary role of land surface models embedded in climate models is to partition the available surface energy into upward radiative, sensible and latent heat fluxes. Partitioning of evapotranspiration (ET) is of fundamental importance: as a major component of the total surface latent heat flux, ET affects the simulated surface water balance and related energy balance, and consequently the feedbacks with the atmosphere. In this context it is also crucial to credibly represent the CO2 exchange between ecosystems and their environment. In this study, JULES, the land surface model used in UK weather and climate models, has been evaluated for temperate Europe. Compared to eddy covariance flux measurements, the CO2 uptake by the ecosystem is underestimated and the ET overestimated. In addition, the contribution to ET from soil and intercepted water evaporation far outweighs the contribution of plant transpiration. To alleviate these biases, adaptations have been implemented in JULES, based on key literature references. These adaptations have improved the simulation of the spatio-temporal variability of the fluxes and the accuracy of the simulated GPP and ET, including its partitioning, and resulted in a shift of the seasonal soil moisture cycle. These adaptations are expected to increase the fidelity of climate simulations over Europe. Finally, the extreme summer of 2003 was used as an evaluation benchmark for the use of the model in climate change studies. The improved model captures the impact of the 2003 drought on the carbon assimilation and the water use efficiency of the plants; it does, however, underestimate the 2003 GPP anomalies. The simulations showed that a reduction of evaporation from the interception and soil reservoirs, albeit not of transpiration, largely explained the good correlation between the carbon and water flux anomalies observed during 2003. This demonstrates the importance of being able to discriminate the response of the individual components of the ET flux to environmental forcing.
Abstract:
The parameterization of surface heat-flux variability in urban areas relies on adequate representation of surface characteristics. Given the horizontal resolutions (e.g. ≈0.1–1 km) currently used in numerical weather prediction (NWP) models, properties of the urban surface (e.g. vegetated/built surfaces, street-canyon geometries) often have large spatial variability. Here, a new approach based on Urban Zones to characterize Energy partitioning (UZE) is tested within a NWP model (Weather Research and Forecasting model; WRF v3.2.1) for Greater London. The urban land-surface scheme is the Noah/Single-Layer Urban Canopy Model (SLUCM). Detailed surface information (horizontal resolution 1 km) in central London shows that the UZE offers better characterization of surface properties and their variability compared to default WRF-SLUCM input parameters. In situ observations of the surface energy fluxes and near-surface meteorological variables are used to select the radiation and turbulence parameterization schemes and to evaluate the land-surface scheme.
Abstract:
This paper presents single-column model (SCM) simulations of a tropical squall-line case observed during the Coupled Ocean-Atmosphere Response Experiment of the Tropical Ocean/Global Atmosphere Programme. This case study was part of an international model intercomparison project organized by Working Group 4 ‘Precipitating Convective Cloud Systems’ of the GEWEX (Global Energy and Water-cycle Experiment) Cloud System Study. Eight SCM groups using different deep-convection parametrizations participated in this project. The SCMs were forced by temperature and moisture tendencies that had been computed from a reference cloud-resolving model (CRM) simulation using open boundary conditions. The comparison of the SCM results with the reference CRM simulation provided insight into the ability of current convection and cloud schemes to represent organized convection. The CRM results enabled a detailed evaluation of the SCMs in terms of the thermodynamic structure and the convective mass flux of the system, the latter being closely related to the surface convective precipitation. It is shown that the SCMs could reproduce reasonably well the time evolution of the surface convective and stratiform precipitation, the convective mass flux, and the thermodynamic structure of the squall-line system. The thermodynamic structure simulated by the SCMs depended on how the models partitioned the precipitation between convective and stratiform. However, structural differences persisted in the thermodynamic profiles simulated by the SCMs and the CRM. These differences could be attributed to the fact that the total mass flux used to compute the SCM forcing differed from the convective mass flux. The SCMs could not adequately represent these organized mesoscale circulations and the microphysical/radiative forcing associated with the stratiform region. This issue is generally known as the ‘scale-interaction’ problem that can only be properly addressed in fully three-dimensional simulations. Sensitivity simulations run by several groups showed that the time evolution of the surface convective precipitation was considerably smoothed when the convective closure was based on convective available potential energy instead of moisture convergence. Finally, additional SCM simulations without using a convection parametrization indicated that the impact of a convection parametrization in forced SCM runs was more visible in the moisture profiles than in the temperature profiles because convective transport was particularly important in the moisture budget.
Abstract:
Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
Abstract:
Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that identifying and correcting short-term climate model errors has the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal-to-decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the long-term pervasive SST errors of the coupled model. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale over which biases appear, regionally restored experiments show their geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in its development. This strategy is applied to four prominent SST biases of the IPSL-CM5A-LR coupled model in the tropical Pacific that are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears within a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step towards moving from the current random, ad hoc approach to a bias-targeted, priority-setting, systematic approach to model development.
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.