39 results for computable general equilibrium modelling
Abstract:
The global vegetation response to climate and atmospheric CO2 changes between the last glacial maximum and recent times is examined using an equilibrium vegetation model (BIOME4), driven by output from 17 climate simulations from the Palaeoclimate Modelling Intercomparison Project. Features common to all of the simulations include expansion of treeless vegetation in high northern latitudes; southward displacement and fragmentation of boreal and temperate forests; and expansion of drought-tolerant biomes in the tropics. These features are broadly consistent with pollen-based reconstructions of vegetation distribution at the last glacial maximum. Glacial vegetation in high latitudes reflects cold and dry conditions due to the low CO2 concentration and the presence of large continental ice sheets. The extent of drought-tolerant vegetation in tropical and subtropical latitudes reflects a generally drier low-latitude climate. Comparisons of the observations with BIOME4 simulations, with and without consideration of the direct physiological effect of CO2 concentration on C3 photosynthesis, suggest an important additional role of low CO2 concentration in restricting the extent of forests, especially in the tropics. Global forest cover was overestimated by all models when climate change alone was used to drive BIOME4, and estimated more accurately when physiological effects of CO2 concentration were included. This result suggests that both CO2 effects and climate effects were important in determining glacial-interglacial changes in vegetation. More realistic simulations of glacial vegetation and climate will need to take into account the feedback effects of these structural and physiological changes on the climate.
Abstract:
Salmonella enterica serovar Typhimurium is an established model organism for Gram-negative, intracellular pathogens. Owing to the rapid spread of antibiotic resistance among this group of pathogens, new approaches to identify suitable target proteins are required. Based on the genome sequence of Salmonella Typhimurium and associated databases, a genome-scale metabolic model was constructed. Output was based on an experimental determination of the biomass of Salmonella when growing in glucose minimal medium. Linear programming was used to simulate variations in energy demand while growing in glucose minimal medium. By grouping reactions with similar flux responses, a sub-network of 34 reactions responding to this variation was identified (the catabolic core). This network was used to identify sets of one and two reactions that, when removed from the genome-scale model, interfered with energy and biomass generation. Eleven such sets were found to be essential for the production of biomass precursors. Experimental investigation of seven of these showed that knock-outs of the associated genes resulted in attenuated growth for four pairs of reactions, while three single reactions were shown to be essential for growth.
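The reaction-deletion analysis described above can be illustrated with a toy flux-balance calculation. The sketch below is not the published Salmonella model: the stoichiometric matrix, reaction names and bounds are invented for illustration, and the optimisation simply maximises a nominal biomass flux with scipy's linear programming solver, re-solving after clamping each candidate knock-out to zero.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical, not the Salmonella model):
#   R1: uptake -> A,  R2: A -> B,  R3: A -> B (alternative route),  R4: B -> biomass
# Rows are metabolites (A, B), columns are reactions; steady state requires S @ v = 0.
S = np.array([[ 1, -1, -1,  0],
              [ 0,  1,  1, -1]], dtype=float)
bounds = [(0, 10), (0, 10), (0, 10), (0, 1000)]   # flux bounds per reaction
biomass = 3                                       # index of the biomass reaction (R4)

def max_biomass(knockouts=()):
    """Maximise biomass flux at steady state with the given reactions forced to zero."""
    b = list(bounds)
    for r in knockouts:
        b[r] = (0, 0)                             # knock-out: clamp the flux to zero
    c = np.zeros(S.shape[1])
    c[biomass] = -1.0                             # linprog minimises, so negate biomass
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=b, method="highs")
    return -res.fun if res.success else 0.0

print(max_biomass())          # wild type: 10.0
print(max_biomass((1,)))      # single knock-out of R2: alternative route keeps growth
print(max_biomass((1, 2)))    # double knock-out of R2 and R3: biomass falls to 0
```

Re-solving the same linear programme for every single and double deletion is the computational core of screens like the one described in the abstract; the toy double knock-out above plays the role of a synthetic lethal pair.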
Abstract:
Insect pollination benefits over three quarters of the world's major crops. There is growing concern that observed declines in pollinators may impact on production and revenues from animal pollinated crops. Knowing the distribution of pollinators is therefore crucial for estimating their availability to pollinate crops; however, in general, we have an incomplete knowledge of where these pollinators occur. We propose a method to predict geographical patterns of pollination service to crops, novel in two elements: the use of pollinator records rather than expert knowledge to predict pollinator occurrence, and the inclusion of the managed pollinator supply. We integrated a maximum entropy species distribution model (SDM) with an existing pollination service model (PSM) to derive the availability of pollinators for crop pollination. We used nation-wide records of wild and managed pollinators (honey bees) as well as agricultural data from Great Britain. We first calibrated the SDM on a representative sample of bee and hoverfly crop pollinator species, evaluating the effects of different settings on model performance and on its capacity to identify the most important predictors. The importance of the different predictors was better resolved by SDM derived from simpler functions, with consistent results for bees and hoverflies. We then used the species distributions from the calibrated model to predict pollination service of wild and managed pollinators, using field beans as a test case. The PSM allowed us to spatially characterize the contribution of wild and managed pollinators and also identify areas potentially vulnerable to low pollination service provision, which can help direct local scale interventions. This approach can be extended to investigate geographical mismatches between crop pollination demand and the availability of pollinators, resulting from environmental change or policy scenarios.
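A minimal sketch of the final step, turning an SDM occurrence map into a pollination-service surface, is given below. It uses a simple exponential distance-decay kernel rather than the actual PSM of the paper, and the grid, foraging ranges and managed-hive layer are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

def service_map(occurrence, foraging_range, cell_size=1.0):
    """Distance-decay sum of pollinator availability around each grid cell
    (a simplified stand-in for the paper's pollination service model)."""
    radius = int(np.ceil(3 * foraging_range / cell_size))
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-np.hypot(yy, xx) * cell_size / foraging_range)
    kernel /= kernel.sum()
    return convolve2d(occurrence, kernel, mode="same", boundary="symm")

# Hypothetical inputs: SDM-derived wild-pollinator occurrence probabilities and a
# managed honey-bee density layer on the same grid.
wild = np.random.default_rng(0).random((50, 50))
hives = np.zeros((50, 50))
hives[25, 25] = 5.0                                    # one apiary
total_service = (service_map(wild, foraging_range=2.0)
                 + service_map(hives, foraging_range=3.0))
```

Cells of a crop map where `total_service` falls below some threshold would correspond to the "areas potentially vulnerable to low pollination service provision" mentioned above.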
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating-point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between measured results as necessary.
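A much-simplified sketch of that interpolation step is shown below. The benchmark tables, decomposition layout and cost formula are hypothetical stand-ins; the idea is only that measured compute time per local problem size and measured halo-exchange time per message size are interpolated and summed to predict the runtime of a given deployment.

```python
import numpy as np

# Hypothetical benchmark tables (in practice, measured on the target machine).
compute_cells = np.array([1e4, 1e5, 1e6, 1e7])            # local cells per task
compute_time  = np.array([2e-4, 2.2e-3, 2.5e-2, 3.0e-1])  # seconds per timestep
halo_bytes    = np.array([1e3, 1e4, 1e5, 1e6])            # message size in bytes
halo_time     = np.array([5e-6, 1.5e-5, 1.2e-4, 1.1e-3])  # seconds per exchange

def predict_step_time(nx, ny, px, py, bytes_per_cell=8):
    """Predict one timestep of an nx x ny grid decomposed over px x py tasks."""
    local_cells = (nx / px) * (ny / py)
    t_compute = np.interp(local_cells, compute_cells, compute_time)
    # Four halo faces per task; each message carries one edge of the local grid.
    face_bytes = bytes_per_cell * np.array([ny / py, ny / py, nx / px, nx / px])
    t_halo = np.sum(np.interp(face_bytes, halo_bytes, halo_time))
    return t_compute + t_halo

print(predict_step_time(4096, 4096, px=8, py=4))
print(predict_step_time(4096, 4096, px=32, py=1))   # compare decompositions
```

Comparing predictions for alternative decompositions, as in the last two lines, is the kind of question the full model is designed to answer without trial-and-error runs.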
Abstract:
We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, which is intended to simply represent a particle-stabilised fluid–fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From the aforementioned microscopic starting point, we obtain the macroscopic, constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime, to the concentrated nematic regime.
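One concrete piece of this workflow, extracting a storage and a loss modulus from the interface's stress response to a small oscillatory deformation, can be sketched numerically. The function and variable names below are invented and the signal is synthetic; for a strain gamma(t) = gamma0*sin(omega*t), projecting the stress onto sine and cosine at the driving frequency recovers the two moduli.

```python
import numpy as np

def oscillatory_moduli(t, stress, gamma0, omega):
    """Storage (in-phase) and loss (out-of-phase) moduli from a stress time series
    sampled uniformly over an integer number of oscillation periods."""
    g_storage = 2.0 / (gamma0 * len(t)) * np.sum(stress * np.sin(omega * t))
    g_loss    = 2.0 / (gamma0 * len(t)) * np.sum(stress * np.cos(omega * t))
    return g_storage, g_loss

# Synthetic check: a stress signal built from known moduli is recovered exactly.
omega, gamma0 = 2.0, 0.01
t = np.linspace(0.0, 10 * 2 * np.pi / omega, 4000, endpoint=False)
stress = gamma0 * (1.5 * np.sin(omega * t) + 0.4 * np.cos(omega * t))
print(oscillatory_moduli(t, stress, gamma0, omega))   # ~ (1.5, 0.4)
```

Applying the same projection to shear and to dilatational deformations of the simulated interface gives the two moduli reported over the range of surface coverages discussed above.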
Abstract:
Climate has been changing in China over the last fifty years and will continue to change regardless of any mitigation efforts. Agriculture is a climate-dependent activity that is highly sensitive to climate change and climate variability. Understanding the interactions between climate change and agricultural production is essential for the stable development of Chinese society. The first task is to understand how to predict future climate and link it to the agricultural production system. In this paper, recent domestic and international studies are reviewed to provide an overall picture of progress in climate change research. Methods for constructing climate change scenarios are introduced. The pivotal techniques linking crop models and climate models are systematically assessed, and modelled impacts of climate change on Chinese crop yields are summarized. The review found that simulated grain crop production inherits uncertainty from the use of different climate models, emission scenarios and crop simulation models. Moreover, studies differ in spatial resolution and in the methods used to downscale general circulation models (GCMs), which increases the uncertainty of regional impact assessments. However, the magnitude of change in crop production due to climate change (at about 700 ppm CO2-equivalent) appears to lie within ±10% for China in these assessments. In most studies, the yields of the three cereal crops declined under climate change scenarios, and only wheat in some regions showed an increase. Finally, the paper points out several gaps in current research that require further study before the impacts of climate change on crops can be assessed objectively. The uncertainty in crop yield projections is associated with climate change scenarios, CO2 fertilization effects and adaptation options. Therefore, more studies in areas such as free-air CO2 enrichment experiments and the practical implementation of adaptations need to be carried out.
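One widely used way of linking climate model output to a crop model, the delta-change method of scenario construction, can be sketched as below. The baseline series, monthly deltas and variable names are hypothetical; GCM-projected monthly changes are added to an observed daily temperature baseline and applied multiplicatively to precipitation before the series is passed to a crop simulation model.

```python
import numpy as np

def delta_scenario(daily_tmean, daily_precip, month_index, dT, dP_frac):
    """Perturb an observed daily baseline with GCM monthly changes:
    additive for temperature, multiplicative for precipitation."""
    month_index = np.asarray(month_index)            # month (0..11) of each day
    t_future = daily_tmean + np.asarray(dT)[month_index]
    p_future = daily_precip * (1.0 + np.asarray(dP_frac)[month_index])
    return t_future, p_future

# Hypothetical one-year baseline and monthly changes from a GCM scenario.
month_index = np.repeat(np.arange(12), [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
days = len(month_index)
tmean = 15 + 10 * np.sin(2 * np.pi * (np.arange(days) - 80) / 365)
precip = np.random.default_rng(1).gamma(0.4, 5.0, days)
dT = np.full(12, 2.0)             # +2 degC in every month
dP = np.linspace(-0.1, 0.1, 12)   # -10% to +10% precipitation change
t_fut, p_fut = delta_scenario(tmean, precip, month_index, dT, dP)
```

Differences between such scenario-construction and downscaling choices are one of the sources of uncertainty in regional impact assessments noted above.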
Abstract:
Aeolian dust modelling has improved significantly over the last ten years and many institutions now consistently model dust uplift, transport and deposition in general circulation models (GCMs). However, the representation of dust in GCMs is highly variable between modelling communities due to differences in the uplift schemes employed and the representation of the global circulation that subsequently leads to dust deflation. In this study two different uplift schemes are incorporated in the same GCM. This approach enables a clearer comparison of the dust uplift schemes themselves, without the added complexity of several different transport and deposition models. The global annual mean dust aerosol optical depths (at 550 nm) using two different dust uplift schemes were found to be 0.014 and 0.023—both lying within the estimates from the AeroCom project. However, the models also have appreciably different representations of the dust size distribution adjacent to the West African coast and very different deposition at various sites throughout the globe. The different dust uplift schemes were also capable of influencing the modelled circulation, surface air temperature, and precipitation despite the use of prescribed sea surface temperatures. This has important implications for the use of dust models in AMIP-style (Atmospheric Modelling Intercomparison Project) simulations and Earth-system modelling.
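The kind of threshold-based uplift parameterisation compared above can be sketched generically as follows. This is not either of the two schemes used in the paper; it is a White (1979)-style horizontal saltation flux with an arbitrary tuning constant, set to zero wherever the friction velocity is below its threshold.

```python
import numpy as np

RHO_AIR = 1.2   # kg m-3, near-surface air density (treated as constant here)
G = 9.81        # m s-2, gravitational acceleration

def saltation_flux(u_star, u_star_t, c=1.0):
    """White (1979)-style horizontal saltation flux; zero below the threshold
    friction velocity u_star_t. c is an arbitrary tuning constant."""
    u_star = np.asarray(u_star, dtype=float)
    flux = np.zeros_like(u_star)
    strong = u_star > u_star_t
    r = u_star_t / u_star[strong]
    flux[strong] = c * (RHO_AIR / G) * u_star[strong]**3 * (1.0 + r) * (1.0 - r**2)
    return flux

print(saltation_flux([0.1, 0.25, 0.4, 0.6], u_star_t=0.2))
```

The cubic dependence on friction velocity is why differences in the modelled near-surface circulation translate into large differences in uplift, and hence in optical depth and deposition, between schemes.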
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve these. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
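The core averaging step of the method can be sketched in a few lines. The window length and array names below are hypothetical; each pixel's DEM height along an ordered waterline is replaced by the mean of the heights in a short surrounding section, on the assumption that the section is locally a quasi-contour whose pixel heights share a common population mean.

```python
import numpy as np

def correct_waterline_heights(heights, half_window=5):
    """Replace each height along an ordered waterline by the mean of the heights
    in a surrounding section of 2*half_window + 1 pixels (shorter at the ends)."""
    heights = np.asarray(heights, dtype=float)
    corrected = np.empty_like(heights)
    for i in range(len(heights)):
        lo = max(0, i - half_window)
        hi = min(len(heights), i + half_window + 1)
        corrected[i] = heights[lo:hi].mean()
    return corrected

# If waterline height errors are independent with standard deviation sigma,
# averaging ~11 samples reduces the standard error by roughly sqrt(11).
noisy = 42.0 + np.random.default_rng(2).normal(0.0, 2.0, 200)
print(noisy.std(), correct_waterline_heights(noisy).std())
```

The subsequent constraint step, clipping DEM heights enclosed by the higher waterline to be no higher than the corrected waterline heights and conversely for the lower waterline, would then be applied to the cells between adjacent waterline pairs.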
Abstract:
The impact of extreme sea ice initial conditions on modelled climate is analysed for a fully coupled atmosphere-ocean-sea-ice general circulation model, the Hadley Centre climate model HadCM3. A control run with greenhouse gas concentrations fixed at preindustrial levels is chosen as the reference experiment. Sensitivity experiments show an almost complete recovery from the total removal or a strong increase of sea ice after four years. Thus, uncertainties in initial sea ice conditions appear to be unimportant for climate modelling on decadal or longer time scales. When the initial conditions of the ocean mixed layer were adjusted to ice-free conditions, a few substantial differences remained for more than 15 model years. But these differences are clearly smaller than the uncertainty spanned by the HadCM3 run and the other 19 preindustrial runs from the IPCC Fourth Assessment Report climate models. Improving the ability of climate models to simulate past sea ice variability remains an important task, so that they can make reliable projections for the 21st century.