941 results for land evaluate system
Abstract:
To bridge the gaps between traditional mesoscale modelling and microscale modelling, the National Center for Atmospheric Research, in collaboration with other agencies and research groups, has developed an integrated urban modelling system coupled to the weather research and forecasting (WRF) model as a community tool to address urban environmental issues. The core of this WRF/urban modelling system consists of the following: (1) three methods with different degrees of freedom to parameterize urban surface processes, ranging from a simple bulk parameterization to a sophisticated multi-layer urban canopy model with an indoor–outdoor exchange sub-model that directly interacts with the atmospheric boundary layer, (2) coupling to fine-scale computational fluid dynamic Reynolds-averaged Navier–Stokes and Large-Eddy simulation models for transport and dispersion (T&D) applications, (3) procedures to incorporate high-resolution urban land use, building morphology, and anthropogenic heating data using the National Urban Database and Access Portal Tool (NUDAPT), and (4) an urbanized high-resolution land data assimilation system. This paper provides an overview of this modelling system; addresses the daunting challenges of initializing the coupled WRF/urban model and of specifying the potentially vast number of parameters required to execute the WRF/urban model; explores the model sensitivity to these urban parameters; and evaluates the ability of WRF/urban to capture urban heat islands, complex boundary-layer structures aloft, and urban plume T&D for several major metropolitan regions. Recent applications of this modelling system illustrate its promising utility, as a regional climate-modelling tool, to investigate impacts of future urbanization on regional meteorological conditions and on air quality under future climate change scenarios. Copyright © 2010 Royal Meteorological Society
Abstract:
Urbanization, the expansion of built-up areas, is an important yet less-studied aspect of land use/land cover change in climate science. To date, most global climate models used to evaluate effects of land use/land cover change on climate do not include an urban parameterization. Here, the authors describe the formulation and evaluation of a parameterization of urban areas that is incorporated into the Community Land Model, the land surface component of the Community Climate System Model. The model is designed to be simple enough to be compatible with structural and computational constraints of a land surface model coupled to a global climate model yet complex enough to explore physically based processes known to be important in determining urban climatology. The city representation is based upon the “urban canyon” concept, which consists of roofs, sunlit and shaded walls, and canyon floor. The canyon floor is divided into pervious (e.g., residential lawns, parks) and impervious (e.g., roads, parking lots, sidewalks) fractions. Trapping of longwave radiation by canyon surfaces and solar radiation absorption and reflection is determined by accounting for multiple reflections. Separate energy balances and surface temperatures are determined for each canyon facet. A one-dimensional heat conduction equation is solved numerically for a 10-layer column to determine conduction fluxes into and out of canyon surfaces. Model performance is evaluated against measured fluxes and temperatures from two urban sites. Results indicate the model does a reasonable job of simulating the energy balance of cities.
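The 10-layer heat conduction scheme described above lends itself to a compact numerical illustration. The sketch below solves the 1-D heat conduction equation through a 10-layer column with an explicit finite-difference scheme; all layer properties, the time step, and the boundary temperatures are assumed values for illustration, not those used in the Community Land Model.

```python
# Illustrative sketch: explicit finite-difference solution of the 1-D heat
# conduction equation dT/dt = alpha * d2T/dz2 through a 10-layer column,
# in the spirit of the conduction-flux calculation described above.
# All material properties and boundary temperatures are assumed values.

N_LAYERS = 10
DZ = 0.05      # layer thickness (m), assumed
ALPHA = 1e-6   # thermal diffusivity (m^2/s), assumed
DT = 60.0      # time step (s); must satisfy DT <= DZ**2 / (2 * ALPHA) for stability

def step(temps, t_surface, t_interior):
    """Advance the layer temperatures one time step with fixed (Dirichlet)
    boundary temperatures: the canyon surface above, the interior below."""
    padded = [t_surface] + temps + [t_interior]
    coef = ALPHA * DT / DZ**2
    return [
        padded[i] + coef * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
        for i in range(1, N_LAYERS + 1)
    ]

# Spin the column up toward steady state between a 30 C surface and a 20 C interior
temps = [20.0] * N_LAYERS
for _ in range(20000):
    temps = step(temps, t_surface=30.0, t_interior=20.0)

# The profile relaxes toward a linear steady state between the two boundaries
print([round(t, 1) for t in temps])
```

With fixed boundary temperatures the column relaxes to a linear steady-state profile, and the conduction flux into the surface then follows from the temperature gradient across the top layer.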
Abstract:
The catchment of the River Thames, the principal river system in southern England, provides the main water supply for London but is highly vulnerable to changes in climate, land use and population. The river is eutrophic with significant algal blooms with phosphorus assumed to be the primary chemical indicator of ecosystem health. In the Thames Basin, phosphorus is available from point sources such as wastewater treatment plants and from diffuse sources such as agriculture. In order to predict vulnerability to future change, the integrated catchments model for phosphorus (INCA-P) has been applied to the river basin and used to assess the cost-effectiveness of a range of mitigation and adaptation strategies. It is shown that scenarios of future climate and land-use change will exacerbate the water quality problems, but a range of mitigation measures can improve the situation. A cost-effectiveness study has been undertaken to compare the economic benefits of each mitigation measure and to assess the phosphorus reductions achieved. The most effective strategy is to reduce fertilizer use by 20% together with the treatment of effluent to a high standard. Such measures will reduce the instream phosphorus concentrations to close to the EU Water Framework Directive target for the Thames.
Abstract:
This paper reports the results of a 2-year study of water quality in the River Enborne, a rural river in lowland England. Concentrations of nitrogen and phosphorus species and other chemical determinands were monitored both at high frequency (hourly), using automated in situ instrumentation, and by manual weekly sampling and laboratory analysis. The catchment land use is largely agricultural, with a population density of 123 persons km⁻². The river water is largely derived from calcareous groundwater, and nitrogen and phosphorus concentrations are high. Agricultural fertiliser is the dominant source of the annual loads of both nitrogen and phosphorus. However, the data show that sewage effluent discharges have a disproportionate effect on the river's nitrogen and phosphorus dynamics. At least 38% of the catchment population use septic tank systems, but the effects are hard to quantify as only 6% are officially registered, and the characteristics of the others are unknown. Only 4% of the phosphorus input and 9% of the nitrogen input are exported from the catchment by the river, highlighting the importance of catchment process understanding in predicting nutrient concentrations. High-frequency monitoring will be key to developing this vital process understanding.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition, and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and of seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process and are compared to scores based on the temporal or spatial mean value of the observations and on a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model, SDBM) and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
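The scoring approach described above, in which a model's metric is compared against the observational mean and against a "random" model built by bootstrap resampling of the observations, can be sketched concretely. The metric below (a normalised mean absolute error) and all data values are assumptions for illustration, not the paper's exact benchmark definitions.

```python
import random

# Illustrative sketch: score a model against observations, then compare with
# two null models -- the observational mean (which scores exactly 1.0 under
# this metric) and a bootstrap-resampled "random" model. The metric and the
# data are assumptions for illustration, not the paper's exact definitions.

def nmae(pred, obs):
    """Mean absolute error, normalised by the mean absolute deviation of obs.
    Scores below 1.0 beat the mean-value null model."""
    mean_obs = sum(obs) / len(obs)
    mad = sum(abs(o - mean_obs) for o in obs) / len(obs)
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs) / mad

def random_model_score(obs, n_boot=1000, seed=0):
    """Average score of bootstrap resamples of the observations themselves."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_boot):
        sample = [rng.choice(obs) for _ in obs]
        total += nmae(sample, obs)
    return total / n_boot

obs = [2.1, 3.4, 1.8, 4.0, 2.9, 3.1]    # e.g. site observations (invented)
model = [2.0, 3.0, 2.2, 3.7, 3.0, 2.8]  # simulated values (invented)

mean_score = nmae([sum(obs) / len(obs)] * len(obs), obs)  # exactly 1.0
print(nmae(model, obs), mean_score, random_model_score(obs))
```

A model is only credited with skill when it beats both null scores, which is the point of the comparison described in the abstract.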
Abstract:
Fire is an important component of the Earth System that is tightly coupled with climate, vegetation, biogeochemical cycles, and human activities. Observations of how fire regimes change on seasonal to millennial timescales are providing an improved understanding of the hierarchy of controls on fire regimes. Climate is the principal control on fire regimes, although human activities have had an increasing influence on the distribution and incidence of fire in recent centuries. Understanding of the controls and variability of fire also underpins the development of models, both conceptual and numerical, that allow us to predict how future climate and land-use changes might influence fire regimes. Although fires in fire-adapted ecosystems can be important for biodiversity and ecosystem function, positive effects are being increasingly outweighed by losses of ecosystem services. As humans encroach further into the natural habitat of fire, social and economic costs are also escalating. The prospect of near-term rapid and large climate changes, and the escalating costs of large wildfires, necessitates a radical re-thinking and the development of approaches to fire management that promote the more harmonious co-existence of fire and people.
Abstract:
Earth system models (ESMs) are increasing in complexity by incorporating more processes than their predecessors, making them potentially important tools for studying the evolution of climate and associated biogeochemical cycles. However, their coupled behaviour has only recently been examined in any detail, and has yielded a very wide range of outcomes. For example, coupled climate–carbon cycle models that represent land-use change simulate total land carbon stores at 2100 that vary by as much as 600 Pg C, given the same emissions scenario. This large uncertainty is associated with differences in how key processes are simulated in different models, and illustrates the necessity of determining which models are most realistic using rigorous methods of model evaluation. Here we assess the state-of-the-art in evaluation of ESMs, with a particular emphasis on the simulation of the carbon cycle and associated biospheric processes. We examine some of the new advances and remaining uncertainties relating to (i) modern and palaeodata and (ii) metrics for evaluation. We note that the practice of averaging results from many models is unreliable and no substitute for proper evaluation of individual models. We discuss a range of strategies, such as the inclusion of pre-calibration, combined process- and system-level evaluation, and the use of emergent constraints, that can contribute to the development of more robust evaluation schemes. An increasingly data-rich environment offers more opportunities for model evaluation, but also presents a challenge. Improved knowledge of data uncertainties is still necessary to move the field of ESM evaluation away from a "beauty contest" towards the development of useful constraints on model outcomes.
Abstract:
Five paired global climate model experiments, one with an ice pack that only responds thermodynamically (TI) and one including sea-ice dynamics (DI), were used to investigate the sensitivity of Arctic climates to sea-ice motion. The sequence of experiments includes situations in which the Arctic was both considerably colder (Glacial Inception, ca 115,000 years ago) and considerably warmer (3 × CO2) than today. Sea-ice motion produces cooler anomalies year-round than simulations without ice dynamics, resulting in reduced Arctic warming in warm scenarios and increased Arctic cooling in cold scenarios. These changes reflect changes in atmospheric circulation patterns: the DI simulations favor outflow of Arctic air and sea ice into the North Atlantic by promoting cyclonic circulation centered over northern Eurasia, whereas the TI simulations favor southerly inflow of much warmer air from the North Atlantic by promoting cyclonic circulation centered over Greenland. The differences between the paired simulations are sufficiently large to produce different vegetation cover over >19% of the land area north of 55°N, resulting in changes in land-surface characteristics large enough to have an additional impact on climate. Comparison of the DI and TI experiments for the mid-Holocene (6000 years ago) with paleovegetation reconstructions suggests the incorporation of sea-ice dynamics yields a more realistic simulation of high-latitude climates. The spatial pattern of sea-ice anomalies in the warmer-than-modern DI experiments strongly resembles the observed Arctic Ocean sea-ice dipole structure in recent decades, consistent with the idea that greenhouse warming is already impacting the high-northern latitudes.
Abstract:
Global syntheses of palaeoenvironmental data are required to test climate models under conditions different from the present. Data sets for this purpose contain data from spatially extensive networks of sites. The data are either directly comparable to model output or readily interpretable in terms of modelled climate variables. Data sets must contain sufficient documentation to distinguish between raw (primary) and interpreted (secondary, tertiary) data, to evaluate the assumptions involved in interpretation of the data, to exercise quality control, and to select data appropriate for specific goals. Four data bases for the Late Quaternary, documenting changes in lake levels since 30 kyr BP (the Global Lake Status Data Base), vegetation distribution at 18 kyr and 6 kyr BP (BIOME 6000), aeolian accumulation rates during the last glacial-interglacial cycle (DIRTMAP), and tropical terrestrial climates at the Last Glacial Maximum (the LGM Tropical Terrestrial Data Synthesis) are summarised. Each has been used to evaluate simulations of Last Glacial Maximum (LGM: 21 calendar kyr BP) and/or mid-Holocene (6 cal. kyr BP) environments. Comparisons have demonstrated that changes in radiative forcing and orography due to orbital and ice-sheet variations explain the first-order, broad-scale (in space and time) features of global climate change since the LGM. However, atmospheric models forced by 6 cal. kyr BP orbital changes with unchanged surface conditions fail to capture quantitative aspects of the observed climate, including the greatly increased magnitude and northward shift of the African monsoon during the early to mid-Holocene. Similarly, comparisons with palaeoenvironmental datasets show that atmospheric models have underestimated the magnitude of cooling and drying of much of the land surface at the LGM. The inclusion of feedbacks due to changes in ocean- and land-surface conditions at both times, and atmospheric dust loading at the LGM, appears to be required in order to produce a better simulation of these past climates. The development of Earth system models incorporating the dynamic interactions among ocean, atmosphere, and vegetation is therefore mandated by Quaternary science results as well as climatological principles. For greatest scientific benefit, this development must be paralleled by continued advances in palaeodata analysis and synthesis, which in turn will help to define questions that call for new focused data collection efforts.
Abstract:
Distributed generation plays a key role in reducing CO2 emissions and losses in the transmission of power. However, due to the intermittent nature of renewable resources, distributed generation requires suitable control strategies to assure reliability and optimality for the grid. Multi-agent systems are strong candidates for providing distributed control of distributed generation stations as well as reliability and flexibility for grid integration. The proposed multi-agent energy management system consists of single-type agents, each of which controls one or more grid entities represented as generic sub-agent elements. The agent applies one control algorithm across all elements and uses a cost function to evaluate the suitability of each element as a supplier. The behavior set by the agent's user defines which parameters of an element carry greater weight in the cost function, allowing the user to specify a preference among suppliers dynamically. This study shows the ability of the multi-agent energy management system to select suppliers according to the selection behavior given by the user. The optimality of the supplier for the required demand is ensured by the cost function based on the parameters of the element.
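The weighted-cost-function supplier selection described above can be sketched as follows. The element parameters, behavior names, and weights are all hypothetical, since the abstract does not give the exact formulation.

```python
# Illustrative sketch of behavior-weighted supplier selection. Lower cost is
# better; each behavior sets which element parameters carry more weight.
# All parameter names, behaviors, weights, and element data are hypothetical.

BEHAVIORS = {
    "cheapest": {"price": 1.0, "emissions": 0.1, "outage": 0.2},
    "greenest": {"price": 0.2, "emissions": 1.0, "outage": 0.2},
}

def cost(element, behavior):
    """Weighted cost of using this element as a supplier (lower is better)."""
    weights = BEHAVIORS[behavior]
    return sum(weights[p] * element[p] for p in weights)

def select_supplier(elements, demand_kw, behavior):
    """Pick the lowest-cost element able to cover the demand, or None."""
    feasible = [e for e in elements if e["capacity_kw"] >= demand_kw]
    return min(feasible, key=lambda e: cost(e, behavior)) if feasible else None

elements = [
    {"name": "diesel", "capacity_kw": 50, "price": 0.30, "emissions": 0.9, "outage": 0.1},
    {"name": "solar",  "capacity_kw": 20, "price": 0.10, "emissions": 0.0, "outage": 0.5},
    {"name": "wind",   "capacity_kw": 40, "price": 0.15, "emissions": 0.0, "outage": 0.6},
]

print(select_supplier(elements, 15, "greenest")["name"])  # solar under these numbers
print(select_supplier(elements, 30, "cheapest")["name"])  # wind: solar cannot cover 30 kW
```

Changing the behavior re-ranks the same elements without touching the control algorithm, which is the flexibility the abstract attributes to the user-defined behavior.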
Abstract:
Anthropogenic pressure influences the two-way interactions between shallow aquifers and coastal lagoons. Aquifer overexploitation may lead to seawater intrusion, while aquifer recharge from rainfall plus irrigation may, in turn, increase groundwater discharge into the lagoon. We analyse the evolution, from the 1950s to the present, of the interactions between the Campo de Cartagena Quaternary aquifer and the Mar Menor coastal lagoon (SE Spain). This is a very heterogeneous and anisotropic detrital aquifer, where the aquifer–lagoon interface has a very irregular geometry. Using electrical resistivity tomography, we clearly identified the freshwater–saltwater transition zone and detected areas affected by seawater intrusion. The severity of the intrusion was spatially variable and significantly related to the density of irrigation wells in the 1950s–1960s, suggesting a role of groundwater overexploitation. We distinguish two mechanisms by which seawater invades the land: (a) horizontal advance of the interface due to a wide exploitation area, and (b) vertical rise (upconing) caused by local intensive pumping. In general, the shallow parts of the geophysical profiles show higher electrical resistivity associated with freshwater coming mainly from irrigation return flows, whose water resources derive mostly from deep confined aquifers and from water imported from the Tagus River, 400 km to the north. This indicates a likely reversal of the former seawater intrusion process.
Abstract:
We evaluate the ability of process-based models to reproduce observed global mean sea-level change. When the models are forced by changes in natural and anthropogenic radiative forcing of the climate system and anthropogenic changes in land-water storage, the average of the modelled sea-level change for the periods 1900–2010, 1961–2010 and 1990–2010 is about 80%, 85% and 90% of the observed rise, respectively. The modelled rate of rise is over 1 mm yr⁻¹ prior to 1950, decreases to less than 0.5 mm yr⁻¹ in the 1960s, and increases to 3 mm yr⁻¹ by 2000. When observed regional climate changes are used to drive a glacier model and an allowance is included for an ongoing adjustment of the ice sheets, the modelled sea-level rise is about 2 mm yr⁻¹ prior to 1950, similar to the observations. The model results encompass the observed rise, and the model average is within 20% of the observations (about 10% when the observed ice-sheet contributions since 1993 are added), increasing confidence in future projections for the 21st century. The increased rate of rise since 1990 is not part of a natural cycle but a direct response to increased radiative forcing (both anthropogenic and natural), which will continue to grow with ongoing greenhouse gas emissions.
Abstract:
There is a strong drive towards hyperresolution earth system models in order to resolve finer scales of motion in the atmosphere. Obtaining a more realistic representation of terrestrial fluxes of heat and water, however, is not just a matter of moving to hyperresolution grid scales. It is much more a question of a lack of knowledge about the parameterisation of processes at whatever grid scale is being used for a wider modelling problem. Hyperresolution grid scales cannot alone solve the problem of this hyperresolution ignorance. This paper discusses these issues in more detail with specific reference to land surface parameterisations and flood inundation models. The importance of making local hyperresolution model predictions available for evaluation by local stakeholders is stressed; this is expected to be a major driving force for improving model performance in the future. Keith Beven, Hannah Cloke, Florian Pappenberger, Rob Lamb, Neil Hunter
Abstract:
Though many global aerosol models prognose surface deposition, only a few have been used to directly simulate the radiative effect of black carbon (BC) deposition to snow and sea ice. Here, we apply aerosol deposition fields from 25 models contributing to two phases of the Aerosol Comparisons between Observations and Models (AeroCom) project to simulate and evaluate within-snow BC concentrations and their radiative effect in the Arctic. We accomplish this by driving the offline land and sea-ice components of the Community Earth System Model with the different deposition fields and with meteorological conditions from 2004 to 2009, during which an extensive field campaign of BC measurements in Arctic snow took place. We find that models generally underestimate BC concentrations in snow in northern Russia and Norway, while overestimating BC amounts elsewhere in the Arctic. Although simulated BC distributions in snow are poorly correlated with measurements, mean values are reasonable. The multi-model mean (range) biases in BC concentrations, sampled over the same grid cells, snow depths, and months as the measurements, are −4.4 (−13.2 to +10.7) ng g⁻¹ for the earlier phase of AeroCom models (phase I) and +4.1 (−13.0 to +21.4) ng g⁻¹ for the more recent phase (phase II), compared with the observational mean of 19.2 ng g⁻¹. Factors determining model BC concentrations in Arctic snow include Arctic BC emissions, transport of extra-Arctic aerosols, precipitation, deposition efficiency of aerosols within the Arctic, and meltwater removal of particles in snow. Sensitivity studies show that the model–measurement evaluation is only weakly affected by the meltwater scavenging efficiency because most measurements were conducted in non-melting snow. The Arctic (60–90° N) atmospheric residence time for BC in phase II models ranges from 3.7 to 23.2 days, implying large inter-model variation in local BC deposition efficiency. Combined with the fact that most Arctic BC deposition originates from extra-Arctic emissions, these results suggest that aerosol removal processes are a leading source of variation in model performance. The multi-model mean (full range) of the Arctic radiative effect from BC in snow is 0.15 (0.07–0.25) W m⁻² in phase I models and 0.18 (0.06–0.28) W m⁻² in phase II models. After correcting for model biases relative to observed BC concentrations in different regions of the Arctic, we obtain a multi-model mean Arctic radiative effect of 0.17 W m⁻² for the combined AeroCom ensembles. Finally, there is a high correlation between modelled BC concentrations sampled over the observational sites and over the Arctic as a whole, indicating that the field campaign provided a reasonable sample of the Arctic.
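The multi-model bias statistics quoted above can be reproduced in miniature as follows. Only the observational mean (19.2 ng g⁻¹) comes from the text; the per-model values below are invented for illustration.

```python
# Illustrative sketch of the multi-model evaluation statistics: given
# per-model mean BC-in-snow concentrations sampled at the measurement sites,
# compute each model's bias against the observational mean and report the
# ensemble mean and range. Model values are invented; only the observational
# mean of 19.2 ng/g is taken from the text.

OBS_MEAN = 19.2  # ng/g, observational mean from the field campaign

def multi_model_bias(model_means, obs_mean=OBS_MEAN):
    """Return (mean bias, min bias, max bias) across an ensemble of models."""
    biases = [m - obs_mean for m in model_means]
    return sum(biases) / len(biases), min(biases), max(biases)

# Hypothetical per-model means for a phase-II-like ensemble (ng/g)
phase2 = [25.0, 14.0, 31.0, 18.5, 28.0]

mean_bias, low, high = multi_model_bias(phase2)
print(round(mean_bias, 1), round(low, 1), round(high, 1))
```

Reporting the mean together with the full range, as the abstract does, conveys both the ensemble's central tendency and the large inter-model spread.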