953 results for Models and Principles
Abstract:
Background and Aims: Forest trees directly contribute to carbon cycling in forest soils through the turnover of their fine roots. In this study we aimed to calculate root turnover rates of common European forest tree species and to compare them with the most frequently published values. Methods: We compiled the available European data and applied several turnover rate calculation methods to the resulting database, using the Decision Matrix and the Maximum-Minimum formula as suggested in the literature. Results: Mean turnover rates obtained by combining sequential coring with the Decision Matrix were 0.86 yr⁻¹ for Fagus sylvatica and 0.88 yr⁻¹ for Picea abies when maximum biomass data were used for the calculation, and 1.11 yr⁻¹ for both species when mean biomass data were used. Using mean rather than maximum biomass resulted in about 30% higher turnover values. Using the Decision Matrix to calculate turnover doubled the rates compared with the Maximum-Minimum formula; the Decision Matrix, however, makes use of more input information than the Maximum-Minimum formula. Conclusions: We propose that calculations using the Decision Matrix with mean biomass give the most reliable estimates of root turnover rates in European forests and should preferentially be used in models and C reporting.
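To make the two calculation methods concrete, here is a minimal Python sketch, assuming `live` and `dead` are fine-root biomass arrays (g m⁻²) from sequential coring over one year. The Decision Matrix rules shown are one common formulation (variants exist in the literature), and all data values are invented.

```python
# Minimal sketch: fine-root turnover from sequential coring data.
import numpy as np

def production_max_min(live):
    """Maximum-Minimum formula: production = seasonal max - seasonal min."""
    return np.max(live) - np.min(live)

def production_decision_matrix(live, dead):
    """Decision Matrix: sum production over sampling intervals according
    to the signs of the changes in live (dB) and dead (dN) biomass.
    One common formulation; published variants differ in detail."""
    production = 0.0
    for dB, dN in zip(np.diff(live), np.diff(dead)):
        if dB >= 0 and dN >= 0:
            production += dB + dN      # growth plus mortality
        elif dB >= 0 and dN < 0:
            production += dB           # decomposition exceeds mortality
        elif dB < 0 and dN >= 0 and dB + dN > 0:
            production += dB + dN      # mortality outpaces growth
        # otherwise production is taken as zero for the interval
    return production

def turnover_rate(production, biomass_reference):
    """Turnover (yr^-1) = annual production / reference biomass
    (mean or maximum standing biomass, as discussed above)."""
    return production / biomass_reference

live = np.array([310.0, 355.0, 420.0, 390.0, 340.0])  # invented values
dead = np.array([ 80.0,  95.0, 100.0, 130.0, 150.0])
prod = production_decision_matrix(live, dead)
print(turnover_rate(prod, np.mean(live)))   # mean-biomass estimate
print(turnover_rate(prod, np.max(live)))    # maximum-biomass estimate
```

As the abstract notes, dividing by the (smaller) mean biomass rather than the maximum mechanically raises the turnover estimate.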
Landscape, regional and global estimates of nitrogen flux from land to sea: errors and uncertainties
Abstract:
Regional to global scale modelling of N flux from land to ocean has progressed to date through the development of simple empirical models representing bulk N flux rates from large watersheds, regions, or continents on the basis of a limited selection of model parameters. Watershed-scale N flux modelling has developed a range of approaches, from fully physically-based models, in which N flux rates are predicted through a physical representation of the processes involved, to catchment-scale models which provide a simplified representation of true system behaviour. Generally, these watershed-scale models describe within their structure the dominant process controls on N flux at the catchment or watershed scale, and take into account variations in the extent to which these processes control N flux rates as a function of landscape sensitivity to N cycling and export. This paper addresses the nature of the errors and uncertainties inherent in existing regional to global scale models, and the nature of error propagation associated with upscaling from small catchment to regional scale, through a suite of spatial aggregation and conceptual lumping experiments conducted on a validated watershed-scale model, the export coefficient model. Results from the analysis support the findings of other researchers developing macroscale models in allied research fields. Conclusions from the study confirm that reliable and accurate regional-scale N flux modelling needs to take account of the heterogeneity of landscapes and the impact that this has on N cycling processes within homogeneous landscape units.
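A minimal sketch of an export coefficient calculation of the kind the paper upscales: diffuse N export is the sum over land-use classes of a per-area export coefficient times the class area, plus point-source loads. The land-use classes, coefficients, areas, and point-source figure below are invented for illustration.

```python
# Illustrative export coefficient model for annual N flux from a watershed.
LAND_USE = {
    # land use: (area in ha, N export coefficient in kg N ha^-1 yr^-1)
    "arable":    (12_000, 20.0),
    "grassland": ( 8_000,  8.0),
    "woodland":  ( 5_000,  2.0),
    "urban":     ( 1_500,  5.0),
}
POINT_SOURCES_KG = 40_000   # e.g. sewage treatment works (invented)

def annual_n_flux(land_use, point_sources_kg):
    """Total N flux to the river network (kg N yr^-1):
    sum of (coefficient x area) over land uses, plus point sources."""
    diffuse = sum(area * coeff for area, coeff in land_use.values())
    return diffuse + point_sources_kg

print(f"{annual_n_flux(LAND_USE, POINT_SOURCES_KG):,.0f} kg N / yr")
```

The aggregation experiments described above amount to asking how much this total drifts when heterogeneous land-use classes are lumped into coarser units with averaged coefficients.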
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limitations in computing power have placed severe constraints on such investigations, which are now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions that are based on our best knowledge of science and the most advanced technology.
Abstract:
The World Weather Research Programme (WWRP) and the World Climate Research Programme (WCRP) have identified collaborations and scientific priorities to accelerate advances in analysis and prediction at subseasonal-to-seasonal time scales, which include i) advancing knowledge of mesoscale–planetary-scale interactions and their prediction; ii) developing high-resolution global–regional climate simulations, with advanced representation of physical processes, to improve the predictive skill of subseasonal and seasonal variability of high-impact events, such as seasonal droughts and floods, blocking, and tropical and extratropical cyclones; iii) contributing to the improvement of data assimilation methods, used in coupled ocean–atmosphere–land and Earth system models, for monitoring and prediction; and iv) developing and transferring diagnostic and prognostic information tailored to socioeconomic decision making. The document puts forward specific underpinning research, linkages, and requirements necessary to achieve the goals of the proposed collaboration.
Abstract:
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures.
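A toy agent-based sketch in the spirit of the prototype described above (not the authors' model): each household adopts a green technology once peer adoption plus a policy subsidy clears its private threshold. All parameters (threshold range, innovator share, subsidy levels) are invented.

```python
# Toy agent-based model of green-technology adoption under a subsidy.
import random

class Household:
    def __init__(self, rng):
        self.threshold = rng.uniform(0.2, 0.9)  # private willingness hurdle
        self.adopted = rng.random() < 0.03      # a few early adopters

    def step(self, peer_fraction, subsidy):
        # Adopt once social influence plus policy support clears the hurdle
        if not self.adopted and peer_fraction + subsidy > self.threshold:
            self.adopted = True

def run(n_agents=1000, years=25, subsidy=0.15, seed=1):
    rng = random.Random(seed)                   # same population per scenario
    agents = [Household(rng) for _ in range(n_agents)]
    uptake = []
    for _ in range(years):
        frac = sum(a.adopted for a in agents) / n_agents
        for a in agents:
            a.step(frac, subsidy)
        uptake.append(frac)
    return uptake

# Compare final uptake under a weak and a strong subsidy
print(run(subsidy=0.05)[-1], run(subsidy=0.25)[-1])
```

Even this toy version shows the qualitative behaviour the paper is after: below a critical subsidy, adoption stalls at the innovator share; above it, peer effects drive a diffusion cascade.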
Abstract:
Human-made transformations of the environment, and in particular of the land surface, are having a large impact on the distribution (in both time and space) of rainfall, upon which all life is reliant. Focusing on precipitation, soil moisture and near-surface temperature, we compare data from Phase 5 of the Coupled Model Intercomparison Project (CMIP5), as well as blended observational–satellite data, to see how the interaction between rainfall and the land surface differs (or agrees) between the models and reality at daily timescales. As expected, the results suggest a strong positive relationship between precipitation and soil moisture when precipitation leads and is concurrent with soil moisture estimates, for the tropics as a whole. Conversely, a negative relationship is shown when soil moisture leads rainfall by a day or more. A weak positive relationship between precipitation and temperature is shown when either leads by one day, whereas a weak negative relationship is shown over the same time period between soil moisture and temperature. Temporally, in terms of lag and lead relationships, the models appear to agree on the overall patterns of correlation between rainfall and soil moisture. However, in terms of spatial patterns, a comparison across all available models reveals considerable variability in their ability to reproduce the correlations between precipitation and soil moisture. There is also a difference in the timings of the correlations, with some models showing the highest positive correlations when precipitation leads soil moisture by one day. Finally, the results suggest that there are 'hotspots' of high linear gradients between precipitation and soil moisture, corresponding to regions experiencing heavy rainfall. These results point to an inability of the CMIP5 models to simulate a positive feedback between soil moisture and precipitation at daily timescales. Longer-timescale comparisons, and experiments at higher spatial resolutions where the impact of the spatial heterogeneity of rainfall on the initiation of convection and supply of moisture is included, would be expected to improve process understanding further.
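The lag/lead correlation diagnostic at the heart of this comparison can be sketched as follows for a single grid cell; the synthetic rainfall and the exponential soil-moisture memory used here are illustrative assumptions, not CMIP5 data.

```python
# Lag/lead correlation between two daily series at one grid cell.
import numpy as np

def lag_correlation(x, y, lag):
    """Pearson correlation of x against y shifted by `lag` days.
    Positive lag means x leads y (x today vs y `lag` days later)."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.5, scale=4.0, size=3650)       # synthetic daily rain
soil = np.convolve(precip, np.exp(-np.arange(30) / 5.0),  # wetting + e-fold decay
                   mode="full")[:precip.size]

for lag in (-2, -1, 0, 1, 2):
    print(lag, round(lag_correlation(precip, soil, lag), 2))
```

By construction the correlation here is strongest when precipitation leads soil moisture; the paper's question is whether, and where, models and observations agree on the sign and timing of these correlations.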
Abstract:
This paper investigates the application and use of development viability models in the formation of planning policies in the UK. Particular attention is paid to three key areas: the assumed development scheme in development viability models, the use of forecasts, and the debate concerning Threshold Land Value. The empirical section reports on the results of an interview survey involving the main producers of development viability models and appraisals. It is concluded that, although development viability models have intrinsic limitations associated with model composition and input uncertainties, the most significant limitations are related to the ways that they have been adapted for use in the planning system. In addition, it is suggested that the contested nature of Threshold Land Value is an example of calculative practices providing a façade of technocratic rationality in the planning system.
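For readers unfamiliar with development viability models, a minimal residual-appraisal sketch follows: the scheme is deemed viable when the residual land value exceeds the Threshold Land Value. All figures and rate conventions (fees, finance charged on half of costs, profit as a share of value) are illustrative assumptions, not values from the study.

```python
# Illustrative residual appraisal of the kind embedded in viability models.
def residual_land_value(gdv, build_cost, fees_rate=0.10,
                        finance_rate=0.07, profit_rate=0.20):
    """Residual = gross development value minus build costs, fees,
    finance and developer's profit; the remainder is what the scheme
    can afford to pay for the land."""
    fees = build_cost * fees_rate
    finance = (build_cost + fees) * finance_rate / 2  # half-cost convention
    profit = gdv * profit_rate
    return gdv - build_cost - fees - finance - profit

rlv = residual_land_value(gdv=25_000_000, build_cost=14_000_000)
threshold_land_value = 6_500_000   # benchmark the landowner would accept
print(rlv, rlv > threshold_land_value)  # scheme is 'viable' if True
```

The paper's point about contestation follows directly: small changes to the assumed scheme, forecast values, or the threshold itself can flip the viability verdict.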
Abstract:
Alverata: a typeface design for Europe. This typeface is a response to the extraordinarily diverse forms of letters of the Latin alphabet in manuscripts and inscriptions of the Romanesque period (c. 1000–1200). While the Romanesque did provide inspiration for architectural lettering in the nineteenth century, these letterforms have not until now been systematically considered and redrawn as a working typeface. The defining characteristic of the Romanesque letterform is variety: within an individual inscription or written text, letters such as A, C, E and G might appear with a different form at each appearance. Some of these forms relate to earlier Roman inscriptional forms and are therefore familiar to us, but others are highly geometric and resemble insular and uncial forms. The research underlying the typeface involved the collection of a large number of references for lettering of this period, from library research and direct on-site investigation. This investigation traced the wide dispersal of the Romanesque lettering tradition across the whole of Europe. The variety of letter widths and weights encountered, as well as the variant shapes for individual letters, offered both direct models and stylistic inspiration for the characters and for the width and weight variants of the typeface. The ability of the OpenType format to handle multiple stylistic variants of any one character has been exploited to reflect the multiplicity of forms available to stonecutters and scribes of the period. To make a typeface that functions in a contemporary environment, a lower case has been added, and formal and informal variants are supported. The pan-European nature of the Romanesque design tradition has inspired a pan-European approach to the character set of the typeface, allowing for text composition in all European languages, and the typeface has been extended into Greek and Cyrillic, so that the broadest representation of European languages can be achieved.
Abstract:
It is becoming increasingly important to be able to verify the spatial accuracy of precipitation forecasts, especially with the advent of high-resolution numerical weather prediction (NWP) models. In this article, the fractions skill score (FSS) approach has been used to perform a scale-selective evaluation of precipitation forecasts during 2003 from the Met Office mesoscale model (12 km grid length). The investigation shows how skill varies with spatial scale, the scales over which the data assimilation (DA) adds most skill, and how the loss of that skill is dependent on both the spatial scale and the rainfall coverage being examined. Although these results come from a specific model, they demonstrate how this verification approach can provide a quantitative assessment of the spatial behaviour of new finer-resolution models and DA techniques.
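A compact sketch of the FSS calculation for one rainfall threshold and a range of neighbourhood sizes, following the standard fractions formulation; the synthetic fields below (observed rain and a spatially displaced copy as the 'forecast') are invented to show skill increasing with scale.

```python
# Fractions skill score (FSS) for one threshold and neighbourhood size.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """FSS = 1 - MSE(fractions) / MSE_reference, computed on the
    fraction of grid points exceeding `threshold` within each
    `window` x `window` neighbourhood."""
    pf = uniform_filter((forecast >= threshold).astype(float), size=window)
    po = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
obs = rng.gamma(0.4, 3.0, size=(200, 200))   # synthetic rain field
fcst = np.roll(obs, 5, axis=1)               # same rain, displaced 5 cells
for window in (1, 5, 11, 21):
    print(window, round(fss(fcst, obs, threshold=4.0, window=window), 3))
```

A displaced but otherwise perfect forecast scores poorly at grid scale and increasingly well at larger neighbourhoods, which is exactly the scale-selective behaviour the verification approach is designed to expose.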
Abstract:
The performance of 18 coupled Chemistry Climate Models (CCMs) in the Tropical Tropopause Layer (TTL) is evaluated using qualitative and quantitative diagnostics. Trends in tropopause quantities in the tropics and the extratropical Upper Troposphere and Lower Stratosphere (UTLS) are analyzed. A quantitative grading methodology for evaluating CCMs is extended to include variability and used to develop four different grades for tropical tropopause temperature and pressure, water vapor and ozone. Four of the 18 models and the multi-model mean meet quantitative and qualitative standards for reproducing key processes in the TTL. Several diagnostics are performed on a subset of the models analyzing the Tropopause Inversion Layer (TIL), Lagrangian cold point and TTL transit time. Historical decreases in tropical tropopause pressure and decreases in water vapor are simulated, lending confidence to future projections. The models simulate continued decreases in tropopause pressure in the 21st century, along with ∼1K increases per century in cold point tropopause temperature and 0.5–1 ppmv per century increases in water vapor above the tropical tropopause. TTL water vapor increases below the cold point. In two models, these trends are associated with 35% increases in TTL cloud fraction. These changes indicate significant perturbations to TTL processes, specifically to deep convective heating and humidity transport. Ozone in the extratropical lowermost stratosphere has significant and hemispheric asymmetric trends. O3 is projected to increase by nearly 30% due to ozone recovery in the Southern Hemisphere (SH) and due to enhancements in the stratospheric circulation. These UTLS ozone trends may have significant effects in the TTL and the troposphere.
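One common form of such a quantitative grade compares the model mean (and, as the extension mentioned above, its variability) with observations, normalized by observed variability and floored at zero. The scale factor `ng` and the sample values below are illustrative assumptions, not the paper's exact metric.

```python
# Illustrative quantitative grades for ranking models against observations.
import numpy as np

def grade_mean(model, obs, ng=3.0):
    """1 = perfect agreement; 0 = mean bias of ng observed std devs."""
    g = 1.0 - abs(np.mean(model) - np.mean(obs)) / (ng * np.std(obs))
    return max(g, 0.0)

def grade_variability(model, obs, ng=3.0):
    """Extension to variability: compare standard deviations instead."""
    g = 1.0 - abs(np.std(model) - np.std(obs)) / (ng * np.std(obs))
    return max(g, 0.0)

obs = np.array([190.5, 191.2, 190.1, 192.0, 191.4])  # cold-point T (K), invented
mod = np.array([191.8, 192.5, 191.0, 193.1, 192.2])
print(grade_mean(mod, obs), grade_variability(mod, obs))
```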
Abstract:
The response of stratospheric climate and circulation to increasing amounts of greenhouse gases (GHGs) and ozone recovery in the twenty-first century is analyzed in simulations of 11 chemistry–climate models using near-identical forcings and experimental setup. In addition to an overall global cooling of the stratosphere in the simulations (0.59 ± 0.07 K decade⁻¹ at 10 hPa), ozone recovery causes a warming of the Southern Hemisphere polar lower stratosphere in summer with enhanced cooling above. The rate of warming correlates with the rate of ozone recovery projected by the models and, on average, changes from 0.8 to 0.48 K decade⁻¹ at 100 hPa as the rate of recovery declines from the first to the second half of the century. In the winter northern polar lower stratosphere the increased radiative cooling from the growing abundance of GHGs is, in most models, balanced by adiabatic warming from stronger polar downwelling. In the Antarctic lower stratosphere the models simulate an increase in low temperature extremes required for polar stratospheric cloud (PSC) formation, but the positive trend is decreasing over the twenty-first century in all models. In the Arctic, none of the models simulates a statistically significant increase in Arctic PSCs throughout the twenty-first century. The subtropical jets accelerate in response to climate change, and the ozone recovery produces a westward acceleration of the lower-stratospheric wind over the Antarctic during summer, though this response is sensitive to the rate of recovery projected by the models. There is a strengthening of the Brewer–Dobson circulation throughout the depth of the stratosphere, which reduces the mean age of air nearly everywhere at a rate of about 0.05 yr decade⁻¹ in those models with this diagnostic. On average, the annual mean tropical upwelling in the lower stratosphere (∼70 hPa) increases by almost 2% decade⁻¹, with 59% of this trend forced by the parameterized orographic gravity wave drag in the models. This is a consequence of the eastward acceleration of the subtropical jets, which increases the upward flux of (parameterized) momentum reaching the lower stratosphere in these latitudes.
Abstract:
The quasi-biennial oscillation (QBO) in the equatorial zonal wind is an outstanding phenomenon of the atmosphere. The QBO is driven by a broad spectrum of waves excited in the tropical troposphere and modulates transport and mixing of chemical compounds in the whole middle atmosphere. Therefore, the simulation of the QBO in general circulation models and chemistry climate models is an important issue. Here, aspects of the climatology and forcing of a spontaneously occurring QBO in a middle-atmosphere model are evaluated, and its influence on the climate and variability of the tropical middle atmosphere is investigated. Westerly and easterly phases are considered separately, and 40-yr ECMWF Re-Analysis (ERA-40) data are used as a reference where appropriate. It is found that the simulated QBO is realistic in many details. Resolved large-scale waves are particularly important for the westerly phase, while parameterized gravity wave drag is more important for the easterly phase. Advective zonal wind tendencies are important for asymmetries between westerly and easterly phases, as found for the suppression of the easterly phase downward propagation. The simulation of the QBO improves the tropical upwelling and the atmospheric tape recorder compared to a model without a QBO. The semiannual oscillation is simulated realistically only if the QBO is represented. In sensitivity tests, it is found that the simulated QBO is strongly sensitive to changes in the gravity wave sources. The sensitivity to the tested range of horizontal resolutions is small. The stratospheric vertical resolution must be better than 1 km to simulate a realistic QBO.
Abstract:
The Köppen climate classification was applied to the output of atmospheric general circulation models and coupled atmosphere–ocean circulation models. The classification was used to validate model control runs of the present climate and to analyse greenhouse gas warming simulations. The most prominent results of the global warming computations were a retreat of regions of permafrost and an increase in the areas with tropical rainy climates and dry climates.
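A highly simplified Köppen-style classifier for the five main groups, of the kind applied to model output here; the thresholds follow common conventions (the full scheme has many more subtypes and seasonality rules), and the grid-cell data are invented to show a permafrost-type (E) cell reclassifying under warming.

```python
# Simplified Koppen main-group classifier for monthly model output.
def koppen_group(t_monthly_c, p_monthly_mm):
    """Return the main Koppen group (A/B/C/D/E) for one grid cell."""
    t_min, t_max = min(t_monthly_c), max(t_monthly_c)
    mat = sum(t_monthly_c) / 12.0               # mean annual temperature
    map_mm = sum(p_monthly_mm)                  # annual precipitation
    if t_max < 10.0:
        return "E"                              # polar (permafrost zones)
    if map_mm < 20.0 * mat + 140.0:             # even-rainfall dryness form
        return "B"                              # arid / semi-arid
    if t_min >= 18.0:
        return "A"                              # tropical rainy
    if t_min > -3.0:                            # 0 degC in some conventions
        return "C"                              # temperate
    return "D"                                  # cold / snow

t = [-25, -23, -18, -10, -2, 4, 8, 6, 1, -8, -17, -22]  # monthly T (degC)
p = [10, 10, 12, 15, 25, 40, 50, 45, 30, 20, 15, 12]    # monthly P (mm)
print(koppen_group(t, p))                        # control run: "E"
print(koppen_group([x + 4 for x in t], p))       # warmed run: shifts to "D"
```

Running such a classifier over every grid cell of a control and a warming simulation, then differencing the resulting maps, is exactly how climate-zone retreat and expansion are diagnosed.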
Abstract:
We discuss the modeling of dielectric responses of electromagnetically excited networks composed of a mixture of capacitors and resistors. Such networks can be employed as lumped-parameter circuits to model the response of composite materials containing conductive and insulating grains. The dynamics of the excited network systems are studied using a state space model derived from a randomized incidence matrix. Time and frequency domain responses from synthetic data sets generated from the state space models are analyzed for the purpose of estimating the fraction of capacitors in the network. Good results were obtained by using either the time-domain response to a pulse excitation or impedance data at selected frequencies. A chemometric framework based on the Successive Projections Algorithm (SPA) enables the construction of multiple linear regression (MLR) models which can efficiently determine the ratio of conductive to insulating components in composite material samples. The proposed method avoids restrictions commonly associated with Archie's law, the application of percolation theory, or Kohlrausch-Williams-Watts models, and is applicable to experimental results generated by either time-domain transient spectrometers or continuous-wave instruments. Furthermore, it is quite generic, being applicable to tomography and acoustics as well as to other spectroscopies such as nuclear magnetic resonance and electron paramagnetic resonance, and should therefore be of general interest across the dielectrics community.
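A hedged sketch of the chemometric idea: fit an MLR that maps impedance magnitudes at a few selected frequencies to the capacitor fraction. The toy parallel R-C surrogate below stands in for the paper's randomized incidence-matrix networks, and plain least squares stands in for SPA-based variable selection; all component values and frequencies are invented.

```python
# MLR calibration: impedance at selected frequencies -> capacitor fraction.
import numpy as np

rng = np.random.default_rng(42)
freqs = np.array([1e2, 1e3, 1e4, 1e5])           # selected frequencies (Hz)

def toy_impedance(frac_c, f):
    """|Z| of a crude parallel R-C surrogate whose balance tracks frac_c."""
    r = 1e4 * (1.0 - frac_c) + 1e2               # resistive branch (ohm)
    c = 1e-9 + 1e-7 * frac_c                     # capacitive branch (F)
    return abs(1.0 / (1.0 / r + 1j * 2.0 * np.pi * f * c))

# Training set: known capacitor fractions -> log-impedance 'spectra'
fractions = rng.uniform(0.1, 0.9, size=50)
features = np.log10([[toy_impedance(fc, f) for f in freqs] for fc in fractions])
X = np.column_stack([np.ones(len(fractions)), features])
beta, *_ = np.linalg.lstsq(X, fractions, rcond=None)  # least-squares MLR fit

# Predict the fraction for an unseen sample
x_new = np.concatenate(([1.0], np.log10([toy_impedance(0.4, f) for f in freqs])))
print(round(float(x_new @ beta), 2))             # should land near 0.4
```

In the paper's framework, SPA would choose which frequencies enter the design matrix so as to minimize collinearity; the regression step itself is the ordinary MLR shown here.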