102 results for Ecosystem-level models
Abstract:
Mathematical models have been devoted to different aspects of building studies and have brought about a significant shift in the way we view buildings. From this background a new definition of building has emerged, known as the intelligent building, which requires the integration of a variety of complex computer-based systems. Research relevant to intelligent buildings continues to grow at a fast pace. This paper reviews the different mathematical models described in the literature, which make use of different mathematical methodologies and are intended for intelligent building studies, without going into complex mathematical details. Models are discussed under a broad classification, and the mathematical abstraction level of each applied model is detailed and set in the context of its literature. The goal of this paper is to present a comprehensive account of the achievements and status of mathematical models in intelligent building research, and to suggest future directions for such models.
Abstract:
The present study examined the opinions of in-service and prospective chemistry teachers about the importance of using molecular and crystal models in secondary-school practice, and investigated some of the reasons for their (non-)use. The majority of participants stated that the use of models plays an important role in chemistry education and that they would use them more often if the circumstances were more favourable. Many teachers claimed that three-dimensional (3D) models are still not available in sufficient numbers at their schools; they also pointed to the lack of available computer facilities during chemistry lessons. The research revealed that, besides the inadequate material circumstances, less than one third of participants are able to use simple (freeware) computer programs for drawing molecular structures and presenting them in virtual space; however, both groups of teachers expressed a willingness to improve their knowledge in this subject area. The investigation points to several actions which could be undertaken to improve the current situation.
Abstract:
E-learning systems are becoming increasingly important in higher education. However, the state of the art of e-learning applications, as well as the state of the practice, does not achieve the level of interactivity that current learning theories advocate. In this paper, the possibility of enhancing e-learning systems to achieve deep learning has been studied by replicating an experiment in which students had to learn basic software engineering principles. One group learned these principles using a static approach, while the other group learned the same principles using a system-dynamics-based approach, which provided interactivity and feedback. The results show that, quantitatively, the latter group achieved a better understanding of the principles; furthermore, qualitatively, they enjoyed the learning experience.
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt at bridging the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines the collateral knowledge extracted from the texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
Abstract:
Assimilation of physical variables into coupled physical/biogeochemical models poses considerable difficulties. One problem is that data assimilation can break relationships between physical and biological variables. As a consequence, biological tracers, especially nutrients, are incorrectly displaced in the vertical, resulting in unrealistic biogeochemical fields. To prevent this, we present the idea of applying an increment to the nutrient field within a data assimilating model to ensure that nutrient-potential density relationships are maintained within a water column during assimilation. After correcting the nutrients, it is assumed that other biological variables rapidly adjust to the corrected nutrient fields. We applied this method to a 17-year run of the 2° NEMO ocean-ice model coupled to the PlankTOM5 ecosystem model. Results were compared with a control with no assimilation, and with a model with physical assimilation but no nutrient increment. In the nutrient incrementing experiment, phosphate distributions were improved both at high latitudes and at the equator. At midlatitudes, assimilation generated unrealistic advective upwelling of nutrients within the boundary currents, which spread into the subtropical gyres resulting in more biased nutrient fields. This result was largely unaffected by the nutrient increment and is probably due to boundary currents being poorly resolved in a 2° model. Changes to nutrient distributions fed through into other biological parameters altering primary production, air-sea CO2 flux, and chlorophyll distributions. These secondary changes were most pronounced in the subtropical gyres and at the equator, which are more nutrient limited than high latitudes.
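As an illustration only (not the authors' code, and with hypothetical variable names), the core of the nutrient-incrementing idea can be sketched as follows: the pre-assimilation nutrient-potential density relationship of a water column is stored and then re-evaluated at the post-assimilation densities, so each level keeps the nutrient concentration associated with its density.

```python
import numpy as np

def nutrient_increment(sigma_before, sigma_after, nutrient_before):
    """Sketch: preserve a water column's nutrient-potential density
    relationship across a physics assimilation step.

    sigma_before, sigma_after : potential density per level (kg m-3),
                                before and after assimilation
    nutrient_before           : nutrient (e.g. phosphate) per level
    Returns the incremented nutrient profile and the increment itself.
    """
    # Pre-assimilation nutrient(sigma) relationship, sorted for interpolation.
    order = np.argsort(sigma_before)
    # Re-evaluate that relationship at the post-assimilation densities, so each
    # level carries the nutrient its new density had before assimilation.
    nutrient_after = np.interp(sigma_after,
                               sigma_before[order],
                               nutrient_before[order])
    return nutrient_after, nutrient_after - nutrient_before
```

Other biological tracers are then left to adjust to the incremented nutrient field, as assumed in the abstract.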
Abstract:
The ability of four operational weather forecast models [ECMWF, Action de Recherche Petite Echelle Grande Echelle model (ARPEGE), Regional Atmospheric Climate Model (RACMO), and Met Office] to generate a cloud at the right location and time (the cloud frequency of occurrence) is assessed in the present paper using a two-year time series of observations collected by profiling ground-based active remote sensors (cloud radar and lidar) located at three different sites in western Europe (Cabauw, Netherlands; Chilbolton, United Kingdom; and Palaiseau, France). Particular attention is given to potential biases that may arise from instrumentation differences (especially sensitivity) from one site to another and from intermittent sampling. In a second step the statistical properties of the cloud variables involved in most advanced cloud schemes of numerical weather forecast models (ice water content and cloud fraction) are characterized and compared with their counterparts in the models. The two years of observations are first considered as a whole in order to evaluate the accuracy of the statistical representation of the cloud variables in each model. It is shown that all models tend to produce too many high-level clouds, with too-high cloud fraction and ice water content. The midlevel and low-level cloud occurrence is also generally overestimated, with too-low cloud fraction but a correct ice water content. The dataset is then divided into seasons to evaluate the potential of the models to generate different cloud situations in response to different large-scale forcings. Strong variations in cloud occurrence are found in the observations from one season to the same season the following year as well as in the seasonal cycle. Overall, the model biases observed using the whole dataset are still found at the seasonal scale, but the models generally manage to reproduce well the observed seasonal variations in cloud occurrence. Overall, the models do not generate the same cloud fraction distributions, and these distributions do not agree with the observations. Another general conclusion is that the use of continuous ground-based radar and lidar observations is definitely a powerful tool for evaluating model cloud schemes and for a responsive assessment of the benefit achieved by changing or tuning a model cloud scheme.
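For readers unfamiliar with the diagnostic, a minimal sketch of how a cloud frequency of occurrence could be computed from a time-height cloud mask is given below; this is illustrative only, not the processing described in the paper, and it assumes a simple 0/1 mask with NaN marking times when the instruments were not sampling.

```python
import numpy as np

def cloud_frequency_of_occurrence(cloud_mask):
    """Illustrative: fraction of observed profiles with cloud at each height.

    cloud_mask : 2-D array (time x height), 1 where cloud is detected,
                 0 where clear, NaN where the instruments were not sampling.
    """
    sampled = ~np.isnan(cloud_mask)            # profiles actually observed
    detected = np.nan_to_num(cloud_mask) > 0   # cloud detections
    n_sampled = sampled.sum(axis=0)
    return np.where(n_sampled > 0,
                    detected.sum(axis=0) / np.maximum(n_sampled, 1),
                    np.nan)
```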
Abstract:
In addition to projected increases in global mean sea level over the 21st century, model simulations suggest there will also be changes in the regional distribution of sea level relative to the global mean. There is a considerable spread in the projected patterns of these changes by current models, as shown by the recent Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment (AR4). This spread has not reduced from that given by the Third Assessment models. Comparison with projections by ensembles of models based on a single structure supports an earlier suggestion that models of similar formulation give more similar patterns of sea level change. Analysing an AR4 ensemble of model projections under a business-as-usual scenario shows that steric changes (associated with subsurface ocean density changes) largely dominate the sea level pattern changes. The relative importance of subsurface temperature or salinity changes in contributing to this differs from region to region and, to an extent, from model to model. In general, thermosteric changes account for the spatial variations in the Southern Ocean, halosteric changes dominate in the Arctic, and strong compensation between thermosteric and halosteric changes characterises the Atlantic. The magnitudes of sea level and component changes in the Atlantic appear to be linked to the amount of Atlantic meridional overturning circulation (MOC) weakening. When the MOC weakening is substantial, the Atlantic thermosteric patterns of change arise from a dominant role of ocean advective heat flux changes.
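The thermosteric/halosteric decomposition referred to above follows the standard steric relation Δη ≈ ∫(αΔT − βΔS) dz. A minimal sketch for a single water column, with illustrative variable names and not taken from the paper, is:

```python
import numpy as np

def steric_components(dT, dS, alpha, beta, dz):
    """Illustrative steric sea level decomposition for one water column.

    dT, dS      : temperature (K) and salinity changes per level
    alpha, beta : thermal expansion (1/K) and haline contraction coefficients
    dz          : layer thicknesses (m)
    Returns (thermosteric, halosteric, total steric) height changes in metres.
    """
    thermosteric = np.sum(alpha * dT * dz)   # warming    -> expansion -> rise
    halosteric = -np.sum(beta * dS * dz)     # freshening -> expansion -> rise
    return thermosteric, halosteric, thermosteric + halosteric
```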
Abstract:
Sea-level rise is an important aspect of climate change because of its impact on society and ecosystems. Here we present an intercomparison of results from ten coupled atmosphere-ocean general circulation models (AOGCMs) for sea-level changes simulated for the twentieth century and projected to occur during the twenty-first century in experiments following scenario IS92a for greenhouse gases and sulphate aerosols. The model results suggest that the rate of sea-level rise due to thermal expansion of sea water has increased during the twentieth century, but the small set of tide gauges with long records might not be adequate to detect this acceleration. The rate of sea-level rise due to thermal expansion continues to increase throughout the twenty-first century, and the projected total is consequently larger than in the twentieth century; for 1990-2090 it amounts to 0.20-0.37 m. This wide range results from systematic uncertainty in the modelling of climate change and of heat uptake by the ocean. The AOGCMs agree that sea-level rise is expected to be geographically non-uniform, with some regions experiencing as much as twice the global average, and others practically zero, but they do not agree about the geographical pattern. The lack of agreement indicates that we cannot currently have confidence in projections of local sea-level changes, and reveals a need for detailed analysis and intercomparison in order to understand and reduce the disagreements.
Abstract:
The present study investigates the initiation of precipitating deep convection in an ensemble of convection-resolving mesoscale models. Results of eight different model runs from five non-hydrostatic models are compared for a case of the Convective and Orographically-induced Precipitation Study (COPS). An isolated convective cell initiated east of the Black Forest crest in southwest Germany, although convective available potential energy was only moderate and convective inhibition was high. Measurements revealed that, due to the absence of synoptic forcing, convection was initiated by local processes related to the orography. In particular, the lifting by low-level convergence in the planetary boundary layer is assumed to be the dominant process on that day. The models used different configurations as well as different initial and boundary conditions. By comparing the performance of the different models with each other and with measurements, the processes which need to be well represented to initiate convection at the right place and time are discussed. Besides an accurate specification of the thermodynamic and kinematic fields, the results highlight the role of boundary-layer convergence features for quantitative precipitation forecasts in mountainous terrain.
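As a purely illustrative aside (not part of the study), the low-level convergence invoked above is the quantity −(∂u/∂x + ∂v/∂y); a minimal sketch of its computation on a regular grid, with hypothetical inputs, is:

```python
import numpy as np

def low_level_convergence(u, v, dx, dy):
    """Illustrative: horizontal convergence -(du/dx + dv/dy) on a regular grid.

    u, v   : 2-D wind components (m/s), shape (ny, nx)
    dx, dy : grid spacings (m)
    Positive values indicate convergence, i.e. implied boundary-layer lifting.
    """
    dudx = np.gradient(u, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    return -(dudx + dvdy)
```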
Abstract:
Classical measures of network connectivity are the number of disjoint paths between a pair of nodes and the size of a minimum cut. For standard graphs, these measures can be computed efficiently using network flow techniques. However, in the Internet on the level of autonomous systems (ASs), referred to as the AS-level Internet, routing policies impose restrictions on the paths that traffic can take in the network. These restrictions can be captured by the valley-free path model, which assumes a special directed graph model in which edge types represent relationships between ASs. We consider the adaptation of the classical connectivity measures to the valley-free path model, where it is NP-hard to compute them. Our first main contribution consists of presenting algorithms for the computation of disjoint paths and minimum cuts in the valley-free path model. These algorithms are useful for ASs that want to evaluate different options for selecting upstream providers to improve the robustness of their connection to the Internet. Our second main contribution is an experimental evaluation of our algorithms on four types of directed graph models of the AS-level Internet produced by different inference algorithms. Most importantly, the evaluation shows that our algorithms are able to compute optimal solutions to realistically sized instances of the connectivity problems in the valley-free path model in reasonable time. Furthermore, our experimental results provide information about the characteristics of the directed graph models of the AS-level Internet produced by different inference algorithms. It turns out that (i) we can quantify the difference between the undirected AS-level topology and the directed graph models with respect to fundamental connectivity measures, and (ii) the different inference algorithms yield topologies that are similar with respect to connectivity and are different with respect to the types of paths that exist between pairs of ASs.
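For context, the valley-free constraint mentioned above restricts an AS path to zero or more customer-to-provider edges, at most one peer-to-peer edge, and then zero or more provider-to-customer edges. A minimal sketch of this check (not the authors' algorithms, which compute disjoint paths and cuts under this constraint) is:

```python
def is_valley_free(edge_types):
    """Check the valley-free property of an AS path.

    edge_types: edge labels along the path, each one of
      'c2p' (customer to provider), 'p2p' (peer to peer),
      'p2c' (provider to customer).
    A valley-free path has the form c2p* (p2p)? p2c*: once the path has gone
    over the top (a p2p or p2c edge), no c2p edge or second p2p edge may follow.
    """
    descended = False  # True once a p2p or p2c edge has been traversed
    for t in edge_types:
        if t in ('c2p', 'p2p'):
            if descended:
                return False
            descended = (t == 'p2p')
        elif t == 'p2c':
            descended = True
        else:
            raise ValueError(f"unknown edge type: {t}")
    return True

# is_valley_free(['c2p', 'c2p', 'p2p', 'p2c'])  -> True  (up, over, down)
# is_valley_free(['p2c', 'c2p'])                -> False (a "valley")
```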
Abstract:
In this contribution we aim at anchoring Agent-Based Modeling (ABM) simulations in actual models of human psychology. More specifically, we apply unidirectional ABM to social psychological models using low-level agents (i.e., intra-individual) to examine whether they generate better predictions of the intention to perform a behavior, and of the behavior itself, than standard statistical approaches. Moreover, this contribution tests to what extent the predictive validity of models of attitude such as the Theory of Planned Behavior (TPB) or the Model of Goal-directed Behavior (MGB) depends on the assumption that people's decisions and actions are purely rational. Simulations were therefore run with different degrees of deviation from rationality in the agents, implemented with a trembling-hand method. Two data sets, concerning the consumption of soft drinks and physical activity respectively, were used. Three key findings emerged from the simulations. First, compared to the standard statistical approach, the agent-based simulation generally improves the prediction of behavior from intention. Second, the improvement in prediction is inversely proportional to the complexity of the underlying theoretical model. Finally, the introduction of varying degrees of deviation from rationality in agents' behavior can lead to an improvement in the goodness of fit of the simulations. By demonstrating the potential of ABM as a complementary perspective for evaluating social psychological models, this contribution underlines the necessity of better defining agents in terms of psychological processes before examining higher levels such as the interactions between individuals.
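As an illustration of the trembling-hand deviation from rationality mentioned above (a sketch with hypothetical names, not the authors' simulation code): with probability epsilon the agent fails to act on its intention and instead picks an action at random.

```python
import random

def trembling_hand_choice(intended_action, actions, epsilon, rng=random):
    """With probability (1 - epsilon) the agent performs its intended action;
    with probability epsilon its 'hand trembles' and it instead picks
    uniformly among the available actions."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return intended_action

# Hypothetical use: an agent intending to exercise, with a 10% tremble.
# trembling_hand_choice("exercise", ["exercise", "no_exercise"], epsilon=0.1)
```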
Abstract:
There is intense scientific and public interest in the Intergovernmental Panel on Climate Change (IPCC) projections of sea level for the twenty-first century and beyond. The Fourth Assessment Report (AR4) projections, obtained by applying standard methods to the results of the World Climate Research Programme Coupled Model Experiment, include estimates of ocean thermal expansion, the melting of glaciers and ice caps (G&ICs), increased melting of the Greenland Ice Sheet, and increased precipitation over Greenland and Antarctica, partially offsetting other contributions. The AR4 recognized the potential for a rapid dynamic ice sheet response, but robust methods for quantifying it were not available. Illustrative scenarios suggested additional sea level rise on the order of 10 to 20 cm or more, giving a wide range in the globally averaged projections of about 20 to 80 cm by 2100. Currently, sea level is rising at a rate near the upper end of these projections. Since publication of the AR4 in 2007, biases in historical ocean temperature observations have been identified and significantly reduced, resulting in improved estimates of ocean thermal expansion. Models that include all climate forcings are in good agreement with these improved observations and indicate the importance of stratospheric aerosol loadings from volcanic eruptions. Estimates of the volumes of G&ICs and their contributions to sea level rise have improved. Results from recent (but possibly incomplete) efforts to develop improved ice sheet models should be available for the 2013 IPCC projections. Improved understanding of sea level rise is paving the way for using observations to constrain projections. Understanding of the regional variations in sea level change as a result of changes in ocean properties, wind-stress patterns, and heat and freshwater inputs into the ocean is improving. Recently, estimates of sea level changes resulting from changes in Earth's gravitational field and the solid Earth response to changes in surface loading have been included in regional projections. While potentially valuable, semi-empirical models have important limitations, and their projections should be treated with caution.
Abstract:
Tropical cyclones (TCs) are normally not studied at the individual level with Global Climate Models (GCMs), because the coarse grid spacing is often deemed insufficient for a realistic representation of the basic underlying processes. GCMs are indeed routinely deployed at low resolution in order to enable sufficiently long integrations, which means that only large-scale TC proxies are diagnosed. A new class of GCMs is emerging, however, which is capable of simulating TC-type vortices while retaining a horizontal resolution similar to that of operational NWP GCMs; running them on the latest supercomputers enables the completion of long-term integrations. The UK-Japan Climate Collaboration and the UK-HiGEM projects have developed climate GCMs which can be run routinely for decades (with a grid spacing of 60 km) or centuries (with a grid spacing of 90 km); when coupled to the ocean GCM, its 1/3 degree mesh provides eddy-permitting resolution. The 90 km resolution model has been developed entirely by the UK-HiGEM consortium (together with its 1/3 degree ocean component); the 60 km atmospheric GCM has been developed by UJCC in collaboration with the Met Office Hadley Centre.
Abstract:
Patterns of forest cover and forest degradation determine the size and types of ecosystem services forests provide. Particularly in low-income countries, nontimber forest product (NTFP) extraction by rural people, which provides important resources and income to the rural poor, contributes to the level and pattern of forest degradation. Although recent policy, particularly in Africa, emphasizes forest degradation, relatively little research describes the spatial aspects of NTFP collection that lead to spatial degradation patterns. This paper reviews both the spatial empirical work on NTFP extraction and related forest degradation patterns, and spatial models of behavior of rural people who extract NTFPs from forest. Despite the impact of rural people's behavior on resulting quantities and patterns of forest resources, spatial–temporal models/patterns rarely inform park siting and sizing decisions, econometric assessments of park effectiveness, development projects to support conservation, or REDD protocols. Using the literature review as a lens, we discuss the models' implications for these policies with particular emphasis on effective conservation spending and leakage.
Abstract:
Large-scale bottom-up estimates of terrestrial carbon fluxes, whether based on models or inventory, are highly dependent on the assumed land cover. Most current land cover and land cover change maps are based on satellite data and are likely to be so for the foreseeable future. However, these maps show large differences, both at the class level and when transformed into Plant Functional Types (PFTs), and these can lead to large differences in terrestrial CO2 fluxes estimated by Dynamic Vegetation Models. In this study the Sheffield Dynamic Global Vegetation Model is used. We compare PFT maps and the resulting fluxes arising from the use of widely available moderate (1 km) resolution satellite-derived land cover maps (the Global Land Cover 2000 and several MODIS classification schemes), with fluxes calculated using a reference high (25 m) resolution land cover map specific to Great Britain (the Land Cover Map 2000). We demonstrate that uncertainty is introduced into carbon flux calculations by (1) incorrect or uncertain assignment of land cover classes to PFTs; (2) information loss at coarser resolutions; (3) difficulty in discriminating some vegetation types from satellite data. When averaged over Great Britain, modeled CO2 fluxes derived using the different 1 km resolution maps differ from estimates made using the reference map. The ranges of these differences are 254 gC m−2 a−1 in Gross Primary Production (GPP); 133 gC m−2 a−1 in Net Primary Production (NPP); and 43 gC m−2 a−1 in Net Ecosystem Production (NEP). In GPP this accounts for differences of −15.8% to 8.8%. Results for living biomass exhibit a range of 1109 gC m−2. The types of uncertainties due to land cover confusion are likely to be representative of many parts of the world, especially heterogeneous landscapes such as those found in western Europe.
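As a hedged illustration of the kind of calculation this abstract describes (the class-to-PFT cross-walk below is hypothetical, not the actual GLC2000, MODIS, or LCM2000 tables): land cover class areas are converted to PFT fractions and a flux such as GPP is then area-weighted, so two maps that assign classes differently yield different flux totals.

```python
# Hypothetical cross-walk from land cover classes to Plant Functional Type
# fractions (each row sums to 1); real class-to-PFT tables are map specific.
CLASS_TO_PFT = {
    "broadleaf_forest": {"broadleaf_tree": 0.9, "grass": 0.1},
    "grassland":        {"grass": 1.0},
    "cropland":         {"crop": 1.0},
}

def pft_fractions(class_areas):
    """Aggregate per-class areas (e.g. km2) into PFT area fractions."""
    total = sum(class_areas.values())
    pfts = {}
    for cls, area in class_areas.items():
        for pft, frac in CLASS_TO_PFT[cls].items():
            pfts[pft] = pfts.get(pft, 0.0) + frac * area / total
    return pfts

def area_weighted_flux(pft_fracs, flux_per_pft):
    """Area-weighted flux (e.g. GPP in gC m-2 a-1) for a mixture of PFTs."""
    return sum(frac * flux_per_pft.get(pft, 0.0)
               for pft, frac in pft_fracs.items())
```

Feeding the same per-PFT flux table through two cross-walks that assign the land cover classes differently yields area-averaged flux differences of the kind quoted in the abstract.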