998 results for Variability Modeling
Abstract:
The main biogeochemical nutrient distributions, along with ambient ocean temperature and the light field, control ocean biological productivity. Observations of nutrients are much sparser than physical observations of temperature and salinity, yet it is critical to validate biogeochemical models against these sparse observations if we are to successfully model biological variability and trends. Here we use data from the Bermuda Atlantic Time-series Study and the World Ocean Database 2005 to demonstrate quantitatively that over the entire globe a significant fraction of the temporal variability of phosphate, silicate and nitrate within the oceans is correlated with water density. The temporal variability of these nutrients as a function of depth is almost always greater than as a function of potential density, with the largest reductions in variability found within the main pycnocline. The greater nutrient variability as a function of depth occurs when dynamical processes vertically displace nutrient and density fields together on shorter timescales than biological adjustments. These results show that dynamical processes can have a significant impact on the instantaneous nutrient distributions. These processes must therefore be considered when modeling biogeochemical systems, when comparing such models with observations, or when assimilating data into such models.
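To make the depth-versus-density comparison above concrete, here is a minimal Python sketch using synthetic profiles (not BATS or World Ocean Database data): a logistic nitrate and density profile is heaved vertically, and the temporal variance is computed on fixed depth levels and on fixed potential-density levels. The profile shapes, amplitudes and names are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Synthetic illustration: mean nitrate and potential-density profiles displaced
# vertically by internal waves/eddies, so nutrient and density move together.
rng = np.random.default_rng(0)
z = np.linspace(0, 1000, 201)                                   # depth (m)
nitrate_of_z = lambda zz: 30.0 / (1.0 + np.exp(-(zz - 400.0) / 120.0))          # umol/kg
sigma_of_z   = lambda zz: 25.0 + 2.5 / (1.0 + np.exp(-(zz - 400.0) / 120.0))    # sigma-theta

n_casts = 200
heave = rng.normal(0.0, 60.0, n_casts)                          # vertical displacement per cast (m)
nitrate = np.array([nitrate_of_z(z - h) for h in heave])        # shape (cast, depth)
sigma   = np.array([sigma_of_z(z - h) for h in heave])

# Temporal variance at fixed depth levels
var_depth = nitrate.var(axis=0)

# Temporal variance at fixed potential-density levels: re-grid each cast onto common sigma levels
sigma_levels = np.linspace(25.3, 27.2, 60)
nitrate_on_sigma = np.array([np.interp(sigma_levels, sigma[i], nitrate[i]) for i in range(n_casts)])
var_sigma = nitrate_on_sigma.var(axis=0)

print("mean variance on depth surfaces  :", var_depth.mean())
print("mean variance on density surfaces:", var_sigma.mean())
```

Because the heave displaces the nutrient and density fields together, almost all of the apparent nutrient variance at fixed depth vanishes when the same casts are re-gridded onto density surfaces, which is the signature the abstract describes.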
Abstract:
A modeling study was carried out into pea-barley intercropping in northern Europe. The two objectives were (a) to compare pea-barley intercropping to sole cropping in terms of grain and nitrogen yield amounts and stability, and (b) to explore options for managing pea-barley intercropping systems in order to maximize the biomass produced and the grain and nitrogen yields according to the available resources, such as light, water and nitrogen. The study consisted of simulations taking into account soil and weather variability among three sites located in northern European countries (Denmark, United Kingdom and France), and using 10 years of weather records. A preliminary stage evaluated the STICS intercrop model's ability to predict grain and nitrogen yields of the two species, using a 2-year dataset from trials conducted at the three sites. The work was carried out in two phases: (a) the model was run to investigate the potential of intercrops as compared to sole crops, and (b) the model was run to explore options for managing pea-barley intercropping, asking the following three questions: (i) in order to increase light capture, would it be worth delaying the sowing date of one species? (ii) how should the sowing density and seed proportion of each species in the intercrop be managed to improve total grain yield and N use efficiency? (iii) how can the use of nitrogen resources be optimized by choosing the most suitable preceding crop and/or the most appropriate soil? It was found that (1) intercropping made better use of environmental resources as regards yield amount and stability than sole cropping, with a noticeable site effect, (2) pea growth in intercrops was strongly linked to soil moisture, and barley yield was determined by nitrogen uptake and light interception due to its height relative to pea, (3) sowing barley before pea led to a relative grain yield reduction averaged over all three sites, but sowing strategy must be adapted to the location, being dependent on temperature and thus latitude, (4) density and species proportions had a small effect on total grain yield, underlining the interspecific offset in the use of environmental growth resources which led to similar total grain yields whatever the pea-barley design, and (5) long-term strategies including mineralization management through organic residue supply and rotation management were very valuable, always favoring intercrop total grain yield and N accumulation.
Abstract:
The performance benefit when using Grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effect of the synchronization overhead, mainly due to the high variability of completion times of the different tasks, which, in turn, is due to the large heterogeneity of Grid nodes. For this reason, it is important to have models which capture the performance of such systems. In this paper we describe a queueing-network-based performance model able to accurately analyze Grid architectures, and we use the model to study a real parallel application executed in a Grid. The proposed model improves the classical modelling techniques and highlights the impact of resource heterogeneity and network latency on the application performance.
Abstract:
The performance benefit when using grid systems comes from different strategies, among which partitioning the applications into parallel tasks is the most important. However, in most cases the enhancement coming from partitioning is smoothed by the effects of synchronization overheads, mainly due to the high variability in the execution times of the different tasks, which, in turn, is accentuated by the large heterogeneity of grid nodes. In this paper we design hierarchical queueing network performance models able to accurately analyze grid architectures and applications. Based on the model results, we introduce a new allocation policy that combines task partitioning with task replication. The models are used to study two real applications and to evaluate the performance benefits obtained with allocation policies based on task replication.
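As a rough illustration of the synchronization effect described in both abstracts, and of the replication idea in the second, the Monte Carlo sketch below treats a job as a fork-join of parallel tasks on nodes with heterogeneous speeds. The lognormal speed distribution, the two-way replication and all parameters are assumptions for illustration, not the papers' queueing-network models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_runs = 32, 10_000

def job_completion(task_times: np.ndarray) -> np.ndarray:
    # A fork-join job completes only when its slowest parallel task completes.
    return task_times.max(axis=1)

# Heterogeneous grid: per-task service time = unit work / node speed, speeds vary widely.
speeds_a = rng.lognormal(mean=0.0, sigma=0.6, size=(n_runs, n_tasks))
plain = job_completion(1.0 / speeds_a)

# Replication policy: run each task on two independent nodes, keep the earlier finisher.
speeds_b = rng.lognormal(mean=0.0, sigma=0.6, size=(n_runs, n_tasks))
replicated = job_completion(np.minimum(1.0 / speeds_a, 1.0 / speeds_b))

print(f"mean completion time, no replication : {plain.mean():.2f}")
print(f"mean completion time, 2x replication : {replicated.mean():.2f}")
print(f"ideal completion on homogeneous nodes: {1.0:.2f}")
```

The gap between the first and last lines of output is the synchronization overhead driven by node heterogeneity; replication trims the tail of the slowest task and narrows that gap.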
Abstract:
The soil-air-plant pathway is potentially important in the vegetative accumulation of organic pollutants from contaminated soils. While a number of qualitative frameworks exist for the prediction of plant accumulation of organic chemicals by this pathway, there are few quantitative models that incorporate this pathway. The aim of the present study was to produce a model that included this pathway and could quantify its contribution to the total plant contamination for a range of organic pollutants. A new model was developed from three submodels for the processes controlling plant contamination via this pathway: aerial deposition, soil volatilization, and systemic translocation. Using the combined model, the soil-air-plant pathway was predicted to account for a significant proportion of the total shoot contamination for those compounds with log KOA > 9 and log KAW < −3. For those pollutants with log KOA < 9 and log KAW > −3 there was a higher deposition of pollutant via the soil-air-plant pathway than for those chemicals with log KOA > 9 and log KAW < −3, but this was an insignificant proportion of the total shoot contamination because of the higher mobility of these compounds via the soil-root-shoot pathway. The incorporation of the soil-air-plant pathway into the plant uptake model did not significantly improve the prediction of the contamination of vegetation from polluted soils when compared across a range of studies. This was a result of the high variability between the experimental studies, where the bioconcentration factors varied by 2 orders of magnitude at an equivalent log KOA. One potential reason for this is the background air concentration of the pollutants under study. It was found that background air concentrations would dominate those from soil volatilization in many situations unless there was a soil hot spot of contamination, i.e., >100 mg kg−1.
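The screening thresholds quoted in the abstract can be expressed as a simple rule; the sketch below encodes them directly, while the example compounds and their partition coefficients are hypothetical placeholders, not values from the study.

```python
def soil_air_plant_significant(log_koa: float, log_kaw: float) -> bool:
    """Screening rule taken from the abstract: the soil-air-plant pathway
    contributes a significant share of total shoot contamination for compounds
    with log KOA > 9 and log KAW < -3; more volatile, more water-soluble
    compounds are instead dominated by the soil-root-shoot route."""
    return log_koa > 9.0 and log_kaw < -3.0

# Hypothetical example compounds (placeholder partition coefficients)
for name, log_koa, log_kaw in [("compound A", 10.5, -4.0),
                               ("compound B", 7.8, -1.5)]:
    route = ("soil-air-plant dominated" if soil_air_plant_significant(log_koa, log_kaw)
             else "soil-root-shoot dominated")
    print(f"{name}: {route}")
```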
Abstract:
It took the solar polar passage of Ulysses in the early 1990s to establish the global structure of the solar wind speed during solar minimum. However, it remains unclear if the solar wind is composed of two distinct populations of solar wind from different sources (e.g., closed loops which open up to produce the slow solar wind) or if the fast and slow solar wind rely on the superradial expansion of the magnetic field to account for the observed solar wind speed variation. We investigate the solar wind in the inner corona using the Wang-Sheeley-Arge (WSA) coronal model incorporating a new empirical magnetic topology–velocity relationship calibrated for use at 0.1 AU. In this study the empirical solar wind speed relationship was determined by using Helios perihelion observations, along with results from Riley et al. (2003) and Schwadron et al. (2005) as constraints. The new relationship was tested by using it to drive the ENLIL 3-D MHD solar wind model and obtain solar wind parameters at Earth (1.0 AU) and Ulysses (1.4 AU). The improvements in speed, its variability, and the occurrence of high-speed enhancements provide confidence that the new velocity relationship better determines the solar wind speed in the outer corona (0.1 AU). An analysis of this improved velocity field within the WSA model suggests the existence of two distinct mechanisms of the solar wind generation, one for fast and one for slow solar wind, implying that a combination of present theories may be necessary to explain solar wind observations.
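For orientation, a toy WSA-style speed function is sketched below: empirical relationships of this family map the flux-tube expansion factor f_s and the footpoint's angular distance θ_b from the nearest coronal-hole boundary to a wind speed at the outer boundary of the coronal model. The functional form and every coefficient here are placeholders, not the recalibrated relationship derived in this study.

```python
import numpy as np

def wind_speed(f_s: np.ndarray, theta_b: np.ndarray,
               v_slow: float = 250.0, v_fast: float = 650.0,
               alpha: float = 2.0 / 9.0, w: float = 3.0) -> np.ndarray:
    """Illustrative WSA-style relation (placeholder coefficients): speed falls
    with the flux-tube expansion factor f_s and rises with the angular distance
    theta_b (deg) of the field-line footpoint from the coronal-hole boundary."""
    return v_slow + (v_fast - v_slow) / (1.0 + f_s) ** alpha \
                  * (1.0 - np.exp(-(theta_b / w) ** 2))

# A field line rooted deep inside a coronal hole with small expansion -> fast wind;
# a line near the boundary with large expansion -> slow wind.
print(wind_speed(np.array([2.0, 50.0]), np.array([15.0, 1.0])))   # ~[563, 268] km/s
```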
Abstract:
This paper examines the interaction of spatial and dynamic aspects of resource extraction from forests by local people. Highly cyclical and varied across space and time, the patterns of resource extraction resulting from the spatial–temporal model bear little resemblance to the patterns drawn from focusing either on spatial or temporal aspects of extraction alone. Ignoring this variability inaccurately depicts villagers’ dependence on different parts of the forest and could result in inappropriate policies. Similarly, the spatial links in extraction decisions imply that policies imposed in one area can have unintended consequences in other areas. Combining the spatial–temporal model with a measure of success in community forest management—the ability to avoid open-access resource degradation—characterizes the impact of incomplete property rights on patterns of resource extraction and stocks.
Assessing and understanding the impact of stratospheric dynamics and variability on the Earth system
Abstract:
Advances in weather and climate research have demonstrated the role of the stratosphere in the Earth system across a wide range of temporal and spatial scales. Stratospheric ozone loss has been identified as a key driver of Southern Hemisphere tropospheric circulation trends, affecting ocean currents and carbon uptake, sea ice, and possibly even the Antarctic ice sheets. Stratospheric variability has also been shown to affect short-term and seasonal forecasts, connecting the tropics and midlatitudes and guiding storm track dynamics. The two-way interactions between the stratosphere and the Earth system have motivated the World Climate Research Programme's (WCRP) Stratospheric Processes and Their Role in Climate (SPARC) DynVar activity to investigate the impact of stratospheric dynamics and variability on climate. This assessment will be made possible by two new multi-model datasets. First, roughly 10 models with a well-resolved stratosphere are participating in the Coupled Model Intercomparison Project 5 (CMIP5), providing the first multi-model ensemble of climate simulations coupled from the stratopause to the sea floor. Second, the Stratosphere Historical Forecasting Project (SHFP) of WCRP's Climate Variability and Predictability (CLIVAR) program is forming a multi-model set of seasonal hindcasts with stratosphere-resolving models, revealing the impact of both stratospheric initial conditions and dynamics on intraseasonal prediction. The CMIP5 and SHFP datasets will offer an unprecedented opportunity to understand the role of the stratosphere in the natural and forced variability of the Earth system and to determine whether incorporating knowledge of the middle atmosphere improves seasonal forecasts and climate projections. Capsule: New modeling efforts will provide unprecedented opportunities to harness our knowledge of the stratosphere to improve weather and climate prediction.
Abstract:
Glacier fluctuations exclusively due to internal variations in the climate system are simulated using downscaled integrations of the ECHAM4/OPYC coupled general circulation model (GCM). A process-based modeling approach, using a mass balance model of intermediate complexity and a dynamic ice flow model considering simple shearing flow and sliding, is applied. Multimillennia records of glacier length fluctuations for Nigardsbreen (Norway) and Rhonegletscher (Switzerland) are simulated using autoregressive processes determined by statistically downscaled GCM experiments. Return periods and probabilities of specific glacier length changes using GCM integrations excluding external forcings such as solar irradiation changes, volcanic, or anthropogenic effects are analyzed and compared to historical glacier length records. Preindustrial fluctuations of the glaciers, as far as observed or reconstructed, including their advance during the “Little Ice Age,” can be explained by internal variability in the climate system as represented by a GCM. However, fluctuations comparable to the present-day glacier retreat exceed any variation simulated by the GCM control experiments and must be caused by external forcing, with anthropogenic forcing being a likely candidate.
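A minimal sketch of the type of experiment described, assuming an AR(1) process as a stand-in for downscaled, internally generated mass-balance variability and a first-order linear response for glacier length; the persistence, noise level, response time and sensitivity are illustrative values, not the parameters fitted for Nigardsbreen or Rhonegletscher.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 10_000

# AR(1) stand-in for statistically downscaled, unforced mass-balance anomalies
phi, sigma_eps = 0.6, 0.5            # illustrative persistence and noise (m w.e. per yr)
b = np.zeros(n_years)
for t in range(1, n_years):
    b[t] = phi * b[t - 1] + rng.normal(0.0, sigma_eps)

# First-order linear response of glacier length anomaly to mass-balance forcing
tau, beta = 40.0, 2.0                # illustrative response time (yr) and sensitivity (km per m w.e./yr)
L = np.zeros(n_years)                # length anomaly (km)
for t in range(1, n_years):
    L[t] = L[t - 1] + (beta * b[t] - L[t - 1]) / tau

# How often does unforced variability alone produce a length change of a given size?
threshold_km = 2.0
print(f"std of length anomaly (km): {L.std():.2f}")
print(f"fraction of years beyond ±{threshold_km:.1f} km: {(np.abs(L) > threshold_km).mean():.3f}")
```

Comparing such an exceedance statistic against the observed retreat is the logic behind the abstract's conclusion that the present-day retreat exceeds what internal variability alone can produce.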
Abstract:
In this study, we examine seasonal and geographical variability of the marine aerosol fine-mode fraction (fm) and its impacts on deriving the anthropogenic component of aerosol optical depth (τa) and direct radiative forcing from multispectral satellite measurements. A proxy of fm, empirically derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5 data, shows large seasonal and geographical variations that are consistent with the Goddard Chemistry Aerosol Radiation Transport (GOCART) and Global Modeling Initiative (GMI) model simulations. The so-derived seasonally and spatially varying fm is then implemented into a method of estimating τa and direct radiative forcing from the MODIS measurements. It is found that the use of a constant value for fm, as in previous studies, would have overestimated τa by about 20% over the global ocean, with the overestimation up to about 45% in some regions and seasons. The 7-year (2001–2007) global ocean average τa is 0.035, with yearly averages ranging from 0.031 to 0.039. Future improvement in measurements is needed to better separate anthropogenic aerosols from natural ones and to narrow down the wide range of aerosol direct radiative forcing.
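To show why the choice of fm matters arithmetically, the sketch below uses a simplified fine-mode decomposition in the spirit of such satellite-based estimates: anthropogenic aerosol is assumed to be mostly fine mode and the marine background partly fine mode. All numbers, including the fixed anthropogenic fine fraction, are placeholders, not the values used with MODIS, GOCART or GMI.

```python
def anthropogenic_aod(tau_total: float, f_obs: float, f_marine: float,
                      f_anth: float = 0.9) -> float:
    """Attribute the observed fine-mode fraction f_obs to a marine background
    (fine fraction f_marine) plus an anthropogenic component (fine fraction
    f_anth), and return the anthropogenic share of the total optical depth."""
    return tau_total * (f_obs - f_marine) / (f_anth - f_marine)

tau, f_obs = 0.15, 0.55                                   # placeholder retrieval
tau_a_constant_fm = anthropogenic_aod(tau, f_obs, f_marine=0.30)   # one global constant fm
tau_a_regional_fm = anthropogenic_aod(tau, f_obs, f_marine=0.40)   # seasonally/regionally higher fm

print(f"constant fm : tau_a = {tau_a_constant_fm:.3f}")
print(f"regional fm : tau_a = {tau_a_regional_fm:.3f}  (lower: the constant-fm estimate is biased high)")
```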
Abstract:
The Complex Adaptive Systems, Cognitive Agents and Distributed Energy (CASCADE) project is developing a framework based on Agent Based Modelling (ABM). The CASCADE Framework can be used both to gain policy and industry relevant insights into the smart grid concept itself and as a platform to design and test distributed ICT solutions for smart grid based business entities. ABM is used to capture the behaviors of different social, economic and technical actors, which may be defined at various levels of abstraction. It is applied to understanding their interactions and can be adapted to include learning processes and emergent patterns. CASCADE models ‘prosumer’ agents (i.e., producers and/or consumers of energy) and ‘aggregator’ agents (e.g., traders of energy in both wholesale and retail markets) at various scales, from large generators and Energy Service Companies down to individual people and devices. The CASCADE Framework is formed of three main subdivisions that link models of electricity supply and demand, the electricity market and power flow. It can also model the variability of renewable energy generation caused by the weather, which is an important issue for grid balancing and the profitability of energy suppliers. The development of CASCADE has already yielded some interesting early findings, demonstrating that it is possible for a mediating agent (aggregator) to achieve stable demand flattening across groups of domestic households fitted with smart energy control and communication devices, where direct wholesale price signals had previously been found to produce characteristic complex system instability. In another example, it has demonstrated how large changes in supply mix can be caused even by small changes in demand profile. Ongoing and planned refinements to the Framework will support investigation of demand response at various scales, the integration of the power sector with transport and heat sectors, novel technology adoption and diffusion work, evolution of new smart grid business models, and complex power grid engineering and market interactions.
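The prosumer/aggregator structure described above can be caricatured in a few lines of agent-based code; the toy below is not CASCADE, and its naive "chase the cheapest hour" response rule is included only to illustrate the kind of herding instability the abstract associates with direct price signals.

```python
import random

class Prosumer:
    """Household agent: fixed baseline demand plus one block of shiftable load."""
    def __init__(self, rng: random.Random):
        self.baseline = [1.0 + 0.5 * rng.random() for _ in range(24)]   # kW per hour
        self.shiftable_kw = 1.5 * rng.random()

    def respond(self, signal):
        # Naive rule: put the shiftable block into the hour with the lowest signal.
        demand = list(self.baseline)
        demand[min(range(24), key=lambda h: signal[h])] += self.shiftable_kw
        return demand

class Aggregator:
    """Mediating agent: turns yesterday's aggregate demand into today's broadcast signal."""
    def signal_from(self, aggregate):
        return list(aggregate)    # hours that were busy yesterday look less attractive today

rng = random.Random(0)
prosumers = [Prosumer(rng) for _ in range(200)]
aggregator = Aggregator()
aggregate = [sum(p.baseline[h] for p in prosumers) for h in range(24)]

for day in range(5):
    signal = aggregator.signal_from(aggregate)
    demands = [p.respond(signal) for p in prosumers]
    aggregate = [sum(d[h] for d in demands) for h in range(24)]
    # With everyone chasing the same cheapest hour, the peak simply moves around from
    # day to day rather than flattening out.
    print(f"day {day}: peak-to-trough spread = {max(aggregate) - min(aggregate):.1f} kW")
```

A smarter aggregator signal (or randomized/partial responses) is what such frameworks use to damp this oscillation and achieve the stable demand flattening reported in the abstract.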
Abstract:
Seventeen simulations of the Last Glacial Maximum (LGM) climate have been performed using atmospheric general circulation models (AGCMs) in the framework of the Paleoclimate Modeling Intercomparison Project (PMIP). These simulations use boundary conditions for CO2, insolation and ice sheets; sea surface temperatures (SSTs) are either (a) prescribed using the CLIMAP data set (eight models) or (b) computed by coupling the AGCM with a slab ocean (nine models). The present-day (PD) tropical climate is correctly depicted by all the models, except the coarser-resolution models, and the simulated geographical distribution of annual mean temperature is in good agreement with climatology. Tropical cooling at the LGM is less than at middle and high latitudes, but greatly exceeds the PD temperature variability. The LGM simulations with prescribed SSTs underestimate the observed temperature changes except over equatorial Africa, where the models produce a temperature decrease consistent with the data. Our results confirm previous analyses showing that CLIMAP (1981) SSTs produce only a weak terrestrial cooling. When SSTs are computed, the models depict a cooling over the Pacific and Indian oceans, in contrast with CLIMAP, and most models produce cooler temperatures over land. Moreover, four of the nine simulations produce a cooling in good agreement with terrestrial data. Two of these model results over the ocean are consistent with new SST reconstructions, whereas two models simulate a homogeneous cooling. Finally, the LGM aridity inferred for most of the tropics from the data is generally reproduced by the models, with a strong underestimation for models using computed SSTs.
Abstract:
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from that obtained using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
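A small sketch of two-dimensional Tikhonov regularization in the downscaling setting described: a fine-grid field is sought whose block averages match a coarse observation, with a first-difference smoothness penalty weighted by γ regularizing the ill-posed inversion. The grid sizes, the synthetic coarse field and the value of γ are assumptions for illustration, not the paper's 4 m NDVI setup.

```python
import numpy as np

n_fine, block = 16, 4                  # 16x16 fine grid aggregated into 4x4 blocks
n_coarse = n_fine // block

# A: block-averaging operator mapping the flattened fine field to coarse cells
A = np.zeros((n_coarse * n_coarse, n_fine * n_fine))
for I in range(n_coarse):
    for J in range(n_coarse):
        for i in range(I * block, (I + 1) * block):
            for j in range(J * block, (J + 1) * block):
                A[I * n_coarse + J, i * n_fine + j] = 1.0 / block**2

# D: stacked horizontal and vertical first differences on the fine grid (smoothness operator)
rows = []
for i in range(n_fine):
    for j in range(n_fine):
        k = i * n_fine + j
        if j + 1 < n_fine:
            r = np.zeros(n_fine * n_fine); r[k], r[k + 1] = -1.0, 1.0; rows.append(r)
        if i + 1 < n_fine:
            r = np.zeros(n_fine * n_fine); r[k], r[k + n_fine] = -1.0, 1.0; rows.append(r)
D = np.array(rows)

# Synthetic coarse "observation" (e.g. coarse-resolution NDVI)
rng = np.random.default_rng(3)
y = 0.4 + 0.2 * rng.random(n_coarse * n_coarse)

# Tikhonov solution: minimize ||A x - y||^2 + gamma ||D x||^2
gamma = 0.1                             # smoothness weight (the abstract's Lagrange multiplier)
x = np.linalg.solve(A.T @ A + gamma * D.T @ D, A.T @ y).reshape(n_fine, n_fine)

# Basic sanity check: the downscaled field should roughly preserve the coarse mean
print("coarse mean:", round(float(y.mean()), 3), " downscaled mean:", round(float(x.mean()), 3))
```

Larger γ values pull the solution toward a smoother field, which is the knob the abstract relates to the range of the experimental semivariogram.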