60 results for Monotonic interpolation
Abstract:
In this paper we have proposed and analyzed a simple mathematical model consisting of four variables, viz., nutrient concentration, toxin-producing phytoplankton (TPP), non-toxic phytoplankton (NTP), and toxin concentration. Limitation in the concentration of the extracellular nutrient has been incorporated as an environmental stress condition for the plankton population, and the liberation of toxic chemicals has been described by a monotonic function of extracellular nutrient. The model is analyzed and simulated to reproduce the experimental findings of Graneli and Johansson [Graneli, E., Johansson, N., 2003. Increase in the production of allelopathic Prymnesium parvum cells grown under N- or P-deficient conditions. Harmful Algae 2, 135–145]. The robustness of the numerical experiments is tested by a formal parameter sensitivity analysis. As the first theoretical model consistent with the experiment of Graneli and Johansson (2003), our results demonstrate that, when nutrient-deficient conditions are favorable for the TPP population to release toxic chemicals, the TPP species control the bloom of other phytoplankton species which are non-toxic. Consistent with the observations made by Graneli and Johansson (2003), our model overcomes the limitation, present in several other models of plankton dynamics, of not incorporating the effect of nutrient-limited toxin production.
Abstract:
We explore the large spatial variation in the relationship between population density and burned area, using continental-scale Geographically Weighted Regression (GWR) based on 13 years of satellite-derived burned area maps from the Global Fire Emissions Database (GFED) and the human population density from the Gridded Population of the World (GPW 2005). Significant relationships are observed over 51.5% of the global land area, and the area affected varies from continent to continent: population density has a significant impact on fire over most of Asia and Africa but is important in explaining fire over < 22% of Europe and Australia. Increasing population density is associated with both increases and decreases in fire. The nature of the relationship depends on land use: increasing population density is associated with increased burned area in rangelands but with decreased burned area in croplands. Overall, the relationship between population density and burned area is non-monotonic: burned area initially increases with population density and then decreases when population density exceeds a threshold. These thresholds vary regionally. Our study contributes to improved understanding of how human activities relate to burned area, and should contribute to a better estimate of atmospheric emissions from biomass burning.
Abstract:
Although there is a strong policy interest in the impacts of climate change corresponding to different degrees of climate change, there is so far little consistent empirical evidence of the relationship between climate forcing and impact. This is because the vast majority of impact assessments use emissions-based scenarios with associated socio-economic assumptions, and it is not feasible to infer impacts at other temperature changes by interpolation. This paper presents an assessment of the global-scale impacts of climate change in 2050 corresponding to defined increases in global mean temperature, using spatially-explicit impacts models representing impacts in the water resources, river flooding, coastal, agriculture, ecosystem and built environment sectors. Pattern-scaling is used to construct climate scenarios associated with specific changes in global mean surface temperature, and a relationship between temperature and sea level is used to construct sea level rise scenarios. Climate scenarios are constructed from 21 climate models to give an indication of the uncertainty between forcing and response. The analysis shows that there is considerable uncertainty in the impacts associated with a given increase in global mean temperature, due largely to uncertainty in the projected regional change in precipitation. This has important policy implications. There is evidence for some sectors of a non-linear relationship between global mean temperature change and impact, due to the changing relative importance of temperature and precipitation change. In the socio-economic sectors considered here, the relationships are reasonably consistent between socio-economic scenarios if impacts are expressed in proportional terms, but there can be large differences in absolute terms.
There are a number of caveats with the approach, including the use of pattern-scaling to construct scenarios, the use of one impacts model per sector, and the sensitivity of the shape of the relationships between forcing and response to the definition of the impact indicator.
Abstract:
Time series of global and regional mean Surface Air Temperature (SAT) anomalies are a common metric used to estimate recent climate change. Various techniques can be used to create these time series from meteorological station data. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic SAT anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to result in smaller errors than non-interpolating techniques relative to the reanalysis reference. Kriging techniques provided the smallest errors in estimates of Arctic anomalies, and Simple Kriging was often the best kriging method in this study, especially over sea ice. A linear interpolation technique had, on average, Root Mean Square Errors (RMSEs) up to 0.55 K larger than the two kriging techniques tested. Non-interpolating techniques provided the least representative anomaly estimates. Nonetheless, they serve as useful checks for confirming whether estimates from interpolating techniques are reasonable. The interaction of meteorological station coverage with estimation techniques between 1850 and 2011 was simulated using an ensemble dataset comprising repeated individual years (1979–2011). All techniques were found to have larger RMSEs for earlier station coverages. This supports calls for increased data sharing and data rescue, especially in sparsely observed regions such as the Arctic.
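Since Simple Kriging is singled out above as the best-performing method, a minimal one-dimensional sketch may help; the exponential covariance model and its parameters here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def simple_kriging(x_obs, z_obs, x_new, mean=0.0, sill=1.0, length=2.0):
    """Simple Kriging with a known mean and an (assumed) exponential
    covariance model. x_obs/z_obs are station positions and anomalies."""
    def cov(d):
        return sill * np.exp(-np.abs(d) / length)

    C = cov(x_obs[:, None] - x_obs[None, :])                 # obs-obs covariances
    c = cov(x_obs[:, None] - np.atleast_1d(x_new)[None, :])  # obs-target covariances
    w = np.linalg.solve(C, c)                                # kriging weights
    return mean + (z_obs - mean) @ w                         # one value per target
```

Because the covariance matrix is positive definite, the predictor reproduces each station value exactly at the station location, the defining property of the interpolating techniques the study compares.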
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access times depend on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work, loop-based array updates and nearest-neighbour halo-exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
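The final step above, interpolating between measured benchmark results, can be sketched as below; the problem sizes and timings are hypothetical, not measurements from the Cray XE6 study:

```python
import numpy as np

# Hypothetical benchmark timings (seconds) for the loop-based array updates,
# measured at a few problem sizes under one execution scenario.
sizes  = np.array([64, 128, 256, 512, 1024])
t_loop = np.array([0.8e-3, 3.1e-3, 12.5e-3, 50.2e-3, 201.0e-3])

def predict_loop_time(n):
    """Estimate the loop time at an unmeasured problem size n by
    piecewise-linear interpolation between benchmarked sizes."""
    return float(np.interp(n, sizes, t_loop))
```

A deployment scenario's predicted cost would then combine such per-component estimates (loop updates plus halo exchanges) for the local problem size implied by the domain decomposition.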
Abstract:
A procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) is proposed for operational rainfall estimation using rain gauges and radar data. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on Barnes' objective analysis scheme (OAS), whose purpose is to estimate a filtered spatial field of the variable of interest through a successive correction of residuals resulting from a Gaussian kernel smoother applied on spatial samples. The CMA-OAS, first, poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The procedure is suited to relatively sparse rain gauge networks. To demonstrate the procedure, six storms are analyzed at hourly time steps over 10,663 km². Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, a spatially variable adjustment with multiplicative factors, and ordinary cokriging.
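The successive correction of residuals with a Gaussian kernel smoother, the core of Barnes' OAS on which CMA-OAS is built, can be sketched in one dimension as follows; the kernel parameters and number of passes are illustrative assumptions:

```python
import numpy as np

def barnes_analysis(x_obs, z_obs, x_grid, kappa=1.0, gamma=0.3, passes=2):
    """Barnes-style successive correction: each pass smooths the current
    residuals at the gauges onto the grid with a Gaussian kernel, then
    tightens the kernel for the next pass."""
    def smooth(targets, resid, k):
        # Gaussian weights between target points and observation sites
        w = np.exp(-((targets[:, None] - x_obs[None, :]) ** 2) / k)
        return (w * resid).sum(axis=1) / w.sum(axis=1)

    analysis_grid = np.zeros_like(x_grid, dtype=float)
    analysis_obs = np.zeros_like(x_obs, dtype=float)
    k = kappa
    for _ in range(passes):
        resid = z_obs - analysis_obs          # residuals at the gauges
        analysis_grid += smooth(x_grid, resid, k)
        analysis_obs += smooth(x_obs, resid, k)
        k *= gamma                            # sharpen the kernel each pass
    return analysis_grid
```

Later passes with a narrower kernel restore detail that the first, broad-kernel pass smoothed away, which is what makes the scheme a filtered rather than exact interpolator.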
Abstract:
The East China Sea is an area where typhoon waves frequently occur. A wave spectra assimilation model has been developed to predict typhoon waves more accurately and operationally. This is the first time that wave data from Taiwan have been used to predict typhoon waves along the mainland China coast. The two-dimensional spectra observed off Taiwan's northeast coast modify the wave field output by the SWAN model through an optimal interpolation (OI) scheme. The wind field correction is not involved, as it contributes less than a quarter of the correction achieved by assimilation of waves. The initialization issue for assimilation is discussed. A linear evolution law for noise in the wave field is derived from the SWAN governing equations. A two-dimensional digital low-pass filter is used to obtain the initialized wave fields. The data assimilation model is optimized during typhoon Sinlaku. During typhoons Krosa and Morakot, data assimilation significantly improves the low-frequency wave energy and wave propagation direction along the Taiwan coast. For the far-field region, the assimilation model also shows the expected ability to improve typhoon wave forecasts, as data assimilation enhances the low-frequency wave energy. The proportion of positive assimilation indexes is over 81% for all comparison periods. The paper also finds that the impact of data assimilation on the far-field region depends on the typhoon's stage of development and the swell propagation direction.
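The OI correction described above follows the standard analysis-update form; a minimal sketch, with hypothetical error-covariance matrices rather than the model's actual statistics:

```python
import numpy as np

def oi_update(xb, y, B, R, H):
    """One optimal-interpolation step: correct the background state xb
    toward observations y, weighted by the background (B) and observation
    (R) error covariances. H maps model space to observation space."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)                   # analysis state
```

Here B and R encode how far to trust the SWAN background versus the Taiwan observations; in the scalar case with equal error variances, the analysis lands halfway between the two.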
Abstract:
Palaeoclimates across Europe for 6000 y BP were estimated from pollen data using the modern pollen analogue technique constrained with lake-level data. The constraint consists of restricting the set of modern pollen samples considered as analogues of the fossil samples to those locations where the implied change in annual precipitation minus evapotranspiration (P–E) is consistent with the regional change in moisture balance as indicated by lakes. An artificial neural network was used for the spatial interpolation of lake-level changes to the pollen sites, and for mapping palaeoclimate anomalies. The climate variables reconstructed were mean temperature of the coldest month (Tc), growing degree days above 5 °C (GDD), moisture availability expressed as the ratio of actual to equilibrium evapotranspiration (α), and P–E. The constraint improved the spatial coherency of the reconstructed palaeoclimate anomalies, especially for P–E. The reconstructions indicate clear spatial and seasonal patterns of Holocene climate change, which can provide a quantitative benchmark for the evaluation of palaeoclimate model simulations. Winter temperatures (Tc) were 1–3 K greater than present in the far N and NE of Europe, but 2–4 K less than present in the Mediterranean region. Summer warmth (GDD) was greater than present in NW Europe (by 400–800 K day at the highest elevations) and in the Alps, but >400 K day less than present at lower elevations in S Europe. P–E was 50–250 mm less than present in NW Europe and the Alps, but α was 10–15% greater than present in S Europe and P–E was 50–200 mm greater than present in S and E Europe.
Abstract:
A one-dimensional surface energy-balance lake model, coupled to a thermodynamic model of lake ice, is used to simulate variations in the temperature of and evaporation from three Estonian lakes: Karujärv, Viljandi and Kirjaku. The model is driven by daily climate data, derived by cubic-spline interpolation from monthly mean data, and was run for periods of 8 years (Kirjaku) up to 30 years (Viljandi). Simulated surface water temperature is in good agreement with observations: mean differences between simulated and observed temperatures are from −0.8°C to +0.1°C. The simulated duration of snow and ice cover is comparable with observations. However, the model generally underpredicts ice thickness and overpredicts snow depth. Sensitivity analyses suggest that the model results are robust across a wide range (0.1–2.0 m⁻¹) of lake extinction coefficient: surface temperature differs by less than 0.5°C between extreme values of the extinction coefficient. The model results are more sensitive to snow and ice albedos. However, changing the snow (0.2–0.9) and ice (0.15–0.55) albedos within realistic ranges does not improve the simulations of snow depth and ice thickness. The underestimation of ice thickness is correlated with the overestimation of snow cover, since a thick snow layer insulates the ice and limits ice formation. The overestimation of snow cover results from the assumption that all the simulated winter precipitation occurs as snow, a direct consequence of using daily climate data derived by interpolation from mean monthly data.
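The driving-data step, deriving daily values from monthly means by cubic-spline interpolation, can be sketched as follows; the mid-month day numbers and temperatures below are hypothetical, not data for the Estonian lakes:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Mid-month day-of-year knots and hypothetical monthly mean temperatures (°C)
month_mid = np.array([15, 46, 74, 105, 135, 166, 196, 227, 258, 288, 319, 349])
monthly_t = np.array([-6.0, -5.5, -2.0, 4.0, 10.5, 15.0,
                      17.0, 16.0, 11.0, 5.5, 0.0, -4.0])

# Fit a cubic spline through the monthly means and evaluate it daily
spline = CubicSpline(month_mid, monthly_t)
daily_t = spline(np.arange(1, 366))   # daily driving data for the lake model
```

One known caveat of this approach: the spline passes through the monthly means exactly, but the resulting daily series does not in general preserve those values as monthly averages; mean-preserving interpolation variants exist where that matters.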
Abstract:
The magnetoviscous effect, the change in viscosity with magnetic field strength, and the anisotropy of the magnetoviscous effect, the change in viscosity with the orientation of the magnetic field, have been a focus of interest for four decades. A satisfactory understanding of the microscopic origin of the anisotropy of the magnetoviscous effect in magnetic fluids is still a matter of debate and a field of intense research. Here, we present an extensive simulation study to understand the relation between the anisotropy of the magnetoviscous effect and the underlying change in micro-structures of ferrofluids. Our results indicate that field-induced chain-like structures respond very differently depending on their orientation relative to the direction of an externally applied shear flow, which leads to a pronounced anisotropy of viscosity. In this work, we focus on three exemplary values of dipolar interaction strength which correspond to weak, intermediate and strong interactions between dipolar colloidal particles. We compare our simulation results with an experimental study on cobalt-based ferrofluids as well as with an existing theoretical model called the chain model. A non-monotonic behaviour in the anisotropy of the magnetoviscous effect is observed with increasing dipolar interaction strength and is explained in terms of micro-structure formation.
Abstract:
Increasing optical depth poleward of 45° is a robust response to warming in global climate models. Much of this cloud optical depth increase has been hypothesized to be due to transitions from ice-dominated to liquid-dominated mixed-phase cloud. In this study, the importance of liquid-ice partitioning for the optical depth feedback is quantified for 19 Coupled Model Intercomparison Project Phase 5 models. All models show a monotonic partitioning of ice and liquid as a function of temperature, but the temperature at which ice and liquid are equally mixed (the glaciation temperature) varies by as much as 40 K across models. Models that have a higher glaciation temperature are found to have a smaller climatological liquid water path (LWP) and condensed water path and experience a larger increase in LWP as the climate warms. The ice-liquid partitioning curve of each model may be used to calculate the response of LWP to warming. It is found that the repartitioning between ice and liquid in a warming climate contributes at least 20% to 80% of the increase in LWP as the climate warms, depending on model. Intermodel differences in the climatological partitioning between ice and liquid are estimated to contribute at least 20% to the intermodel spread in the high-latitude LWP response in the mixed-phase region poleward of 45°S. It is hypothesized that a more thorough evaluation and constraint of global climate model mixed-phase cloud parameterizations and validation of the total condensate and ice-liquid apportionment against observations will yield a substantial reduction in model uncertainty in the high-latitude cloud response to warming.
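The monotonic ice-liquid partitioning described above can be represented, for illustration, as a logistic curve in temperature; the glaciation temperature and transition width here are hypothetical values, not diagnosed from any CMIP5 model:

```python
import numpy as np

def liquid_fraction(T, T_glac=250.0, width=10.0):
    """Hypothetical monotonic ice-liquid partitioning: the fraction of
    condensate that is liquid rises smoothly with temperature T (K).
    T_glac is the glaciation temperature (ice and liquid equally mixed)."""
    return 1.0 / (1.0 + np.exp(-(T - T_glac) / width))
```

Shifting T_glac between models moves the mixed-phase transition in temperature space, which is the intermodel spread (as much as 40 K) that the study links to differences in climatological LWP and its response to warming.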
Abstract:
The question is addressed whether using unbalanced updates in ocean-data assimilation schemes for seasonal forecasting systems can result in a relatively poor simulation of zonal currents. An assimilation scheme, where temperature observations are used for updating only the density field, is compared to a scheme where updates of the density field and zonal velocities are related by geostrophic balance. This is done for an equatorial linear shallow-water model. It is found that equatorial zonal velocities can deteriorate if the velocity is not updated in the assimilation procedure. Adding balanced updates to the zonal velocity is shown to be a simple remedy for the shallow-water model. Next, optimal interpolation (OI) schemes with balanced updates of the zonal velocity are implemented in two ocean general circulation models. First tests indicate a beneficial impact on equatorial upper-ocean zonal currents.
Abstract:
Eddy covariance has been used in urban areas to evaluate the net exchange of CO2 between the surface and the atmosphere. Typically, only the vertical flux is measured at a height 2–3 times that of the local roughness elements; however, under conditions of relatively low instability, CO2 may accumulate in the airspace below the measurement height. This can result in inaccurate emissions estimates if the accumulated CO2 drains away or is flushed upwards during thermal expansion of the boundary layer. Some studies apply a single-height storage correction; however, this requires the assumption that the response of the CO2 concentration profile to forcing is constant with height. Here a full seasonal cycle (7th June 2012 to 3rd June 2013) of single-height CO2 storage data calculated from concentrations measured at 10 Hz by an open-path gas analyser is compared to a data set calculated from a concurrent switched vertical profile measured (at 2 Hz, with a closed-path gas analyser) at 10 heights within and above a street canyon in central London. The assumption required for the former storage determination is shown to be invalid. For approximately regular street canyons at least one other measurement is required. Continuous measurements at fewer locations are shown to be preferable to a spatially dense, switched profile, as temporal interpolation is ineffective. The majority of the spectral energy of the CO2 storage time series was found to be between 0.001 and 0.2 Hz (1000 and 5 s respectively); however, sampling frequencies of 2 Hz and below still result in significantly lower CO2 storage values. An empirical method of correcting CO2 storage values from under-sampled time series is proposed.
Abstract:
1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. 
Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
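The linear-interpolation approach to abundance indices, the baseline the regional GAM is compared against, can be sketched as follows; the index definition here (a simple sum of weekly counts, missing weeks filled linearly) is a simplification of actual BMS practice:

```python
import numpy as np

def abundance_index(weeks, counts):
    """Annual abundance index as the sum of weekly counts, with missing
    weeks (NaN) filled by linear interpolation between observed counts."""
    counts = np.asarray(counts, dtype=float)
    observed = ~np.isnan(counts)
    filled = np.interp(weeks, np.asarray(weeks)[observed], counts[observed])
    return float(filled.sum())
```

When many counts are missing, the straight-line fill cannot recover the peaked seasonal flight curve, which is the estimation error and bias the comparison above attributes to this method beyond 50% missing counts.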
Abstract:
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods across the different models, and of each model's behavior under the two methods. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.