77 results for multi-attribute analysis


Relevance: 30.00%

Publisher:

Abstract:

Analysis of single-forcing runs from CMIP5 (the fifth Coupled Model Intercomparison Project) simulations shows that the mid-twentieth century temperature hiatus, and the coincident decrease in precipitation, are likely to have been influenced strongly by anthropogenic aerosol forcing. Models that include a representation of the indirect effect of aerosol better reproduce inter-decadal variability in historical global-mean near-surface temperatures, particularly the cooling in the 1950s and 1960s, compared to models with representation of the aerosol direct effect only. Models with the indirect effect also show a more pronounced decrease in precipitation during this period, which is in better agreement with observations, and greater inter-decadal variability in the inter-hemispheric temperature difference. This study demonstrates the importance of representing aerosols, and their indirect effects, in general circulation models, and suggests that inter-model diversity in aerosol burden and representation of aerosol–cloud interaction can produce substantial variation in simulations of climate variability on multi-decadal timescales.

Relevance: 30.00%

Publisher:

Abstract:

Nowadays a changing environment is the main challenge for most organizations, which must evaluate appropriate policies to adapt to it. In this paper, we propose a multi-agent simulation method for evaluating policies based on complex adaptive system theory. Furthermore, we propose a semiotic EDA (Epistemic, Deontic, Axiological) agent model that simulates agents' behavior in the system by incorporating the social norms reflecting the policy. A case study is also provided to validate our approach. Our approach shows better adaptability and validity than qualitative analysis and experimental approaches, and the semiotic agent model provides high credibility in simulating agents' behavior.
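
The abstract does not include an implementation, so as a rough illustration only, the interplay of the three EDA layers (epistemic beliefs, deontic norms, axiological values) in a policy simulation might be sketched as follows; all class, field, and action names here are hypothetical, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class EDAAgent:
    # Hypothetical sketch of a semiotic EDA agent; the paper publishes no code,
    # so the structure and rules below are illustrative only.
    beliefs: dict   # epistemic layer: what the agent currently holds true
    norms: list     # deontic layer: policy-derived predicates an action must satisfy
    values: dict    # axiological layer: preference scores used to rank actions

    def permitted(self, action):
        # An action is allowed only if every social norm admits it.
        return all(norm(action, self.beliefs) for norm in self.norms)

    def choose(self, actions):
        # Pick the highest-valued action among those the norms permit.
        allowed = [a for a in actions if self.permitted(a)]
        return max(allowed, key=lambda a: self.values.get(a, 0)) if allowed else None

# A policy norm forbidding one action, regardless of beliefs:
agent = EDAAgent(beliefs={"fine_expected": True},
                 norms=[lambda action, beliefs: action != "dump_waste"],
                 values={"dump_waste": 5, "recycle": 3, "do_nothing": 0})
```

Here `agent.choose(["dump_waste", "recycle", "do_nothing"])` returns `"recycle"`: the deontic norm filters out the higher-valued but forbidden action, which is the kind of policy effect such a simulation would aggregate over many agents.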

Relevance: 30.00%

Publisher:

Abstract:

The link between the Pacific/North American pattern (PNA) and the North Atlantic Oscillation (NAO) is investigated in reanalysis data (NCEP, ERA40) and multi-century CGCM runs for present-day climate using three versions of the ECHAM model. PNA and NAO patterns and indices are determined via rotated principal component analysis on monthly mean 500 hPa geopotential height fields using the varimax criterion. On average, the multi-century CGCM simulations show a significant anti-correlation between PNA and NAO. Further, multi-decadal periods with significantly enhanced (high anti-correlation, active phase) or weakened (low correlation, inactive phase) coupling are found in all CGCMs. In the simulated active phases, the storm track activity near Newfoundland has a stronger link with the PNA variability than during the inactive phases. On average, the reanalysis datasets show no significant anti-correlation between the PNA and NAO indices, but during the sub-period 1973–1994 a significant anti-correlation is detected, suggesting that the present climate could correspond to an inactive period as detected in the CGCMs. An analysis of possible physical mechanisms suggests that the link between the patterns is established by the baroclinic waves forming the North Atlantic storm track. The geopotential height anomalies associated with negative PNA phases induce an increased advection of warm and moist air from the Gulf of Mexico and cold air from Canada. Both types of advection contribute to increased baroclinicity over eastern North America and also to an increased low-level latent heat content of the warm air masses. Thus, growth conditions for eddies at the entrance of the North Atlantic storm track are enhanced. Considering the average temporal development during winter for the CGCMs, results show an enhanced Newfoundland storm track maximum in early winter for negative PNA, followed by a downstream enhancement of the Atlantic storm track in the subsequent months. 
In active (inactive) phases, this seasonal development is enhanced (suppressed). As the storm track over the central and eastern Atlantic is closely related to the NAO variability, this development can be explained by the shift of the NAO index to more positive values.
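
The rotated PCA described above can be reproduced in outline with standard tools; the varimax step is the classic Kaiser (1958) iteration. A minimal numpy sketch, with synthetic anomalies standing in for the monthly 500 hPa height fields (dimensions and mode count are illustrative, not the study's):

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # Varimax rotation of a (gridpoints x modes) loading matrix (Kaiser, 1958).
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
        R = u @ vt                      # R stays orthogonal at every step
        if s.sum() - var < tol:
            break
        var = s.sum()
    return loadings @ R, R

# Synthetic stand-in for monthly 500 hPa anomaly maps: (months x gridpoints).
rng = np.random.default_rng(0)
anomalies = rng.standard_normal((240, 60))
anomalies -= anomalies.mean(axis=0)          # remove the climatology
_, svals, vt = np.linalg.svd(anomalies, full_matrices=False)
eofs = vt[:4].T * svals[:4]                  # leading loading patterns (EOFs)
rotated, R = varimax(eofs)                   # rotated patterns, e.g. PNA- and NAO-like
indices = anomalies @ rotated                # rotated PC time series ("indices")
```

Because the rotation is orthogonal, the total variance carried by the retained modes is preserved; the PNA–NAO anti-correlation in the study is then simply the correlation between two columns of `indices` computed from the real fields.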

Relevance: 30.00%

Publisher:

Abstract:

Boreal winter wind storm situations over Central Europe are investigated by means of an objective cluster analysis. Surface data from the NCEP reanalysis and an ECHAM4/OPYC3 climate change GHG simulation (IS92a) are considered. To achieve an optimum separation of clusters of extreme storm conditions, 55 clusters of weather patterns are differentiated. To reduce the computational effort, a PCA is initially performed, leading to a data reduction of about 98 %. The clustering itself is computed on 3-day periods constructed from the first six PCs using the k-means clustering algorithm. The applied method enables an evaluation of the time evolution of the synoptic developments. The climate change signal is constructed by a projection of the GCM simulation onto the EOFs obtained from the NCEP reanalysis. Consequently, the same clusters are obtained and frequency distributions can be compared. For Central Europe, four primary storm clusters are identified. These clusters feature almost 72 % of the historical extreme storm events but account for only 5 % of the total relative frequency. Moreover, they show a statistically significant signature in the associated wind fields over Europe. An increased frequency of Central European storm clusters is detected under enhanced GHG conditions, associated with an enhancement of the pressure gradient over Central Europe. Consequently, more intense wind events over Central Europe are expected. The presented algorithm will be highly valuable for the analysis of large data volumes, as required for e.g. multi-model ensemble analysis, particularly because of the enormous data reduction.
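
The processing chain sketched above — PCA for data reduction, then k-means on 3-day sequences of the leading PCs — can be outlined with numpy alone. Grid size, sample count, and cluster number below are toy stand-ins for the NCEP fields and the 55 clusters:

```python
import numpy as np

rng = np.random.default_rng(1)
fields = rng.standard_normal((300, 40))        # 300 daily maps x 40 grid points (toy)
fields -= fields.mean(axis=0)

# PCA via SVD; keeping six PCs gives the large data reduction noted above.
_, svals, vt = np.linalg.svd(fields, full_matrices=False)
pcs = fields @ vt[:6].T                        # (300, 6)

# 3-day periods: concatenate the PCs of days t, t+1, t+2.
seq = np.hstack([pcs[:-2], pcs[1:-1], pcs[2:]])  # (298, 18)

def kmeans(X, k, n_iter=50, seed=0):
    # Plain Lloyd's algorithm; enough for a sketch.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(seq, k=8)             # the study uses 55 clusters
```

Projecting a GCM run onto the same EOFs (`gcm_fields @ vt[:6].T`, in this notation) and assigning each 3-day sequence to its nearest `centers` row is what makes the reanalysis and GHG cluster frequencies directly comparable.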

Relevance: 30.00%

Publisher:

Abstract:

Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship, with a stronger fit for the spatially aggregated NDVI products (R2 = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R2 = 0.5064). This relationship serves as a reference for evaluation of the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions. 
The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution, at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
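
The variogram diagnostics used above — the sill (the large-lag semivariance plateau) and its decay under aggregation — can be illustrated on a 1-D transect. A minimal sketch, with smoothed synthetic noise standing in for spatially correlated NDVI:

```python
import numpy as np

def empirical_variogram(z, max_lag):
    # gamma(h) = 0.5 * mean((z[i+h] - z[i])^2); the large-lag plateau is the sill.
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

rng = np.random.default_rng(2)
# Spatially correlated transect: moving-average-smoothed noise.
z_fine = np.convolve(rng.standard_normal(4000), np.ones(25) / 25, mode="valid")

# Regularization: aggregate to 4x coarser "pixels" by block averaging.
z_coarse = z_fine[: len(z_fine) // 4 * 4].reshape(-1, 4).mean(axis=1)

lags_f, gamma_f = empirical_variogram(z_fine, 100)
lags_c, gamma_c = empirical_variogram(z_coarse, 25)
```

Block averaging removes sub-pixel variability, so the coarse series has lower variance and its sill typically sits below the fine one; repeating this over a range of pixel sizes traces out the decay of spatial information content that the study fits with a logarithmic relationship.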

Relevance: 30.00%

Publisher:

Abstract:

Objective: To investigate the sociodemographic determinants of diet quality of the elderly in four EU countries. Design: Cross-sectional study. For each country, a regression was performed of a multidimensional index of dietary quality v. sociodemographic variables. Setting: In Finland, the Finnish Household Budget Survey (1998 and 2006); in Sweden, SNAC-K (2001–2004); in the UK, the Expenditure & Food Survey (2006–07); in Italy, the Multi-purpose Survey of Daily Life (2009). Subjects: One- and two-person households of over-50s (Finland, n 2994; UK, n 4749); over-50s living alone or in two-person households (Italy, n 7564); over-60s (Sweden, n 2023). Results: Diet quality among the EU elderly is both low on average and heterogeneous across individuals. The regression models explained a small but significant part of the observed heterogeneity in diet quality. Resource availability was associated with diet quality either negatively (Finland and UK) or in a non-linear or non-statistically significant manner (Italy and Sweden), as was the preference-for-food parameter. Education, not living alone and female gender were characteristics positively associated with diet quality with consistency across the four countries, unlike socio-professional status, age and seasonality. Regional differences within countries persisted even after controlling for the other sociodemographic variables. Conclusions: Poor dietary choices among the EU elderly were not caused by insufficient resources, and informational measures could be successful in promoting healthy eating for healthy ageing. On the other hand, food habits appeared largely set in the latter part of life, with age and retirement having little influence on the healthiness of dietary choices.

Relevance: 30.00%

Publisher:

Abstract:

Eddy covariance measurements of the turbulent sensible heat, latent heat and carbon dioxide fluxes for 12 months (2011–2012) are reported for the first time for a suburban area in the UK. The results from Swindon are comparable to suburban studies of similar surface cover elsewhere but reveal large seasonal variability. Energy partitioning favours turbulent sensible heat during summer (midday Bowen ratio 1.4–1.6) and latent heat in winter (0.05–0.7). A significant proportion of energy is stored (and released) by the urban fabric and the estimated anthropogenic heat flux is small but non-negligible (0.5–0.9 MJ m−2 day−1). The sensible heat flux is negative at night and for much of winter daytimes, reflecting the suburban nature of the site (44% vegetation) and relatively low built fraction (16%). Latent heat fluxes appear to be water limited during a dry spring in both 2011 and 2012, when the response of the surface to moisture availability can be seen on a daily timescale. Energy and other factors are more relevant controls at other times; at night the wind speed is important. On average, surface conductance follows a smooth, asymmetrical diurnal course peaking at around 6–9 mm s−1, but values are larger and highly variable in wet conditions. The combination of natural (vegetative) and anthropogenic (emission) processes is most evident in the temporal variation of the carbon flux: significant photosynthetic uptake is seen during summer, whilst traffic and building emissions explain peak release in winter (9.5 g C m−2 day−1). The area is a net source of CO2 annually. Analysis by wind direction highlights the role of urban vegetation in promoting evapotranspiration and offsetting CO2 emissions, especially when contrasted against peak traffic emissions from sectors with more roads. Given the extent of suburban land use, these results have important implications for understanding urban energy, water and carbon dynamics.
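
Two small calculations recur in the numbers above: the Bowen ratio (sensible over latent heat flux) and the conversion of a mean flux in W m−2 to a daily total in MJ m−2 day−1. For reference:

```python
def bowen_ratio(H, LE):
    # Bowen ratio: sensible heat flux H over latent heat flux LE (same units).
    return H / LE

def wm2_to_mj_per_day(flux_wm2):
    # A flux of 1 W m-2 sustained over 86400 s is 0.0864 MJ m-2 day-1.
    return flux_wm2 * 86400 / 1e6

# E.g. a mean anthropogenic heat flux of ~8 W m-2 corresponds to ~0.69 MJ m-2 day-1,
# inside the 0.5-0.9 MJ m-2 day-1 range reported above.
```

So the summer midday Bowen ratio of 1.4–1.6 means sensible heat exceeds latent heat by roughly half, while the winter values below 1 indicate the opposite partitioning.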

Relevance: 30.00%

Publisher:

Abstract:

An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.
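
The flux-evaluation metric quoted above is the root-mean-square error between observed and simulated surface fluxes; for reference, the metric is simply:

```python
import numpy as np

def rmse(observed, simulated):
    # Root-mean-square error between paired observed and simulated values
    # (e.g. half-hourly net radiation or turbulent heat fluxes, in W m-2).
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((simulated - observed) ** 2)))
```

Comparing this per flux, per site, and per season against other urban land-surface schemes is how the "comparable to similar state-of-the-art models" claim above is established.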

Relevance: 30.00%

Publisher:

Abstract:

This paper examines the evolution of knowledge management from the initial knowledge migration stage, through adaptation and creation, to the reverse knowledge migration stage in international joint ventures (IJVs). While many studies have analyzed these stages (mostly focusing on knowledge transfer), we investigated the path-dependent nature of knowledge flow in IJVs. The results from the empirical analysis based on a survey of 136 Korean parent companies of IJVs reveal that knowledge management in IJVs follows a sequential, multi-stage process, and that the knowledge transferred from parents to IJVs must first be adapted within its new environment before it reaches the creation stage. We also found that only created knowledge is transferred back to parents.

Relevance: 30.00%

Publisher:

Abstract:

Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model’s dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows simultaneous projections of an input tensor in more than one direction along each mode. In practice, however, multi-dimensional data are often collected under the same or very similar conditions, so that the data share some common latent components while each regression task can also retain its own independent parameters. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies both the common components of the parameters across all the regression tasks and the independent factors contributing to each particular task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modeling further reduce the total number of parameters, with lower memory cost than their tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
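
The abstract does not spell out the linked sparse model, but it builds on the Tucker decomposition, which factors a tensor into a small core and one factor matrix per mode. A minimal truncated-HOSVD sketch in numpy (illustrative of the decomposition only, not the paper's estimation algorithm):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring `mode` to the front, flatten the remaining modes.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # Multiply tensor T by matrix M along `mode`.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def tucker_hosvd(T, ranks):
    # Truncated higher-order SVD: an orthonormal factor per mode, then the core.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)
    return core, factors

def tucker_to_tensor(core, factors):
    # Reassemble the full tensor from core and factors.
    T = core
    for m, U in enumerate(factors):
        T = mode_dot(T, U, m)
    return T

# A tensor of exact multilinear rank (2, 2, 2) is recovered exactly:
rng = np.random.default_rng(3)
true_core = rng.standard_normal((2, 2, 2))
true_factors = [np.linalg.qr(rng.standard_normal((n, 2)))[0] for n in (5, 6, 7)]
X = tucker_to_tensor(true_core, true_factors)
core, factors = tucker_hosvd(X, (2, 2, 2))
```

In the linked setting described above, the factor matrices would be shared (in part) across regression tasks while sparsity on the remaining columns keeps each task's independent parameters few; that machinery is the paper's contribution and is not reproduced here.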

Relevance: 30.00%

Publisher:

Abstract:

Climate controls fire regimes through its influence on the amount and types of fuel present and their dryness. CO2 concentration constrains primary production by limiting photosynthetic activity in plants. However, although fuel accumulation depends on biomass production, and hence on CO2 concentration, the quantitative relationship between atmospheric CO2 concentration and biomass burning is not well understood. Here a fire-enabled dynamic global vegetation model (the Land surface Processes and eXchanges model, LPX) is used to attribute glacial–interglacial changes in biomass burning to an increase in CO2, which would be expected to increase primary production and therefore fuel loads even in the absence of climate change, vs. climate change effects. Four general circulation models provided last glacial maximum (LGM) climate anomalies – that is, differences from the pre-industrial (PI) control climate – from the Palaeoclimate Modelling Intercomparison Project Phase 2, allowing the construction of four scenarios for LGM climate. Modelled carbon fluxes from biomass burning were corrected for the model's observed prediction biases in contemporary regional average values for biomes. With LGM climate and low CO2 (185 ppm) effects included, the modelled global flux at the LGM was in the range of 1.0–1.4 Pg C year−1, about a third less than that modelled for PI time. LGM climate with pre-industrial CO2 (280 ppm) yielded unrealistic results, with global biomass burning fluxes similar to or even greater than in the pre-industrial climate. It is inferred that a substantial part of the increase in biomass burning after the LGM must be attributed to the effect of increasing CO2 concentration on primary production and fuel load. Today, by analogy, both rising CO2 and global warming must be considered as risk factors for increasing biomass burning. Both effects need to be included in models to project future fire risks.

Relevance: 30.00%

Publisher:

Abstract:

Cover crops are sown to provide a number of ecosystem services, including nutrient management, mitigation of diffuse pollution, improving soil structure and organic matter content, weed suppression, nitrogen fixation and provision of resources for biodiversity. Although the decision to sow a cover crop may be driven by a desire to achieve just one of these objectives, the diversity of cover crop species and mixtures available means that there is potential to combine a number of ecosystem services within the same crop and growing season. Designing multi-functional cover crops would potentially help to reconcile the often conflicting agronomic and environmental agendas and contribute to the optimal use of land. We present a framework for integrating multiple ecosystem services delivered by cover crops that aims to design a mixture of species with complementary growth habit and functionality. The optimal number and identity of species will depend on the services included in the analysis, the functional space represented by the available species pool and the community dynamics of the crop in terms of dominance and co-existence. Experience from a project that applied the framework to fertility-building leys in organic systems demonstrated its potential and emphasised the importance of the initial choice of species to include in the analysis.

Relevance: 30.00%

Publisher:

Abstract:

Understanding the origin of the properties of metal-supported metal thin films is important for the rational design of bimetallic catalysts and other applications, but it is generally difficult to separate effects related to strain from those arising from interface interactions. Here we use density functional theory (DFT) to examine the structure and electronic behavior of few-layer palladium films on the rhenium (0001) surface, where there is negligible interfacial strain and therefore other effects can be isolated. Our DFT calculations predict stacking sequences and interlayer separations in excellent agreement with quantitative low-energy electron diffraction experiments. By theoretically simulating the Pd core-level X-ray photoemission spectra (XPS) of the films, we are able to interpret and assign the basic features of both low-resolution and high-resolution XPS measurements. The core levels at the interface shift to more negative energies, rigidly following the shifts in the same direction of the valence d-band center. We demonstrate that the valence band shift at the interface is caused by charge transfer from Re to Pd, which occurs mainly to valence states of hybridized s-p character rather than to the Pd d-band. Since the d-band filling is roughly constant, there is a correlation between the d-band center shift and its bandwidth. The resulting effect of this charge transfer on the valence d-band is thus analogous to the application of a lateral compressive strain on the adlayers. Our analysis suggests that charge transfer should be considered when describing the origin of core and valence band shifts in other metal/metal adlayer systems.
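
The d-band center invoked above is simply the first moment of the d-projected density of states, eps_d = ∫ E ρ_d(E) dE / ∫ ρ_d(E) dE. A minimal sketch on a synthetic Gaussian DOS (illustrative only; the paper works with computed spectra, not this toy band shape):

```python
import numpy as np

def d_band_center(energies, dos):
    # First moment of the (d-projected) density of states:
    # eps_d = integral(E * rho(E) dE) / integral(rho(E) dE).
    return float(np.average(energies, weights=dos))

# Synthetic Gaussian d-band centered at -2.0 eV relative to the Fermi level.
E = np.linspace(-10.0, 5.0, 3001)       # energy grid, eV
dos = np.exp(-0.5 * ((E + 2.0) / 1.0) ** 2)
```

A charge-transfer-induced downshift of the band shows up directly as a more negative eps_d, which is the valence-band shift the abstract tracks alongside the interface core levels.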

Relevance: 30.00%

Publisher:

Abstract:

In this article we assess the abilities of a new electromagnetic (EM) system, the CMD Mini-Explorer, for prospecting of archaeological features in Ireland and the UK. The Mini-Explorer is an EM probe which is primarily aimed at the environmental/geological prospecting market for the detection of pipes and geology. It has long been evident from the use of other EM devices that such an instrument might be suitable for shallow soil studies and applicable for archaeological prospecting. Of particular interest for the archaeological surveyor is the fact that the Mini-Explorer simultaneously obtains both quadrature (‘conductivity’) and in-phase (relative to ‘magnetic susceptibility’) data from three depth levels. As the maximum depth range is probably about 1.5 m, a comprehensive analysis of the subsoil within that range is possible. As with all EM devices the measurements require no contact with the ground, thereby negating the problem of high contact resistance that often besets earth resistance data during dry spells. The use of the CMD Mini-Explorer at a number of sites has demonstrated that it has the potential to detect a range of archaeological features and produces high-quality data that are comparable in quality to those obtained from standard earth resistance and magnetometer techniques. In theory the ability to measure two phenomena at three depths suggests that this type of instrument could reduce the number of poor outcomes that are the result of single measurement surveys. The high success rate reported here in the identification of buried archaeology using a multi-depth device that responds to the two most commonly mapped geophysical phenomena has implications for evaluation style surveys. Copyright © 2013 John Wiley & Sons, Ltd.

Relevance: 30.00%

Publisher:

Abstract:

Background: Although a large number of randomized controlled trials (RCTs) have examined the impact of the n-3 (ω-3) fatty acids EPA (20:5n-3) and DHA (22:6n-3) on blood pressure and vascular function, the majority have used doses of EPA+DHA of > 3 g per d, which are unlikely to be achieved by diet manipulation. Objective: The objective was to examine, using a retrospective analysis from a multi-center RCT, the impact of recommended, dietary-achievable EPA+DHA intakes on systolic and diastolic blood pressure and microvascular function in UK adults. Design: Healthy men and women (n = 312) completed a double-blind, placebo-controlled RCT, consuming control oil or fish oil providing 0.7 g or 1.8 g EPA+DHA per d, in random order, each for 8 wk. Fasting blood pressure and microvascular function (using laser Doppler iontophoresis) were assessed and plasma was collected for the quantification of markers of vascular function. Participants were retrospectively genotyped for the eNOS rs1799983 variant. Results: No impact of n-3 fatty acid treatment or any treatment × eNOS genotype interactions were evident in the group as a whole for any of the clinical or biochemical outcomes. Assessment of response according to hypertension status at baseline indicated a significant (P = 0.046) fish oil-induced reduction (mean 5 mmHg) in systolic blood pressure specifically in those with isolated systolic hypertension (n = 31). No dose response was observed. Conclusions: These findings indicate that, in those with isolated systolic hypertension, daily doses of EPA+DHA as low as 0.7 g bring about clinically meaningful blood pressure reductions which, at a population level, would be associated with lower cardiovascular disease risk. Confirmation of these findings in an RCT in which participants are prospectively recruited on the basis of blood pressure status is required to draw definitive conclusions.
The Journal of Nutrition