829 results for Multi-classifier systems
Abstract:
The integration of ecological principles into agricultural systems presents major opportunities for spreading risk at the crop and farm scale. This paper presents mechanisms by which diversity at several scales within the farming system can increase the stability of production. Diversity of above- and below-ground biota, as well as genetic and phenotypic diversity within crops, has an essential role in safeguarding farm production. Novel mixtures of legume-grass leys have been shown to potentially provide significant benefits for pollinator and decomposer ecosystem services, but realising the greatest improvements requires carefully tailored farm management, such as the timing of mowing or grazing and the type and depth of cultivation. Complex farmland landscapes such as agroforestry systems have the potential to support pollinator abundance and diversity and to spread risk across production enterprises. At the crop level, early results indicate that the vulnerability of pollen development, flowering and early grain set to abiotic stress can be ameliorated by managing flowering time through genotypic selection, and through the buffering effects of pollinators. Finally, the risk of sub-optimal quality in cereals can be mitigated through the integration of near-isogenic lines selected to escape specific abiotic stress events. We conclude that genotypic, phenotypic and community diversity can all be increased at multiple scales to enhance resilience in agricultural systems.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and the core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo-exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
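The interpolation step at the heart of such a benchmark-driven model can be sketched as follows. The benchmark tables, figures and function names here are illustrative placeholders, not the paper's actual measurements; the sketch assumes per-iteration costs for the two work types (array updates and halo-exchanges) measured at a few problem sizes.

```python
import bisect

# Hypothetical benchmark measurements: problem size -> time per iteration (s)
# for one deployment scenario (e.g. fully populated nodes, default affinity).
compute_bench = {1000: 0.012, 4000: 0.051, 16000: 0.210}
halo_bench = {1000: 0.003, 4000: 0.006, 16000: 0.012}

def interpolate(bench, size):
    """Linearly interpolate a benchmark table for an unmeasured problem size."""
    sizes = sorted(bench)
    if size <= sizes[0]:
        return bench[sizes[0]]
    if size >= sizes[-1]:
        return bench[sizes[-1]]
    i = bisect.bisect_left(sizes, size)
    lo, hi = sizes[i - 1], sizes[i]
    frac = (size - lo) / (hi - lo)
    return bench[lo] + frac * (bench[hi] - bench[lo])

def predict_runtime(size, iterations):
    """Predicted total time: per-iteration compute plus halo-exchange costs."""
    return iterations * (interpolate(compute_bench, size)
                         + interpolate(halo_bench, size))
```

A model of this shape lets the two cost components be benchmarked separately per deployment scenario and recombined for sizes that were never measured directly.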
Abstract:
Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palms of the hand to the object. In this chapter we begin by considering the different types of grasp, and then consider the physics of rigid objects needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasp in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces, allowing the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support computation of multi-body physics.
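As an illustration of the rigid-body concept underlying multi-force rendering, the contributions of several contact forces reduce to one resultant force and one torque about the centre of mass. This is a minimal sketch, not the chapter's algorithm; `net_wrench` and its arguments are hypothetical names.

```python
import numpy as np

def net_wrench(forces, points, com):
    """Resultant force and torque about the centre of mass `com` for
    multiple contact forces `forces` applied at positions `points`."""
    F = np.sum(forces, axis=0)                       # net linear force
    tau = np.sum([np.cross(p - com, f)               # torque of each contact
                  for p, f in zip(points, forces)], axis=0)
    return F, tau
```

For example, two equal and opposite finger forces on either side of an object cancel linearly but produce a pure torque, which is exactly the kind of grasp a single-force device cannot convey.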
Abstract:
Awareness of emerging situations in its dynamic operational environment is an essential capability of a robotic assistive device, resting on effective and efficient assessment of the prevailing situation. This allows the system to interact with the environment in a sensible, (semi-)autonomous and pro-active manner without the need for frequent interventions from a supervisor. In this paper, we report a novel generic Situation Assessment Architecture for robotic systems directly assisting humans, as developed in the CORBYS project, and its application in proof-of-concept Demonstrators developed and validated within the project: a robotic human follower and a mobile gait rehabilitation robotic system. We present an overview of the structure and functionality of the Situation Assessment Architecture, with results and observations collected from initial validation on the two CORBYS Demonstrators.
Abstract:
This paper reviews the literature concerning the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. Such a review provides a basis for discussion on the need for information recalled through OLAP systems to maintain the contexts of the transactions whose data were captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support the requirements for complex reporting and analytics. These sets of business rules are usually not the same as the business rules used to capture data in particular OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between information captured by OLTP systems and information recalled through OLAP systems. Literature concerning the modelling of business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends contributing to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP systems design depends critically on the capture of facts with associated context; the encoding of facts with context into data with business rules; the storage and sourcing of data with business rules; the decoding of data with business rules back into facts with context; and the recall of facts with associated context.
The paper proposes UBIRQ, a design model to aid the co-design of data and business rules storage for OLTP and OLAP purposes. The proposed design model provides the opportunity for the implementation and use of multi-purpose databases and business rules stores for OLTP and OLAP systems. Such implementations would enable OLTP systems to record and store data together with the executions of business rules, allowing both OLTP and OLAP systems to query data alongside the business rules used to capture them, thereby ensuring that information recalled via OLAP systems preserves the contexts of transactions as per the data captured by the respective OLTP system.
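A minimal sketch of the co-storage idea, assuming an illustrative in-memory store and hash-keyed rule sets. The names and schema here are our own, not UBIRQ's concrete design: the point is only that each fact is stored with a reference to the exact rule set that captured it, so recall returns fact and context together.

```python
import json
import hashlib

def record_transaction(store, fact, rules):
    """Store a transactional fact together with the business rules used to
    capture it, keyed by a hash of the rule set (illustrative only)."""
    rule_key = hashlib.sha256(
        json.dumps(rules, sort_keys=True).encode()).hexdigest()
    store.setdefault("rules", {})[rule_key] = rules     # de-duplicated rule sets
    store.setdefault("facts", []).append({"fact": fact, "rule_key": rule_key})
    return rule_key

def recall_with_context(store):
    """Recall each fact together with the rules (context) that captured it."""
    return [(f["fact"], store["rules"][f["rule_key"]]) for f in store["facts"]]
```

Because the rule set travels with the data, an analytical query can always reconstruct the semantics under which a fact was recorded rather than re-interpreting it under different ETL rules.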
Abstract:
Cover crops are sown to provide a number of ecosystem services including nutrient management, mitigation of diffuse pollution, improving soil structure and organic matter content, weed suppression, nitrogen fixation and provision of resources for biodiversity. Although the decision to sow a cover crop may be driven by a desire to achieve just one of these objectives, the diversity of cover crop species and mixtures available means that there is potential to combine a number of ecosystem services within the same crop and growing season. Designing multi-functional cover crops would potentially help to reconcile the often conflicting agronomic and environmental agendas and contribute to the optimal use of land. We present a framework for integrating multiple ecosystem services delivered by cover crops that aims to design a mixture of species with complementary growth habit and functionality. The optimal number and identity of species will depend on the services included in the analysis, the functional space represented by the available species pool and the community dynamics of the crop in terms of dominance and co-existence. Experience from a project that applied the framework to fertility-building leys in organic systems demonstrated its potential and emphasised the importance of the initial choice of species to include in the analysis.
Abstract:
Understanding the origin of the properties of metal-supported metal thin films is important for the rational design of bimetallic catalysts and other applications, but it is generally difficult to separate effects related to strain from those arising from interface interactions. Here we use density functional theory (DFT) to examine the structure and electronic behavior of few-layer palladium films on the rhenium (0001) surface, where there is negligible interfacial strain and therefore other effects can be isolated. Our DFT calculations predict stacking sequences and interlayer separations in excellent agreement with quantitative low-energy electron diffraction experiments. By theoretically simulating the Pd core-level X-ray photoemission spectra (XPS) of the films, we are able to interpret and assign the basic features of both low-resolution and high-resolution XPS measurements. The core levels at the interface shift to more negative energies, rigidly following the shifts in the same direction of the valence d-band center. We demonstrate that the valence band shift at the interface is caused by charge transfer from Re to Pd, which occurs mainly to valence states of hybridized s-p character rather than to the Pd d-band. Since the d-band filling is roughly constant, there is a correlation between the d-band center shift and its bandwidth. The resulting effect of this charge transfer on the valence d-band is thus analogous to the application of a lateral compressive strain on the adlayers. Our analysis suggests that charge transfer should be considered when describing the origin of core and valence band shifts in other metal/metal adlayer systems.
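For readers unfamiliar with the d-band centre referred to above: it is the first moment of the d-projected density of states, a standard descriptor in this kind of analysis. A minimal numerical sketch, assuming a uniformly sampled DOS (the sampling and function name are our own, not the paper's computational setup):

```python
import numpy as np

def d_band_centre(energies, dos):
    """First moment of the d-projected density of states, computed by a
    simple Riemann sum over a uniform energy grid."""
    de = energies[1] - energies[0]          # uniform grid spacing assumed
    norm = float(np.sum(dos) * de)          # integrated DOS
    return float(np.sum(energies * dos) * de / norm)
```

A shift of this centre toward or away from the Fermi level is the quantity being compared against strain effects in the discussion above.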
Abstract:
The paper explores pollination from a multi-level policy perspective and analyses the institutional fit and interplay of multi-faceted pollination-related policies. First, it asks what the major policies are that frame pollination at the EU level. Second, it explores the relationship between the EU policies and localised ways of understanding pollination. Third, it addresses how the concept of ecosystem services can aid in understanding the various ways of framing and governing the situation. The results show that the policy systems affecting pollination are abundant and that these systems create different kinds of pressure on stakeholders at several levels of society. The local-level concerns are more about the loss of pollination services than about the loss of pollinators. This points to a problem of fit between local activity driven by economic reasoning and biodiversity-driven EU policies. Here we see the concept of ecosystem services having some potential, since its operationalisation can combine economic and environmental considerations. Furthermore, the analysis shows that, instead of formal institutions, social norms, habits, and motivation seem to be the key to understanding and developing effective and attractive governance measures.
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than was previously possible. Furthermore, developments in recent years in, for example, modelling capabilities, data and resources have made it possible to produce global-scale flood forecasting systems. In this paper, the current state of operational large-scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large-scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
Abstract:
The first multi-model study to estimate the predictability of a boreal Sudden Stratospheric Warming (SSW) is performed using five NWP systems. During the 2012-2013 boreal winter, anomalous upward-propagating planetary wave activity was observed towards the end of December, which was followed by a rapid deceleration of the westerly circulation around 2 January 2013; on 7 January 2013 the zonal mean zonal wind at 60°N and 10 hPa reversed to easterly. This stratospheric dynamical activity was followed by an equatorward shift of the tropospheric jet stream and by a high pressure anomaly over the North Atlantic, which resulted in severe cold conditions in the UK and Northern Europe. In most of the five models, the SSW event was predicted 10 days in advance. However, only some ensemble members in most of the models predicted a weakening of the westerly wind when the models were initialized 15 days in advance of the SSW. Further dynamical analysis shows that this event was characterized by anomalous planetary wave-1 amplification followed by anomalous wave-2 amplification in the stratosphere, which resulted in a split vortex between 6 and 8 January 2013. The models have some success in reproducing wave-1 activity when initialized 15 days in advance, but they generally failed to reproduce the wave-2 activity during the final days of the event. Detailed analysis shows that the models have reasonably good skill in forecasting the tropospheric blocking features that stimulate wave-2 amplification in the troposphere, but limited skill in reproducing wave-2 amplification in the stratosphere.
Abstract:
This paper describes a novel on-line learning approach for radial basis function (RBF) neural networks. Based on an RBF network with individually tunable nodes and a fixed small model size, the weight vector is adjusted on-line using the multi-innovation recursive least squares algorithm. When the residual error of the RBF network becomes large despite the weight adaptation, an insignificant node with little contribution to the overall system is replaced by a new node. Structural parameters of the new node are optimized by the proposed fast algorithms in order to significantly improve the modeling performance. The proposed scheme offers a novel, flexible, and fast approach to on-line system identification problems. Simulation results show that the proposed approach can significantly outperform existing ones, for nonstationary systems in particular.
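A simplified sketch of such a scheme, combining a standard RLS weight update with replacement of the least significant node when the residual stays large. The node-placement rule, threshold and single-innovation RLS used here are illustrative assumptions for brevity, not the paper's optimized multi-innovation algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlineRBF:
    """Fixed-size Gaussian RBF model: RLS weight updates plus node
    replacement when the residual error remains large."""

    def __init__(self, n_nodes, dim, width=1.0, err_thresh=0.5):
        self.centres = rng.standard_normal((n_nodes, dim))
        self.width = width
        self.w = np.zeros(n_nodes)            # output-layer weight vector
        self.P = np.eye(n_nodes) * 1e3        # RLS inverse-correlation matrix
        self.err_thresh = err_thresh

    def _phi(self, x):
        d2 = ((self.centres - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def predict(self, x):
        return self._phi(x) @ self.w

    def update(self, x, y):
        phi = self._phi(x)
        err = y - phi @ self.w
        # Standard RLS update of the weight vector.
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)
        self.w += k * err
        self.P -= np.outer(k, phi @ self.P)
        # If the residual is still large, replace the least significant node
        # (smallest |weight * activation|) with one centred at the new sample.
        if abs(err) > self.err_thresh:
            idx = np.argmin(np.abs(self.w * phi))
            self.centres[idx] = x
            self.w[idx] = 0.0
        return err
```

The fixed model size keeps each update O(n²) in the number of nodes, which is what makes this style of scheme attractive for on-line identification of nonstationary systems.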
Abstract:
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
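A closed-form sum-to-one combination of this kind can be obtained with a Lagrange multiplier: minimise ||y - Pβ||² subject to 1ᵀβ = 1 over the recent window. This is a minimal sketch under our own formulation and names, not necessarily the paper's exact derivation.

```python
import numpy as np

def combine_weights(P, y):
    """Closed-form combination weights minimising the windowed squared
    error ||y - P @ beta||^2 subject to sum(beta) = 1.
    P: (window, M) matrix of sub-model predictions; y: (window,) targets."""
    A = P.T @ P                      # normal-equation matrix
    b = P.T @ y
    ones = np.ones(P.shape[1])
    Ainv_b = np.linalg.solve(A, b)   # unconstrained least-squares solution
    Ainv_1 = np.linalg.solve(A, ones)
    # Lagrange multiplier enforcing the sum-to-one constraint.
    lam = (1.0 - ones @ Ainv_b) / (ones @ Ainv_1)
    return Ainv_b + lam * Ainv_1
```

Because the optimum is taken over all sum-to-one weight vectors, the combined predictor can do no worse on the window than any fixed convex combination of the M sub-models, including uniform averaging.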
Abstract:
The environment where galaxies are found heavily influences their evolution. Close groupings, like the ones in the cores of galaxy clusters or compact groups, evolve in ways far more dramatic than their isolated counterparts. We have conducted a multi-wavelength study of Hickson Compact Group 7 (HCG 7), consisting of four giant galaxies: three spirals and one lenticular. We use Hubble Space Telescope (HST) imaging to identify and characterize the young and old star cluster populations. We find young massive clusters (YMCs) mostly in the three spirals, while the lenticular features a large, unimodal population of globular clusters (GCs) but no detectable clusters with ages less than a few Gyr. The spatial and approximate age distributions of the ~300 YMCs and ~150 GCs thus hint at a regular star formation history in the group over a Hubble time. While at first glance the HST data show the galaxies as undisturbed, our deep ground-based, wide-field imaging that extends the HST coverage reveals faint signatures of stellar material in the intragroup medium (IGM). We do not, however, detect the IGM in H I or Chandra X-ray observations, signatures that would be expected to arise from major mergers. Despite this, we find that the H I gas content of the individual galaxies, and of the group as a whole, is a third of the expected abundance. The appearance of quiescence is challenged by spectroscopy that reveals an intense ionization continuum in one galaxy nucleus, and post-burst characteristics in another. Our spectroscopic survey of dwarf galaxy members yields a single dwarf elliptical galaxy in an apparent stellar tidal feature. Based on all this information, we suggest an evolutionary scenario for HCG 7, whereby the galaxies convert most of their available gas into stars without the influence of major mergers, ultimately resulting in a dry merger.
As the conditions governing compact groups are reminiscent of galaxies at intermediate redshift, we propose that HCGs are appropriate for studying galaxy evolution at z ~ 1-2.
Abstract:
Policy hierarchies and automated policy refinement are powerful approaches to simplify administration of security services in complex network environments. A crucial issue for the practical use of these approaches is to ensure the validity of the policy hierarchy: since the policy sets for the lower levels are automatically derived from the abstract policies (defined by the modeller), we must be sure that the derived policies uphold the high-level ones. This paper builds upon previous work on Model-based Management, particularly on the Diagram of Abstract Subsystems approach, and goes further to propose a formal validation approach for the policy hierarchies yielded by the automated policy refinement process. We establish general validation conditions for a multi-layered policy model, i.e. necessary and sufficient conditions that a policy hierarchy must satisfy so that the lower-level policy sets are valid refinements of the higher-level policies according to the criteria of consistency and completeness. Relying upon the validation conditions and upon axioms about model representativeness, two theorems are proved to ensure compliance between the resulting system behaviour and the abstract policies that are modelled.
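The consistency and completeness criteria can be illustrated with a toy set-based encoding, in which policies are (subject, action, object) tuples and an abstraction map lifts concrete elements to abstract ones. This encoding is our own illustration, not the paper's Diagram of Abstract Subsystems formalism.

```python
def valid_refinement(abstract_policy, refined_policy, abstraction_map):
    """Check a refined policy set against abstract policies.
    Consistency: every refined rule, lifted to the abstract level, is
    permitted by some abstract policy. Completeness: every abstract
    policy is realised by at least one refined rule."""
    lifted = {tuple(abstraction_map.get(e, e) for e in rule)
              for rule in refined_policy}
    consistent = lifted <= abstract_policy   # nothing beyond the abstract policies
    complete = abstract_policy <= lifted     # every abstract policy realised
    return consistent and complete
```

In this toy form, a refinement that adds a concrete permission with no abstract counterpart fails consistency, while one that leaves an abstract policy unrealised fails completeness.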
Abstract:
Magneto-capacitance was studied in narrow-miniband GaAs/AlGaAs superlattices in which quasi-two-dimensional electrons exhibit the integer quantum Hall effect. Interwell tunneling was shown to reduce the effect of the quantization of the density of states on the capacitance of the superlattices. In this case the minimum of the capacitance observed at filling factor nu = 2 was attributed to the decrease of the electron compressibility due to the formation of the incompressible quantized Hall phase. In accord with theory, this phase was found to be strongly inhomogeneous. The incompressible fraction of the quantized Hall phase was demonstrated to disappear rapidly with increasing temperature.