160 results for Girometti, Giuseppe.


Relevance: 10.00%

Publisher:

Abstract:

Several methods for assessing the sustainability of agricultural systems have been developed. These methods do not fully: (i) take into account the multi-functionality of agriculture; (ii) include multidimensionality; (iii) utilize and implement the assessment knowledge; and (iv) identify conflicting goals and trade-offs. This paper reviews seven recently developed multidisciplinary indicator-based assessment methods with respect to how they address these shortcomings. All approaches include (1) normative aspects such as goal setting, (2) systemic aspects such as a specification of the scale of analysis, and (3) a reproducible structure. The approaches fall into three typologies. The top-down farm assessments focus on field or farm assessment. They have a clear procedure for measuring the indicators and assessing the sustainability of the system, which allows for benchmarking across farms. The degree of participation is low, which may negatively affect the implementation of the results. The top-down regional assessments evaluate both on-farm and regional effects. They include some participation to increase acceptance of the results; however, they lack an analysis of potential trade-offs. The bottom-up, integrated participatory or transdisciplinary approaches focus on a regional scale. Stakeholders are included throughout the whole process, ensuring acceptance of the results and increasing the probability that the developed measures are implemented. Because they include the interactions between indicators in their system representation, they allow a trade-off analysis to be performed. The bottom-up, integrated participatory or transdisciplinary approaches therefore seem best placed to overcome the four shortcomings mentioned above.

Relevance: 10.00%

Publisher:

Abstract:

The misuse of Personal Protective Equipment results in health risks among smallholders in developing countries, and education is often proposed to promote safer practices. However, evidence points to limited effects of education. This paper presents a System Dynamics model that allows risk-minimizing policies for behavioural change to be identified. The model is based on the IAC framework and survey data, and represents farmers' decision-making from an agent-oriented standpoint. The most successful intervention strategy was one that intervened over the long term, targeted key stocks in the system and was diversified. However, the results suggest that, under these conditions, no policy is able to trigger a self-sustaining behavioural change. Two implementation approaches were suggested by experts. One, based on constant social control, corresponds to changing the parameters of the current model. The other, based on participation, would lead farmers to new ways of thinking, i.e. changes in their decision-making structure.
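
As a rough illustration of the stock-and-flow logic that a System Dynamics model of farmer behaviour relies on, the following Python sketch integrates two hypothetical stocks (awareness of risk and safe PPE practice) with Euler steps. The stocks, flows and parameter values are illustrative assumptions only; they are not the IAC-based model described in the paper.

# Minimal stock-and-flow sketch of a behavioural System Dynamics model.
# The stocks ("awareness", "safe_practice") and all rates are hypothetical
# illustrations, not the IAC-based model described in the paper.

def simulate(years=10, dt=0.1,
             training_rate=0.3,      # inflow of awareness from education campaigns
             forgetting_rate=0.2,    # decay of awareness without reinforcement
             adoption_rate=0.4,      # awareness converted into safe practice
             relapse_rate=0.3):      # safe practice abandoned over time
    awareness, safe_practice = 0.1, 0.05   # initial stock levels (fractions of farmers)
    trajectory = []
    for _ in range(int(years / dt)):
        # Flows evaluated from current stock levels (Euler integration).
        d_awareness = training_rate * (1 - awareness) - forgetting_rate * awareness
        d_practice = adoption_rate * awareness * (1 - safe_practice) - relapse_rate * safe_practice
        awareness += d_awareness * dt
        safe_practice += d_practice * dt
        trajectory.append((awareness, safe_practice))
    return trajectory

if __name__ == "__main__":
    final_awareness, final_practice = simulate()[-1]
    print(f"awareness={final_awareness:.2f}, safe practice={final_practice:.2f}")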

Relevance: 10.00%

Publisher:

Abstract:

In the coming decades, the Mediterranean region is expected to experience various climate impacts with negative consequences for agricultural systems, causing uneven reductions in agricultural production. By and large, the impacts of climate change on Mediterranean agriculture will be heavier for the southern areas of the region. This unbalanced distribution of negative impacts underscores the significance and role of ethics in such a context of analysis. Consequently, the aim of this article is to justify and develop an ethical approach to agricultural adaptation in the Mediterranean and to derive the consequent implications for adaptation policy in the region. In particular, we define an index of adaptive capacity for the agricultural systems of the Mediterranean region, on the basis of which its different sub-regions can be grouped, and we provide an overview of suitable adaptation actions and policies for the sub-regions identified. We then vindicate and put forward an ethical approach to agricultural adaptation, highlighting the implications for the Mediterranean region and the limitations of such an ethical framework. Finally, we emphasize the broader potential of ethics for agricultural adaptation policy.
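
As a rough sketch of how an adaptive-capacity index can be built and used to group sub-regions, the Python fragment below normalises a few hypothetical indicators, combines them with assumed weights and bins the resulting scores into terciles. The indicators, weights and sub-region names are invented for illustration; the article's actual index composition is not reproduced here.

import numpy as np

# Hypothetical indicators per sub-region (e.g. irrigation capacity, income
# diversity, institutional strength); values and weights are assumptions.
indicators = {
    "sub_region_A": [0.8, 0.6, 0.7],
    "sub_region_B": [0.4, 0.5, 0.3],
    "sub_region_C": [0.2, 0.3, 0.4],
}
weights = np.array([0.4, 0.3, 0.3])

values = np.array(list(indicators.values()))
# Min-max normalisation per indicator so each contributes on a 0-1 scale.
norm = (values - values.min(axis=0)) / (values.max(axis=0) - values.min(axis=0) + 1e-9)
index = norm @ weights                  # composite adaptive-capacity index

# Group sub-regions into low / medium / high adaptive capacity by terciles.
thresholds = np.quantile(index, [1 / 3, 2 / 3])
for name, score in zip(indicators, index):
    group = "low" if score <= thresholds[0] else "medium" if score <= thresholds[1] else "high"
    print(f"{name}: index={score:.2f} -> {group} adaptive capacity")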

Relevance: 10.00%

Publisher:

Abstract:

A poplar short rotation coppice (SRC) grown for the production of bioenergy can combine carbon (C) storage with fossil fuel substitution. Here, we summarize the responses of a poplar (Populus) plantation to 6 yr of free air CO2 enrichment (POP/EUROFACE, consisting of two rotation cycles). We show that a poplar plantation growing in nonlimiting light, nutrient and water conditions will significantly increase its productivity in elevated CO2 concentrations ([CO2]). Increased biomass yield resulted from an early growth enhancement, and photosynthesis did not acclimate to elevated [CO2]. Sufficient nutrient availability, increased nitrogen use efficiency (NUE) and the large sink capacity of poplars contributed to the sustained increase in C uptake over 6 yr. Additional C taken up in high [CO2] was mainly invested into woody biomass pools. Coppicing increased yield by 66% and partly shifted the extra C uptake in elevated [CO2] to above-ground pools, as fine root biomass declined and its [CO2] stimulation disappeared. Mineral soil C increased equally in ambient and elevated [CO2] during the 6 yr experiment. However, elevated [CO2] increased the stabilization of C in the mineral soil. Increased productivity of a poplar SRC in elevated [CO2] may allow shorter rotation cycles, enhancing the viability of SRC for biofuel production.

Relevance: 10.00%

Publisher:

Abstract:

Models used in neoclassical economics assume human behaviour to be purely rational. On the other hand, models adopted in social and behavioural psychology are founded on the ‘black box’ of human cognition. In view of these observations, this paper aims to bridge this gap by introducing psychological constructs into the well-established microeconomic framework of choice behaviour based on random utility theory. In particular, it combines constructs derived from Ajzen’s theory of planned behaviour with Lancaster’s theory of consumer demand for product characteristics to explain stated preferences over certified animal-friendly foods. To this end, a web survey was administered in the five largest EU-25 countries: France, Germany, Italy, Spain and the UK. The findings identify some salient cross-cultural differences between northern and southern Europe and suggest that psychological constructs developed using the Ajzen model are useful in explaining heterogeneity of preferences. Implications for policy makers and marketers involved with certified animal-friendly foods are discussed.
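
A minimal sketch of the underlying random utility idea, assuming a simple binary logit in Python: the deterministic utility of the certified alternative combines Lancaster-style product characteristics (price, welfare level) with a theory-of-planned-behaviour construct. All coefficients and the construct scale are hypothetical, not estimates from the paper's survey.

import numpy as np

# Binary logit sketch: certified animal-friendly food vs a conventional
# reference alternative.  Coefficients and the "tpb_attitude" construct are
# illustrative assumptions only.

def choice_probabilities(price, animal_welfare_level, tpb_attitude,
                         beta_price=-1.2, beta_welfare=0.8, gamma_tpb=0.6):
    # Deterministic utility: product characteristics plus a psychological
    # construct from the theory of planned behaviour.
    v_certified = beta_price * price + beta_welfare * animal_welfare_level + gamma_tpb * tpb_attitude
    v_conventional = 0.0                     # reference alternative
    utilities = np.array([v_certified, v_conventional])
    exp_u = np.exp(utilities - utilities.max())   # numerically stable softmax
    return exp_u / exp_u.sum()               # P(certified), P(conventional)

# A respondent with a strong pro-welfare attitude facing a 20% price premium.
p_cert, p_conv = choice_probabilities(price=1.2, animal_welfare_level=1.0, tpb_attitude=1.5)
print(f"P(certified)={p_cert:.2f}, P(conventional)={p_conv:.2f}")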

Relevance: 10.00%

Publisher:

Abstract:

The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for multidimensional input datasets. In this paper, we present an application of the simulated annealing procedure to the SOM learning algorithm, with the aim of achieving faster learning and better performance in terms of quantization error. The proposed learning algorithm, called Fast Learning Self-Organized Map, does not compromise the simplicity of the standard SOM's learning algorithm. It also improves the quality of the resulting maps by providing better clustering quality and better topology preservation of the multidimensional input data. Several experiments compare the proposed approach with the original algorithm and some of its modifications and speed-up techniques.
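
A minimal sketch of a SOM trained with an annealing-style cooling schedule, in Python: the learning rate and neighbourhood radius both shrink with a temperature parameter that is cooled geometrically, and the quantization error is computed at the end. This is a simplified illustration of the general idea; the paper's actual Fast Learning Self-Organized Map procedure is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(10, 10), epochs=20, t0=1.0, cooling=0.9):
    """Minimal SOM with a geometric cooling schedule on learning rate and radius."""
    rows, cols = grid
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    # Grid coordinates used to compute neighbourhood distances on the map.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    temperature = t0
    for _ in range(epochs):
        lr = 0.5 * temperature                      # learning rate shrinks with temperature
        radius = max(rows, cols) / 2 * temperature  # neighbourhood radius shrinks too
        for x in rng.permutation(data):
            # Best matching unit (BMU): closest weight vector to the input.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU on the map grid.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * h[..., None] * (x - weights)
        temperature *= cooling                      # annealing-style cooling step
    return weights

data = rng.random((500, 3))
w = train_som(data)
# Quantization error: mean distance from each sample to its BMU.
qe = np.mean([np.min(np.linalg.norm(w - x, axis=-1)) for x in data])
print(f"quantization error: {qe:.4f}")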

Relevance: 10.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
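
A minimal sketch of the gossip-averaging idea behind a decentralised K-Means, assuming a synchronous simulation in Python: each node assigns its local points to the current centroids, and nodes then repeatedly average their per-cluster sums and counts with random peers, so every node converges to approximately the same global centroids. This is an illustrative simplification, not the authors' Epidemic K-Means protocol.

import numpy as np

rng = np.random.default_rng(1)

K, N_NODES, DIM = 3, 20, 2
# Each node holds its own local data partition.
local_data = [rng.normal(loc=rng.uniform(-5, 5, DIM), scale=1.0, size=(50, DIM))
              for _ in range(N_NODES)]
centroids = rng.uniform(-5, 5, size=(K, DIM))      # same initial centroids everywhere

for _ in range(10):                                 # K-Means iterations
    # Local step: each node computes its own cluster sums and counts.
    sums = np.zeros((N_NODES, K, DIM))
    counts = np.zeros((N_NODES, K))
    for i, data in enumerate(local_data):
        labels = np.argmin(((data[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        for k in range(K):
            sums[i, k] = data[labels == k].sum(axis=0)
            counts[i, k] = (labels == k).sum()
    # Gossip step: random pairs average their statistics (total mass is conserved).
    for _ in range(200):
        a, b = rng.choice(N_NODES, size=2, replace=False)
        sums[[a, b]] = (sums[a] + sums[b]) / 2
        counts[[a, b]] = (counts[a] + counts[b]) / 2
    # Every node can now recover approximately the same global centroids.
    centroids = sums[0] / np.maximum(counts[0][:, None], 1e-9)

print(centroids)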

Relevance: 10.00%

Publisher:

Abstract:

In Peer-to-Peer (P2P) networks, it is often desirable to assign node IDs which preserve locality relationships in the underlying topology. Node locality can be embedded into node IDs by applying a one-dimensional Hilbert space filling curve mapping to a vector of network distances from each node to a subset of reference landmark nodes within the network. However, this approach is fundamentally limited: while robustness and accuracy might be expected to improve with the number of landmarks, the effectiveness of the one-dimensional Hilbert curve mapping degrades with increasing dimensionality (the curse of dimensionality). This work proposes an approach that addresses this issue by using Landmark Multidimensional Scaling (LMDS) to reduce a large set of landmarks to a smaller set of virtual landmarks. This smaller set is postulated to capture the intrinsic dimensionality of the network space, so a space filling curve applied to the virtual landmarks is expected to produce a better mapping of the node ID space. The proposed approach, the Virtual Landmarks Hilbert Curve (VLHC), is particularly suitable for decentralised systems such as P2P networks. In the experimental simulations, the effectiveness of the methods is measured by the locality preservation of the derived node IDs, in terms of latency to nearest neighbours. A variety of realistic network topologies are simulated, and the results provide strong evidence that VLHC performs better than either Hilbert curves or LMDS used independently of each other.
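
A minimal sketch of the overall pipeline, under two stated assumptions: plain PCA via SVD stands in for Landmark MDS, and the third-party hilbertcurve Python package (HilbertCurve, distances_from_points) is assumed available for the space filling curve step. Landmark latencies are randomly generated for illustration.

import numpy as np
from hilbertcurve.hilbertcurve import HilbertCurve   # third-party package, assumed installed

rng = np.random.default_rng(2)

# Each node measures latencies to a (possibly large) set of landmarks.
n_nodes, n_landmarks = 200, 16
latencies = rng.gamma(shape=2.0, scale=20.0, size=(n_nodes, n_landmarks))

# Dimensionality reduction: project landmark-latency vectors onto a few
# "virtual landmark" axes.  Plain PCA via SVD is used here as a stand-in for
# the paper's Landmark MDS procedure.
centered = latencies - latencies.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
virtual_dims = 3
virtual_coords = centered @ vt[:virtual_dims].T       # (n_nodes, virtual_dims)

# Quantise the virtual coordinates onto an integer grid and map them to a
# one-dimensional node ID with a Hilbert space filling curve.
bits = 8                                              # 2^8 cells per virtual dimension
mins, maxs = virtual_coords.min(axis=0), virtual_coords.max(axis=0)
grid = ((virtual_coords - mins) / (maxs - mins + 1e-9) * (2 ** bits - 1)).astype(int)
hilbert = HilbertCurve(p=bits, n=virtual_dims)
node_ids = hilbert.distances_from_points(grid.tolist())

print(node_ids[:5])   # 1-D IDs that tend to keep nearby nodes close together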

Relevance: 10.00%

Publisher:

Abstract:

Gossip (or Epidemic) protocols have emerged as a communication and computation paradigm for large-scale networked systems. These protocols are based on randomised communication, which provides probabilistic guarantees on convergence speed and accuracy. They also provide robustness, scalability, computational and communication efficiency, and high stability under disruption. This work presents a novel Gossip protocol named the Symmetric Push-Sum Protocol for the computation of global aggregates (e.g., the average) in decentralised and asynchronous systems. The proposed approach combines the simplicity of the push-based approach with the efficiency of push-pull schemes. Push-pull schemes cannot be directly employed in asynchronous systems, as they require synchronous paired communication operations to guarantee their accuracy. Although push schemes guarantee accuracy even with asynchronous communication, they suffer from slower and less stable convergence. The Symmetric Push-Sum Protocol does not require synchronous communication and achieves a convergence speed similar to the push-pull schemes, while keeping the accuracy stability of the push scheme. In the experimental analysis, we focus on computing the global average as an important class of node aggregation problems. The results confirm that the proposed method inherits the advantages of both other schemes and outperforms well-known state-of-the-art protocols for decentralised Gossip-based aggregation.
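
A minimal sketch of the classic push-sum gossip scheme that the protocol builds on, simulated synchronously in Python: every node keeps a (sum, weight) pair, pushes half of it to a random peer each round and estimates the global average as sum/weight. The Symmetric Push-Sum variant itself is not reproduced here.

import numpy as np

rng = np.random.default_rng(3)

n_nodes, rounds = 100, 30
values = rng.normal(50.0, 10.0, size=n_nodes)    # local measurements
s = values.copy()                                 # running sums
w = np.ones(n_nodes)                              # running weights

for _ in range(rounds):
    targets = rng.integers(0, n_nodes, size=n_nodes)
    s_in, w_in = np.zeros(n_nodes), np.zeros(n_nodes)
    # Each node keeps half its mass and pushes the other half to one random peer.
    for i, t in enumerate(targets):
        s_in[i] += s[i] / 2
        w_in[i] += w[i] / 2
        s_in[t] += s[i] / 2
        w_in[t] += w[i] / 2
    s, w = s_in, w_in

estimates = s / w                                 # each node's estimate of the global average
print(f"true average: {values.mean():.3f}, gossip estimates: "
      f"{estimates.min():.3f} .. {estimates.max():.3f}")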

Relevance: 10.00%

Publisher:

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive.

To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning the parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets.

Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.

Relevance: 10.00%

Publisher:

Abstract:

Integrated simulation models can be useful tools in farming systems research. This chapter reviews three commonly used approaches, i.e. linear programming, system dynamics and agent-based models. Applications of each approach are presented, and strengths and drawbacks are discussed. We argue that, despite some challenges, mainly related to the integration of different approaches, model validation and the representation of human agents, integrated simulation models contribute important insights to the analysis of farming systems. They help unravel the complex and dynamic interactions and feedbacks among bio-physical, socio-economic and institutional components across scales and levels in farming systems. In addition, they can provide a platform for integrative research, and they can support transdisciplinary research by functioning as learning platforms in participatory processes.
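
As a small illustration of the linear programming approach mentioned above, the following Python sketch uses scipy.optimize.linprog to allocate land between two hypothetical crops under land and labour constraints. All coefficients are invented for illustration and are not taken from any particular farming-system study.

from scipy.optimize import linprog

# Minimal farm planning LP: choose the area (ha) of two hypothetical crops to
# maximise gross margin subject to land and labour constraints.
gross_margin = [600, 900]        # EUR/ha for crop A and crop B (assumed)
labour_need = [20, 45]           # hours/ha (assumed)
land_total = 50                  # ha available
labour_total = 1600              # hours available

# linprog minimises, so negate the margins to maximise them.
c = [-m for m in gross_margin]
A_ub = [[1, 1],                  # land constraint: total area at most 50 ha
        labour_need]             # labour constraint
b_ub = [land_total, labour_total]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
area_a, area_b = result.x
print(f"crop A: {area_a:.1f} ha, crop B: {area_b:.1f} ha, margin: {-result.fun:.0f} EUR")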

Relevance: 10.00%

Publisher:

Abstract:

Issues pertaining to consumer understanding of food health claims are complex and difficult to disentangle because there is a surprising lack of multidisciplinary research aimed at evaluating how consumers are influenced by factors impacting on the evaluation process. In the EU, current legislation is designed to protect consumers from misleading and false claims but there is much debate about the concept of the ‘average consumer’ referred to in the legislation. This review provides an overview of the current legislative framework, discusses the concept of the ‘average consumer’ and brings together findings on consumer understanding from an international perspective. It examines factors related to the personal characteristics of individuals such as socio-demographic status, knowledge, and attitudes, and factors pertaining to food and food supplement products such as the wording of claims and the communication of the strength and consistency of the scientific evidence. As well as providing insights for future research, the conclusions highlight the importance of enhancing the communication of scientific evidence to improve consumer understanding of food health claims.

Relevance: 10.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.