986 results for Mastromarco, Giuseppe


Relevance:

10.00%

Publisher:

Abstract:

A poplar short rotation coppice (SRC) grown for the production of bioenergy can combine carbon (C) storage with fossil fuel substitution. Here, we summarize the responses of a poplar (Populus) plantation to 6 yr of free-air CO2 enrichment (POP/EUROFACE, consisting of two rotation cycles). We show that a poplar plantation growing under nonlimiting light, nutrient and water conditions significantly increases its productivity in elevated CO2 concentrations ([CO2]). Increased biomass yield resulted from an early growth enhancement, and photosynthesis did not acclimate to elevated [CO2]. Sufficient nutrient availability, increased nitrogen use efficiency (NUE) and the large sink capacity of poplars contributed to the sustained increase in C uptake over 6 yr. Additional C taken up in high [CO2] was mainly invested in woody biomass pools. Coppicing increased yield by 66% and partly shifted the extra C uptake in elevated [CO2] to above-ground pools, as fine root biomass declined and its [CO2] stimulation disappeared. Mineral soil C increased equally in ambient and elevated [CO2] during the 6-yr experiment. However, elevated [CO2] increased the stabilization of C in the mineral soil. The increased productivity of a poplar SRC in elevated [CO2] may allow shorter rotation cycles, enhancing the viability of SRC for biofuel production.

Relevance:

10.00%

Publisher:

Abstract:

Models used in neoclassical economics assume human behaviour to be purely rational. On the other hand, models adopted in social and behavioural psychology are founded on the ‘black box’ of human cognition. In view of these observations, this paper aims to bridge this gap by introducing psychological constructs into the well-established microeconomic framework of choice behaviour based on random utility theory. In particular, it combines constructs developed using Ajzen’s theory of planned behaviour with Lancaster’s theory of consumer demand for product characteristics to explain stated preferences over certified animal-friendly foods. To reach this objective, a web survey was administered in the five largest EU-25 countries: France, Germany, Italy, Spain and the UK. Findings identify some salient cross-cultural differences between northern and southern Europe and suggest that psychological constructs developed using the Ajzen model are useful in explaining heterogeneity of preferences. Implications for policy makers and marketers involved with certified animal-friendly foods are discussed.
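
As a rough illustration of how such psychological constructs could enter a random-utility specification, the sketch below computes multinomial logit choice probabilities in which a hypothetical attitude score interacts with a product attribute. The variable names, coefficients and data are illustrative assumptions, not taken from the study.

```python
import numpy as np

def choice_probabilities(X, beta):
    """Multinomial logit choice probabilities for one respondent.

    X    : (n_alternatives, n_attributes) design matrix; columns mix
           Lancaster-style product characteristics (e.g. price, welfare label)
           with an individual psychological construct interacted with an attribute.
    beta : (n_attributes,) taste coefficients.
    """
    v = X @ beta                      # deterministic utility of each alternative
    v -= v.max()                      # numerical stability before exponentiation
    expv = np.exp(v)
    return expv / expv.sum()

# Illustrative data: three food alternatives described by price, an
# animal-welfare certification dummy, and that dummy interacted with a
# (hypothetical) attitude score from a planned-behaviour questionnaire.
attitude = 0.8                                   # standardised attitude score
X = np.array([[3.0, 0, 0.0],                     # conventional product
              [3.5, 1, attitude],                # certified product
              [4.2, 1, attitude]])               # premium certified product
beta = np.array([-0.6, 0.4, 0.9])                # purely illustrative coefficients

print(choice_probabilities(X, beta))
```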

Relevance:

10.00%

Publisher:

Abstract:

The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for multidimensional input datasets. In this paper, we present an application of the simulated annealing procedure to the SOM learning algorithm with the aim of obtaining faster learning and better performance in terms of quantization error. The proposed learning algorithm, called the Fast Learning Self-Organized Map, does not affect the simplicity of the basic learning algorithm of the standard SOM. It also improves the quality of the resulting maps by providing better clustering quality and topology preservation of the multidimensional input data. Several experiments are used to compare the proposed approach with the original algorithm and some of its modifications and speed-up techniques.
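
The abstract does not give the details of the proposed algorithm; the following is only a generic SOM training loop with an annealing-style cooling of the learning rate and neighbourhood radius, plus the quantization-error measure being discussed. All parameters and the cooling schedule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_iter=5000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM training loop with exponentially 'cooled' parameters.

    The learning rate and neighbourhood radius follow an annealing-like
    schedule; this is a plain SOM sketch, not the Fast Learning SOM itself.
    """
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    weights = rng.random((rows, cols, dim))
    # grid coordinates of every neuron, used by the neighbourhood function
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)

    for t in range(n_iter):
        frac = t / n_iter
        lr = lr0 * np.exp(-3.0 * frac)        # cooling schedule for the step size
        sigma = sigma0 * np.exp(-3.0 * frac)  # cooling schedule for the radius

        x = data[rng.integers(n)]
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best matching unit
        grid_d2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
        h = np.exp(-grid_d2 / (2.0 * sigma ** 2))               # Gaussian neighbourhood
        weights += lr * h[..., None] * (x - weights)

    return weights

def quantization_error(data, weights):
    flat = weights.reshape(-1, weights.shape[-1])
    d = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=-1)
    return d.min(axis=1).mean()

data = np.random.default_rng(1).random((500, 4))
w = train_som(data)
print("quantization error:", quantization_error(data, w))
```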

Relevance:

10.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. This work proposes a fully decentralised algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art distributed K-Means algorithms based on sampling methods. The experimental analysis confirms that the proposed algorithm is a practical and accurate distributed K-Means implementation for networked systems of very large and extreme scale.
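
A minimal, illustrative sketch of the decentralised idea (not the exact Epidemic K-Means protocol): each node computes per-cluster sums and counts on its local data, the nodes average those statistics by pairwise gossip, and every node then recovers approximately the same centroids from the gossiped ratios. Peer selection, topology and convergence criteria are simplified assumptions.

```python
import numpy as np

def epidemic_kmeans_sketch(node_data, k=3, kmeans_iters=10, gossip_rounds=30, seed=0):
    """Gossip-style decentralised K-Means sketch (assumed, simplified protocol)."""
    rng = np.random.default_rng(seed)
    n_nodes = len(node_data)
    dim = node_data[0].shape[1]
    centroids = rng.random((k, dim))            # same initial guess on all nodes

    for _ in range(kmeans_iters):
        # (i) local statistics: per-cluster sum and count on each node's own data
        stats = []
        for X in node_data:
            labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=-1), axis=1)
            s, c = np.zeros((k, dim)), np.zeros(k)
            for j in range(k):
                s[j] = X[labels == j].sum(axis=0)
                c[j] = (labels == j).sum()
            stats.append((s, c))

        # (ii) pairwise gossip averaging of the statistics (no coordinator)
        for _ in range(gossip_rounds):
            a, b = rng.choice(n_nodes, size=2, replace=False)
            sa, ca = stats[a]; sb, cb = stats[b]
            ms, mc = (sa + sb) / 2, (ca + cb) / 2
            stats[a] = (ms.copy(), mc.copy()); stats[b] = (ms, mc)

        # the ratio of averaged sums to averaged counts equals the global centroid
        s, c = stats[0]
        centroids = s / np.maximum(c, 1e-9)[:, None]
    return centroids

nodes = [np.random.default_rng(i).random((50, 2)) for i in range(8)]
print(epidemic_kmeans_sketch(nodes))
```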

Relevance:

10.00%

Publisher:

Abstract:

In Peer-to-Peer (P2P) networks, it is often desirable to assign node IDs which preserve locality relationships in the underlying topology. Node locality can be embedded into node IDs by applying a one-dimensional mapping, such as a Hilbert space-filling curve, to a vector of network distances from each node to a subset of reference landmark nodes within the network. However, this approach is fundamentally limited: while robustness and accuracy might be expected to improve with the number of landmarks, the effectiveness of the one-dimensional Hilbert curve mapping suffers from the curse of dimensionality. This work proposes an approach to solve this issue using Landmark Multidimensional Scaling (LMDS) to reduce a large set of landmarks to a smaller set of virtual landmarks. This smaller set of landmarks has been postulated to represent the intrinsic dimensionality of the network space, and therefore a space-filling curve applied to these virtual landmarks is expected to produce a better mapping of the node ID space. The proposed approach, the Virtual Landmarks Hilbert Curve (VLHC), is particularly suitable for decentralised systems such as P2P networks. In the experimental simulations, the effectiveness of the methods is measured by means of the locality preservation derived from node IDs, in terms of latency to nearest neighbours. A variety of realistic network topologies are simulated, and this work provides strong evidence to suggest that VLHC performs better than either Hilbert curves or LMDS used independently of each other.
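
To make the pipeline concrete, the sketch below embeds node-to-landmark latency vectors into two dimensions with classical MDS (standing in for Landmark MDS) and then derives node IDs from a standard 2-D Hilbert curve index. The latency data, grid resolution and the use of classical rather than landmark MDS are all simplifying assumptions for illustration.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical MDS from a pairwise-distance matrix (a simple stand-in for LMDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def hilbert_index(order, x, y):
    """Map a grid cell (x, y), 0 <= x, y < 2**order, onto the Hilbert curve."""
    d, s = 0, 2 ** (order - 1)
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                              # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Illustrative latency vectors from 6 nodes to 8 landmarks (milliseconds).
rng = np.random.default_rng(0)
latencies = rng.uniform(5, 100, size=(6, 8))
# Pairwise node distances in "landmark space", embedded into 2-D coordinates.
D = np.linalg.norm(latencies[:, None] - latencies[None], axis=-1)
coords = classical_mds(D, dims=2)

# Quantise coordinates onto a 2**order grid and derive locality-preserving IDs.
order = 8
lo, hi = coords.min(axis=0), coords.max(axis=0)
grid = ((coords - lo) / (hi - lo + 1e-12) * (2 ** order - 1)).astype(int)
node_ids = [hilbert_index(order, int(x), int(y)) for x, y in grid]
print(node_ids)
```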

Relevance:

10.00%

Publisher:

Abstract:

Gossip (or Epidemic) protocols have emerged as a communication and computation paradigm for large-scale networked systems. These protocols are based on randomised communication, which provides probabilistic guarantees on convergence speed and accuracy. They also provide robustness, scalability, computational and communication efficiency, and high stability under disruption. This work presents a novel Gossip protocol, named the Symmetric Push-Sum Protocol, for the computation of global aggregates (e.g., the average) in decentralised and asynchronous systems. The proposed approach combines the simplicity of the push-based approach with the efficiency of the push-pull schemes. The push-pull schemes cannot be directly employed in asynchronous systems, as they require synchronous paired communication operations to guarantee their accuracy. Although push schemes guarantee accuracy even with asynchronous communication, they suffer from slower and less stable convergence. The Symmetric Push-Sum Protocol does not require synchronous communication and achieves a convergence speed similar to that of the push-pull schemes, while keeping the accuracy stability of the push scheme. In the experimental analysis, we focus on computing the global average as an important class of node aggregation problems. The results confirm that the proposed method inherits the advantages of both other schemes and outperforms well-known state-of-the-art protocols for decentralised Gossip-based aggregation.
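
For orientation, the following simulates the classic push-sum protocol (Kempe et al.), the push-based primitive on which such averaging rests; the Symmetric Push-Sum Protocol itself modifies the message exchange and is not reproduced here. Values and round counts are illustrative.

```python
import random

def push_sum_average(values, rounds=50, seed=0):
    """Simulation of the classic push-sum gossip protocol.

    Every node keeps a pair (s, w); each round it halves the pair, keeps one
    half and pushes the other half to a uniformly random peer. The local
    estimate s / w converges to the global average.
    """
    rng = random.Random(seed)
    n = len(values)
    s = list(values)          # running sums
    w = [1.0] * n             # running weights

    for _ in range(rounds):
        inbox = [(0.0, 0.0)] * n
        for i in range(n):
            half_s, half_w = s[i] / 2, w[i] / 2
            j = rng.randrange(n)                       # random peer (may be itself)
            inbox[i] = (inbox[i][0] + half_s, inbox[i][1] + half_w)   # keep one half
            inbox[j] = (inbox[j][0] + half_s, inbox[j][1] + half_w)   # push the other
        s = [p[0] for p in inbox]
        w = [p[1] for p in inbox]

    return [si / wi for si, wi in zip(s, w)]

values = [10.0, 2.0, 7.0, 5.0, 1.0, 11.0]
print(push_sum_average(values))                        # each estimate approaches 6.0
```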

Relevance:

10.00%

Publisher:

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive.

To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets.

Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.

Relevance:

10.00%

Publisher:

Abstract:

Integrated simulation models can be useful tools in farming system research. This chapter reviews three commonly used approaches, i.e. linear programming, system dynamics and agent-based models. Applications of each approach are presented and strengths and drawbacks are discussed. We argue that, despite some challenges, mainly related to the integration of different approaches, model validation and the representation of human agents, integrated simulation models contribute important insights to the analysis of farming systems. They help unravel the complex and dynamic interactions and feedbacks among bio-physical, socio-economic and institutional components across scales and levels in farming systems. In addition, they can provide a platform for integrative research and can support transdisciplinary research by functioning as learning platforms in participatory processes.
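
As a toy example of the linear-programming approach mentioned above, the sketch below maximises gross margin over two hypothetical crops under land and labour constraints using scipy.optimize.linprog; all figures are invented for illustration and do not come from the chapter.

```python
import numpy as np
from scipy.optimize import linprog

# Choose hectares of two hypothetical crops to maximise gross margin
# subject to land and labour constraints (all numbers are made up).
gross_margin = np.array([450.0, 600.0])      # EUR per hectare (crop 1, crop 2)
land_use     = np.array([1.0, 1.0])          # ha of land per ha of crop
labour_use   = np.array([8.0, 14.0])         # hours of labour per ha

c = -gross_margin                            # linprog minimises, so negate
A_ub = np.vstack([land_use, labour_use])
b_ub = np.array([100.0, 1100.0])             # 100 ha of land, 1100 h of labour

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal crop areas (ha):", res.x)
print("maximum gross margin (EUR):", -res.fun)
```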

Relevance:

10.00%

Publisher:

Abstract:

Issues pertaining to consumer understanding of food health claims are complex and difficult to disentangle, because there is a surprising lack of multidisciplinary research evaluating how consumers are influenced by the factors that shape the evaluation process. In the EU, current legislation is designed to protect consumers from misleading and false claims, but there is much debate about the concept of the ‘average consumer’ referred to in the legislation. This review provides an overview of the current legislative framework, discusses the concept of the ‘average consumer’ and brings together findings on consumer understanding from an international perspective. It examines factors related to the personal characteristics of individuals, such as socio-demographic status, knowledge and attitudes, and factors pertaining to food and food supplement products, such as the wording of claims and the communication of the strength and consistency of the scientific evidence. As well as providing insights for future research, the conclusions highlight the importance of enhancing the communication of scientific evidence to improve consumer understanding of food health claims.

Relevance:

10.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or by high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (Epidemic K-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
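
Complementing the earlier sketch of the decentralised clustering loop, the example below illustrates the fault-tolerance argument: pairwise gossip averaging (the primitive used to combine per-cluster statistics) still converges to the global mean when a substantial fraction of exchanges is lost, because a dropped exchange simply leaves both nodes' states unchanged. The loss rate and peer-selection model are assumptions for illustration only.

```python
import numpy as np

def gossip_average_with_loss(values, rounds=200, loss_rate=0.3, seed=0):
    """Pairwise gossip averaging under message loss (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.array(values, dtype=float)
    n = len(x)
    for _ in range(rounds):
        i, j = rng.choice(n, size=2, replace=False)
        if rng.random() < loss_rate:
            continue                      # exchange lost: both states stay consistent
        m = (x[i] + x[j]) / 2             # atomic pairwise averaging step
        x[i] = x[j] = m
    return x

values = [4.0, 9.0, 1.0, 6.0, 10.0, 0.0]
print("true mean:", np.mean(values))
print("estimates:", gossip_average_with_loss(values))
```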

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes and demonstrates an approach, Skilloscopy, to the assessment of decision makers. In an increasingly sophisticated, connected and information-rich world, decision making is becoming both more important and more difficult. At the same time, modelling decision making on computers is becoming more feasible and of interest, partly because the information input to those decisions is increasingly on record. The aims of Skilloscopy are to rate and rank decision makers in a domain relative to each other; the aims do not include an analysis of why a decision is wrong or suboptimal, nor the modelling of the underlying cognitive process of making the decisions. In the proposed method, a decision maker is characterised by a probability distribution over their competence in choosing among quantifiable alternatives. This probability distribution is derived by classic Bayesian inference from a combination of prior belief and the evidence of the decisions. Thus, decision makers’ skills may be better compared, rated and ranked. The proposed method is applied and evaluated in the game domain of Chess. A large set of games by players across a broad range of the World Chess Federation (FIDE) Elo ratings has been used to infer the distribution of players’ ratings directly from the moves they play rather than from game outcomes. Demonstration applications address questions frequently asked by the Chess community regarding the stability of the Elo rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The method of Skilloscopy may be applied in any decision domain where the value of the decision options can be quantified.
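
The exact Skilloscopy likelihood is not given in the abstract; the sketch below shows only the general Bayesian mechanism with an assumed softmax choice model over move evaluations, in which a discrete posterior over candidate skill levels is updated from observed decisions. The skill grid, move values and softmax form are illustrative assumptions.

```python
import numpy as np

def update_skill_posterior(prior, skills, move_values, chosen_idx):
    """One Bayesian update of a discrete skill posterior from a single decision.

    move_values are (engine-style) evaluations of the available moves and
    chosen_idx is the move actually played. A softmax choice model, in which
    higher skill concentrates probability on better moves, serves as an
    assumed likelihood; the actual Skilloscopy likelihood may differ.
    """
    posterior = np.array(prior, dtype=float)
    for k, skill in enumerate(skills):
        logits = skill * np.array(move_values)
        logits -= logits.max()                       # numerical stability
        probs = np.exp(logits) / np.exp(logits).sum()
        posterior[k] *= probs[chosen_idx]            # likelihood of the observed choice
    return posterior / posterior.sum()

skills = np.linspace(0.5, 10.0, 20)                  # candidate "sharpness" levels
posterior = np.full(len(skills), 1.0 / len(skills))  # uniform prior belief

# A sequence of (move evaluations, index of the move played); values made up.
decisions = [([0.3, 0.1, -0.2], 0), ([0.0, 0.5, 0.4], 1), ([0.2, 0.2, 0.6], 2)]
for values, chosen in decisions:
    posterior = update_skill_posterior(posterior, skills, values, chosen)

print("posterior mean skill:", float(np.dot(skills, posterior)))
```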

Relevance:

10.00%

Publisher:

Abstract:

The quadridentate N-heterocyclic ligand 6-(5,5,8,8-tetramethyl-5,6,7,8-tetrahydro-1,2,4-benzotriazin-3-yl)-2,2′:6′,2′′-terpyridine (CyMe4-hemi-BTBP) has been synthesized and its interactions with Am(III), U(VI), Ln(III) and some transition metal cations have been evaluated by X-ray crystallographic analysis, Am(III)/Eu(III) solvent extraction experiments, UV absorption spectrophotometry, NMR studies and ESI-MS. Structures of 1:1 complexes with Eu(III), Ce(III) and the linear uranyl (UO2^2+) ion were obtained by X-ray crystallographic analysis, and they showed similar coordination behavior to related BTBP complexes. In methanol, the stability constants of the Ln(III) complexes are slightly lower than those of the analogous quadridentate bis-triazine BTBP ligands, while the stability constant for the Yb(III) complex is higher. 1H NMR titrations and ESI-MS with lanthanide nitrates showed that the ligand forms only 1:1 complexes with Eu(III), Ce(III) and Yb(III), while both 1:1 and 1:2 complexes were formed with La(III) and Y(III) in acetonitrile. A mixture of isomeric chiral 2:2 helical complexes was formed with Cu(I), with a slight preference (1.4:1) for a single directional isomer. In contrast, a 1:1 complex was observed with the larger Ag(I) ion. The ligand was unable to extract Am(III) or Eu(III) from nitric acid solutions into 1-octanol, except in the presence of a synergist at low acidity. The results show that the presence of two outer 1,2,4-triazine rings is required for the efficient extraction and separation of An(III) from Ln(III) by quadridentate N-donor ligands.

Relevance:

10.00%

Publisher:

Abstract:

Methods for assessing the sustainability of agricultural systems often do not fully (i) take into account the multifunctionality of agriculture, (ii) include multidimensionality, (iii) utilize and implement the assessment knowledge and (iv) identify conflicting goals and trade-offs. This chapter reviews seven recently developed multidisciplinary indicator-based assessment methods with respect to their contribution to addressing these shortcomings. All approaches include (1) normative aspects, such as goal setting, (2) systemic aspects, such as a specification of the scale of analysis, and (3) a reproducible structure of the approach. The approaches can be categorized into three typologies: first, top-down farm assessments, which focus on field or farm assessment; second, top-down regional assessments, which assess the on-farm and regional effects; and third, bottom-up, integrated participatory or transdisciplinary approaches, which focus on a regional scale. Our analysis shows that the bottom-up, integrated participatory or transdisciplinary approaches seem best able to overcome the four shortcomings mentioned above.

Relevance:

10.00%

Publisher:

Abstract:

The ‘Trieste question’, that is, the question of the Italian-Yugoslav border after the Second World War, has long been the object of careful examination in both Italian and foreign historiography. With a few important exceptions, the overall reconstruction of these events has been based for the most part on historical and diplomatic approaches, which have sometimes made it rather difficult to understand clearly the relationships and interdependences at play between the local, national and international contexts. Through a comparative analysis of a large body of documents from the National Archives and Records Administration (NARA), College Park, MD, this essay attempts a rereading of the various phases in which the question developed between June 1945 and October 1954, following a twofold perspective. The first part focuses on American policy in Trieste, looking specifically at two internal and closely linked aspects: on the one hand, the management of law and order and a ‘strategy’ of consent, to be achieved through the control of the means of information; on the other, the promotion of a cultural policy. Both aspects can be traced back to the ‘direct rule’ model, which gave the Allied Military Government (AMG) full and exclusive governing authority over Zone A of Venezia Giulia, and both are central to understanding the interaction between institutions and social actors. In the second part of the essay, the change in archival sources indicates a new set of priorities in American foreign policy regarding Trieste: outside the international negotiations for the settlement of the question, the Clare Boothe Luce papers held in the Embassy’s records show above all how the Trieste question was projected outwards, towards Italy in particular, and exploited, above all by the ambassador, within the bipolar framework of the Cold War in order to strengthen domestic support for Atlantic policies. The essay therefore develops along two lines of inquiry, within and outside Trieste (within in 1945-1952, outside in 1953-1954), since these emerge from the sources consulted as the priority areas of the two periods.

Relevance:

10.00%

Publisher:

Abstract:

The emergence and spread of infectious diseases reflects the interaction of ecological and economic factors within an adaptive complex system. We review studies that address the role of economic factors in the emergence and spread of infectious diseases and identify three broad themes. First, the process of macro-economic growth leads to environmental encroachment, which is related to the emergence of infectious diseases. Second, there are a number of mutually reinforcing processes associated with the emergence and spread of infectious diseases. For example, the emergence and spread of infectious diseases can cause significant economic damage, which in turn may create the conditions for further disease spread. Also, the mutually reinforcing relationship between global trade and macroeconomic growth amplifies the emergence and spread of infectious diseases. Third, microeconomic approaches to infectious disease point to the adaptivity of human behavior, which simultaneously shapes the course of epidemics and responds to it. Most of the applied research has focused on the first two aspects, and to a lesser extent on the third. With respect to the latter, there is a lack of empirical research aimed at characterizing the behavioral component following a disease outbreak. Future research should seek to fill this gap and develop hierarchical econometric models capable of integrating both macro- and micro-economic processes into disease ecology.