14 results for Hierarchy of beings

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

90.00%

Publisher:

Abstract:

The aim of this thesis is to go through different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi, exploiting a different technique for each one. We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) w.r.t. the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing machines, namely grammars of types 1, 2 and 3 in the Chomsky hierarchy. We then move to asynchronous languages and study full abstraction for two Linda-like languages. Linda can be considered as the asynchronous version of CCS plus a shared memory (a multiset of elements) that is used for storing messages. After having defined a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions that identify different traces which do not correspond to different behaviours. Since the ability of one of the two variants to recognise multiple occurrences of messages in the store (which accounts for an increase of expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language: indeed we show that if we restrict to rules whose head contains at most n atoms, we generate a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n). Finally, by varying the form of the rewriting rules, several dialects of the calculus can be obtained. 
We analyse the expressive power of some of these dialects by focusing on decidability and undecidability for problems like reachability and coverability.
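As an illustration of the grammar classes mentioned above (this is not code or a grammar from the thesis; the grammar S → aS | b and all names below are assumptions for the example), a type-3 (regular) grammar can be simulated with a naive bounded leftmost-derivation search:

```python
# Type-3 (regular) grammar: every rule is N -> aM or N -> a.
# S -> a S | b generates the regular language a* b.
def derives(grammar, start, word, depth=20):
    """Check whether `word` is derivable from `start` by a bounded
    breadth-first search over sentential forms."""
    frontier = {start}
    for _ in range(depth):
        new = set()
        for sent in frontier:
            if sent == word:
                return True
            # expand the leftmost nonterminal (uppercase symbol)
            for i, sym in enumerate(sent):
                if sym.isupper():
                    for rhs in grammar[sym]:
                        new.add(sent[:i] + rhs + sent[i + 1:])
                    break
        frontier = new
    return word in frontier

g = {"S": ["aS", "b"]}
print(derives(g, "S", "aab"))  # member of a* b
print(derives(g, "S", "ba"))   # not derivable
```

For type-3 grammars this search is equivalent to running a finite automaton, which is why such languages sit strictly below Turing machines in the hierarchy the thesis encodes.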

Relevance:

90.00%

Publisher:

Abstract:

This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying circumstances where problems can emerge: data preparation, the actual mining, and results interpretation. Further problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigate the data preparation problem and propose a solution for the identification of the "case-ids" whenever this field is not explicitly indicated. After that, we concentrate on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The usage of interval-based recording leads to an important improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for the extension of a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem, and two baseline approaches. 
Two actual mining algorithms are proposed: the first is the adaptation, to the control-flow discovery problem, of a frequency counting algorithm; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
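To give a flavour of what frequency counting over an event stream looks like in a control-flow setting, here is a minimal sketch that counts directly-follows pairs under a memory budget. This is an illustrative assumption, not the thesis's algorithm; the function name, the `budget` parameter and the evict-the-rarest policy are all mine:

```python
from collections import Counter

def directly_follows_counts(stream, budget=100):
    """Approximate frequencies of directly-follows pairs over a stream
    of (case_id, activity) events, keeping at most `budget` pairs in
    memory (rarest pair evicted first -- an assumed, simplistic policy)."""
    last_activity = {}   # case_id -> most recent activity seen
    counts = Counter()   # (a, b) -> approximate frequency of "a then b"
    for case_id, activity in stream:
        if case_id in last_activity:
            counts[(last_activity[case_id], activity)] += 1
            if len(counts) > budget:
                # bound memory: drop the least frequent pair
                del counts[min(counts, key=counts.get)]
        last_activity[case_id] = activity
    return counts

events = [("c1", "a"), ("c1", "b"), ("c2", "a"),
          ("c2", "b"), ("c1", "c"), ("c2", "c")]
print(directly_follows_counts(events))
```

The directly-follows counts are the raw material most control-flow discovery algorithms start from; a streaming variant must maintain them with bounded memory, which is exactly the trade-off frequency counting algorithms address.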

Relevance:

80.00%

Publisher:

Abstract:

Stem cells are one of the most fascinating areas of biology today, and since the discovery of an adult population, i.e. adult stem cells (aSCs), they have generated much interest, especially for their potential as a source for cell-based regenerative medicine and tissue engineering. aSCs have been found in different tissues including bone marrow, skin, intestine and the central nervous system, where they reside in a special microenvironment termed "niche" which regulates the homeostasis and repair of adult tissues. The arterial wall of blood vessels is much more plastic than previously believed. Several animal studies have demonstrated the presence of cells with stem cell characteristics within adult vessels. Recently, the presence of a "vasculogenic zone" in adult human arteries has also been hypothesized, in which a complete hierarchy of resident stem cells and progenitors could be harboured throughout life. Accordingly, it can be speculated that resident mesenchymal stem cells (MSCs) with the ability to differentiate into smooth muscle cells, surrounding pericytes and fibroblasts are present in that location. The present research was aimed at identifying in situ and isolating MSCs from thoracic aortas of young, healthy, heart-beating multi-organ donors. Immunohistochemistry performed on fresh and frozen human thoracic aortas demonstrated the presence of the vasculogenic zone between the media and the adventitial layers, in which a well preserved plexus of CD34-positive cells was found. These cells intensely expressed HLA-I antigens both before and after cryopreservation, and remained viable after 4 days of organ culture. Following these preliminary results, we succeeded in isolating mesenchymal cells from multi-organ thoracic aortas using a combined mechanical and enzymatic procedure. 
Cells had the phenotypic characteristics of MSCs, i.e. CD44+, CD90+, CD105+, CD166+, CD34low, CD45-, and revealed transcript expression of stem cell markers, e.g. OCT4, c-kit, BCRP-1, IL6 and BMI-1. As previously documented using bone marrow-derived MSCs, resident vascular wall MSCs were able to differentiate in vitro into endothelial cells in the presence of low-serum medium supplemented with VEGF-A (50 ng/ml) for 7 days. Under the conditions described above, cultured cells showed increased expression of KDR and eNOS, down-regulation of the CD133 transcript, and vWF expression, as documented by flow cytometry, immunofluorescence, qPCR and TEM. Moreover, a Matrigel assay revealed that VEGF-induced cells were able to form capillary-like structures within 6 hours of seeding. In summary, these findings indicate that thoracic aortas from heart-beating, multi-organ donors are highly suitable for obtaining MSCs with the ability to differentiate in vitro into endothelial cells. Even though their differentiation potential remains to be fully established, it is believed that their angiogenic ability could be a useful property for allogeneic use. These cells can be expanded rapidly, providing numbers adequate for therapeutic neovascularization; furthermore, they can be cryostored in appropriate cell banking facilities for later use.

Relevance:

80.00%

Publisher:

Abstract:

This thesis describes modelling tools and methods suited to complex systems (systems that are typically represented by a plurality of models). The basic idea is that all models representing the system should be linked by well-defined model operations, in order to build a structured repository of information: a hierarchy of models. The port-Hamiltonian framework is a good candidate for this kind of problem, as it natively supports the most important model operations. The thesis in particular addresses the problem of integrating distributed-parameter systems into a model hierarchy, and shows two possible mechanisms to do so: a finite-element discretization in port-Hamiltonian form, and a structure-preserving model order reduction for discretized models obtainable from commercial finite-element packages.
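For reference, the standard finite-dimensional input-state-output port-Hamiltonian form that such discretizations and reductions target can be written as:

```latex
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\frac{\partial H}{\partial x}(x) + g(x)\,u,
\qquad
y = g(x)^{\top}\,\frac{\partial H}{\partial x}(x),
```

where \(H\) is the stored energy (Hamiltonian), \(J = -J^{\top}\) encodes the power-conserving interconnection, \(R = R^{\top} \succeq 0\) the dissipation, and \((u, y)\) the external port variables. The structure immediately yields the power balance \(\dot{H} \le u^{\top} y\), which is precisely the property a structure-preserving discretization or model order reduction must keep.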

Relevance:

80.00%

Publisher:

Abstract:

Generic programming is likely to become a new challenge for a critical mass of developers. Therefore, it is crucial to refine the support for generic programming in mainstream Object-Oriented languages, both at the design and at the implementation level, as well as to suggest novel ways to exploit the additional degree of expressiveness made available by genericity. This study is meant to provide a contribution towards bringing Java genericity to a more mature stage with respect to mainstream programming practice, by increasing the effectiveness of its implementation and by revealing its full expressive power in real-world scenarios. With respect to the current research setting, the main contribution of the thesis is twofold. First, we propose a revised implementation of Java generics that greatly increases the expressiveness of the Java platform by adding reification support for generic types. Secondly, we show how Java genericity can be leveraged in a real-world case study in the context of multi-paradigm language integration. Several approaches have been proposed to overcome the lack of reification of generic types in the Java programming language. Existing approaches tackle the problem by defining new translation techniques which allow for a runtime representation of generics and wildcards. Unfortunately, most approaches suffer from several problems: heterogeneous translations are known to be problematic when considering reification of generic methods and wildcards; on the other hand, more sophisticated techniques requiring changes in the Java runtime support reified generics through a true language extension (where clauses), so that backward compatibility is compromised. 
In this thesis we develop a sophisticated type-passing technique for addressing the problem of reification of generic types in the Java programming language; this approach, first pioneered by the so-called EGO translator, is here turned into a full-blown solution which reifies generic types inside the Java Virtual Machine (JVM) itself, thus overcoming both the performance penalties and the compatibility issues of the original EGO translator.
Java-Prolog integration. Integrating Object-Oriented and declarative programming has been the subject of several research efforts and corresponding technologies. Such proposals come in two flavours: either attempting to join the two paradigms, or simply providing an interface library for accessing Prolog declarative features from a mainstream Object-Oriented language such as Java. Both solutions have drawbacks: in the case of hybrid languages featuring both Object-Oriented and logic traits, the resulting language is typically too complex, making mainstream application development a harder task; in the case of library-based integration approaches there is no true language integration, and some "boilerplate code" has to be written to bridge the paradigm mismatch. In this thesis we develop a framework called PatJ which promotes seamless exploitation of Prolog programming in Java. A sophisticated usage of generics/wildcards makes it possible to define a precise mapping between Object-Oriented and declarative features. PatJ defines a hierarchy of classes where the bidirectional semantics of Prolog terms is modelled directly at the level of the Java generic type system.

Relevance:

80.00%

Publisher:

Abstract:

This work presents the studies and results obtained during the research activity on the Displacement-Based Assessment (DBA) of reinforced concrete frames. After some initial considerations on seismic vulnerability and on methods of analysis and assessment, the method is described theoretically. Three case studies of plane frames are analysed, designed for vertical loads only and according to codes no longer in force that did not prescribe capacity design (the "hierarchy of resistances"). The frames considered, intended for residential use, differ in height, number of storeys and number of bays. The method is then applied: seismic vulnerability is evaluated against a displacement demand given by the elastic spectrum prescribed by EC8, and the results are validated through nonlinear static and dynamic analyses and through the theorems of limit analysis of frames, proposed as an alternative procedure for determining the inelastic mechanism and the capacity in terms of base shear. Finally, the DBA procedure is applied to assess the seismic vulnerability of a school building, built between 1969 and 1975 on a site characterized by a peak horizontal ground acceleration of 0.24g with a 10% probability of exceedance in 75 years.

Relevance:

80.00%

Publisher:

Abstract:

The thesis analyses the relationship between the Italian legal order and the ECHR, in particular the position of the ECHR within the system of sources of law in the light of the amendment of Art. 117, para. 1, of the Constitution. This is a much-debated topic in the literature, especially following the entry into force of the Treaty of Lisbon. It is closely connected to the interaction between the Strasbourg Court, the Constitutional Court and the ordinary courts. The analysis of the static profile, concerning the status of the ECHR in the Italian system, must therefore be accompanied by an examination of the dynamic profile, relating to the role of the case law of the Strasbourg Court in the experience of the national legal order. Both profiles of inquiry are examined in the light of the indications coming from the case law of the Constitutional Court, the Court of Cassation and the Strasbourg Court. Before being examined individually, these issues require a preliminary survey of the dichotomy between the two conceptual models of relations between legal orders: monism and dualism. Transferred to the peculiar context of the ECHR system, these dogmatic categories acquire further dimensions that go beyond the arrangement of the relationship between sources of law. The soundness of the two conceptual paradigms, which were born and operate within the theory of sources, must also be tested against the current phenomenon of European judge-made law and the paradigmatic force acquired by the Strasbourg case law. Law and legal institutions increasingly take on a jurisdictional character, generating an osmosis that shifts the focus from relations between legal orders to relations between courts.

Relevance:

30.00%

Publisher:

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy, as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters are supposed to form by accretion of matter and mergers between smaller units. During merger events, shocks are driven by the gravity of the dark matter into the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a "revision" of the physics of the ICM. The bulk of present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources represents one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. 
The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint on the statistical properties of Radio Halos (and of non-thermal phenomena in general), which, however, have not yet been addressed by present modellings. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we address the following main questions:
• Is it possible to model "self-consistently" the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters, and the energetics of turbulence in the ICM, from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM which are relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of magnetic fields and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations we have done in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission. The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. 
In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters, and assume that during a merger a fraction of the PdV work done by the infalling subcluster in passing through the most massive one is injected in the form of magnetosonic waves. Then the process of stochastic acceleration of relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halo) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the most massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, LX) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. 
The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power of ~10^24 W/Hz and to flatten (or cut off) at lower radio powers, because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with available observations: we claim that ~100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency, and it allows one to design future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a "revision" of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05-0.2) with our ongoing GMRT Radio Halo Pointed Observations of 50 X-ray luminous galaxy clusters (at z ≈ 0.2-0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05-0.4. The most relevant limitation of the calculations presented in Chapts. 5 and 6 is the assumption of an "average" size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the formation process of clusters, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is however well beyond the aim of this PhD thesis. 
On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (RH) of Radio Halos and their radio power, and between RH and the cluster mass within the Radio Halo region, MH. In particular, this last "geometrical" MH-RH correlation allows us to "observationally" overcome the limitation of the "average" size of Radio Halos. Thus in this Chapter, by making use of this "geometrical" correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new and powerful tool of investigation, and we show that all the observed correlations (PR-RH, PR-MH, PR-T, PR-LX, ...) now become well understood in the context of the re-acceleration model. In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio-emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.

Relevance:

30.00%

Publisher:

Abstract:

The miniaturization race in the hardware industry, aiming at a continuous increase of transistor density on a die, no longer brings corresponding application performance improvements. One of the most promising alternatives is to exploit the heterogeneous nature of common applications in hardware. Supported by reconfigurable computation, which has already proved its efficiency in accelerating data-intensive applications, this concept promises a breakthrough in contemporary technology development. Memory organization in such heterogeneous reconfigurable architectures becomes very critical. Two primary aspects introduce a sophisticated trade-off: on the one hand, the memory subsystem should provide a well organized distributed data structure and guarantee the required data bandwidth; on the other hand, it should hide the heterogeneous hardware structure from the end user, in order to support feasible high-level programmability of the system. This thesis explores heterogeneous reconfigurable hardware architectures and presents possible solutions to cope with the problem of memory organization and data structure. Using the example of the MORPHEUS heterogeneous platform, the discussion follows the complete design cycle, from decision making and justification to hardware realization. Particular emphasis is placed on methods to support high system performance, meet application requirements, and provide a user-friendly programmer interface. As a result, the research introduces a complete heterogeneous platform enhanced with a hierarchical memory organization, which accomplishes its task by separating computation from communication, providing the reconfigurable engines with computation and configuration data, and unifying the heterogeneous computational devices by means of local storage buffers. 
It is distinguished from related solutions by its distributed data-flow organization, specifically engineered mechanisms to operate on data in local domains, a particular communication infrastructure based on a Network-on-Chip, and thorough methods to prevent computation and communication stalls. In addition, a novel technique to accelerate memory access was developed and implemented.

Relevance:

30.00%

Publisher:

Abstract:

Heterocyclic compounds represent almost two-thirds of all known organic compounds: they are widely distributed in nature and play a key role in a huge number of biologically important molecules, including some of the most significant for human beings. A powerful tool for the synthesis of such compounds is the hetero Diels-Alder (HDA) reaction, which involves a [4+2] cycloaddition between heterodienes and suitable dienophiles. Among the heterodienes to be used in such a six-membered heterocyclic construction strategy, 3-trialkylsilyloxy-2-aza-1,3-dienes (Fig 1) have proved particularly attractive. In this thesis work, HDA reactions between 2-azadienes and carbonylic and/or olefinic dienophiles are described. Moreover, the substitution of conventional heating by dielectric heating has been explored in the frame of Microwave-Assisted Organic Synthesis (MAOS), an up-to-date research field of great interest from both an academic and an industrial point of view. The reaction of azadiene 1 (Fig 1) is described using carbonyl compounds, aldehydes and ketones, as dienophiles. The six-membered adducts thus obtained (Scheme 1) have been elaborated into biologically active compounds such as 1,3-aminols, which constitute the scaffold of a wide range of drugs (Prozac®, Duloxetine, Venlafaxine) with large application in the treatment of severe diseases of the central nervous system (CNS). The reaction leads to the formation of three new stereogenic centres (C-2, C-5, C-6). The diastereoselective outcome of these reactions has been deeply investigated through various combinations of achiral and chiral azadienes and aliphatic, aromatic or heteroaromatic aldehydes.
The same approach has basically been used for the synthesis of the piperidin-2-one scaffold, substituting the carbonyl dienophile with an electron-poor olefin (Scheme 2). As a matter of fact, this scaffold is present in a very large number of natural substances and, more interestingly, is a required scaffold for a huge variety of biologically active compounds. Activated olefins bearing one or two sulfone groups were chosen as dienophiles, both for the intrinsic flexibility of the sulfone group, which may be easily removed or elaborated into more complex decorations of the heterocyclic ring, and for the electron-poor character of these dienophiles, which makes the resulting HDA reaction of the "normal electron demand" type. The syntheses of natural compounds such as racemic (±)-anabasine (an alkaloid of tobacco leaves) and (R)- and (S)-conhydrine (an alkaloid of the seeds and leaves of Conium maculatum) and their congeners are described (Fig 2).

Relevance:

30.00%

Publisher:

Abstract:

Different tools have been used to set up and apply the model for the fulfilment of the objective of this research.
1. The model. The base model is the Analytic Hierarchy Process (AHP), adapted with the aim of performing a benefit-cost analysis. The AHP, developed by Thomas Saaty, is a multicriteria decision-making technique which decomposes a complex problem into a hierarchy. It is used to derive ratio scales from both discrete and continuous paired comparisons in multilevel hierarchic structures. These comparisons may be taken from actual measurements or from a fundamental scale that reflects the relative strength of preferences and feelings.
2. Tools and methods.
2.1. The Expert Choice software. Expert Choice is a tool that allows each operator to easily implement the AHP model at every stage of the problem.
2.2. Personal interviews at the farms. For this research, the EMAS-certified farms of the Emilia-Romagna region were identified; the information was provided by the EMAS centre in Vienna. Personal interviews were carried out at each farm in order to obtain a complete and realistic judgment on each criterion of the hierarchy.
2.3. Questionnaire. A supporting questionnaire was also delivered and used for the interviews.
3. Elaboration of the data. After data collection, the data elaboration took place, using the Expert Choice software.
4. Results of the analysis. The analysis (see separate document for the figures) yields a series of numbers which are fractions of unity; each is to be interpreted as the relative contribution of an element to the fulfilment of the corresponding objective. 
Calculating the benefit/cost ratio for each alternative, the following is obtained:

Alternative One: implement EMAS. Benefits ratio: 0.877; costs ratio: 0.815; benefit/cost ratio: 0.877/0.815 = 1.08.
Alternative Two: do not implement EMAS. Benefits ratio: 0.123; costs ratio: 0.185; benefit/cost ratio: 0.123/0.185 = 0.66.

As stated above, the alternative with the highest ratio is the best solution for the organization. The research carried out and the model implemented therefore suggest that EMAS adoption is the best alternative for the agricultural sector. It has to be noted, however, that the ratio is 1.08, a relatively low positive value. This shows the fragility of the conclusion and suggests a careful examination of the benefits and costs of each farm before adopting the scheme. On the other hand, the result should be taken into consideration by policy makers in order to strengthen their interventions regarding the adoption of the scheme in the agricultural sector.

According to the AHP elaboration of the judgements, the main considerations on benefits are the following:
- Legal compliance appears to be the most important benefit for the agricultural sector, with a rank of 0.471.
- The next two most important benefits are improved internal organization (ranking 0.230), followed by competitive advantage (ranking 0.221), mostly due to the sub-element improved image (ranking 0.743).
- Finally, even though incentives are not ranked among the most important elements, the financial ones seem to have been decisive in the decision-making process.

The main considerations on costs are the following:
- External costs appear to be far more important than internal ones (ranking 0.857 versus 0.143), suggesting that EMAS costs for consultancy and verification remain the biggest obstacle.
- The implementation of the EMS is the most challenging element among the internal costs (ranking 0.750).
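The priority vectors behind these ratios come from Saaty's principal-eigenvector method applied to pairwise comparison matrices. A minimal sketch in Python with NumPy; the 3x3 comparison matrix below is purely illustrative, not the judgements collected in the farm interviews:

```python
import numpy as np

def ahp_priorities(pairwise: np.ndarray) -> np.ndarray:
    """Derive a ratio-scale priority vector from a pairwise comparison
    matrix via its principal eigenvector (Saaty's method)."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()   # normalise so priorities sum to 1

# Hypothetical reciprocal matrix on Saaty's 1-9 scale (illustrative only).
benefits = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1.0],
    [1/2, 1.0, 1.0],
])
w = ahp_priorities(benefits)
print(w.round(3))                        # relative weights of the 3 criteria

# Benefit/cost ratio of an alternative, as in the text:
bc_implement = 0.877 / 0.815             # ~1.08 -> implement EMAS
bc_not = 0.123 / 0.185                   # ~0.66
print(round(bc_implement, 2), round(bc_not, 2))
```

The alternative with the highest benefit/cost ratio is preferred, which is exactly the comparison made above between implementing and not implementing EMAS.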

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, we extend ideas from statistical physics to describe the properties of human mobility. Using a database of GPS measurements of individual paths (position, velocity, and covered distance at a spatial scale of 2 km or a time scale of 30 s), covering 2% of the private vehicles in Italy, we determine statistical empirical laws that point out "universal" characteristics of human mobility. By developing simple stochastic models that suggest possible explanations of the empirical observations, we are able to indicate the key quantities and cognitive features that rule individual mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law, which emerges from the distribution of the times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities performed, and those of the networks describing people's common use of space to the fractional dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We find that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. Finally, we propose an assimilation model that resolves the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
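Benford's law predicts that the leading digit d of such elapsed times occurs with probability P(d) = log10(1 + 1/d). The kind of check this involves can be sketched in Python on synthetic log-uniform samples; the data below are hypothetical, not the thesis GPS records:

```python
import numpy as np

# Benford's law: P(d) = log10(1 + 1/d) for leading digit d = 1..9.
benford = np.log10(1 + 1 / np.arange(1, 10))

# Synthetic "elapsed times" spanning four decades (hypothetical data):
# log-uniform samples are scale-invariant and follow Benford's law closely.
rng = np.random.default_rng(0)
times = 10 ** rng.uniform(0, 4, size=100_000)

# Empirical leading-digit frequencies.
leading = np.array([int(str(int(t))[0]) for t in times if t >= 1])
observed = np.bincount(leading, minlength=10)[1:10] / leading.size

print(np.abs(observed - benford).max())  # maximum deviation from Benford
```

A heavy-tailed, multi-decade distribution of inter-trip times is precisely the setting in which such leading-digit regularities emerge.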

Relevance:

30.00%

Publisher:

Abstract:

Although bacteria represent the simplest form of life on Earth, they have a great impact on all living beings. For example, the degrader bacterium Pseudomonas pseudoalcaligenes KF707 is used in bioremediation procedures for the recovery of polluted sites. Indeed, the KF707 strain is known for its ability to degrade biphenyl and polychlorinated biphenyls, to which it is chemotactically attracted, and to tolerate the oxidative stress caused by toxic metal oxyanions such as tellurite and selenite. Moreover, in bioremediation processes, target compounds can become easily accessible to KF707 through biofilm formation. All these considerations suggest that KF707 is a unique microorganism, and this thesis work has focused on determining the molecular nature of some of the peculiar physiological traits of this strain. The genome project provided a large body of information: putative genes involved in the degradation of aromatic and toxic compounds and associated with stress response were identified. Notably, multiple chemotactic operons and cheA genes were also found. Deletion mutants in the cheA genes were constructed, and their roles in motility, chemotaxis, and biofilm formation were assessed and compared to those previously attributed to a cheA1 gene in a KF707 mutant constructed by mini-Tn5 transposon insertion, which was impaired in motility and biofilm development. Taken together, the results of this thesis work suggest that, in the Pseudomonas pseudoalcaligenes KF707 strain, multiple factors are involved in these networks and that they might play different roles depending on the environmental conditions. The ability of the KF707 strain to produce signal molecules possibly involved in cell-to-cell communication was also investigated: the lack of a lux-like QS system, which is conversely widespread among Gram-negative bacteria, leaves open the question of the actual molecular nature of the KF707 quorum-sensing mechanism.

Relevance:

30.00%

Publisher:

Abstract:

The present dissertation focuses on the dual number in Ancient Greek over a diachronic span stretching from the Mycenaean age to Attic tragedy and comedy of the 5th century BC. The first chapter addresses morphological issues, chiefly in a comparative perspective. The Indo-European evidence on the dual is gathered in order to sketch patterns of grammaticalisation and paradigmatisation of specific grams, which grow increasingly functional within the Greek domain. The second chapter tackles syntactical problems. After a survey of the scholarly literature on the Greek dual, we adopt a functional and typological approach in order to disentangle some biased assessments of the dual, namely its alleged lack of regularity and its intermittent agreement. Some recent frameworks in general linguistics provide useful grounds for casting new light on the subject. Internal reconstruction, for instance, supports the facultativity of the dual at every stage of its development; typology and the Animacy Hierarchy add precious cross-linguistic insight into the behaviour of the dual with respect to agreement. Glaring differences also arise as to the adoption, or avoidance, of the dual by different authors: idiolectal varieties prove conditioned by stylistic and register requirements. By means of a comparison among epic, tragedy and comedy it is possible to highlight differences in the evaluation of the dual, which sometimes led to forms of 'censure', thus triggering the onset of competing strategies for expressing duality. The last two chapters delve into the tantalising variety of the Homeric evidence, first with an account of the notorious issue of the Embassy in Iliad IX, and finally with a commentary on all significant Homeric duals, mostly represented by archaisms, formulae, and ad hoc coinages.