904 results for SOLUTION-PHASE APPROACH
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment

Worldwide industrial and agricultural development has released a large number of natural and synthetic hazardous compounds into the environment through careless waste disposal, illegal waste dumping and accidental spills. As a result, numerous sites around the world require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings in various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found at high concentrations at many industrial sites, particularly those associated with the petroleum, gas-production and wood-preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil with disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks during the excavation, handling and transport of hazardous material; in addition, it is increasingly difficult and expensive to find new landfill sites for final disposal. The cap-and-contain method is only an interim solution, since the contamination remains on site and the isolation barriers must be monitored and maintained far into the future, with all the associated costs and potential liability. A better approach is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Technologies that have been used include high-temperature incineration and various types of chemical decomposition (for example, base-catalysed dechlorination and UV oxidation), but these have significant disadvantages, principally their technological complexity, high cost and lack of public acceptance. Bioremediation, by contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation of PAH wastes has been extensively studied at both laboratory and commercial scales. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years.
In situ bioremediation is applied to soil and groundwater at the site, without removing the contaminated material, and relies on providing optimum conditions for the microbiological breakdown of contaminants. Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that have been removed from the site by excavation (soil) or pumping (water); hazardous contaminants are then converted efficiently into harmless compounds in controlled bioreactors.

1.4 Bioavailability of PAH in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a separate phase (NAPL, non-aqueous phase liquid). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the soil solution can be metabolised by microorganisms. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as its bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants into the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second is slow mass transfer of pollutants, such as pore diffusion within soil aggregates or diffusion through the soil organic matter. The complex interplay of these physical, chemical and biological processes is illustrated schematically in Figure 1: biodegradation takes place in the soil solution, while diffusion occurs in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies in the literature indicate that the rate and final extent of metabolism may be either lower or higher for soil-sorbed PAHs than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of PAH biodegradation in soil, including microbial population characteristics, the physical and chemical properties of the PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAH in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, introducing a synthetic surfactant may simply add one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilisation of PAHs but did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required to develop a feasible and efficient remediation method. Enhancing the rate and extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk alternative way of addressing the problem of slow PAH biodegradation in soil.
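The rate-limiting picture sketched in Section 1.4 can be made concrete with a minimal two-compartment model (an illustrative sketch with hypothetical rate constants, not a model from the thesis): sorbed PAH desorbs slowly into the aqueous phase, and only the dissolved fraction is degraded.

# Minimal sketch: desorption-limited PAH biodegradation.
# dS/dt = -k_des*S (soil -> water), dC/dt = k_des*S - k_bio*C (microbial uptake).
# All parameter values are hypothetical and for illustration only.
def simulate(k_des=0.01, k_bio=0.5, S0=100.0, dt=0.1, t_end=500.0):
    S, C, degraded = S0, 0.0, 0.0   # sorbed mass, dissolved mass, degraded mass
    for _ in range(int(t_end / dt)):
        desorbed = k_des * S * dt   # slow mass transfer out of the soil matrix
        biodeg = k_bio * C * dt     # fast biodegradation in solution
        S -= desorbed
        C += desorbed - biodeg
        degraded += biodeg
    return S, C, degraded

S, C, D = simulate()
print(f"sorbed {S:.1f}, dissolved {C:.2f}, degraded {D:.1f}")
# With k_des << k_bio the dissolved pool stays small and overall removal tracks
# the desorption rate: bioavailability, not microbial kinetics, limits cleanup.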
Abstract:
Self-organisation is increasingly regarded as an effective approach to tackling the complexity of modern systems. It allows the development of systems that exhibit complex dynamics and adapt to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by principles different from those of traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but typically produce predictable results. Conversely, SOS display non-linear dynamics that can hardly be captured by deterministic models and, although robust to external perturbations, are quite sensitive to changes in their inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built on the multiagent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. We then describe an architectural pattern extracted from a recurrent solution in the design of self-organising systems: this pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and environmental agents acting on artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is iterative, and each cycle is articulated in four stages: modelling, simulation, formal verification and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in Nature and, ideally, encoded as a design pattern. Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis based exclusively on simulation results does not provide sound guarantees for the engineering of complex systems: to this purpose, we envision the application of formal verification techniques, specifically model checking, in order to characterise the system behaviours exactly. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis we describe only three of them, i.e. the most representative ones from each of the three years of the PhD course. We analyse each case study using the presented method, and describe the formal tools and techniques exploited.
Abstract:
This dissertation concerns active fibre-reinforced composites with embedded shape memory alloy wires. The structural application of active materials makes it possible to develop adaptive structures that actively respond to changes in the environment, such as morphing structures, self-healing structures and power-harvesting devices. In particular, shape memory alloy actuators integrated within a composite actively control the structural shape or stiffness, thus influencing the static and dynamic properties of the composite. Envisaged applications include, among others, the prevention of thermal buckling of the outer skin of air vehicles, shape changes in panels for improved aerodynamic characteristics, and the deployment of large space structures. The study and design of active composites is a complex and multidisciplinary topic, requiring in-depth understanding of both the coupled behaviour of active materials and the interaction between the different composite constituents. Both fibre-reinforced composites and shape memory alloys are extremely active research topics, whose modelling and experimental characterisation still present a number of open problems. Thus, while this dissertation focuses on active composites, some of the research results presented here can be usefully applied to traditional fibre-reinforced composites or to other shape memory alloy applications. The dissertation is composed of four chapters. In the first chapter, active fibre-reinforced composites are introduced by giving an overview of the most common choices available for the reinforcement, matrix and production process, together with a brief introduction and classification of active materials. The second chapter presents a number of original contributions regarding the modelling of fibre-reinforced composites. Different two-dimensional laminate theories are derived from a parent three-dimensional theory, introducing a procedure for the a posteriori reconstruction of transverse stresses along the laminate thickness. Accurate through-the-thickness stresses are crucial for composite modelling, as they are responsible for some common failure mechanisms. A new finite element based on the First-order Shear Deformation Theory and a hybrid stress approach is proposed for the numerical solution of the two-dimensional laminate problem. The element is simple and computationally efficient. The transverse stresses through the laminate thickness are reconstructed starting from a general finite element solution: a two-stage procedure is devised, based on Recovery by Compatibility in Patches and three-dimensional equilibrium. Finally, the determination of the elastic parameters of laminated structures via numerical-experimental Bayesian techniques is investigated. Two different estimators are analysed and compared, leading to the definition of an alternative procedure to improve the convergence of the estimation process. The third chapter focuses on shape memory alloys, describing their properties and applications. A number of constitutive models proposed in the literature, both one-dimensional and three-dimensional, are critically discussed and compared, underlining their potential and limitations, which are mainly related to the definition of the phase diagram and the choice of internal variables. Some new experimental results on shape memory alloy material characterisation are also presented.
These experimental observations display some features of shape memory alloy behaviour that are generally not included in current models, so some ideas are proposed for the development of a new constitutive model. The fourth chapter, finally, focuses on active composite plates with embedded shape memory alloy wires. A number of different approaches can be used to predict the behaviour of such structures, each model presenting different advantages and drawbacks in terms of complexity and versatility. A simple model able to describe both the shape and the stiffness control configurations within the same framework is proposed and implemented. The model is then validated for the shape control configuration, which is the most sensitive to model parameters. The experimental work is divided into two parts. In the first part, an active composite is built by gluing prestrained shape memory alloy wires onto a carbon fibre laminate strip. This structure is relatively simple to build, yet it is useful for experimentally demonstrating the feasibility of the concept proposed in the first part of the chapter. In the second part, the fabrication of a fibre-reinforced composite with embedded shape memory alloy wires is investigated, considering different possible choices of materials and manufacturing processes. Although a number of technological issues still need to be faced, the experimental results demonstrate the mechanism of shape control via embedded shape memory alloy wires, and show good agreement with the predictions of the proposed model.
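For reference, the kinematic assumption behind the First-order Shear Deformation Theory named above, and the equilibrium-based recovery of transverse stresses, can be written as follows (standard textbook forms, not equations quoted from the dissertation):

u(x,y,z) = u_0(x,y) + z\,\varphi_x(x,y), \qquad
v(x,y,z) = v_0(x,y) + z\,\varphi_y(x,y), \qquad
w(x,y,z) = w_0(x,y)

\sigma_{xz}(z) = -\int_{-h/2}^{z}\left(\frac{\partial\sigma_{xx}}{\partial x} + \frac{\partial\sigma_{xy}}{\partial y}\right)\mathrm{d}\zeta, \qquad
\sigma_{yz}(z) = -\int_{-h/2}^{z}\left(\frac{\partial\sigma_{xy}}{\partial x} + \frac{\partial\sigma_{yy}}{\partial y}\right)\mathrm{d}\zeta

Integrating the in-plane equilibrium equations through the thickness yields transverse shear profiles that vary through each ply and are continuous at ply interfaces, which is why an a posteriori reconstruction improves on the constant shear strain implied by the kinematics alone.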
Abstract:
The topics I worked on during my time as a Ph.D. student are mainly two. The first concerns new organocatalytic protocols for Mannich-type reactions mediated by Cinchona alkaloid derivatives (Scheme I, left); the second regards the study of a new approach towards the enantioselective total synthesis of Aspirochlorine, a potent gliotoxin that recent studies indicate is a highly selective and active agent against fungi (Scheme I, right). At the beginning of 2005 I had the chance to join the group of Prof. Alfredo Ricci at the Department of Organic Chemistry of the University of Bologna, starting my PhD studies. During the first period I studied a new homogeneous organocatalytic aza-Henry reaction using Cinchona alkaloid derivatives as chiral base catalysts, with good results. Soon after, we introduced a new protocol allowing the in situ synthesis of N-carbamoyl imines, scarcely stable, moisture-sensitive compounds. For this purpose we used α-amido sulfones, bench-stable white crystalline solids, as imine precursors (Scheme II). In particular, using chiral phase-transfer catalysis we were able to obtain the aza-Henry adducts with a broad range of substituents as the R-group and with excellent results, unprecedented for Mannich-type transformations (Scheme II). With the optimised protocol in hand, we extended the methodology to other Mannich-type reactions. We applied the new method to the Mannich, Strecker and Pudovik (hydrophosphonylation of imines) reactions with very good results in terms of enantioselectivity and yield, broadening the usefulness of this novel protocol. The Mannich reaction was certainly the most extensively studied reaction in this thesis (Scheme III). Initially we developed the reaction with α-amido sulfones as imine precursors and non-commercially available malonates, with excellent results in terms of yield and enantioselectivity. In this case a catalyst loading of only 1 mol% was sufficient, very low for organocatalytic processes. We then developed a new Mannich reaction using simpler malonates, such as dimethyl malonate. Under the new optimised conditions the reaction gave slightly lower enantioselectivity than the previous protocol, but the Mannich adducts were very versatile intermediates for obtaining β3-amino acids. Furthermore, we performed the first addition of cyclic β-ketoesters to α-amido sulfones, obtaining the corresponding products in good yield with high levels of diastereomeric and enantiomeric excess (Scheme III). Further studies addressed the Strecker reaction mediated by Cinchona alkaloid phase-transfer quaternary ammonium salt derivatives, using acetone cyanohydrin, a relatively harmless cyanide source (Scheme IV). The reaction proceeded very well, providing the corresponding α-amino nitriles in good yields and enantiomeric excesses. Finally, we developed two new complementary methodologies for the hydrophosphonylation of imines (Scheme V). Because of the low stability of the products derived from aromatic imines, we performed these reactions under mild homogeneous basic conditions using quinine as a chiral base catalyst, giving α-aryl-α-amido phosphonic acid esters as products (Scheme V, top). On the other hand, we performed the addition of dialkyl phosphites to aliphatic imines using chiral Cinchona alkaloid phase-transfer quaternary ammonium salt derivatives and our methodology based on α-amido sulfones (Scheme V, bottom).
The results were good for both procedures, covering a broad range of α-amino phosphonic acid esters. During the second year of my Ph.D. studies, I spent six months in the group of Prof. Steven V. Ley at the Department of Chemistry of the University of Cambridge, in the United Kingdom. During this fruitful period I was involved in a project concerning the enantioselective synthesis of Aspirochlorine. We devised a new route to a key intermediate, reducing the number of steps and increasing the overall yield. We then introduced a new enantioselective spirocyclisation for the synthesis of a chiral building block for the completion of the synthesis (Scheme VI).
Abstract:
In recent years, a growing number of researchers have focused their attention on developing strategies to characterise the ADMET properties of drug candidates as early as possible. This trend stems from the awareness that about half of the drugs under development never reach the market because of shortcomings in their ADME characteristics, and that at least half of the molecules that do reach the market still have some toxicological or ADME problem [1]. Indeed, it matters little how active or specific a molecule is: to become a drug it must be well absorbed, distributed throughout the organism, metabolised neither too quickly nor too slowly, and completely eliminated. Moreover, the molecule and its metabolites should not be toxic to the organism. It is therefore clear that rapid determination of ADMET parameters in the early phases of drug development saves time and money, making it possible to select the most promising compounds from the outset and to discard those with unfavourable characteristics. This thesis is set in this context and shows the application of a simple technique, biochromatography, for rapidly characterising the binding of compound libraries to human serum albumin (HSA). It also shows the use of another, independent technique, circular dichroism, which allows the same drug-protein systems to be studied in solution, providing additional information on the stereochemistry of the binding process. HSA is the most abundant protein in blood. It acts as a carrier for a large number of molecules, both endogenous (e.g. bilirubin, thyroxine, steroid hormones, fatty acids) and xenobiotic. It also increases the solubility of lipophilic molecules that are poorly soluble in aqueous media, such as the taxanes. Binding to HSA is generally stereoselective and occurs at high-affinity binding sites. It is also well known that competition between drugs, or between a drug and endogenous metabolites, can significantly change their free fraction, altering their activity and toxicity. Because of these properties, HSA can influence both the pharmacokinetic and the pharmacodynamic properties of drugs. It is not unusual for an entire drug-development project to be abandoned because of excessive affinity for HSA, a too-short half-life, or poor distribution due to weak HSA binding. From a pharmacokinetic standpoint, therefore, HSA is the most important transport protein in plasma. A large number of publications demonstrate the reliability of biochromatography for studying biorecognition phenomena between proteins and small molecules [2-6]. My work focused mainly on the use of biochromatography as a method to evaluate the HSA-binding characteristics of several series of compounds of pharmaceutical interest, and on improving this technique. To gain a better understanding of the binding mechanisms of the molecules studied, the same drug-HSA systems were also studied by circular dichroism (CD). Initially, HSA was immobilised on a packed epoxide silica column, 50 x 4.6 mm internal diameter, using a procedure previously reported in the literature [7] with some minor modifications.

Briefly, immobilisation was carried out by recirculating a solution of HSA, at defined pH and ionic strength, through a pre-packed column. The column was then characterised with respect to the amount of correctly immobilised protein by frontal analysis of L-tryptophan [8]. Next, racemic solutions of molecules known to bind HSA enantioselectively were injected onto the column, to verify that the immobilisation procedure had not altered the binding properties of the protein. Once characterised, the column was used to determine the binding percentage of a small series of HIV protease inhibitors (PIs) and to identify their binding site(s). The binding percentage was calculated from the retention (capacity) factor (k) of the samples. The value of this parameter in purely aqueous mobile phase was obtained by linear extrapolation of the plot of log k against the percentage (v/v) of 1-propanol in the mobile phase. Only for two of the five compounds analysed could the k value be measured directly in the absence of organic solvent. All the PIs analysed showed a high percentage of binding to HSA; in particular, the values for ritonavir, lopinavir and saquinavir were greater than 95%. These results agree with literature data obtained with an optical biosensor [9]. They are also consistent with the significant reduction of the inhibitory activity of these compounds observed in the presence of HSA, a reduction that appears larger for the compounds that bind the protein more strongly [10]. Competition studies were then carried out by zonal chromatography. In this method, a solution of known concentration of a competitor is used as the mobile phase, while small amounts of analyte are injected onto the HSA-functionalised column. The competitors were selected on the basis of their selective binding to one of the main binding sites of the protein: sodium salicylate, ibuprofen and sodium valproate were used as markers of site I, site II and the bilirubin site, respectively. These studies showed independent binding of the PIs to sites I and II, while weak anticooperativity was observed for the bilirubin site. The same drug-protein system was finally investigated in solution by circular dichroism. In particular, the change in the induced CD signal of an equimolar [HSA]/[bilirubin] complex was monitored upon addition of aliquots of ritonavir, chosen as representative of the series. The results confirmed the slight anticooperativity at the bilirubin site previously observed in the biochromatographic studies. Subsequently, the same protocol was applied to a 50 x 4.6 mm monolithic epoxide silica column, to evaluate the reliability of the monolithic support for biochromatographic applications. The monolithic support showed good chromatographic characteristics in terms of back pressure, efficiency and stability, as well as reliability in the determination of HSA binding parameters.

This column was used to determine the HSA binding percentage of a series of polyamine-quinones developed within a research project on Alzheimer's disease. All the compounds showed a binding percentage above 95%. Moreover, a correlation was observed between the binding percentage and the characteristics of the side chain (length and number of amino groups). Competition studies on these compounds were then carried out by circular dichroism, revealing an anticooperative effect of the polyamine-quinones at sites I and II, whereas binding was independent with respect to the bilirubin site. The knowledge acquired with the monolithic support described above was then applied to a shorter epoxide silica column (10 x 4.6 mm). The method used in the previous studies to determine the binding percentage relies on data from several experiments, so considerable time is needed to obtain the final result. A shorter column reduces analyte retention times, making the determination of HSA binding much faster and turning a medium-throughput analysis into a high-throughput screening (HTS) analysis. Moreover, the shorter analysis times make it possible to avoid organic solvents in the mobile phase. After characterising the 10 mm column with the same method described for the other columns, a series of standards was injected at different mobile-phase flow rates, to assess the possibility of using high flow rates. The column was then used to estimate the binding percentage of a series of molecules with different chemical characteristics. The possibility of using such a short column for competition studies was also evaluated, and the binding of a series of compounds to site I was investigated. Finally, the stability of the column after extensive use was assessed. The use of chromatographic supports functionalised with albumins of different origin (rat, dog, guinea pig, hamster, mouse, rabbit) can be proposed as a future application of these HTS columns. The ability to obtain binding information for drug candidates on the different albumins would allow a better comparison between data from in vitro experiments and data from animal experiments, facilitating the subsequent extrapolation to humans with the speed of an HTS method, while also reducing the number of animals used in experimentation. Several studies in the literature demonstrate the reliability of columns functionalised with albumins of different origin [11-13]: the use of shorter columns could broaden their applications.
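The extrapolation used throughout these studies can be illustrated with a short sketch (the retention data below are invented for illustration; the relation between retention factor and bound fraction, b = k/(k+1), is the one commonly used in HSA affinity chromatography):

# Hypothetical numbers illustrating the extrapolation described above:
# log k is measured at several 1-propanol fractions, fitted linearly,
# and extrapolated to 0% organic modifier; the bound fraction then
# follows from b = k/(k+1).
import numpy as np

propanol_pct = np.array([4.0, 6.0, 8.0, 10.0])   # % (v/v) 1-propanol
log_k = np.array([1.55, 1.30, 1.05, 0.80])       # illustrative retention data

slope, intercept = np.polyfit(propanol_pct, log_k, 1)
k_aqueous = 10 ** intercept                      # log k extrapolated to 0% modifier
bound_pct = 100 * k_aqueous / (k_aqueous + 1)
print(f"k (aqueous) = {k_aqueous:.1f}, HSA binding = {bound_pct:.1f}%")
# -> k of order 100, binding around 99%; values above 95% correspond to the
#    range reported here for ritonavir, lopinavir and saquinavir.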
Abstract:
Supramolecular architectures can be built up from a single molecular component (building block) to obtain a complex of organic or inorganic interactions creating a new emergent condensed phase of matter, such as gels, liquid crystals and solid crystals. Furthermore, the generation of multicomponent supramolecular hybrid architectures, mixing organic and inorganic components, increases the complexity of the condensed aggregate, with functional properties useful for important areas of research such as materials science, medicine and nanotechnology. One may design a molecule that stores a recognition pattern and programs an informed self-organisation process, enabling it to grow into a hierarchical architecture. From the molecular level to the supramolecular level, in a bottom-up fashion, it is possible to create a new emergent structure-function, where the system as a whole is open to its environment to exchange energy, matter and information. "The emergent property of the whole assembly is superior to the sum of its single parts." In this thesis I present new architectures and functional materials built through the self-assembly of guanosine, in the absence or in the presence of a cation, in solution and on surfaces. By appropriate manipulation of intermolecular non-covalent interactions, the spatial (structural) and temporal (dynamic) features of these supramolecular architectures are controlled. Guanosine G7 (5',3'-di-decanoyl-deoxyguanosine) is able to interconvert reversibly between a supramolecular polymer and a discrete octameric species by dynamic cation binding and release. Guanosine G16 (2',3'-O-isopropylidene-5'-O-decylguanosine) shows selective binding from a mixture of cations of different nature. Remarkably, reversibility, selectivity, adaptability and serendipity are all features through which to appreciate the creativity of a molecular self-organisation complex system in its multilevel-scale hierarchical growth. Creativity - in the general sense of the creation of a new thing, a new way of thinking, a new functionality or a new structure - emerges from a cross-contamination of different disciplines such as biology, chemistry, physics, architecture, design, philosophy and the science of complexity.
Abstract:
Chemists have long sought to extrapolate the power of biological catalysis and recognition to synthetic systems. These efforts have focused largely on low-molecular-weight catalysts and receptors; however, biological systems themselves rely almost exclusively on polymers, proteins and RNA, to perform complex chemical functions. Proteins and RNA are unique in their ability to adopt compact, well-ordered conformations, and specific folding provides precise spatial orientation of the functional groups that comprise the "active site". These features suggest that the identification of new polymer backbones with discrete and predictable folding propensities ("foldamers") will provide a basis for the design of molecular machines with unique capabilities. The foldamer approach complements current efforts to design unnatural properties into polypeptides and polynucleotides. The aim of this thesis is the synthesis and conformational study of new classes of foldamers, using a peptidomimetic approach. Their aptitude for use as ionophores, catalysts and nanobiomaterials was also analysed, in solution and in the solid state. The thesis is divided into thematic chapters, outlined below. It begins with a very general introduction (page 4), which is useful, but not strictly necessary, to the expert reader. It is worth mentioning that paragraph I.3 (page 22) is the starting point of this work, and paragraph I.5 (page 32) is required to better understand the results of chapters 4 and 5. Chapter 1 (page 39) reports the synthesis and conformational analysis of a novel class of foldamers containing (S)-β3-homophenylglycine [(S)-β3-hPhg] and D-4-carboxy-oxazolidin-2-one (D-Oxd) residues in alternating order. The experimental conformational analysis performed in solution by IR, 1H NMR and CD spectroscopy unambiguously proved that these oligomers fold into ordered structures as the sequence length increases. Theoretical calculations employing ab initio MO theory suggest a helix with 11-membered hydrogen-bonded rings as the preferred secondary structure type. The novel structures enrich the field of peptidic foldamers and might be useful in the mimicry of native peptides. In chapter 2, cyclo-(L-Ala-D-Oxd)3 and cyclo-(L-Ala-D-Oxd)4 were prepared in the liquid phase in good overall yields and were used for the chelation of bivalent ions (Ca2+, Mg2+, Cu2+, Zn2+ and Hg2+); their chelation ability was analysed by ESI-MS, CD and 1H NMR techniques, and the best results were obtained with cyclo-(L-Ala-D-Oxd)3 and Mg2+ or Ca2+. Chapter 3 describes an application of oligopeptides as catalysts for aldol reactions. Paragraph 3.1 concerns the use of prolinamides as catalysts of the cross-aldol addition of hydroxyacetone to aromatic aldehydes, whereas paragraphs 3.2 and 3.3 address the catalysed aldol addition of acetone to isatins. By means of DFT and AIM calculations, the steric and stereoelectronic effects that control the enantioselectivity in the cross-aldol addition of acetone to isatin catalysed by L-proline were studied, also in the presence of small quantities of water. Chapter 4 reports the synthesis and analysis of a new fibre-like material, obtained from the self-aggregation of the dipeptide Boc-L-Phe-D-Oxd-OBn, which spontaneously forms uniform fibres consisting of parallel infinite linear chains arising from single intermolecular N-H···O=C hydrogen bonds. This is the absolute borderline case of a parallel β-sheet structure.
Longer oligomers of the same series, with general formula Boc-(L-Phe-D-Oxd)n-OBn (where n = 2-5), are described in chapter 5. Their properties in solution and in the solid state were analysed in correlation with their aptitude to form intramolecular hydrogen bonds. Chapter 6 reports the synthesis of imidazolidin-2-one-4-carboxylate and (tetrahydro)-pyrimidin-2-one-5-carboxylate via an efficient modification of the Hofmann rearrangement. The reaction affords the desired compounds from protected asparagine or glutamine in good to high yield, using PhI(OAc)2 as the source of iodine(III).
Abstract:
The rational construction of the house: the writings and projects of Giuseppe Pagano. Description, themes and research objectives. The research analyses the architecture of Giuseppe Pagano, focusing on the theme of dwelling, through the reading of three of his house projects. On the one hand, these projects represent "minor" works not thoroughly known by Pagano's contemporary critics; on the other, they emphasise a particular methodological approach, which serves the author to explore a theme closely linked to his theoretical thought. The house project is a key to Pagano's research, given its ties to the socio-cultural and political conditions in which the architect was working, so that it becomes a mirror of a specific theoretical path, always in a state of becoming. Pagano understands architecture as a "servant of the human being", subject to a "utilitarian slavery", since it is a clear, essential and "modest" answer to specific human needs, free from aprioristic aesthetic and formal choices. It is rational architecture in the strict sense: a perfect synthesis between cause and effect, and between function and form. The house must accommodate these principles because it is closely intertwined with human needs and intimately linked to a specific place, climatic conditions, and technical and economic possibilities. Besides, differently from his public masterpieces such as the Palazzo Gualino, the Istituto di Fisica and the Università Commerciale Bocconi, the house projects are representative of a precise design will, expressed in a more authentic way, partially freed from political influences and dogmatic preoccupations and, therefore, far from the attempt to pursue a specific expressive language. I believe that the house project better represents that "ingenuity", freshness and "sincerity" that Pagano identifies with minor architecture, thereby revealing a more authentic expression of his understanding of the project. Therefore the thesis, by tracing Pagano's theoretical research through the analysis of some of his designed and built works, attempts to identify a specific methodological approach to his projects which, developed through time, achieves a certain clarity in the 1930s. In fact, this methodological approach becomes more evident in his last projects, mainly regarding the house and urban space. These reflect the attempt to respond to new social needs and, at the same time, express a freer idea of built architecture, closely linked with the place and with the human being who dwells in it. The three chosen projects (Villa Colli, the Casa a struttura d'acciaio and Villa Caraccio) confront Pagano with different places, different clients and different economic and technical conditions which, given the author's biography, correspond to important historical and political circumstances. This is why the projects become apparently distant works, both linguistically and conceptually, to the point that one could define them as "eclectic". However, I argue that this eclecticism is actually an added value in Pagano's architectural work, stemming from the use of a method which, grounded in the postulate of rational architecture as the essence and logic of building, finds specific variations depending on the multiple variables addressed by each project.
This is the methodological heritage that Pagano learns from tradition, especially that of rural residential architecture, which he defines as a "dictionary of the building logic of man" and an "a-stylistic background". For Pagano this traditional architecture is a clear expression of the relationship between a theme and its development, an architectural "fact" resolved with purely technical and utilitarian aims and with a spontaneous development far from any aprioristic theoretical principle. Architecture, therefore, cannot be an invention for Pagano, and the personal contribution of each architect has to consider his or her close relationship with the specific historical context, the place and new building methods. These are basic principles of the methodological approach that drives a great deal of his research and that also keeps his thought modern. I argue that both ongoing and new collaborations with younger protagonists of the culture and architecture of the period are significant for the development of his methodology. These encounters represent the will to spread his own understanding of the "new architecture" as well as a means of self-renewal, by confronting himself with new themes and realities and by learning from his collaborators. Thesis outline. The thesis is divided into two principal parts, each articulated in four chapters, attempting to offer a new reading of Pagano's theory and work by emphasising the central themes of the research. The first chapter is an introduction to the thesis and to the theme of the rational house, as understood and developed in its typological and technical aspects by Pagano and by other protagonists of Italian rationalism in the 1930s. Here the attention is on two different aspects defining, according to Pagano, the house project: on the one hand, the typological renewal, aimed at defining a "standard form" as a clear and essential answer to certain needs and project variables leading to different formal expressions; on the other, the building itself, understood as a technique to "produce" architecture, where new technologies and new materials are not merely tools but essential elements of the architectural work. In this way the villa becomes distinct from the theme of the common house or of the minimalist house, through rules for the choice of materials and techniques that differ each time depending on the theme under exploration and on the contingencies of the place. Also visible is the rigorous rationalism that distinguishes the author's appropriation of certain themes of rural architecture. The pages of "Casabella" and the events of the contemporary Triennali form the preliminary material for this chapter, since they are primary sources for identifying the projects and writings produced by Pagano and contemporary architects on this theme. These writings and projects, when compared, reconstruct the evolution of the idea of the rational house and, specifically, of Pagano's personal research. The second part offers a reading of three of Pagano's house projects as a built verification of his theories. This section constitutes the central part of the thesis, since it aims to detect a specific methodological approach showing a theoretical and ideological evolution expressed in the vast published literature.
The three chosen projects explore the theme of the house through the various research themes that the author proposes, finding continuity in the affirmation of a specific rationalism focused on concepts such as essentiality, utility, functionality and building honesty. These concepts guide Pagano's thought and activity, also reflecting a social and cultural period. The projects span from the theme of the villa moderna, Villa Colli, which, inspired by the architecture of Northern Europe, anticipates Pagano's specific rationalism based on rigour, simplicity and essentiality, to the theme of the common house, the Casa a struttura d'acciaio, la casa del domani, which ponders the definition of new living spaces and, moreover, new concepts of standardisation, economic efficiency and new materials responding to the changing needs of modern society. Finally, the third project, Villa Caraccio, returns to the theme of the villa, revisiting it from new perspectives. These perspectives find, in the open-plan solution, in the openness to nature and landscape, and in the revisiting of local materials and building systems, that idea of the freed house which clearly expresses a new theoretical thought. Methodology. It should be noted that, owing to the lack of an official archive of Pagano's work, the analysis of his work has been difficult, which explains the need to read the articles and drawings published in the pages of «Casabella» and «Domus». For the Villa Colli and Casa a struttura d'acciaio projects, parts of the original drawings have been consulted. These drawings are unpublished and are kept in the private archives of Pagano's collaborators. The consultation of these documents has permitted the analysis of the cited works, which have been subject to a more complete reading following the different proposed solutions, making it possible to understand the design path. The projects are analysed through a method of comparison and critical reading which, specifically, relies on graphical elaborations and analytical schemes, mostly reconstructed on the basis of the original projects but, where possible, also on photographic investigation. The focus is on the project theme which, beginning with a specific dwelling typology, finds variations because of the historical and political context in which Pagano was embedded and which partially shaped his research and theoretical thought, later translated into the built work. The analysis of each work follows, beginning, where possible, with a reconstruction of the evolution of the project as elaborated from the original documents, and ending with an analysis of the constructive and compositional principles. This second phase employs a methodology proposed by Pagano himself in his article Piante di ville, which focuses on the plan as the essential tool to identify the "true practical and poetic qualities of the construction" (Pagano, «Costruzioni-Casabella», 1940, p. 2). The reading of each project is integrated with constructive analyses related to the technical aspects of the house which, in the case of the Casa a struttura d'acciaio, play an important role in the project, while in Villa Colli and Villa Caraccio they are principally linked to the choice of materials for the construction of the different architectural elements. These are nonetheless key factors in the composition of the work.
Future work could extend this reading to other house projects, deepening a line of research that could be completed with the consultation of archival materials that are missing at present. Finally, in the appendix I present a critical selection of Pagano's writings, which recall the themes discussed and embodied by the three projects. The texts have been selected from among the articles published in Casabella and other journals, completing a reading of the project work that cannot be detached from his theoretical thought. Moving from theory to project, we follow a path that leads us to define and deepen the central theme of the thesis: rational building as the principal feature of Pagano's architectural research, paraphrased in multiple ways in his designed and built works.
Abstract:
A systematic characterisation of the composition and structure of the bacterial cell-surface proteome and its complexes can provide an invaluable tool for its comprehensive understanding. Knowledge of the composition and structure of protein complexes could offer new, more effective targets, eliciting a more specific and consequently more effective immune response against a complex instead of a single protein. Large-scale protein-protein interaction screens are the first step towards the identification of complexes and their attribution to specific pathways. Several methods currently exist for identifying protein interactions, and protein microarrays provide the most appealing alternative to existing techniques for high-throughput screening of protein-protein interactions in vitro under reasonably straightforward conditions. In this study, approximately 100 proteins of Group A Streptococcus (GAS), predicted to be secreted or surface-exposed by genomic and proteomic approaches, were purified in His-tagged form and used to generate protein microarrays on nitrocellulose-coated slides. To identify protein-protein interactions, each purified protein was then labelled with biotin and hybridised to the microarray, and interactions were detected with Cy3-labelled streptavidin. Only reciprocal interactions, i.e. binding of the same two interactors irrespective of which of the two partners is in the solid phase or in solution, were taken as bona fide protein-protein interactions. Using this approach, we identified 20 interactors of one of the potent toxins secreted by GAS, known as superantigens. Several of these interactors belong to the molecular chaperone or protein-folding catalyst families and are presumably involved in the secretion and folding of the superantigen. In addition, a very interesting interaction was found between the superantigen and the substrate-binding subunit of a well-characterised ABC transporter. This finding opens a new perspective on the current understanding of how superantigens are modified by the bacterial cell in order to become major players in causing disease.
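The reciprocal-interaction criterion can be expressed compactly (a toy sketch with invented protein names and signal values, not data from the study): an interaction counts only if it is detected in both orientations of probe and immobilised partner.

# Toy illustration of the "reciprocal interaction" filter described above:
# an interaction is kept only if probe A binds immobilised B AND probe B
# binds immobilised A. Protein names and matrix values are hypothetical.
import numpy as np

proteins = ["SpyA", "SpyB", "SpyC"]        # hypothetical GAS protein names
# signal[i, j]: fluorescence of biotinylated probe i on spotted protein j
signal = np.array([[0, 8, 1],
                   [9, 0, 7],
                   [0, 2, 0]], dtype=float)
threshold = 5.0
hits = signal > threshold
bona_fide = hits & hits.T                  # detected in both orientations
for i, j in zip(*np.nonzero(np.triu(bona_fide, 1))):
    print(proteins[i], "<->", proteins[j])
# -> SpyA <-> SpyB ; the SpyB -> SpyC hit is discarded (no reciprocal signal)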
Abstract:
Several MCAO systems are under study to improve the angular resolution of the current and future generations of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD thesis is embedded in this context. Two MCAO systems, in different realisation phases, are addressed in this thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two wavefront sensors for the mid-high-altitude atmosphere analysis was integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterise and optimise the system functionalities and performance. A report on this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture. On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise-dominated regime), strongly limiting the performance. A straightforward way to compensate for this effect is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Centre of Gravity, Correlation and Quad-cell) for the instantaneous LGS image position measurement in the presence of elongated spots, and the determination of the number of photons required to achieve a given average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype makes it possible to simulate realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
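To make the centroiding problem concrete, here is a toy numerical sketch (invented spot parameters, not the Chapter 3 simulations) contrasting a plain Centre of Gravity with a Weighted Centre of Gravity on an elongated, photon-noise-limited spot:

# Toy comparison of two of the centroid estimators discussed in Chapter 3:
# plain Centre of Gravity vs Weighted Centre of Gravity on an elongated
# LGS spot with photon noise. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 32
y, x = np.mgrid[0:n, 0:n]
x0, y0 = 15.7, 15.3                        # true spot position (pixels)
sx, sy = 1.5, 6.0                          # sy >> sx: elongated Sodium spot
spot = np.exp(-((x - x0)**2 / (2*sx**2) + (y - y0)**2 / (2*sy**2)))
img = rng.poisson(spot * 50).astype(float) # photon-noise-limited image

def cog(im):
    return (im * x).sum() / im.sum(), (im * y).sum() / im.sum()

def wcog(im, weight):
    w = im * weight                        # weight suppresses noisy wings
    return (w * x).sum() / w.sum(), (w * y).sum() / w.sum()

print("CoG :", cog(img))
print("WCoG:", wcog(img, spot))            # ideal weight = noiseless spot
# The plain CoG error grows along the elongation axis (y here), because
# low-SNR pixels in the wings enter with a large lever arm; weighting
# down-weights them, illustrating why the estimators behave differently
# for elongated spots.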
Abstract:
The aim of this work is to put forward a statistical mechanics theory of social interaction, generalising econometric discrete choice models. After showing the formal equivalence linking econometric multinomial logit models to equilibrium statistical mechanics, a multi-population generalisation of the Curie-Weiss model for ferromagnets is considered as a starting point in developing a model capable of describing sudden shifts in aggregate human behaviour. Existence of the thermodynamic limit for the model is shown by an asymptotic sub-additivity method, and factorisation of correlation functions is proved almost everywhere. The exact solution of the model is provided in the thermodynamic limit by finding converging upper and lower bounds for the system's pressure, and the solution is used to prove an analytic result regarding the number of possible equilibrium states of a two-population system. The work stresses the importance of linking regimes predicted by the model to real phenomena, and to this end it proposes two possible procedures to estimate the model's parameters starting from micro-level data. These are applied to three case studies based on census-type data: though these studies prove ultimately inconclusive on an empirical level, considerations are drawn that encourage further refinements of the chosen modelling approach, to be considered in future work.
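The formal equivalence mentioned above can be stated in one line (a standard textbook correspondence, not an equation quoted from the thesis): the multinomial logit choice probability is a Boltzmann distribution over utilities, and a binary choice with mean-field social interaction J and private incentive h leads to the Curie-Weiss consistency equation for the average choice m:

P(i) = \frac{e^{\beta V_i}}{\sum_j e^{\beta V_j}}, \qquad
m = \tanh\big(\beta\,(J m + h)\big)

For \beta J > 1 the consistency equation admits multiple solutions, which is the mean-field mechanism behind the sudden shifts in aggregate behaviour that the model is designed to capture.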
Abstract:
The objective of the first four chapters is to reach a complete understanding of the supramolecular organisation of several complementary modules able to form 2-D networks, first in solution, using optical spectroscopy measurements as a function of solvent polarity, concentration and temperature, and then on solid surfaces, using microscopy techniques such as STM, AFM and TEM. The last chapter presents another type of supramolecular material, for application in solar cell technology, involving fullerenes and OPV systems. We describe the photoinduced energy- and electron-transfer processes using transient absorption experiments. All these systems provide an exceptional example of the potential of the supramolecular approach as an alternative to restrictive lithographic methods for the fabrication of addressable molecular devices.
Abstract:
Nowadays, it is clear that the goal of creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient compared with conventional ones, because fewer by-products mean lower costs for raw materials, separation and disposal treatments; they also imply higher productivity and, as a consequence, smaller reactors. In addition, an indirect gain could derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for the production of huge quantities of chemical intermediates and specialties. Potentially, the impact of these productions on the environment could have been much worse than it is, had continuous efforts not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, helping to move the chemical industry towards a more sustainable and ecological approach. The roadmap for the application of these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation, and innovative reactor configurations and process design. In order to implement all these ideas in real projects, the development of more efficient reactions is a primary target. Yield, selectivity and space-time yield are the right metrics for evaluating reaction efficiency (their standard definitions are recalled at the end of this abstract). In the case of catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidised compound. As a matter of fact, only in a few oxidation reactions is total, or close to total, conversion achieved; usually the selectivity is limited by the formation of by-products or co-products, which often implies unfavourable process economics; moreover, sometimes the cost of the oxidant further penalises the process. During my PhD work, I investigated four reactions that are emblematic of the new approaches used in the chemical industry. In Part A of my thesis, a new process aimed at a more sustainable production of menadione (vitamin K3) is described. The "greener" approach includes the use of hydrogen peroxide in place of chromate (moving from a stoichiometric oxidation to a catalytic one), also avoiding the production of dangerous waste. Moreover, I studied the possibility of using a heterogeneous catalytic system able to efficiently activate hydrogen peroxide. The overall process would be carried out in two steps: the first is the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol; the second is the oxidation of the latter compound to menadione. The catalyst for this latter step, the reaction that was the object of my investigation, consists of Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions simulating the in-situ generation of hydrogen peroxide, that is, using a low concentration of the oxidant. Experiments were then carried out using higher hydrogen peroxide concentrations.
The study of the reaction mechanism was fundamental to obtaining indications about the best operating conditions and improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The current industrial process for phenol is the oxidation of cumene with oxygen, which also co-produces acetone. This can be considered a case in which economics drives the sustainability issue; in fact, the new process, yielding phenol directly, besides avoiding the co-production of acetone (a burden for phenol, because the market requirements for the two products are quite different), might be economically advantageous with respect to the conventional process, provided a high selectivity to phenol were obtained. Titanium silicalite-1 (TS-1) is the catalyst chosen for this reaction. By comparing the reactivity results obtained with TS-1 samples having different chemical-physical properties, and analyzing in detail the effect of the most important reaction parameters, we could formulate some hypotheses concerning the reaction network and mechanism. Part C of my thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already applied industrially but, for economic reasons, an improvement in the selectivity to the para-dihydroxylated compound and a decrease in the selectivity to the ortho isomer would be desirable. In this case too, the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers (a minimal sketch of the quantities involved is given after this paragraph) and, ultimately, to make the industrial process more flexible, so that its performance can be adapted to fluctuations in market requirements. The reaction was carried out both in a stirred batch reactor and in a recirculating fixed-bed reactor. In the first system, the effect of various reaction parameters on the catalytic behaviour was investigated: the type of solvent or co-solvent, and the particle size. With the second reactor type, I investigated the possibility of using a continuous system, with the catalyst shaped into extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with the study of a new process for the valorisation of glycerol by means of its transformation into valuable chemicals. This molecule is nowadays produced in large amounts as a co-product of biodiesel synthesis; therefore, it is considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the liquid-phase oxidation of glycerol with hydrogen peroxide and TS-1. However, the results achieved were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid, with the intermediate formation of acrolein; the latter can be obtained by dehydration of glycerol and then oxidized to acrylic acid. Since the oxidation step from acrolein to acrylic acid is already optimized at the industrial level, we decided to investigate in depth the first step of the process. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (one of the main problems usually met in glycerol dehydration).
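As a minimal illustration of the selectivity ratio discussed for Part C, this sketch shows the quantities one would track in a phenol hydroxylation run; the function and variable names are illustrative, and the numbers are placeholders rather than the thesis data:

```python
# Minimal sketch of the metrics used to follow the isomer distribution in
# phenol hydroxylation (all amounts in moles; numbers are placeholders).
def hydroxylation_metrics(phenol_in, phenol_out, hydroquinone, catechol):
    converted = phenol_in - phenol_out
    return {
        "conversion": converted / phenol_in,
        "S_hydroquinone": hydroquinone / converted,   # selectivity to para isomer
        "S_catechol": catechol / converted,           # selectivity to ortho isomer
        "para_ortho_ratio": hydroquinone / catechol,  # the ratio to be controlled
    }

print(hydroxylation_metrics(1.00, 0.85, 0.08, 0.06))
```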
Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thereby combining the acid functionality (necessary for the dehydration step) with the redox one (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply the “green chemistry” rules and strategies; in particular, I studied new, greener approaches for the synthesis of chemicals (Parts A and B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
Resumo:
In territories where food production is mostly scattered across several small/medium-size or even domestic farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the particular production process being carried out. Coupling high-efficiency micro-cogeneration energy units with easily handled biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well; the increase in the feedstock flexibility of gasification units is therefore nowadays seen as a further paramount step towards their wide diffusion in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose, and they are therefore discussed in this work: the investigation of the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work was divided into two main parts. The first one focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these relations the energy and mass balances involved in the process algorithm were linked together as well. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to particular biomass materials can be inserted into the model, so that a rapid evaluation of their thermo-chemical conversion properties can be obtained, based mainly on their chemical composition (a minimal sketch of such a balance follows this paragraph). Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on the analysis of the fundamental thermo-physical and thermo-chemical mechanisms that are supposed to regulate the main solid conversion steps involved in the gasification process.
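A minimal sketch of a kinetic-free (equilibrium) balance of the kind described above, assuming a global reaction CH_aO_b + w H2O + m (O2 + 3.76 N2) → x1 CO + x2 CO2 + x3 H2 + x4 H2O + 3.76 m N2, with methane and tars neglected and the water-gas shift equilibrium closing the system. The equilibrium-constant correlation and all input values are assumptions for illustration, not the thesis model:

```python
# Kinetic-free gasification balance: three atomic balances plus the
# water-gas-shift equilibrium fix the syngas composition at an assumed T.
import numpy as np
from scipy.optimize import fsolve

def syngas_composition(a, b, w, m, T):
    """a, b: H and O atoms per C atom in the dry, ash-free biomass;
    w: mol moisture per mol C; m: mol O2 per mol C; T: temperature [K]."""
    K = np.exp(4276.0 / T - 3.961)  # assumed WGS equilibrium correlation

    def residuals(x):
        co, co2, h2, h2o = x
        return [co + co2 - 1.0,                          # C balance
                a + 2.0 * w - 2.0 * h2 - 2.0 * h2o,      # H balance
                b + w + 2.0 * m - co - 2.0 * co2 - h2o,  # O balance
                co2 * h2 - K * co * h2o]                 # WGS equilibrium

    co, co2, h2, h2o = fsolve(residuals, [0.5, 0.3, 0.5, 0.2])
    n2 = 3.76 * m
    total = co + co2 + h2 + h2o + n2
    return {s: v / total for s, v in
            zip(("CO", "CO2", "H2", "H2O", "N2"), (co, co2, h2, h2o, n2))}

# Example inputs: wood taken roughly as CH1.4O0.6, w = 0.2, m = 0.3, T = 1073 K
print(syngas_composition(1.4, 0.6, 0.2, 0.3, 1073.0))
```

The original model additionally distributes oxygen among the oxidation reactions through kinetic constant ratios and couples the mass balance to an energy balance; the fragment only shows how the atomic balances and the shift equilibrium fix the gas composition once a temperature is assumed.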
The gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for the pyrolysis and char gasification steps only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on the particle size (a rough estimate of this dependence is sketched after this paragraph). Biomass heating is achieved almost entirely by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, the working temperature, the particle size and the very nature of the biomass (through its own heat of pyrolysis) all have comparable weight in the development of the process, so that the corresponding time may depend on any of these factors, according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no gasification unit dimensioned for one fuel seems suitable for another biomass species. Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, it could be envisaged to design a single unit for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone, as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since the gasification and pyrolysis times were found to change considerably with even small temperature variations, it could also be envisaged to regulate the air feeding rate for each gasified material (on which the process temperatures depend), so that the available reactor volumes would be suitable for the complete development of the solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
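To make the particle-size dependence of the drying step concrete, here is a rough single-particle estimate, assuming all convective heat from the gas goes into evaporating the moisture and taking Nu = 2 (stagnant-gas limit); every property value is an assumed placeholder, not a number from this work:

```python
# Rough drying-time estimate for a single biomass particle, assuming the
# convective heat supplied by the gas only evaporates the moisture.
def drying_time(d_p, moisture=0.15, rho_p=600.0, T_gas=900.0, T_evap=373.0,
                h_fg=2.26e6, k_gas=0.06, Nu=2.0):
    """d_p: particle diameter [m]; moisture: wet-basis mass fraction;
    rho_p: particle density [kg/m3]; temperatures [K]; h_fg: latent
    heat of water [J/kg]; k_gas: gas thermal conductivity [W/m/K]."""
    h = Nu * k_gas / d_p                           # convective coefficient
    water_per_area = rho_p * moisture * d_p / 6.0  # kg water per m2 of sphere surface
    return water_per_area * h_fg / (h * (T_gas - T_evap))

for d in (0.005, 0.01, 0.02):  # 5, 10, 20 mm particles
    print(f"d = {d*1e3:.0f} mm -> t_dry ~ {drying_time(d):.0f} s")
```

Since h scales as 1/d_p while the water inventory per unit surface scales as d_p, the estimated drying time grows roughly with the square of the particle diameter, consistent with the strong particle-size sensitivity noted above.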
Unlike other research efforts carried out in the same field, the main scope here is to define general arrangements for gas cleaning lines able to remove several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h respectively. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and of their own consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements, or paths, by following some technical constraints, which were mainly determined from the same performance analysis of the cleaning units and from the presumable synergistic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.); a toy illustration of this combinatorial step is sketched after this paragraph. One of the main issues to be settled in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or the clogging of line pipes. To this end, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequently relevant air consumption for this operation, were calculated in all cases. Other difficulties had to be overcome in the abatement of the alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials, such as ceramics, for them. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines also proved impossible to achieve for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard degree was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary materials consumption.
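A toy sketch of the combinatorial step mentioned above: one candidate unit per contaminant, combined into candidate paths under a simple compatibility rule. The device names, the fixed sequence and the constraint are all illustrative assumptions, not the units or rules actually designed in this work:

```python
# Enumerate candidate gas cleaning "paths": one unit per contaminant,
# filtered by a simple compatibility rule (all names/rules are illustrative).
from itertools import product

OPTIONS = {
    "alkali":      ["dry_sorbent_injection"],
    "particulate": ["ceramic_filter", "bag_filter"],
    "tar":         ["catalytic_cracker"],
    "H2S":         ["metal_oxide_bed"],
    "HCl":         ["nahcolite_adsorber"],
    "NH3":         ["activated_carbon", "water_scrubber"],
}
SEQUENCE = ["alkali", "particulate", "tar", "H2S", "HCl", "NH3"]  # assumed order

def candidate_paths():
    for combo in product(*(OPTIONS[c] for c in SEQUENCE)):
        path = dict(zip(SEQUENCE, combo))
        # assumed rule: a bag filter cannot stand the temperature upstream
        # of the catalytic cracker, so only the ceramic filter is kept there
        if path["particulate"] == "bag_filter":
            continue
        yield path

for i, path in enumerate(candidate_paths(), 1):
    print(f"path {i}: " + " -> ".join(path[c] for c in SEQUENCE))
```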
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, connected to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimation of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only (the form of this comparison is sketched after this paragraph). This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
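The closing comparison can be written down in a few lines; the assumed LHV and the loss figures below are placeholders, and only the three plant sizes come from the text:

```python
# Compare the cleaning line's enthalpy loss with the chemical energy of the
# gas stream (lower-heating-value basis); all numbers are placeholders.
def loss_fraction(enthalpy_loss_kw, gas_flow_nm3_h, lhv_mj_nm3=5.0):
    gas_power_kw = gas_flow_nm3_h * lhv_mj_nm3 * 1000.0 / 3600.0  # MJ/h -> kW
    return enthalpy_loss_kw / gas_power_kw

# the three plant sizes from the text, with placeholder enthalpy losses [kW]
for flow, loss_kw in ((8.0, 0.6), (125.0, 7.0), (350.0, 15.0)):
    print(f"{flow:>5.0f} Nm3/h -> loss fraction {loss_fraction(loss_kw, flow):.1%}")
```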
Resumo:
Heat treatment of steels is a process of fundamental importance in tailoring the properties of a material to the desired application; developing a model able to describe such a process would allow one to predict the microstructure obtained from the treatment and the consequent mechanical properties of the material. During a heat treatment, a steel can undergo two different kinds of phase transition [p.t.]: diffusive (second order p.t.) and displacive (first order p.t.). In this thesis, an attempt is made to describe both within a thermodynamically consistent framework: a phase-field, diffuse-interface model accounting for the coupling between thermal, chemical and mechanical effects is developed, and a way to overcome the difficulties arising from the treatment of the non-local effects (gradient terms) is proposed. The governing equations are the balance of linear momentum, the Cahn-Hilliard equation and the balance of internal energy (a standard form of this system is recalled after this paragraph). The model is completed with a suitable description of the free energy, from which the constitutive relations are drawn. The equations are then cast in variational form, and different numerical techniques are used to deal with the principal features of the model: time dependence, non-linearity and the presence of high-order spatial derivatives. Simulations are performed using DOLFIN, a C++ library for the automated solution of partial differential equations by means of the finite element method; results are shown for different test cases. The analysis is restricted to a two-dimensional setting, which is simpler than a three-dimensional one but still meaningful.
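For concreteness, here is a standard form of the three governing equations named above, in notation common to diffuse-interface models (the symbols are generic, not necessarily those of the thesis). Note how introducing the chemical potential \mu as an auxiliary unknown splits the fourth-order Cahn-Hilliard equation into two second-order equations, which is the usual way of handling the gradient (non-local) terms in a finite element setting:

```latex
\rho \, \dot{\mathbf{v}} = \nabla \cdot \boldsymbol{\sigma} + \mathbf{b}
  % balance of linear momentum

\dot{c} = \nabla \cdot \bigl( M \, \nabla \mu \bigr), \qquad
\mu = \frac{\partial \psi}{\partial c} - \lambda \, \Delta c
  % Cahn-Hilliard equation; \mu derived from the free energy \psi

\dot{e} = -\,\nabla \cdot \mathbf{q}
          + \boldsymbol{\sigma} : \nabla \mathbf{v} + r
  % balance of internal energy
```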