939 results for INDENTATION EXPERIMENTS
Abstract:
Abstract. This thesis presents a discussion of a few specific topics regarding the low velocity impact behaviour of laminated composites. These topics were chosen because of their significance as well as the relatively limited attention they have received so far from the scientific community. The first issue considered is the comparison between the effects induced by a low velocity impact and by a quasi-static indentation experimental test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations by a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, an equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indentor) was higher in the case of impact, suggesting probably more severe fibre damage than in indentation. Secondly, the effect of different specimen dimensions and boundary conditions on the impact response was examined. Experimental testing showed that the relationships of delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions or on the support condition of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study of the compressive behaviour of impact-damaged laminates is also presented. Unlike most of the contributions available on this subject, the results of compression after impact tests on thin laminates are described in which global specimen buckling was not prevented.
Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas it did not affect the compressive strength in every case, depending on the buckling shape. Some considerations on the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including both tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases. On the other hand, a compressive preload exhibits its most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility of better explaining the experimental results described in the literature, in light of the present findings, is highlighted.
Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis can yield accurate results only as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and also in distinguishing between what can be modelled without taking material degradation into account and what requires an appropriate simulation of damage.
Abstract:
Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this feature would probably be integrated in a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the configuration memory of the FPGA required the integration of a flash ISP (In-System Programming) memory and a robust architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA.
The PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued by the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. The chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. It integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line at CERN in Geneva (CH).
The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinning were tested during the test beam. Those thinned to 100 and 300 µm showed an overall efficiency of about 90% with a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, giving good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
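For reference, pitch/sqrt(12) is the standard RMS resolution of a binary-readout position sensor: a hit is reconstructed at the pixel centre, so the residual is uniformly distributed over one pitch p, and a uniform distribution of width p has standard deviation p/√12. A minimal sketch (the 50 µm pitch below is a hypothetical illustrative value, not taken from the abstract):

```python
import math

def binary_resolution(pitch_um: float) -> float:
    """RMS resolution of a binary-readout pixel: a position uniformly
    distributed over one pitch p has variance p**2 / 12, so sigma = p / sqrt(12)."""
    return pitch_um / math.sqrt(12.0)

# Hypothetical 50 um pitch, for illustration only:
sigma = binary_resolution(50.0)  # ~14.4 um
```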
Abstract:
The experiment for testing the Gerasimov-Drell-Hearn (GDH) sum rule at the Mainz Microtron served to measure helicity-dependent properties of photoproduction on the proton. Besides the total photoabsorption cross sections, cross sections of partial channels of single-pion, double-pion and eta production could also be determined. For the possible double-pion reactions only little information is available at present, so the reaction mechanisms of double-pion photoproduction are still largely not understood. For this reason the investigation of further observables is required. The helicity dependence constitutes such a new observable and can be measured by letting a circularly polarized photon interact with a longitudinally polarized nucleon. The photon spin is 1 and the fermion spin of the nucleon is 1/2; the spins couple to 3/2 and 1/2. This yields the helicity-dependent cross sections. This dissertation describes the setup of the Mainz GDH experiment and the technical realization of the measurement of helicity-dependent photoabsorption cross sections. The helicity dependence of doubly charged double-pion photoproduction on the proton was investigated using the measured data. This reaction is a two-step process in which the excited nucleon decays resonantly or non-resonantly into the final-state particles. The helicity-dependent data constrained degrees of freedom in double-pion photoproduction, enabling an improvement of the models of the Gent-Mainz and Valencia groups. The data obtained within this work have additionally provided motivation for further interpretation efforts by the aforementioned theory groups.
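For context, the sum rule tested by the experiment relates the helicity-dependent total photoabsorption cross sections σ₃/₂ and σ₁/₂ to the anomalous magnetic moment κ of the nucleon. In its standard form (ν is the photon energy, ν₀ the pion-production threshold, α the fine-structure constant and m the nucleon mass):

```latex
\int_{\nu_0}^{\infty} \frac{\sigma_{3/2}(\nu) - \sigma_{1/2}(\nu)}{\nu}\,\mathrm{d}\nu
  = \frac{2\pi^2 \alpha}{m^2}\,\kappa^2
```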
Abstract:
In a large number of problems, the high dimensionality of the search space, the vast number of variables and economic constraints limit the ability of classical techniques to reach the optimum of a function, known or unknown. In this thesis we investigate the possibility of combining approaches from advanced statistics and optimization algorithms so as to better explore the combinatorial search space and to increase the performance of the approaches. To this purpose we propose two methods: (i) Model Based Ant Colony Design and (ii) Naïve Bayes Ant Colony Optimization. We test the performance of the two proposed solutions in a simulation study and apply the novel techniques to an application in the field of Enzyme Engineering and Design.
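As background for the ant colony methods mentioned above, the sketch below shows a plain Ant Colony Optimization loop on a toy travelling-salesman instance: pheromone-weighted tour construction, evaporation, and deposit. This is a generic textbook illustration, not the thesis's Model Based Ant Colony Design or Naïve Bayes ACO, and all parameter values are arbitrary assumptions:

```python
import math
import random

def aco_tsp(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Basic Ant Colony Optimization for a symmetric TSP distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour = [start]
            unvisited = set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # transition probability ~ pheromone^alpha * (1/distance)^beta
                weights = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                           for j in unvisited]
                r = rng.random() * sum(w for _, w in weights)
                for j, w in weights:
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then pheromone deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# Toy instance: four cities on a unit square (optimal tour length = 4).
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
dist = [[math.hypot(ax - bx, ay - by) for bx, by in pts] for ax, ay in pts]
tour, length = aco_tsp(dist)
```

The evaporation rate `rho` balances exploration against exploitation: high values forget poor early tours quickly, low values let reinforced trails dominate.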
Abstract:
Computer simulations have become an important tool in physics. Systems in the solid state, especially, have been investigated extensively with the help of modern computational methods. This thesis focuses on the simulation of hydrogen-bonded systems, using quantum chemical methods combined with molecular dynamics (MD) simulations. MD simulations are carried out to investigate the energetics and structure of a system under conditions that include physical parameters such as temperature and pressure. Ab initio quantum chemical methods have proven capable of predicting spectroscopic quantities. The combination of these two features still represents a methodological challenge. Furthermore, conventional MD simulations treat the nuclei as classical particles. Not only motional effects, but also the quantum nature of the nuclei are expected to influence the properties of a molecular system. This work aims at a more realistic description of properties that are accessible via NMR experiments. With the help of the path integral formalism, the quantum nature of the nuclei has been incorporated and its influence on the NMR parameters explored. The effect on both the NMR chemical shift and the Nuclear Quadrupole Coupling Constants (NQCC) is presented for intra- and intermolecular hydrogen bonds. The second part of this thesis presents the computation of electric field gradients within the Gaussian and Augmented Plane Waves (GAPW) framework, which allows for all-electron calculations in periodic systems. This recent development improves the accuracy of many calculations compared to the pseudopotential approximation, which treats the core electrons as part of an effective potential. In combination with MD simulations of water, the NMR longitudinal relaxation times for 17O and 2H have been obtained. The results show good agreement with experiment.
Finally, an implementation of the calculation of the stress tensor in the quantum chemical program suite CP2K is presented. This enables MD simulations under constant-pressure conditions, which is demonstrated with a series of liquid water simulations that shed light on the influence of the exchange-correlation functional on the density of the simulated liquid.
Abstract:
Herbicides are becoming emergent contaminants in Italian surface, coastal and ground waters, due to their intensive use in agriculture. In marine environments herbicides have adverse effects on non-target organisms, such as primary producers, resulting in oxygen depletion and decreased primary productivity. Alterations of species composition in algal communities can also occur, due to the different sensitivity among species. In the present thesis the effects of herbicides widely used in the Northern Adriatic Sea on different algal species were studied. The main goal of this work was to study the influence of temperature on algal growth in the presence of the triazine herbicide terbuthylazine (TBA), and the cellular responses adopted to counteract the toxic effects of the pollutant (Chapters 1 and 2). The development of simulation models to be applied in environmental management is needed to organize and track information in a way that would not otherwise be possible and to simulate an ecological perspective. The data collected from laboratory experiments were used to simulate algal responses to TBA exposure at increasing temperature conditions (Chapter 3). Part of the thesis was conducted abroad. The work presented in Chapter 4 focused on the effect of high light on growth, toxicity and mixotrophy of the ichthyotoxic species Prymnesium parvum. In addition, a mesocosm experiment was conducted in order to study the synergistic effect of the pollutant emamectin benzoate with other anthropogenic stressors, such as oil pollution and induced phytoplankton blooms (Chapter 5).
Abstract:
The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory. The CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data from the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis, focusing on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package; the problem of the aliasing effect in the generation of low resolution maps; and the comparison of the Angular Power Spectrum (APS) extraction performance of the optimal QML method, implemented in the code called BolPol, with that of the pseudo-Cl method, implemented in Cromaster. The QML method has then been applied to the Planck data at large angular scales to extract the CMB APS. The same method has also been applied to analyze the TT parity and Low Variance anomalies in the Planck maps, showing a significant deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code, instead, has been applied to the 408 MHz and 1.42 GHz surveys, focusing on the analysis of the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, called Photonic Crystals, is exploited to develop a new polarization splitter device, and its performance is compared to that of the devices used nowadays.
An Integrated Transmission-Media Noise Calibration Software For Deep-Space Radio Science Experiments
Abstract:
The thesis describes the implementation of calibration, format-translation and data-conditioning software for radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulations, performance and software implementations. Some techniques are retrieved from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by employing specific subroutines. Specific attention has been reserved for the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite, in terms of both the sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in radiometric observables proved to be an essential operation to be performed on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous and all-around propagation-media calibration software is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described software is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC, TSAC and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
Abstract:
This thesis describes the development of new models and toolkits for orbit determination codes to support and improve the precise radio tracking experiments of the Cassini-Huygens mission, an interplanetary mission to study the Saturn system. The core of the orbit determination process is the comparison between observed observables and computed observables. Disturbances in either the observed or the computed observables degrade the orbit determination process. Chapter 2 describes a detailed study of the numerical errors in the Doppler observables computed by NASA's ODP and MONTE, and ESA's AMFIN. A mathematical model of the numerical noise was developed and successfully validated against the Doppler observables computed by the ODP and MONTE, with typical relative errors smaller than 10%. The numerical noise proved to be, in general, an important source of noise in the orbit determination process and, in some conditions, it may become the dominant noise source. Three different approaches to reduce the numerical noise were proposed. Chapter 3 describes the development of the multiarc library, which allows a multi-arc orbit determination to be performed with MONTE. The library was developed during the analysis of the Cassini radio science gravity experiments of Saturn's satellite Rhea. Chapter 4 presents the estimation of Rhea's gravity field obtained from a joint multi-arc analysis of the Cassini R1 and R4 fly-bys, describing in detail the spacecraft dynamical model used, the data selection and calibration procedure, and the analysis method followed. In particular, the approach of estimating the full unconstrained quadrupole gravity field was followed, obtaining a solution statistically not compatible with the condition of hydrostatic equilibrium. The solution proved to be stable and reliable.
The normalized moment of inertia is in the range 0.37-0.4, indicating that Rhea may be almost homogeneous, or at least characterized by a small degree of differentiation.
Abstract:
Aerosol particles influence the climate by scattering and absorbing radiation and by acting as nucleation seeds for cloud droplets and ice crystals. In addition, aerosols have a strong impact on air pollution and public health. Gas-particle interactions are important processes because they affect the physical and chemical properties of aerosols, such as toxicity, reactivity, hygroscopicity and optical properties. Owing to a lack of experimental data and universal model formalisms, however, the mechanisms and kinetics of gas uptake and of the chemical transformation of organic aerosol particles are insufficiently characterized. Both the chemical transformation and the adverse health effects of toxic and allergenic aerosol particles, such as soot, polycyclic aromatic hydrocarbons (PAHs) and proteins, are so far not well understood.
Kinetic flux models for aerosol surface and particle-bulk chemistry were developed on the basis of the Pöschl-Rudich-Ammann formalism for gas-particle interactions. First, the kinetic double-layer surface model K2-SURF was developed, which describes the degradation of PAHs on aerosol particles in the presence of ozone, nitrogen dioxide, water vapour, and hydroxyl and nitrate radicals. Competitive adsorption and chemical transformation of the surface lead to a strongly non-linear dependence of the ozone uptake on gas composition. Under atmospheric conditions, the chemical lifetime of PAHs ranges from a few minutes on soot, through several hours on organic and inorganic solids, up to days on liquid particles.
Subsequently, the kinetic multi-layer model KM-SUB was developed to describe the chemical transformation of organic aerosol particles. KM-SUB is able to resolve explicitly transport processes and chemical reactions at the surface and in the bulk of aerosol particles. In contrast to earlier models, it requires no simplifying assumptions about steady-state conditions and radial mixing. In combination with literature data and new experimental results, KM-SUB was used to elucidate the effects of interfacial and bulk transport processes on the ozonolysis and nitration of protein macromolecules, oleic acid and related organic compounds. The kinetic models developed in this study are intended to serve as a basis for the development of a detailed mechanism for aerosol chemistry, and for the derivation of simplified yet realistic parameterizations for large-scale global atmospheric and climate models.
The experiments and model calculations performed in this study provide evidence for the formation of long-lived reactive oxygen intermediates (ROIs) in the heterogeneous reaction of ozone with aerosol particles. The chemical lifetime of these intermediates exceeds 100 s, much longer than the surface residence time of molecular O3 (~10^-9 s). The ROIs explain apparent discrepancies between earlier quantum mechanical calculations and kinetic experiments. They play a key role in the chemical transformation as well as in the adverse health effects of toxic and allergenic fine-particulate components such as soot, PAHs and proteins. ROIs are presumably also involved in the decomposition of ozone on mineral dust and in the formation and growth of secondary organic aerosols. Moreover, ROIs form a link between atmospheric and biospheric multiphase processes (chemical and biological ageing).
Organic compounds can occur as amorphous solids or in a semi-solid state, which influences the rate of heterogeneous reactions and multiphase processes in aerosols. Flow-tube experiments show that the ozone uptake and oxidative ageing of amorphous proteins are kinetically limited by bulk diffusion. The reactive gas uptake increases markedly with increasing humidity, which can be explained by a decrease in viscosity caused by a phase transition of the amorphous organic matrix from a glassy to a semi-solid state (humidity-induced phase transition). The chemical lifetime of reactive compounds in organic particles can increase from seconds to days, since the diffusion rate in the semi-solid phase can drop by orders of magnitude at low temperature or low humidity. The results of this study show how semi-solid phases can influence the effects of organic aerosols on air quality, health and climate.
Abstract:
Topic of this thesis is the development of experiments behind the gas-filled separator TASCA(TransActinide Separator and Chemistry Apparatus) to study the chemical properties of the transactinide elements.rnIn the first part of the thesis, the electrodepositions of short-lived isotopes of ruthenium and osmium on gold electrodes were studied as model experiments for hassium. From literature it is known that the deposition potential of single atoms differs significantly from the potential predicted by the Nernst equation. This shift of the potential depends on the adsorption enthalpy of therndeposited element on the electrode material. If the adsorption on the electrode-material is favoured over the adsorption on a surface made of the same element as the deposited atom, the electrode potential is shifted to higher potentials. This phenomenon is called underpotential deposition.rnPossibilities to automatize an electro chemistry experiment behind the gas-filled separator were explored for later studies with transactinide elements.rnThe second part of this thesis is about the in-situ synthesis of transition-metal-carbonyl complexes with nuclear reaction products. Fission products of uranium-235 and californium-249 were produced at the TRIGA Mainz reactor and thermalized in a carbon-monoxide containing atmosphere. The formed volatile metal-carbonyl complexes could be transported in a gas-stream.rnFurthermore, short-lived isotopes of tungsten, rhenium, osmium, and iridium were synthesised at the linear accelerator UNILAC at GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt. The recoiling fusion products were separated from the primary beam and the transfer products in the gas-filled separator TASCA. The fusion products were stopped in the focal plane of TASCA in a recoil transfer chamber. This chamber contained a carbon-monoxide – helium gas mixture. The formed metal-carbonyl complexes could be transported in a gas stream to various experimental setups. 
All synthesised carbonyl complexes were identified by nuclear decay spectroscopy. Some complexes were studied with isothermal chromatography or thermochromatography methods. The chromatograms were compared with Monte Carlo simulations to determine the adsorption enthalpy on silicon dioxide and on gold. These simulations are based on existing codes that were modified for the different geometries of the chromatography channels. All observed adsorption enthalpies (on silicon dioxide as well as on gold) are typical for physisorption. Additionally, the thermal stability of some of the carbonyl complexes was studied, showing that the complexes start to decompose at temperatures above 200 °C.

It was demonstrated that carbonyl-complex chemistry is a suitable method to study rutherfordium, dubnium, seaborgium, bohrium, hassium, and meitnerium. Until now, only very simple, thermally stable compounds have been synthesised in the gas-phase chemistry of the transactinides. With the synthesis of transactinide carbonyl complexes, a new compound class would be discovered, and transactinide chemistry would reach the border between inorganic and metal-organic chemistry. Furthermore, the in-situ synthesised carbonyl complexes would allow nuclear spectroscopy studies under low-background conditions, making use of chemically prepared samples.
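The Monte Carlo approach mentioned above can be illustrated with a deliberately simplified sketch; this is not one of the modified codes used in the thesis, and all parameter values and the function name are illustrative. The mean residence time per wall collision follows the Frenkel equation, and a short-lived molecule is transmitted through an isothermal column only if it decays later than its total retention time.

```python
import math
import random

R = 8.314  # gas constant, J/(mol K)

def transmitted_yield(dH_ads, T, half_life, n_mol=5000,
                      n_collisions=1e5, tau0=1e-13, t_gas=1.0, seed=1):
    """Fraction of short-lived molecules leaving an isothermal column
    before decaying (simplified Monte Carlo sketch, illustrative only).

    dH_ads      : adsorption enthalpy, J/mol (negative = exothermic)
    T           : column temperature, K
    half_life   : nuclear half-life, s
    n_collisions: wall collisions during one passage (geometry dependent)
    tau0        : period of surface vibration, s (Frenkel equation)
    t_gas       : gas-phase transport time through the column, s
    """
    random.seed(seed)
    lam = math.log(2) / half_life
    # mean residence time per wall collision (Frenkel equation)
    tau = tau0 * math.exp(-dH_ads / (R * T))
    # mean total retention time: transport plus accumulated adsorption
    t_total = t_gas + n_collisions * tau
    # a molecule is transmitted if it decays later than its retention time
    survived = sum(1 for _ in range(n_mol)
                   if random.expovariate(lam) > t_total)
    return survived / n_mol
```

Scanning the temperature (or comparing yields for candidate adsorption enthalpies) and matching the simulated transmission to the measured chromatogram is, in essence, how an adsorption enthalpy is extracted; the real codes additionally model the channel geometry and the carrier-gas flow.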
Abstract:
The thesis comprises three essays that use experimental methods: one about other-regarding motivations in economic behavior, and two on pro-social behavior in environmental economics problems. The first chapter studies how the expectations of others and the concern to maintain a balance between effort exerted and rewards obtained interact in shaping behavior in a modified dictator game. We find that dictators condition their choices on recipients' expectations only when there is a high probability that the recipient will not be compensated for her effort. Otherwise, dictators tend to balance the efforts and rewards of the recipients, irrespective of the recipients' expectations. In the second chapter, I investigate the problem of local opposition to large public projects (e.g., landfills or incinerators). In particular, the experiment shows how uncertainty about a project's quality makes the community living at the host site skeptical about the project. I also test whether side-transfers and costly information disclosure can help to increase efficiency. Both tools successfully make the host more willing to accept the project, but they lead to the realization of different types of projects. The last chapter is an experiment on climate negotiations. To avoid global warming, countries are called on to cooperate in the abatement of their emissions. We study whether the dynamic aspect of climate change makes cooperation across countries behaviorally more difficult. We also consider inequality across countries as a possible factor that hinders international cooperation.
Potential vorticity and moisture in extratropical cyclones : climatology and sensitivity experiments
Abstract:
The development of extratropical cyclones can be seen as an interplay of three positive potential vorticity (PV) anomalies: an upper-level stratospheric intrusion, low-tropospheric diabatically produced PV, and a warm anomaly at the surface acting as a surrogate PV anomaly. In the mature stage they become vertically aligned and form a “PV tower” associated with strong cyclonic circulation. This paradigm of extratropical cyclone development provides the basis of this thesis, which will use a climatological dataset and numerical model experiments to investigate the amplitude of the three anomalies and the processes leading in particular to the formation of the diabatically produced low-tropospheric PV anomaly.

The first part of this study, based on the interim ECMWF Re-Analysis (ERA-Interim) dataset, quantifies the amplitude of the three PV anomalies in mature extratropical cyclones in different regions in the Northern Hemisphere on a climatological basis. A tracking algorithm is applied to sea level pressure (SLP) fields to identify cyclone tracks. Surface potential temperature anomalies ∆θ and vertical profiles of PV anomalies ∆PV are calculated at the time of the cyclones’ minimum SLP and during the intensification phase 24 hours before in a vertical cylinder with a radius of 200 km around the surface cyclone center. To compare the characteristics of the cyclones, they are grouped according to their location (8 regions) and intensity, where the central SLP is used as a measure of intensity. Composites of ∆PV profiles and ∆θ are calculated for each region and intensity class at the time of minimum SLP and during the cyclone intensification phase.

During the cyclones’ development stage the amplitudes of all three anomalies increase on average.
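The cylinder-averaging step behind these composites can be sketched as follows. This is a minimal illustration with hypothetical names (`haversine_km`, `cylinder_anomaly`, a plain lat/lon grid), not the ERA-Interim processing chain used in the thesis: grid points within 200 km great-circle distance of the surface cyclone center are selected and the anomaly field is averaged over them.

```python
import math

EARTH_R = 6371.0  # mean Earth radius, km

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (degrees in, km out)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def cylinder_anomaly(field, lats, lons, clat, clon, radius_km=200.0):
    """Mean anomaly of a 2-D field over all grid points within
    radius_km of the cyclone center (clat, clon).

    field[i][j] holds the anomaly at (lats[i], lons[j]);
    returns None if no grid point falls inside the cylinder.
    """
    vals = [field[i][j]
            for i, la in enumerate(lats)
            for j, lo in enumerate(lons)
            if haversine_km(la, lo, clat, clon) <= radius_km]
    return sum(vals) / len(vals) if vals else None
```

Repeating this at each pressure level yields a vertical ∆PV profile for one cyclone; averaging the profiles over all cyclones of a region and intensity class gives the composites discussed above.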
In the mature stage all three anomalies are typically larger for intense than for weak winter cyclones [e.g., 0.6 versus 0.2 potential vorticity units (PVU) at lower levels, and 1.5 versus 0.5 PVU at upper levels]. The regional variability of the cyclones’ vertical structure and the profile evolution is prominent (cyclones in some regions are more sensitive to the amplitude of a particular anomaly than in other regions). Values of ∆θ and low-level ∆PV are on average larger in the western parts of the oceans than in the eastern parts. In addition, a large seasonal variability can be identified, with fewer and weaker cyclones especially in summer, associated with higher low-tropospheric PV values, but also with a higher tropopause and much weaker surface potential temperature anomalies (compared to winter cyclones).

The second part examines the diabatic low-level part of the PV towers: the evaporative sources of the moisture involved in PV production through condensation were identified. Lagrangian backward trajectories were calculated from the regions with high PV values at low levels in the cyclones. PV production regions were identified along these trajectories, and from these regions a new set of backward trajectories was calculated, along which moisture uptakes were traced. The main contribution from surface evaporation to the specific humidity of the trajectories is collected 12-72 hours prior to the time of PV production. The uptake region for weaker cyclones with less PV in the centre is typically more localized, with reduced uptake values, compared to intense cyclones. However, in a qualitative sense, uptakes and other variables along single trajectories do not vary much between cyclones of different intensity in different regions.

A sensitivity study with the COSMO model comprises the last part of this work.
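The uptake-detection step along a trajectory can be sketched as a simple scan for humidity increases. This is a minimal illustration under stated assumptions (humidity sampled at regular trajectory times, a boundary-layer flag available at each time); the function name and the threshold value are hypothetical, not taken from the thesis.

```python
def moisture_uptakes(q, in_bl, dq_min=0.0002):
    """Identify candidate moisture uptake points along a trajectory.

    q     : specific humidity (kg/kg) at successive trajectory times,
            ordered forward in time
    in_bl : whether the parcel is inside the boundary layer at each time
    dq_min: minimum humidity increase counted as an uptake (kg/kg)

    Returns a list of (index, dq) pairs where q increased while the
    parcel was in the boundary layer, i.e. where surface evaporation
    is a plausible moisture source.
    """
    uptakes = []
    for i in range(1, len(q)):
        dq = q[i] - q[i - 1]
        # a boundary-layer humidity gain marks a surface uptake candidate
        if dq > dq_min and in_bl[i]:
            uptakes.append((i, dq))
    return uptakes
```

Accumulating the detected gains over many trajectories, weighted by how much of each gain survives to the PV production point, is the general idea behind mapping evaporative source regions such as those discussed above.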
The study aims at investigating the influence of synthetic moisture modification in the cyclone environment during different stages of its development. Moisture was eliminated in three regions that had been identified as important moisture sources for PV production. Moisture suppression affected the cyclone most strongly in its early phase, leading to cyclolysis shortly after genesis. Nevertheless, a new cyclone formed on the other side of the dry box and developed relatively quickly. Also in the other experiments, moisture elimination led to a strong intensity reduction of the surface cyclone, limited upper-level development, and delayed or missing interaction between the two.

In summary, this thesis provides novel insight into the structure of different intensity categories of extratropical cyclones from a PV perspective, which corroborates the findings from a series of previous case studies. It reveals that all three PV anomalies are typically enhanced for more intense cyclones, with important regional differences concerning the relative amplitude of the three anomalies. The moisture source analysis is the first of its kind to study the evaporation-condensation cycle related to the intensification of extratropical cyclones. Interestingly, most of the evaporation occurs during the 3 days prior to the time of maximum cyclone intensity and typically extends over fairly large areas along the track of the cyclone. The numerical model case study complements this analysis by examining the impact of regionally confined moisture sources on the evolution of the cyclone.