869 results for Constraint based modeling
Abstract:
Photovoltaic (PV) conversion is the direct production of electrical energy from sunlight without the emission of polluting substances. In order to be competitive with other energy sources, the cost of PV technology must be reduced while ensuring adequate conversion efficiencies. These goals have motivated the interest of researchers in investigating advanced designs of crystalline silicon (c-Si) solar cells. Since lowering the cost of PV devices involves reducing the volume of semiconductor, an effective light-trapping strategy aimed at increasing photon absorption is required. Modeling of solar cells by electro-optical numerical simulation is helpful to predict the performance of future generations of devices exhibiting advanced light-trapping schemes and to provide new and more specific guidelines to industry. The approaches to optical simulation commonly adopted for c-Si solar cells may lead to inaccurate results in the case of thin-film and nano-structured solar cells. On the other hand, rigorous solvers of the Maxwell equations are highly CPU- and memory-intensive. Recently, the rigorous coupled-wave analysis (RCWA) method has gained relevance in the optical simulation of solar cells, providing a good trade-off between accuracy and computational requirements. This thesis is a contribution to the numerical simulation of advanced silicon solar cells by means of a state-of-the-art numerical 2-D/3-D device simulator, which has been successfully applied to the simulation of selective-emitter and rear point-contact solar cells, for which a multi-dimensional transport model is required in order to properly account for all competing physical mechanisms. In the second part of the thesis, the optical problem is discussed. Two novel and computationally efficient RCWA implementations for 2-D simulation domains, as well as a third RCWA implementation for 3-D structures based on an eigenvalue calculation approach, are presented. The proposed simulators have been validated in terms of accuracy, numerical convergence, computation time and correctness of results.
Abstract:
From the perspective of a new generation of optoelectronic technology based on organic semiconductors, a major objective is to achieve a deep and detailed knowledge of structure-property relationships, in order to optimize the electronic, optical and charge transport properties by tuning the chemical-physical characteristics of the compounds. The purpose of this dissertation is to contribute to such understanding through suitable theoretical and computational studies. Specifically, the structural, electronic, optical and charge transport characteristics of several promising, recently synthesized organic materials are investigated by means of an integrated approach encompassing quantum-chemical calculations, molecular dynamics and kinetic Monte Carlo simulations. Particular care is devoted to the rationalization of optical and charge transport properties in terms of both intra- and intermolecular features. Moreover, a considerable part of this project involves the development of an in-house set of procedures and software components required to assist the modeling of charge transport properties in the framework of the non-adiabatic hopping mechanism applied to organic crystalline materials. In the first part of my investigations, I mainly discuss the optical, electronic and structural properties of several core-extended rylene derivatives, which can be regarded as model compounds for graphene nanoribbons. Two families have been studied, consisting of bay-linked perylene bisimide oligomers and N-annulated rylenes. Besides rylene derivatives, my studies also concerned the electronic and spectroscopic properties of tetracene diimides, quinoidal oligothiophenes and oxygen-doped picene. As an example of device application, I studied the structural characteristics governing the efficiency of resistive molecular memories based on a benzoquinone derivative.
Finally, in the second part of my investigations, I concentrate on the charge transport properties of perylene bisimide derivatives. Specifically, a comprehensive study of structural and thermal effects on the charge transport of several core-twisted chlorinated and fluoro-alkylated perylene bisimide n-type semiconductors is presented.
Abstract:
This master's thesis describes the research carried out at the Medical Technology Laboratory (LTM) of the Rizzoli Orthopedic Institute (IOR, Bologna, Italy) from October 2012 to the present, which focused on the characterization of the elastic properties of trabecular bone tissue. The approach uses computed microtomography to characterize the architecture of trabecular bone specimens. With the information obtained from the scanner, specimen-specific models of trabecular bone are generated for solution with the Finite Element Method (FEM). Along with the FEM modelling, mechanical tests are performed on the same reconstructed bone portions. From the linear-elastic stage of the mechanical tests, as shown by the experimental results, it is possible to estimate the mechanical properties of the trabecular bone tissue. After a brief introduction on the biomechanics of trabecular bone (Chapter 1) and on the characterization of the mechanics of its tissue using FEM models (Chapter 2), the reliability analysis of an experimental procedure is explained (Chapter 3), based on the highly scalable numerical solver ParFE. In Chapter 4, sensitivity analyses on two different parameters for the reconstruction of micro-FEM models are presented. Once the reliability of the modelling strategy has been shown, a recent experimental test layout developed at the LTM is presented (Chapter 5). Moreover, the results of the application of the new layout are discussed, with emphasis on the difficulties connected to it and observed during the tests. Finally, a prototype experimental layout for the measurement of deformations in trabecular bone specimens is presented (Chapter 6). This procedure is based on the Digital Image Correlation method and is currently under development at the LTM.
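Estimating tissue-level elastic properties from the linear-elastic stage of a mechanical test reduces, in the simplest case, to fitting the slope of the stress-strain curve. The following is a minimal sketch in Python (NumPy) with an assumed strain window and synthetic data, not the LTM procedure itself:

```python
import numpy as np

def elastic_modulus(strain, stress, linear_range=(0.001, 0.004)):
    """Estimate the apparent elastic modulus as the slope of the
    stress-strain curve inside an assumed linear-elastic strain window."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    # keep only points inside the assumed linear-elastic window
    mask = (strain >= linear_range[0]) & (strain <= linear_range[1])
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# synthetic, purely linear response with E = 8000 MPa, for illustration
strain = np.linspace(0.0, 0.005, 50)
stress = 8000.0 * strain
print(round(float(elastic_modulus(strain, stress))))  # 8000
```

In a real test the window bounds would be chosen from the measured curve, e.g. by inspecting where the response departs from linearity.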
Abstract:
Photovoltaic (PV) solar panels generally produce electricity in the 6% to 16% efficiency range, the rest being dissipated as thermal losses. To recover this energy, hybrid photovoltaic-thermal (PVT) systems have been devised: devices that simultaneously convert solar energy into electricity and heat. It is thus interesting to study the PVT system globally, from different points of view, in order to evaluate the advantages and disadvantages of this technology and its possible uses. In particular, in Chapter II a numerical optimization of the PVT absorber by a genetic algorithm is carried out, analyzing different internal channel profiles in order to find the right compromise between performance and technical and economic feasibility. In Chapter III, thanks to a mobile structure built at the university laboratory, the electrical and thermal output power of PVT panels is compared experimentally with separate photovoltaic and solar thermal production. By collecting a large amount of experimental data under different seasonal conditions (ambient temperature, irradiation, wind, ...), this mobile structure was used to evaluate the average increase or decrease in both thermal and electrical efficiency with respect to separate production over the year. In Chapter IV, new equation-based models of PVT and solar thermal panels under steady-state conditions are developed with the Dymola software, which uses the Modelica language. Compared with previous system-modelling software, this permits different concepts of the PVT panel structure to be modelled and evaluated in a simplified way before prototyping and measurement. Chapter V concerns the definition of the boundary conditions of the PVT panel within an HVAC system. This was done through year-long simulations with the Polysun software, in order to assess the best solar-assisted integrated structure by means of the solar energy saving factor F_save.
Finally, Chapter VI presents the conclusions and perspectives of this PhD work.
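The abstract does not detail the genetic algorithm of Chapter II; as an illustration of the general technique, here is a minimal real-coded genetic algorithm in Python, with a toy one-dimensional objective standing in for the absorber channel-profile fitness (all names and parameters are hypothetical):

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60,
                     mutation_rate=0.1, seed=0):
    """Minimal real-coded genetic algorithm: keep the fitter half as
    elite, fill the rest with arithmetic crossover plus Gaussian
    mutation. Maximizes `fitness` over box-bounded parameters."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i]
                                           + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# toy stand-in objective: a single channel parameter with optimum at 5.0,
# mimicking a heat-transfer vs. pressure-drop trade-off
best = genetic_optimize(lambda x: -(x[0] - 5.0) ** 2, bounds=[(1.0, 10.0)])
print(best[0])  # close to the optimum at 5.0
```

A real absorber optimization would evaluate each candidate channel profile with a thermal-hydraulic model instead of this analytic objective.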
Abstract:
In the last few years the resolution of numerical weather prediction (NWP) models has become higher and higher with the progress of technology and knowledge. As a consequence, a great amount of initial data has become fundamental for a correct initialization of the models. The potential of radar observations for improving the initial conditions of high-resolution NWP models has long been recognized, and their operational application is becoming more frequent. The fact that many NWP centres have recently brought into operation convection-permitting forecast models, many of which assimilate radar data, emphasizes the need for an approach to providing quality information, which is needed in order to avoid radar errors degrading the model's initial conditions and, therefore, its forecasts. Environmental risks can be related to various causes: meteorological, seismic, hydrological/hydraulic. Flash floods have a horizontal dimension of 1-20 km and belong to the meso-gamma subscale; this scale can be modelled only with NWP models of the highest resolution, such as the COSMO-2 model. One of the problems of modelling extreme convective events is related to the atmospheric initial conditions: the scale at which atmospheric conditions are assimilated in a high-resolution model is about 10 km, a value too coarse for a correct representation of the initial conditions of convection. Assimilation of radar data, with its resolution of the order of a kilometre every 5 or 10 minutes, can be a solution to this problem. In this contribution a pragmatic and empirical approach to deriving a radar data quality description is proposed, to be used in radar data assimilation and more specifically in the latent heat nudging (LHN) scheme. The convective capabilities of the COSMO-2 model are then investigated through some case studies. Finally, this work shows some preliminary experiments on the coupling of a high-resolution meteorological model with a hydrological one.
Abstract:
During the last decade, peach and nectarine fruit have lost considerable market share, due to increased consumer dissatisfaction with quality at retail markets. This is mainly due to the harvesting of too-immature fruit and to high ripening heterogeneity. The main problem is that the traditionally used maturity indexes are not able to objectively detect the fruit maturity stage or the variability present in the field, leading to difficult post-harvest management of the product and to high fruit losses. To assess fruit ripening more precisely, other techniques and devices can be used. Recently, a new non-destructive maturity index based on vis-NIR technology, the Index of Absorbance Difference (IAD), which correlates with fruit degreening and ethylene production, was introduced, and the IAD was used to study peach and nectarine fruit ripening from "field to fork". In order to choose the best techniques to improve fruit quality, a detailed description of the tree structure, fruit distribution and ripening evolution on the tree was undertaken. More in detail, an architectural model (PlantToon®) was used to describe the tree structure, and the IAD was applied to characterize the maturity stage of each fruit. Their combined use provided an objective and precise evaluation of the variability in fruit ripening, related to different training systems, crop load, fruit exposure and internal temperature. Based on simple field assessments of fruit maturity (as IAD) and growth, a model for the early prediction of harvest date and yield was developed and validated. The relationship between the non-destructive maturity index IAD and fruit shelf-life was also confirmed. Finally, the obtained results were validated by consumer tests: fruit sorted into different maturity classes obtained different consumer acceptance. The improved knowledge led to an innovative management of peach and nectarine fruit, from "field to market".
Abstract:
Microalgae cultures are attracting great attention in many industrial applications. However, one of the technical challenges is to cut down the capital and operational costs of microalgae production systems, reactor design and scale-up being especially difficult. The thesis opens with an overview of microalgae cultures as a possible answer to some upcoming planetary issues and of their applications in several fields. The work then offers a general outline of the state of the art of microalgae culture systems, with special attention to enclosed photobioreactors (PBRs). The overall objective of this study is to advance the knowledge of PBR design and lead to innovative large-scale processes of microalgae cultivation. An airlift flat-panel photobioreactor was designed, modelled and experimentally characterized. The gas holdup, liquid flow velocity and oxygen mass transfer of the reactor were experimentally determined and mathematically modelled, and the performance of the reactor was tested by cultivating microalgae. The model predictions correlated well with the experimental data, and high concentrations of suspended cell culture could be achieved under controlled conditions. The reactor was inoculated first with the algal strain Scenedesmus obliquus and later with Chlorella sp., and sparged with air. The reactor was operated in batch mode and monitored daily for pH, temperature, and biomass concentration and activity. The productivity of the novel device was determined, suggesting that the proposed design can be used effectively and economically in carbon dioxide mitigation technologies and in the production of algal biomass for biofuel and other bioproducts. These results support the possibility of scaling the reactor up to industrial scale based on the models employed, and the potential advantages and disadvantages of this novel industrial design are discussed.
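Oxygen mass-transfer characterization of the kind mentioned above is conventionally based on the two-film model dC/dt = kLa (C* − C), whose gassing-in solution is exponential. A small sketch with illustrative parameter values (not the thesis's measured ones):

```python
import math

def dissolved_oxygen(t, kla, c_star, c0=0.0):
    """Dissolved-oxygen concentration during a gassing-in experiment,
    from the two-film model dC/dt = kLa*(C* - C):
    C(t) = C* - (C* - C0) * exp(-kLa * t)."""
    return c_star - (c_star - c0) * math.exp(-kla * t)

# illustrative values: kLa = 0.01 1/s, saturation C* = 8 mg/L,
# starting from a fully degassed liquid
c = dissolved_oxygen(t=300, kla=0.01, c_star=8.0)
print(round(c, 2))  # 7.6 (mg/L), approaching saturation
```

Fitting this curve to measured dissolved-oxygen traces is the usual way kLa itself is extracted from experiments.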
Abstract:
Comparing latent constructs (loaded by reflective and congeneric measures) cross-culturally means studying how these unobserved variables vary, and/or covary with each other, after controlling for potentially disturbing cultural forces. This leads to the so-called 'measurement invariance' issue, which refers to the extent to which data collected by the same multi-item measurement instrument (i.e., a self-reported questionnaire of items underlying common latent constructs) are comparable across different cultural environments. As a matter of fact, it would be unthinkable to explore latent-variable heterogeneity across different populations (e.g., latent means; latent variances, i.e., latent levels of deviation from the means; latent covariances, i.e., latent levels of shared variation around the respective means; the magnitude of structural path coefficients for causal relations among latent variables) without controlling for cultural bias in the underlying measures. Furthermore, it would be unrealistic to perform this correction without a framework able to take all these potential cultural biases across populations into account simultaneously, since the real world 'acts' simultaneously as well. As a consequence, a researcher may want to control for cultural forces by hypothesizing that they all act at the same time across the groups being compared, and therefore examine whether they inflate or suppress the new estimates, using hierarchically nested constraints on the originally estimated parameters. Multi-sample Structural Equation Modeling-based Confirmatory Factor Analysis (MS-SEM-based CFA) still represents a dominant and flexible statistical framework for working out this potential cultural bias in a simultaneous way.
With this dissertation I attempt to introduce new viewpoints on measurement invariance handled under the covariance-based SEM framework, by means of a consumer behavior modeling application on functional food choices.
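Invariance hypotheses expressed as hierarchically nested constraints are typically compared with a chi-square difference (likelihood-ratio) test between the constrained and the free model. A sketch in Python with hypothetical fit statistics (the closed-form survival function below is valid for even degrees of freedom, which suffices here):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of the chi-square distribution for even df
    (closed form via the Poisson tail), enough for this illustration."""
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    return math.exp(-half) * sum(half ** j / math.factorial(j)
                                 for j in range(df // 2))

def chi2_difference_test(chisq_restricted, df_restricted,
                         chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) test between two nested
    CFA models, e.g. metric invariance (loadings constrained equal
    across groups) vs. the configural baseline (loadings free)."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    return d_chisq, d_df, chi2_sf_even_df(d_chisq, d_df)

# hypothetical two-group CFA fit statistics (illustrative numbers only)
d, ddf, p = chi2_difference_test(chisq_restricted=112.4, df_restricted=54,
                                 chisq_free=98.1, df_free=48)
print(ddf, round(p, 3))  # 6 0.026
```

A significant p here would mean the equality constraints worsen fit, i.e. the invariance hypothesis at that level is rejected.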
Abstract:
The formation of a market price for an asset can be understood as the superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable to the emergence of macroscopic properties in statistical physics, brought about by microscopic interactions between the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides the scaling behaviour, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioural patterns of financial market participants manifest themselves on short time scales; that is, the reaction to a given price history is not purely random, but rather similar price histories provoke similar reactions. Building on the study of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular.
Over nine orders of magnitude in time, these properties are also independent of the analysed market: trends that persist only for seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapses, since trends on small time scales occur far more frequently. In addition, a Monte Carlo based simulation of the financial market is analysed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in the financial market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive methods, a substantial reduction of computing time is achieved through parallelization on a graphics-card architecture. To demonstrate the wide range of application areas of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime advantages. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
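The Ising model mentioned above as a GPU benchmark can be sketched in serial form with the standard Metropolis algorithm; a compact Python version on a small periodic lattice (the GPU port parallelizes exactly these spin updates):

```python
import math
import random

def metropolis_ising(n=16, beta=0.2, sweeps=200, seed=1):
    """Metropolis Monte Carlo for the 2-D Ising model (J = 1, no field)
    with periodic boundaries; returns the magnetization per spin."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        for _ in range(n * n):
            i, j = rng.randrange(n), rng.randrange(n)
            # energy change of flipping spin (i, j): dE = 2*s_ij*sum(neighbours)
            nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                  + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
            dE = 2 * spins[i][j] * nb
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
    return sum(map(sum, spins)) / n ** 2

# at beta = 0.2, well above the critical temperature (beta_c ~ 0.44),
# the system is disordered and |m| stays small
m = metropolis_ising()
print(m)
```

On a GPU the lattice is typically updated in a checkerboard pattern so that non-interacting spins can be flipped concurrently.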
Abstract:
The main goal of this thesis is to facilitate the development of industrial automated systems by applying formal methods to ensure their reliability. A new formulation of the distributed diagnosability problem in terms of Discrete Event Systems theory and the automata framework is presented, which is then used to enforce the desired property of the system rather than just verify it. This approach tackles the state-explosion problem with modeling patterns and new algorithms aimed at the verification of the diagnosability property in the context of the distributed diagnosability problem. The concepts are validated with a newly developed software tool.
Abstract:
Body-centric communications are emerging as a new paradigm in the panorama of personal communications. Being concerned with human behaviour, they are suitable for a wide variety of applications. Advances in the miniaturization of portable devices to be placed on or around the body foster the diffusion of these systems, in which the human body is the key element defining the communication characteristics. This thesis investigates the human impact on body-centric communications in its distinctive aspects. First of all, the unique propagation environment defined by the body is described through a scenario-based channel modeling approach, according to the communication scenario considered, i.e., on-body or on-to-off-body. The novelty introduced pertains to the description of radio channel features accounting for multiple sources of variability at the same time. Secondly, the importance of a proper channel characterisation is shown by integrating the on-body channel model into a system-level simulator, allowing a more realistic comparison of different Physical and Medium Access Control layer solutions. Finally, the structure of a comprehensive simulation framework for system performance evaluation is proposed. It aims at merging into one tool the mobility and social features typical of the human being, together with the propagation aspects, in a scenario where multiple users interact, sharing space and resources.
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the worldwide forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure for evaluating earthquake forecasting models. We first present the models and the target earthquakes to be forecast, then we explain the consistency and comparison tests used in CSEP experiments to evaluate the performance of the models. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of probabilistic seismic hazard analysis (PSHA): the declustering of the seismicity rates. We first introduce the Cornell-McGuire method for PSHA and present the different motivations behind the need for declustering seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain a Poissonian behaviour of the exceedances, which is usually considered fundamental in order to transform exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods commonly used to take into account epistemic uncertainty in PSHA. The most widely used is the logic tree, which stands at the basis of the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree, and then show that this structure is not adequate to describe epistemic uncertainty. We then propose a new probabilistic framework, based on ensemble modelling, that properly accounts for epistemic uncertainties in PSHA.
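The conversion of exceedance rates into exceedance probabilities that the Poisson assumption enables is the standard PSHA relation P = 1 − exp(−λt). For example:

```python
import math

def exceedance_probability(rate, t=50.0):
    """Probability of at least one exceedance of a ground-motion level
    in t years, given its annual exceedance rate, under the Poisson
    assumption: P = 1 - exp(-rate * t)."""
    return 1.0 - math.exp(-rate * t)

# the conventional 10%-in-50-years design level corresponds to an
# annual exceedance rate of about 1/475 (a 475-year return period)
print(round(exceedance_probability(1 / 475.0, 50.0), 3))  # 0.1
```

The thesis's point is that this conversion is commonly taken to require a declustered (Poissonian) catalog, whereas Le Cam's theorem shows the Poissonian behaviour of exceedances holds more generally.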
Abstract:
The aim of this dissertation is the experimental characterization and quantitative description of the hybridization of complementary nucleic acid strands with surface-bound capture molecules, for the development of integrated biosensors. In contrast to solution-based methods, microarray substrates allow many nucleic acid combinations to be investigated in parallel. As a biologically relevant evaluation system, the actin gene, universally expressed in eukaryotes, from different plant species was used. This test system makes it possible to characterize closely related plant species on the basis of small differences in the gene sequence (SNPs). Building on this well-studied model of a housekeeping gene, a comprehensive microarray system was realized, consisting of short and long oligonucleotides (with incorporated LNA molecules), cDNAs, and DNA and RNA targets. In this way, a test system with high signal intensities, optimized for online measurement, was developed. Based on the results, the entire signal path from nucleic acid concentration to digital value was modelled. The insights into the kinetics and thermodynamics of hybridization gained from the development work and the experiments are summarized in three publications that form the backbone of this dissertation. The first publication describes the improvement in the reproducibility and specificity of microarray results through online measurement of kinetics and thermodynamics, compared with endpoint-based measurements using standard microarrays. For the evaluation of the enormous data volumes, two algorithms were developed: a reaction-kinetic modelling of the isotherms and a description of the melting transition based on Fermi-Dirac statistics. These algorithms are described in the second publication.
By realizing identical sequences in the chemically different nucleic acids (DNA, RNA and LNA), it is possible to investigate defined differences in the conformation of the ribose ring and the C5 methyl group of the pyrimidines. The competitive interaction of these different nucleic acids of identical sequence, and its effects on kinetics and thermodynamics, is the subject of the third publication. Beyond the molecular-biological and technological developments in the sensing of hybridization reactions of surface-bound nucleic acid molecules, the automated evaluation and modelling of the resulting data volumes, and the associated improved quantitative description of the kinetics and thermodynamics of these reactions, the results contribute to a better understanding of the physico-chemical structure of this most elementary of biological molecules and its still incompletely understood specificity.
Abstract:
Aerosol particles influence the climate by scattering and absorbing radiation, and by acting as nucleation seeds for cloud droplets and ice crystals. Moreover, aerosols have a strong influence on air pollution and public health. Gas-particle interactions are important processes because they affect the physical and chemical properties of aerosols, such as toxicity, reactivity, hygroscopicity and optical properties. Owing to a lack of experimental data and universal model formalisms, however, the mechanisms and kinetics of gas uptake and of the chemical transformation of organic aerosol particles are insufficiently characterized. Neither the chemical transformation nor the adverse health effects of toxic and allergenic aerosol particles, such as soot, polycyclic aromatic hydrocarbons (PAHs) and proteins, are well understood so far.
Kinetic flux models for aerosol surface and particle bulk chemistry were developed on the basis of the Pöschl-Rudich-Ammann formalism for gas-particle interactions. First, the kinetic double-layer surface model K2-SURF was developed, which describes the degradation of PAHs on aerosol particles in the presence of ozone, nitrogen dioxide, water vapour, and hydroxyl and nitrate radicals. Competitive adsorption and chemical transformation of the surface lead to a strongly non-linear dependence of the ozone uptake on the gas composition. Under atmospheric conditions, the chemical lifetime of PAHs ranges from a few minutes on soot, through several hours on organic and inorganic solids, up to days on liquid particles. Subsequently, the kinetic multi-layer model KM-SUB was developed to describe the chemical transformation of organic aerosol particles.
KM-SUB is able to explicitly resolve transport processes and chemical reactions at the surface and in the bulk of aerosol particles. In contrast to earlier models, it requires no simplifying assumptions about steady-state conditions or radial mixing. In combination with literature data and new experimental results, KM-SUB was used to elucidate the effects of interfacial and bulk transport processes on the ozonolysis and nitration of protein macromolecules, oleic acid and related organic compounds. The kinetic models developed in this study are intended to serve as a basis for the development of a detailed mechanism of aerosol chemistry, and for the derivation of simplified yet realistic parameterizations for large-scale global atmospheric and climate models.
The experiments and model calculations performed in this study provide evidence for the formation of long-lived reactive oxygen intermediates (ROIs) in the heterogeneous reaction of ozone with aerosol particles. The chemical lifetime of these intermediates exceeds 100 s, much longer than the surface residence time of molecular O3 (~10^-9 s). The ROIs explain apparent discrepancies between earlier quantum-mechanical calculations and kinetic experiments. They play a key role in the chemical transformation, as well as in the adverse health effects, of toxic and allergenic particulate-matter components such as soot, PAHs and proteins. ROIs are presumably also involved in the decomposition of ozone on mineral dust and in the formation and growth of secondary organic aerosols.
Moreover, ROIs form a link between atmospheric and biospheric multiphase processes (chemical and biological ageing). Organic compounds can exist as amorphous solids or in a semi-solid state that influences the rate of heterogeneous reactions and multiphase processes in aerosols. Flow-tube experiments show that the ozone uptake and the oxidative ageing of amorphous proteins are kinetically limited by bulk diffusion. The reactive gas uptake increases markedly with increasing humidity, which can be explained by a decrease in viscosity caused by a phase transition of the amorphous organic matrix from a glassy to a semi-solid state (a humidity-induced phase transition). The chemical lifetime of reactive compounds in organic particles can increase from seconds to days, since the diffusion rate in the semi-solid phase can drop by orders of magnitude at low temperature or low humidity. The results of this study show how semi-solid phases can influence the effects of organic aerosols on air quality, health and climate.
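The strongly non-linear dependence of ozone uptake on gas composition described above is characteristic of Langmuir-Hinshelwood kinetics, in which the surface ozone coverage saturates. A schematic sketch with purely illustrative rate parameters (not the fitted K2-SURF values):

```python
import math

def pah_half_life(o3_gas, k_s=1e-17, K=1e-13, s_max=1e14):
    """Half-life (s) of a surface-bound PAH under a schematic
    Langmuir-Hinshelwood rate law: adsorbed ozone saturates the
    surface, so the PAH loss rate is sub-linear in gas-phase O3.
    All parameter values here are illustrative, not fitted constants."""
    coverage = K * o3_gas / (1.0 + K * o3_gas)   # fractional O3 coverage
    o3_surf = s_max * coverage                   # molecules cm^-2
    return math.log(2) / (k_s * o3_surf)

# near saturation, a tenfold increase in gas-phase ozone shortens the
# lifetime by much less than tenfold
low, high = pah_half_life(1e13), pah_half_life(1e14)
print(low / high)  # well below 10: sub-linear response
```

The same saturation structure is what makes the uptake coefficient depend non-linearly on the gas mixture when several species compete for surface sites.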
Abstract:
The composition of the atmosphere is frequently perturbed by the emission of gaseous and particulate matter from natural as well as anthropogenic sources. While the impact of trace gases on the radiative forcing of the climate is relatively well understood, the role of aerosol is far more uncertain. The study of the vertical distribution of particulate matter in the atmosphere and of its chemical composition therefore contributes valuable information to bridge this gap in knowledge. The chemical composition of aerosol reveals information on properties such as radiative behavior and hygroscopicity, and therefore on cloud condensation or ice nucleus potential.
This thesis focuses on aerosol pollution plumes observed in 2008 during the POLARCAT (Polar Study using Aircraft, Remote Sensing, Surface Measurements and Models, of Climate, Chemistry, Aerosols, and Transport) campaign over Greenland in June/July and the CONCERT (Contrail and Cirrus Experiment) campaign over Central and Western Europe in October/November. Measurements were performed with an Aerodyne compact time-of-flight aerosol mass spectrometer (AMS), capable of online size-resolved chemical characterization of non-refractory submicron particles. In addition, the origins of the pollution plumes were determined by means of modeling tools. The characterized pollution episodes originated from a large variety of sources and were encountered at distinct altitudes. They included purely natural emissions from two volcanic eruptions in 2008. By the time of detection over Western Europe, between 10 and 12 km altitude, the volcanic plume was about 3 months old and composed of 71 % particulate sulfate and 21 % carbonaceous compounds. Biomass burning (BB) plumes originating from Canada and East Siberia were also observed over Greenland between 4 and 7 km altitude (free troposphere). The long-range transport took roughly one and two weeks, respectively. The aerosol was composed of 78 % organic matter and 22 % particulate sulfate.
Some Canadian and all Siberian BB plumes were mixed with anthropogenic emissions from fossil fuel combustion (FF) in North America and East Asia. It was found that the contribution of particulate sulfate increased with growing influence from anthropogenic activity and from Asia, reaching up to 37 % after more than two weeks of transport time. The most exclusively anthropogenic emission source probed in the upper troposphere was the engine exhaust of commercial airliners over Germany; however, in-situ characterization of this aerosol type during aircraft chasing was not possible. All long-range-transport aerosol was found to have an O:C ratio close to or greater than 1, implying that low-volatility oxygenated organic aerosol was present in each case, despite the variety of origins and the large range in age from 3 to 100 days. This leads to the conclusion that organic particulate matter reaches a final and uniform state of oxygenation after at least 3 days in the free troposphere.
Except for aircraft exhaust, all the emission sources mentioned above are surface-bound and thus rely on some type of vertical transport mechanism, such as direct high-altitude injection in the case of a volcanic eruption or severe BB, or uplift by convection, to reach higher altitudes where particles can travel long distances before removal, which is mainly caused by cloud scavenging. A lifetime of 7 to 11 days was derived for North American mixed BB and FF aerosol. In consequence, this means that emissions from surface point sources (e.g. volcanoes) or regions (e.g. East Asia) have a relevant impact not only on their immediate surroundings but on a hemispheric scale, including such climate-sensitive zones as the tropopause and the Arctic.