4 results for Automatic Calibration

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Relevance:

20.00%

Publisher:

Abstract:

The subject of this thesis is the development of a gas chromatography (GC) system for non-methane hydrocarbons (NMHCs) and the measurement of samples within the project CARIBIC (Civil Aircraft for the Regular Investigation of the atmosphere Based on an Instrument Container, www.caribic-atmospheric.com). Air samples collected at cruising altitude from the upper troposphere and lowermost stratosphere contain hydrocarbons at low levels (ppt range), which places substantial demands on detection limits. Full automation made it possible to maintain constant conditions during sample processing and analysis; in addition, it allows overnight operation and thus saves time. Gas chromatography with flame ionization detection (FID), combined with a dual-column approach, enables simultaneous detection with an almost equal per-carbon-atom response for all hydrocarbons except ethyne. The first part of this thesis presents technical descriptions of the individual parts of the analytical system; apart from the sample treatment and calibration procedures, the sample collector is described. The second part deals with the analytical performance of the GC system by discussing the tests that were made. Finally, results from the measurement flights are assessed in terms of data quality, and two flights are discussed in detail. Analytical performance is characterized by the detection limit and uncertainty for each compound, by tests of the calibration-mixture conditioning and of the carbon dioxide trap to determine their influence on the analyses, and finally by comparing the responses of the calibrated substances over the period during which the flight analyses were made. Comparison of the two systems shows good agreement. However, because of the insufficient capacity of the CO2 trap, the signal of one column was suppressed by carbon dioxide breakthrough to the point that its results appeared unreliable.
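Compound detection limits of the kind characterized above are commonly derived with a 3σ criterion on repeated blank measurements. A minimal sketch of that calculation; the blank peak areas and the response factor below are illustrative values, not numbers from the thesis:

```python
import statistics

def detection_limit(blank_areas, response_factor, k=3.0):
    """Estimate a detection limit as k * (sample standard deviation of
    blank peak areas), converted to a mixing ratio via the response
    factor (peak area per ppt). k = 3 is the common 3-sigma criterion."""
    sigma_blank = statistics.stdev(blank_areas)
    return k * sigma_blank / response_factor

# Hypothetical blank peak areas (arbitrary units) for one compound
blanks = [1.2, 0.9, 1.1, 1.4, 1.0]
print(detection_limit(blanks, response_factor=0.5))  # detection limit in ppt
```

A lower response factor (a less sensitive compound) directly raises the detection limit, which is why the thesis reports a separate limit per compound.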
Plausibility tests for the internal consistency of the data sets are based on common patterns exhibited by tropospheric NMHCs. These tests show that samples from the first flights do not comply with the expected patterns. In addition, detected alkene artefacts point to potential storage or contamination problems in all measurement flights. The last two flights, # 130-133 and # 166-169, pass the tests and are therefore analyzed in detail. Samples were analyzed in terms of their origin (troposphere vs. stratosphere, backward trajectories) and their photochemical aging (NMHC ratios), and detected plumes were compared to the chemical signatures of Asian outflow. In the last chapter, future development of the presented system is outlined, with a focus on separation. An extensive appendix documents all important aspects of the dissertation, from a theoretical introduction through an illustration of the sample treatment to overview diagrams for the measured flights.
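The aging diagnostics via NMHC ratios rest on the fact that hydrocarbon pairs are removed by OH at different rates, so their ratio decays with air-mass age. A minimal sketch under the standard single-OH-sink assumption; the assumed OH concentration, initial ratio, and rate constants are illustrative (the rate constants are only approximate literature-style values):

```python
import math

def photochemical_age(ratio_obs, ratio_initial, k_a, k_b, oh=1e6):
    """Photochemical age (s) from the decay of an NMHC pair ratio A/B,
    assuming removal by OH only:
        t = ln(ratio_initial / ratio_obs) / (OH * (k_a - k_b))
    k_a, k_b: OH rate constants in cm^3 molecule^-1 s^-1 (k_a > k_b);
    oh: assumed mean OH concentration in molecule cm^-3."""
    return math.log(ratio_initial / ratio_obs) / (oh * (k_a - k_b))

# Illustrative propane/ethane pair (rate constants approximate)
k_propane, k_ethane = 1.1e-12, 2.5e-13
age_s = photochemical_age(ratio_obs=0.3, ratio_initial=0.6,
                          k_a=k_propane, k_b=k_ethane)
print(age_s / 86400)  # age in days
```

A halved ratio under these assumptions corresponds to roughly nine to ten days of processing, which is the kind of argument used to separate fresh plumes from aged background air.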

Relevance:

20.00%

Publisher:

Abstract:

This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while carrying out the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of the language input, which can be characterized as a mapping of the utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it not only makes maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. Accordingly, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype of such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
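The two schematic formulas (1) and (2) describe one pass of an acquisition cycle: parse the corpus into structures, then distill those structures into an improved lexicon. A toy sketch of that cycle; all function names, the valency-frame labels, and the majority-vote revision rule are hypothetical illustrations, not the thesis's HPSG-based implementation:

```python
def parse(grammar, lexicon, corpus):
    """G + L + C -> S: map utterances to structures. Stub that just
    records which word was observed with which (hypothetical) frame."""
    return [{"word": word, "frame": frame} for word, frame in corpus]

def acquire(grammar, lexicon, structures):
    """G + L + S -> L': exploit the structures to build an improved
    lexicon. Revision of falsely acquired knowledge is modeled as a
    majority vote over the observed frames per word."""
    counts = {}
    for s in structures:
        counts.setdefault(s["word"], {}).setdefault(s["frame"], 0)
        counts[s["word"]][s["frame"]] += 1
    new_lexicon = dict(lexicon)
    for word, frames in counts.items():
        new_lexicon[word] = max(frames, key=frames.get)
    return new_lexicon

corpus = [("give", "NP_NP_NP"), ("give", "NP_NP_PP"), ("give", "NP_NP_PP")]
lexicon = {"give": "NP_NP_NP"}  # a falsely acquired entry to be revised
lexicon = acquire({}, lexicon, parse({}, lexicon, corpus))
print(lexicon)  # {'give': 'NP_NP_PP'}
```

The point of the sketch is the revision step: the old entry is overwritten once the evidence in S contradicts it, which is exactly the capability the Learn-Alpha criteria demand beyond pure accumulation.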
To illustrate four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. The work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the Learn-Alpha design rule is postulated. The second chapter outlines the theory underlying Learn-Alpha and introduces all the notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework that implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
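Challenge (c), deciding whether the system has seen 'enough' input, can be made concrete with a simple two-part criterion: enough observations in total, and a clear winner among the competing values. The thresholds and the criterion itself are a hypothetical illustration, not the thesis's actual decision procedure:

```python
def is_reliable(observations, min_count=5, min_share=0.8):
    """Hypothetical 'enough input' test for a lexical feature, e.g.
    nominal count/mass class. `observations` maps candidate values to
    how often each was seen. Returns the accepted value, or None if
    the evidence is still too sparse or too ambiguous."""
    total = sum(observations.values())
    if total < min_count:
        return None  # not enough input yet
    value, count = max(observations.items(), key=lambda kv: kv[1])
    return value if count / total >= min_share else None

print(is_reliable({"count": 9, "mass": 1}))  # 'count'
print(is_reliable({"count": 3, "mass": 2}))  # None (evidence too ambiguous)
```

Challenge (d) shows up directly in the `min_share` knob: set it too close to 1.0 and features that are inherently noisy in real language use can never be acquired at all.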

Relevance:

20.00%

Publisher:

Abstract:

Atmospheric aerosol particles affect humans and the environment in many ways. Accurate characterization of the particles helps to understand these effects and to assess their consequences. Particles can be characterized in terms of their size, shape and chemical composition. Laser ablation mass spectrometry makes it possible to determine the size and chemical composition of individual aerosol particles. In this work, the SPLAT (Single Particle Laser Ablation Time-of-flight mass spectrometer) was further developed for improved analysis of aerosol particles, atmospheric particles in particular. The aerosol inlet was optimized to transfer as wide a particle size range as possible (80 nm - 3 µm) into the SPLAT and to focus it into a narrow beam. A new description of the relation between particle size and particle velocity in vacuum was found. Alignment of the inlet was automated by means of stepper motors. Optical detection of the particles was improved so that particles smaller than 100 nm can be registered. Building on the optical detection and the automated tilting of the inlet, a new method for characterizing the particle beam was developed. The control electronics of the SPLAT were improved so that the maximum analysis frequency is limited only by the ablation laser, which can ablate at no more than about 10 Hz. Optimization of the vacuum system reduced ion loss in the mass spectrometer by a factor of 4.

Besides these hardware developments of the SPLAT, a large part of this work consisted of designing and implementing a software solution for analyzing the raw data acquired with the SPLAT. CRISP (Concise Retrieval of Information from Single Particles) is a software package built on IGOR PRO (Wavemetrics, USA) that allows efficient evaluation of the single-particle raw data.
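The size-velocity relation mentioned above is, in practice, established by measuring the vacuum velocity of calibration particles of known diameter and fitting a curve. The thesis's new description is not reproduced here; the calibration pairs and the simple power-law model below are purely illustrative:

```python
import math

# Illustrative calibration pairs: diameter (nm) of size standards
# vs. measured particle velocity in vacuum (m/s)
cal = [(100, 180.0), (300, 150.0), (800, 120.0), (2000, 95.0)]

# Least-squares fit of log(d) = a + b * log(v)  (power-law model)
xs = [math.log(v) for _, v in cal]
ys = [math.log(d) for d, _ in cal]
n = len(cal)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def size_from_velocity(v):
    """Particle diameter (nm) estimated from its vacuum velocity."""
    return math.exp(a + b * math.log(v))

print(size_from_velocity(150.0))  # falls near the 300 nm calibration point
```

Slower particles map to larger diameters (b is negative here), which is the qualitative behavior any such calibration must reproduce; the actual functional form found in the thesis may differ.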
CRISP contains a newly developed algorithm for the automatic mass calibration of every individual mass spectrum, including suppression of noise and of problems with signals that exhibit intense tailing. CRISP provides methods for the automatic classification of the particles; implemented are k-means, fuzzy c-means and a form of hierarchical clustering based on a minimum spanning tree. CRISP offers the option of preprocessing the data so that the automatic classification runs faster and produces higher-quality results. In addition, CRISP can easily sort particles according to predefined criteria. The data structures and infrastructure underlying CRISP were designed with maintainability and extensibility in mind.

In the course of this work, the SPLAT was successfully deployed in several campaigns, and the capabilities of CRISP were demonstrated on the data sets obtained.

The SPLAT can now be operated efficiently in the field to characterize atmospheric aerosol, while CRISP enables fast and targeted evaluation of the data.
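Of the classification methods named above, k-means is the simplest: each particle's mass spectrum is treated as an intensity vector and assigned to the nearest cluster center. A toy sketch of the idea on hand-made vectors; this is a generic textbook k-means, not CRISP's IGOR Pro implementation:

```python
import random

def kmeans(spectra, k, iters=20, seed=0):
    """Plain k-means on mass-spectrum intensity vectors: alternate
    between assigning each spectrum to its nearest center (squared
    Euclidean distance) and recomputing centers as cluster means."""
    rng = random.Random(seed)
    centers = rng.sample(spectra, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in spectra:
            dists = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centers]
            clusters[dists.index(min(dists))].append(s)
        for i, cl in enumerate(clusters):
            if cl:  # keep old center if a cluster went empty
                centers[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers, clusters

# Toy intensity vectors: two clearly distinct particle types
spectra = [[9, 1, 0], [8, 2, 1], [1, 0, 9], [0, 1, 8]]
centers, clusters = kmeans(spectra, k=2)
print([len(c) for c in clusters])  # [2, 2]
```

The preprocessing CRISP offers (e.g. reducing each spectrum before clustering) pays off precisely because this assignment loop is run over every particle in every iteration.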

Relevance:

20.00%

Publisher:

Abstract:

The accretionary shells of bivalve mollusks can record environmental information such as water temperature, precipitation, freshwater fluxes, primary productivity and anthropogenic activities, in the form of variable growth rates and variable geochemical properties such as stable oxygen and carbon isotopes. However, paleoenvironmental reconstructions are constrained by uncertainties about isotopic equilibrium fractionation during shell formation, which is generally acknowledged to be a reasonable assumption for bivalves but has been disputed for several species. Furthermore, the variation in shell growth rates is accepted to depend on multiple environmental variables, such as temperature, food availability and salinity, but can differ from species to species. It is therefore necessary to perform species-specific calibration studies for both the isotope proxies and the shell growth rates before they can be used with confidence for environmental interpretations of the past. Accordingly, the principal objective of this Ph.D. research is to examine the reliability as paleoenvironmental proxy archives of three selected bivalve species: the long-lived Eurhomalea exalbida (Dillwyn), the short-lived and fast-growing Paphia undulata (Born 1778), and the freshwater mussel Margaritifera falcata (Gould 1850).

The first part focuses on δ18Oshell and the shell growth history of live-collected E. exalbida from the Falkland Islands. The most remarkable finding is that E. exalbida formed its shell with an offset of -0.48‰ to -1.91‰ from the expected oxygen isotopic equilibrium with the ambient water. If this remained unnoticed, paleotemperature estimates would overestimate actual water temperatures by 2.1-8.3°C. With increasing ontogenetic age, the discrepancy between measured and reconstructed temperatures increased exponentially, irrespective of the seasonally varying shell growth rates.
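The link between the isotopic offset and the temperature bias can be sketched with a linear aragonite paleotemperature equation of the Grossman & Ku type; whether the thesis uses exactly these coefficients is an assumption, and the example offset is hypothetical:

```python
def paleotemperature(d18o_shell, d18o_water, offset=0.0):
    """Water temperature (deg C) from shell delta-18-O via a linear
    aragonite equation of the Grossman & Ku type (coefficients are an
    assumption, not taken from the thesis):
        T = 20.6 - 4.34 * (d18o_shell - offset - d18o_water)
    `offset` is the species-specific disequilibrium offset in permil
    (negative for E. exalbida according to the abstract)."""
    return 20.6 - 4.34 * (d18o_shell - offset - d18o_water)

shell, water = 1.0, 0.0
naive = paleotemperature(shell, water)                    # equilibrium assumed
corrected = paleotemperature(shell, water, offset=-1.0)   # hypothetical -1 permil
print(naive - corrected)  # 4.34: each -1 permil of offset biases T by +4.34 deg C
```

With a slope of about 4.3 °C per permil, offsets of -0.48‰ to -1.91‰ translate to warm biases of roughly 2.1-8.3 °C, consistent with the range quoted in the abstract.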
This study clearly demonstrates that, when the disequilibrium fractionation effect is taken into account, E. exalbida can serve as a high-resolution paleoclimate archive for southern South America. The species thus provides quantifiable temperature estimates, which yield new insights into long-term paleoclimate dynamics at mid to high latitudes of the southern hemisphere.

The stable carbon isotope composition of biogenic carbonates is generally considered useful for reconstructing seawater dissolved inorganic carbon. The δ13Cshell composition of E. exalbida was therefore investigated in the second part of this study, which focuses on inter-annual and intra-annual variations in δ13Cshell. Environmental records in δ13Cshell turn out to be strongly obscured by changes in shell growth rate, even after the decreasing ontogenetic trend is removed. This suggests that δ13Cshell in E. exalbida may not be useful as an environmental proxy, but may be a potential tool for ecological investigations.

In addition to long-lived bivalve species, short-lived species that secrete their shells extremely fast can also be useful for environmental reconstructions, especially as high-resolution recorders. Therefore, P. undulata from Daya Bay, South China Sea, was used in Chapter 4 to evaluate and establish a potential proxy archive for past variations of the East Asian monsoon on shorter time scales. The δ18Oshell can provide qualitative estimates of the amount of monsoonal rain and terrestrial runoff, and the δ13Cshell likely reflects the relative amount of isotopically light terrestrial carbon that reaches the ocean during the summer monsoon season. Shells of P. undulata can therefore provide serviceable proxy archives for reconstructing the frequency of exceptional summer monsoons in the past.
The relative strength of monsoon-related precipitation, and the associated changes in ocean salinity and in the δ13C signature of the dissolved inorganic carbon (δ13CDIC), can be estimated from the δ18Oshell and δ13Cshell values as well as from shell growth patterns.

In the final part, the freshwater pearl mussel M. falcata from four rivers in British Columbia, Canada, was studied in a preliminary fashion with regard to lifespans and shell growth rates. Two groups, separated by the Georgia Strait, can be clearly distinguished: specimens from the western group exhibit a shorter lifespan, while the eastern group lives longer. Moreover, the average lifespan seems to decrease from south to north. The computed growth equations of the eastern and western groups differ as well: the western group exhibits a lower growth rate, while bivalves from the eastern group grow faster. Land use history seems to be responsible for the differences in lifespan between the two groups. Differences in growth rate may be induced by differences in water temperature or in nutrient input, also related to land use activities.
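Growth equations for bivalves are commonly expressed as von Bertalanffy growth functions fitted to shell size at each annual increment; that this is the exact model computed in the thesis is an assumption, and the parameters below are illustrative, not the fitted values for the two M. falcata groups:

```python
import math

def von_bertalanffy(t, l_inf, k, t0=0.0):
    """Von Bertalanffy growth function, a standard model for bivalve
    shell growth: L(t) = L_inf * (1 - exp(-k * (t - t0))).
    l_inf: asymptotic size; k: growth constant (1/yr); t in years."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Illustrative parameters: a slower (western) and faster (eastern) group
ages = range(0, 41, 10)
west = [von_bertalanffy(t, l_inf=90.0, k=0.08) for t in ages]
east = [von_bertalanffy(t, l_inf=90.0, k=0.15) for t in ages]
print([round(l, 1) for l in west])
print([round(l, 1) for l in east])
```

With equal asymptotic size, the group with the larger k approaches that size sooner at every age, which is the kind of difference the comparison of the eastern and western growth equations describes.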