12 results for Element free Galerkin method

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Relevance:

100.00%

Publisher:

Abstract:

In general, adaptive mesh refinement allows the efficiency of numerical simulations to be increased without significantly degrading the accuracy of the result. However, it has not yet been investigated in which regions of the computational domain the spatial resolution can actually be coarsened without significantly affecting the accuracy of the result. This question is examined here for a concrete example of dry atmospheric convection, namely the simulation of warm air bubbles. For this purpose a novel numerical model is developed that is tailored to this specific application. The compressible Euler equations are solved with a discontinuous Galerkin method. Time integration is done with a semi-implicit method, and the dynamic adaptivity uses space-filling curves by means of the function library AMATOS. The numerical model is validated with a convergence study and five standard test cases. A method for comparing the accuracy of simulations with different refinement regions is introduced that does not require the existence of an exact solution. Essentially, this is done by comparing properties of the solution that depend strongly on the spatial resolution used. In the case of a rising warm air bubble, the additional numerical error introduced by the adaptivity is smaller than 1% of the total numerical error as long as the adaptive simulation uses more than 50% of the elements of a uniform high-resolution simulation. At the same time, the adaptive simulation is almost twice as fast as the uniform one.
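The comparison idea lends itself to a compact illustration. The following sketch is a hypothetical rendering (the feature values and function names are invented for illustration, and this is not AMATOS code): the deviation of a coarse uniform run from the high-resolution run stands in for the total numerical error, against which the adaptive run's deviation is measured.

```python
# Hypothetical sketch: estimating the extra error introduced by adaptivity
# without an exact solution, by comparing a resolution-sensitive solution
# feature between a uniform high-resolution run, an adaptive run and a
# uniform coarse run. All numbers below are invented placeholders.

def extra_adaptivity_error(feature_uniform, feature_adaptive, feature_coarse):
    """Relative size of the adaptivity error against the total numerical error.

    The coarse run's deviation from the high-resolution run approximates
    the total numerical error at low resolution.
    """
    total_error = abs(feature_coarse - feature_uniform)
    adaptivity_error = abs(feature_adaptive - feature_uniform)
    return adaptivity_error / total_error

# e.g. maximum vertical velocity of the rising warm-air bubble at some time
ratio = extra_adaptivity_error(feature_uniform=9.81,
                               feature_adaptive=9.80,
                               feature_coarse=9.20)
print(f"adaptivity error is {100 * ratio:.1f}% of the total numerical error")
```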

Relevance:

100.00%

Publisher:

Abstract:

In this work a new dynamical core is developed and integrated into the existing numerical weather prediction system COSMO. Discontinuous Galerkin (DG) methods are used for the spatial discretization and Runge-Kutta methods for the time integration. This makes high-order schemes easy to realize and provides local conservation of the prognostic variables. The dynamical core developed here uses terrain-following coordinates in conservation form for the modelling of orography and couples the DG method with a Kessler scheme for warm precipitation. In contrast to the usual practice, the fall velocity of rain is not discretized implicitly within the Kessler scheme but explicitly within the dynamical core. As a result, the time steps of the parameterization for the phase changes of water and of the dynamics are fully decoupled, so that very large time steps can be used for the parameterization. The coupling is realized both for operator splitting and for process splitting.

The convergence and the global conservation properties of the newly developed dynamical core are validated with idealized test cases. Mass is conserved globally up to machine precision. The modelling of orography is validated with flow over mountains. The combination of DG methods and terrain-following coordinates used here allows steeper mountains to be handled than is possible with the finite-difference-based dynamical core of COSMO. It is shown when the full tensor-product basis and when the minimal basis is advantageous. The influence of the order of the scheme, the parameterization time step and the splitting strategy on the simulation result is investigated. Finally it is shown that, for the same time step, the DG methods are competitive with finite-difference methods in terms of run time owing to their better scalability.
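The difference between the two coupling strategies can be sketched with a toy scalar model (the right-hand sides, step sizes and variable names below are invented stand-ins, not the COSMO equations):

```python
# Minimal sketch of operator splitting vs. process splitting for coupling a
# fast dynamics step with a slow (Kessler-like) microphysics step whose time
# step is an integer multiple of the dynamics step. Toy right-hand sides only.

def dyn_tendency(q):
    return -0.1 * q                    # toy "dynamics": slow decay

def mp_tendency(q):
    return -max(q - 1.0, 0.0)          # toy "microphysics": relax excess water

def operator_splitting(q, dt, n_sub):
    """Sequential splitting: dynamics substeps first, then one large
    microphysics step over the whole interval."""
    for _ in range(n_sub):
        q += dt * dyn_tendency(q)
    q += n_sub * dt * mp_tendency(q)
    return q

def process_splitting(q, dt, n_sub):
    """Process splitting: the microphysics tendency is evaluated once and
    applied simultaneously with the dynamics during the substeps."""
    t_mp = mp_tendency(q)
    for _ in range(n_sub):
        q += dt * (dyn_tendency(q) + t_mp)
    return q

q0 = 2.0
print(operator_splitting(q0, dt=0.1, n_sub=10))
print(process_splitting(q0, dt=0.1, n_sub=10))
```

The two variants generally give slightly different results for the same time steps, which is why the abstract reports a study of the influence of the splitting strategy.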

Relevance:

100.00%

Publisher:

Abstract:

At the heart of this thesis is the proof of existence and uniqueness of quadrature formulas that are suitable for the qualocation method. The latter is a method developed by Sloan, Wendland and Chandler for the numerical treatment of boundary integral equations on smooth curves (more generally: periodic pseudodifferential equations). It attains the same convergence orders as the Petrov-Galerkin method when quadrature formulas determined by the operator are used. First the pseudodifferential operators treated here and the qualocation method are introduced. Then a theory of the existence and uniqueness of quadrature formulas is developed. An essential tool for this is the generalization, proved here, of a theorem of Nürnberger on the existence and uniqueness of quadrature formulas with positive weights that are exact for Chebyshev spaces. It is finally shown that there always exist uniquely determined quadrature formulas satisfying the conditions formulated in the papers of Sloan and Wendland. Furthermore, 2-point quadrature formulas are determined for so-called simple operators, with which the qualocation method with a test space of piecewise constant functions attains a higher order of convergence. It is also shown that for non-simple operators there is in general no quadrature formula for which the order of convergence is higher than for the Petrov-Galerkin method. The last chapter contains numerical tests with operators with constant and variable coefficients which confirm the theoretical results of the preceding chapters.
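For orientation, the classical form of such existence and uniqueness statements for quadrature with positive weights on Chebyshev spaces reads schematically as follows (this is the textbook shape of the result, not the thesis's exact formulation): for a Chebyshev space $T$ of dimension $2k$ on $[a,b]$ there exist unique nodes $a < x_1 < \dots < x_k < b$ and weights $w_j > 0$ such that

\[
\int_a^b f(x)\,dx \;=\; \sum_{j=1}^{k} w_j\, f(x_j) \qquad \text{for all } f \in T.
\]

The generalization proved in the thesis extends this type of statement to the quadrature rules required by the qualocation method.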

Relevance:

30.00%

Publisher:

Abstract:

In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models the flow of charged particles, that is, of negatively charged electrons and of so-called holes, quasi-particles of positive charge, as well as their energy distributions are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling and by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the continuous discretization of the normal fluxes is the most important property of this discretization. It is proven that under certain assumptions on the triangulation the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and a comparison of different estimators is performed at that stage. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
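The transferred dual weighted residual idea can be summarized by the standard error representation (schematic, in the usual DWR notation rather than the exact definitions of this work): with the discrete solution $u_h$, the solution $z$ of a dual problem driven by the output functional $J$, an interpolant $i_h z$, and the weak residual $\rho(u_h)(\cdot)$,

\[
J(u) - J(u_h) \;\approx\; \rho(u_h)(z - i_h z) \;=\; \sum_{K \in \mathcal{T}_h} \eta_K ,
\]

so the cell contributions $\eta_K$ weight the local residuals with the dual solution and can steer refinement toward the error in the output of interest, e.g. the current through a contact.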

Relevance:

30.00%

Publisher:

Abstract:

Within this PhD thesis several methods were developed and validated which are suitable for environmental samples and for material science, and which should be applicable for the monitoring of particular radionuclides and for the analysis of the chemical composition of construction materials in the frame of the ESS project. The study demonstrated that ICP-MS is a powerful analytical technique for the ultrasensitive determination of 129I, 90Sr and lanthanides in both artificial and environmental samples such as water and soil. In particular, ICP-MS with a collision cell allows measuring extremely low isotope ratios of iodine. It was demonstrated that 129I/127I isotope ratios as low as 10^-7 can be measured with an accuracy and precision suitable for distinguishing sample origins. ICP-MS with a collision cell, in particular in combination with cool plasma conditions, reduces the influence of isobaric interferences on m/z = 90 and is therefore well suited for 90Sr analysis in water samples. However, the ICP-CC-QMS applied in this work is limited for the measurement of 90Sr due to the tailing of 88Sr+ and in particular to Daly detector noise. Hyphenation of capillary electrophoresis with ICP-MS was shown to resolve atomic ions of all lanthanides from polyatomic interferences. The elimination of polyatomic and isobaric ICP-MS interferences was also accomplished without compromising the sensitivity by use of the high resolution mode available on ICP-SFMS. The combination of laser ablation with ICP-MS allowed direct micro and local uranium isotope ratio measurements at ultratrace concentrations on the surface of biological samples. In particular, the application of a cooled laser ablation chamber improves the precision and accuracy of uranium isotope ratio measurements by up to one order of magnitude in comparison to a non-cooled laser ablation chamber. In order to reduce the quantification problem, an on-line solution-based calibration with a single gas flow was built by inserting a microflow nebulizer DS-5 directly into the laser ablation chamber. A micro local method to determine the lateral element distribution on a NiCrAlY-based alloy and coating after oxidation in air was tested and validated. Calibration procedures involving external calibration, quantification by relative sensitivity coefficients (RSCs) and solution-based calibration were investigated. The analytical method was validated by comparison of the LA-ICP-MS results with data acquired by EDX.
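Measuring ratios as extreme as 129I/127I ≈ 10^-7 requires correcting the instrumental mass bias; a common recipe in isotope ratio ICP-MS is the exponential law, sketched below (the measured ratio and the beta factor are invented placeholders, only the isotope masses are real values):

```python
# Hedged sketch: exponential-law mass-bias correction, a standard step in
# isotope ratio mass spectrometry. Not code from the thesis.

def mass_bias_corrected(ratio_measured, mass_heavy, mass_light, beta):
    """Exponential law: R_true = R_measured * (m_heavy / m_light) ** beta."""
    return ratio_measured * (mass_heavy / mass_light) ** beta

# beta would in practice be determined from a certified reference material
r = mass_bias_corrected(ratio_measured=1.02e-7,
                        mass_heavy=128.904984,   # atomic mass of 129I
                        mass_light=126.904473,   # atomic mass of 127I
                        beta=-1.8)               # placeholder value
print(f"corrected 129I/127I = {r:.3e}")
```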

Relevance:

30.00%

Publisher:

Abstract:

Tethered bilayer lipid membranes (tBLMs) are a promising model system for the natural cell membrane. They consist of a lipid bilayer that is covalently coupled to a solid support via a spacer group. In this study, we developed a suitable approach to increase the submembrane space in tBLMs. The challenge is to create a membrane with a lower lipid density in order to increase the membrane fluidity, but to avoid defects that might appear due to an increase in the lateral space within the tethered monolayers. Therefore, various synthetic strategies and different monolayer preparation techniques were examined. Synthetic attempts to achieve a large ion reservoir were made in two directions: increasing the spacer length of the tether lipids and increasing the lateral distribution of the lipids in the monolayer. The first resulted in the synthesis of a small library of tether lipids (DPTT, DPHT and DPOT) characterized by 1H and 13C NMR, FD-MS, ATR, DSC and TGA. The synthetic strategy for their preparation includes the synthesis of a precursor with a double-bond anchor that can easily be modified for different substrates (e.g. metal and metal oxide). Here, the double bond was modified into a thiol group suitable for gold surfaces. Another approach towards the preparation of homogeneous monolayers with decreased two-dimensional packing density was the synthesis of two novel anchor lipids: DPHDL and DDPTT. DPHDL is a “self-diluted” tether lipid containing two lipoic anchor moieties. DDPTT has an extended lipophilic part that should lead to the preparation of diluted, leakage-free proximal layers that facilitate the completion of the bilayer. Our tool-box of tether lipids was completed with two fluorescently labeled lipid precursors with one and two phytanyl chains, respectively, in the hydrophobic region and a dansyl group as a fluorophore. The use of such fluorescently marked lipids is expected to give additional information on the lipid distribution at the air-water interface. The Langmuir film balance was used to investigate the monolayer properties of four of the synthesized thiolated anchor lipids. The packing density and mixing behaviour were examined. The results have shown that mixing anchor lipids with free lipids can homogeneously dilute the anchor lipid monolayers. Moreover, an increase in the hydrophilicity (PEG chain length) of the anchor lipids leads to a higher packing density; a decrease in temperature results in a similar trend. However, increasing the number of phytanyl chains per lipid molecule is shown to decrease the packing density. LB monolayers based on pure and mixed lipids at different ratios and transfer pressures were tested to form tBLMs with diluted inner layers. A combination of the LB monolayer transfer with the solvent exchange method successfully accomplished the formation of tBLMs based on pure DPOT. Some preliminary investigations of the electrical sealing properties and the protein incorporation of self-assembled DPOT- and DDPTT-based tBLMs were conducted. Bilayer formation performed by solvent exchange resulted in membranes with high resistances and low capacitances. The appearance of a space beneath the membrane is clearly visible in the impedance spectra, expressed by a second RC element. This leads to the conclusion that the longer spacer in DPOT and the larger lateral space between the DDPTT molecules in the investigated systems essentially influence the electrical parameters of the membrane.
Finally, we could show the functional incorporation of the small ion carrier valinomycin in both types of membranes.
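The mention of a second RC element refers to equivalent-circuit fitting of the impedance spectra. A minimal sketch of such a two-RC model follows, with invented parameter values rather than fitted data from this work:

```python
# Hedged sketch: impedance of an equivalent circuit of the kind often used
# for tBLMs, a series resistance R0 followed by two RC elements (e.g. the
# membrane and the submembrane space). Parameter values are illustrative.
import numpy as np

def z_two_rc(omega, r0, r1, c1, r2, c2):
    """Z(w) = R0 + R1/(1 + j*w*R1*C1) + R2/(1 + j*w*R2*C2)."""
    z1 = r1 / (1 + 1j * omega * r1 * c1)
    z2 = r2 / (1 + 1j * omega * r2 * c2)
    return r0 + z1 + z2

omega = 2 * np.pi * np.logspace(-1, 5, 7)       # 0.1 Hz .. 100 kHz
z = z_two_rc(omega, r0=100.0, r1=1e6, c1=0.8e-6, r2=5e5, c2=5e-6)
for w, zi in zip(omega, z):
    print(f"f = {w / (2 * np.pi):9.1f} Hz   |Z| = {abs(zi):12.1f} Ohm")
```

A membrane without a submembrane reservoir would be fitted with a single RC element; the need for the second one is what signals the additional space beneath the membrane.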

Relevance:

30.00%

Publisher:

Abstract:

This PhD thesis concerns geochemical constraints on recycling and partial melting of Archean continental crust. A natural example of such processes was found in the Iisalmi area of Central Finland. The rocks from this area are Middle to Late Archean in age and experienced metamorphism and partial melting between 2.7-2.63 Ga. The work is based on extensive field work and is furthermore founded on bulk rock geochemical data as well as on in-situ analyses of minerals. All geochemical data were obtained at the Institute of Geosciences, University of Mainz, using X-ray fluorescence, solution ICP-MS and laser ablation-ICP-MS for bulk rock geochemical analyses. Mineral analyses were accomplished by electron microprobe and laser ablation-ICP-MS. Fluid inclusions were studied by microscope on a heating-freezing stage at the Geoscience Center, University of Göttingen. Part I focuses on the development of a new analytical method for bulk rock trace element determination by laser ablation-ICP-MS using homogeneous glasses fused from rock powder on an iridium strip heater. This method is applicable for mafic rock samples whose melts have low viscosities and homogenize quickly at temperatures of ~1200°C. Highly viscous melts of felsic samples prevent melting and homogenization at comparable temperatures. Fusion of felsic samples can be enabled by addition of MgO to the rock powder and adjustment of melting temperature and melting duration to the rock composition. Advantages of the fusion method are low detection limits compared to XRF analysis, the avoidance of wet-chemical processing and of strong acids as used in solution ICP-MS, and smaller sample volumes compared to the other methods. Part II of the thesis uses bulk rock geochemical data and results from fluid inclusion studies for the discrimination of melting processes observed in different rock types. Fluid inclusion studies demonstrate a major change in fluid composition from CO2-dominated fluids in granulites to aqueous fluids in TTG gneisses and amphibolites. Partial melts were generated in the dry, CO2-rich environment by dehydration melting reactions of amphibole, which in addition to tonalitic melts produced the anhydrous mineral assemblages of granulites (grt + cpx + pl ± amph or opx + cpx + pl + amph). Trace element modeling showed that mafic granulites are residues of 10-30 % melt extraction from amphibolitic precursor rocks. The maximum degree of melting in intermediate granulites was ~10 %, as inferred from the modal abundances of amphibole, clinopyroxene and orthopyroxene. Carbonic inclusions are absent in upper-amphibolite facies migmatites, whereas aqueous inclusions with up to 20 wt% NaCl are abundant. This suggests that melting within TTG gneisses and amphibolites took place in the presence of an aqueous fluid phase that enabled melting at the wet solidus at temperatures of 700-750°C. The strong disruption of pre-metamorphic structures in some outcrops suggests that the maximum amount of melt in TTG gneisses was ~25 vol%. The presence of leucosomes in all rock types is taken as the principal evidence for melt formation. However, the mineralogical appearance as well as the major and trace element composition of many leucosomes imply that leucosomes seldom represent frozen in-situ melts. They are better considered as remnants of the melt channel network, i.e. pathways along which melts escaped from the system.
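Melt-extraction estimates of this kind commonly rest on the batch melting equation, quoted here in its textbook form (the modeling in the thesis may differ in detail): for a bulk solid/melt partition coefficient $D$ and a melt fraction $F$,

\[
\frac{C_L}{C_0} = \frac{1}{D + F(1-D)}, \qquad \frac{C_R}{C_0} = \frac{D}{D + F(1-D)},
\]

where $C_0$, $C_L$ and $C_R$ are the concentrations of a trace element in the source, in the melt and in the residue. Comparing computed residue compositions as a function of $F$ with the measured granulites yields estimates such as the quoted 10-30 % melt extraction.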
Part III of the thesis describes how analyses of minerals from a specific rock type (granulite) can be used to determine partition coefficients between different minerals, and between minerals and melt, suitable for lower crustal conditions. The trace element analyses by laser ablation-ICP-MS show a coherent distribution among the principal mineral phases independent of rock composition. REE contents in amphibole are about 3 times higher than REE contents in clinopyroxene from the same sample. This consistency has to be taken into consideration in models of lower crustal melting in which amphibole is replaced by clinopyroxene in the course of melting. A lack of equilibrium is observed between matrix clinopyroxene / amphibole and garnet porphyroblasts, which suggests late-stage growth of garnet and slow diffusion and equilibration of the REE during metamorphism. The data provide a first set of distribution coefficients of the transition metals (Sc, V, Cr, Ni) in the lower crust. In addition, analyses of ilmenite and apatite demonstrate the strong influence of accessory phases on trace element distribution. Apatite contains high amounts of REE and Sr, while ilmenite incorporates about 20-30 times higher amounts of Nb and Ta than amphibole. Furthermore, the trace element mineral analyses provide evidence for magmatic processes such as melt depletion, melt segregation, accumulation and fractionation, as well as for metasomatism, having operated in this high-grade anatectic area.
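The reported amphibole/clinopyroxene consistency can be expressed as a two-mineral partition coefficient, written here in standard notation (not thesis-specific symbols):

\[
D_{\mathrm{REE}}^{\mathrm{amph/cpx}} \;=\; \frac{C_{\mathrm{REE}}^{\mathrm{amph}}}{C_{\mathrm{REE}}^{\mathrm{cpx}}} \;\approx\; 3,
\]

and the mineral/melt coefficients $D_i^{\mathrm{min/melt}} = C_i^{\mathrm{min}} / C_i^{\mathrm{melt}}$ follow once a melt composition is constrained.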

Relevance:

30.00%

Publisher:

Abstract:

In this thesis a mathematical model is derived that describes the charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the status quo of semiconductor device simulation methods and a brief review of the historical development, attention shifts to the construction of the model that serves as the basis of the subsequent derivations. The starting point is an important equation from the theory of dilute gases. From this equation the model equations are derived and specified by means of a series expansion method; this multi-stage derivation is mainly taken from a scientific paper and does not constitute the focus of this thesis. In the following part we specify the mathematical setting and make the model assumptions precise, making use of methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem, whereas the theory of scalar elliptic equations is by now well established. Subsequently, we address the numerical discretization of the equations. A special finite-element method is used for the discretization; this particular approach is necessary to make the numerical results appropriate for practical application. Through a series of transformations of the discrete model we derive a system of algebraic equations suitable for numerical evaluation. Using computer programs developed for this work, we solve the equations to obtain approximate solutions. These programs are based on new and specialized iteration procedures that were developed and thoroughly tested within the frame of this research; owing to their importance and novelty, they are explained and demonstrated in detail. We compare these new iterations with a standard method, complemented by a feature that fits it into the current context. A further innovation is the computation of solutions in three-dimensional domains, which are still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have justifiable computational complexity. Simulation results for some models of contemporary semiconductor devices are shown and commented on in detail. Finally, we give an outlook on future developments and enhancements of the models and of the algorithms used.
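A widespread standard for decoupling such coupled semiconductor equations is a Gummel-type fixed-point iteration; whether this matches the "standard method" referred to above is an assumption on our part. A minimal toy sketch of the pattern:

```python
# Hedged sketch of a Gummel-type decoupled iteration, here reduced to two
# scalar fixed-point maps. The solve_* functions are toy stand-ins, not the
# specialized iteration procedures developed in the thesis.

def solve_potential(n):
    return 0.5 * n + 1.0           # toy "Poisson solve" for fixed density n

def solve_density(phi):
    return 0.8 * phi               # toy "continuity solve" for fixed potential

def gummel(n0, tol=1e-12, max_iter=100):
    n = n0
    for it in range(max_iter):
        phi = solve_potential(n)    # freeze density, solve for potential
        n_new = solve_density(phi)  # freeze potential, solve for density
        if abs(n_new - n) < tol:
            return n_new, phi, it + 1
        n = n_new
    raise RuntimeError("Gummel iteration did not converge")

n, phi, iters = gummel(n0=1.0)
print(f"converged in {iters} iterations: n = {n:.6f}, phi = {phi:.6f}")
```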

Relevance:

30.00%

Publisher:

Abstract:

Antibody microarrays are of great research interest because of their potential application as biosensors for high-throughput protein and pathogen screening technologies. In this active area, there is still a need for novel structures and assemblies that provide insight into binding interactions, such as spherical and annulus-shaped protein structures, e.g. for the utilization of curved surfaces for enhanced protein-protein interactions and the detection of antigens. Therefore, the goal of the presented work was to establish a new technique for the label-free detection of biomolecules and bacteria on topographically structured surfaces suitable for antibody binding.

In the first part of the presented thesis, the fabrication of monolayers of inverse opals with 10 μm diameter and the immobilization of antibodies on their interior surface is described. For this purpose, several established methods for the linking of antibodies to glass, including Schiff bases, EDC/S-NHS chemistry and the biotin-streptavidin affinity system, were tested. The employed methods included immunofluorescence and image analysis by phase contrast microscopy. It could be shown that these methods were not successful in terms of antibody immobilization and subsequent bacteria binding. Hence, a method based on the application of an active-ester silane was introduced. It showed promising results but also the need for further analysis. In particular, the search for alternative antibodies addressing other antigens on the exterior of bacteria will be pursued in the future.

Building on the ability to control antibody-functionalized surfaces, a new technique is presented that employs colloidal templating to yield large-scale (~cm2) 2D arrays of antibodies against E. coli K12, eGFP and human integrin αvβ3 on a versatile glass surface. The antibodies were swept to reside around the templating microspheres during solution drying and physisorbed on the glass. After removing the microspheres, the formation of annulus-shaped antibody structures was observed. The preserved antibody structure and functionality is shown by binding of the specific antigens and secondary antibodies. The improved detection of specific bacteria from a crude solution compared to conventional “flat” antibody surfaces is demonstrated, as is the setting up of an integrin-binding platform for targeted recognition and surface interactions of eukaryotic cells. The structures were investigated by atomic force, confocal and fluorescence microscopy. Operational parameters like drying time, temperature, humidity and surfactants were optimized to obtain a stable antibody structure.

Relevance:

30.00%

Publisher:

Abstract:

This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while they are fulfilling the tasks for which they have been deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1) The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2) Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. So, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of the findings, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
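The mappings (1) and (2) amount to a feedback loop between parsing and lexicon update. A toy sketch of that loop follows (all data structures and functions are invented stand-ins, not the HPSG system used in the thesis):

```python
# Hedged sketch of the feedback loop behind formulas (1) and (2): parsing
# produces structures S from grammar G, lexicon L and corpus C, and the
# learner turns S into an improved lexicon L'. Toy representations only.

def parse(grammar, lexicon, corpus):
    """(1) G + L + C -> S: collect (word, feature) evidence for unknown words."""
    return [(w, grammar.get(w, "unknown")) for w in corpus if w not in lexicon]

def learn(lexicon, structures):
    """(2) G + L + S -> L': add entries; a real system would also revise them."""
    updated = dict(lexicon)
    for word, feature in structures:
        updated[word] = feature
    return updated

grammar = {"sleeps": "intransitive", "sees": "transitive"}
lexicon = {"cat": "noun"}
corpus = ["cat", "sleeps", "sees"]
lexicon = learn(lexicon, parse(grammar, lexicon, corpus))
print(lexicon)
```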

Relevance:

30.00%

Publisher:

Abstract:

Nitrogen is an essential nutrient: for humans, animals and plants it is a constituent element of proteins and nucleic acids. Although the majority of the Earth's atmosphere consists of elemental nitrogen (N2, 78 %), only a few microorganisms can use it directly. To be useful for higher plants and animals, elemental nitrogen must be converted to a reactive oxidized form. This conversion happens within the nitrogen cycle through free-living microorganisms, symbiotic Rhizobium bacteria or lightning. Humans have been able to synthesize reactive nitrogen through the Haber-Bosch process since the beginning of the 20th century. As a result, the food security of the world population could be improved noticeably. On the other hand, the increased nitrogen input results in acidification and eutrophication of ecosystems and in a loss of biodiversity. Negative health effects arose for humans, such as fine particulate matter and summer smog. Furthermore, reactive nitrogen plays a decisive role in atmospheric chemistry and in the global cycles of pollutants and nutrients.

Nitrogen monoxide (NO) and nitrogen dioxide (NO2) belong to the reactive trace gases and are grouped under the generic term NOx. They are important components of atmospheric oxidative processes and influence the lifetime of various less reactive greenhouse gases. NO and NO2 are generated, among others, in combustion processes by oxidation of atmospheric nitrogen as well as by biological processes within soil. In the atmosphere NO is converted very quickly into NO2. NO2 is then oxidized to nitrate (NO3-) and to nitric acid (HNO3), which binds to aerosol particles. The bound nitrate is finally washed out of the atmosphere by dry and wet deposition. Catalytic reactions of NOx are an important part of atmospheric chemistry, forming or decomposing tropospheric ozone (O3). In the atmosphere NO, NO2 and O3 are in a photostationary equilibrium, which is why they are referred to as the NO-NO2-O3 triad. In regions with elevated NO concentrations, reactions with air pollutants can form NO2, altering the equilibrium of ozone formation.

The essential nutrient nitrogen is taken up by plants mainly as dissolved NO3- entering the roots. Atmospheric nitrogen is oxidized to NO3- within soil by bacteria through nitrogen fixation or ammonium formation and nitrification. Additionally, atmospheric NO2 is taken up directly through the stomata. Inside the apoplast NO2 disproportionates to nitrate and nitrite (NO2-), which can enter the plant's metabolic processes. The enzymes nitrate and nitrite reductase convert nitrate and nitrite to ammonium (NH4+). NO2 gas exchange is controlled by pressure gradients inside the leaves, the stomatal aperture and leaf resistances. Plant stomatal regulation is affected by climate factors like light intensity, temperature and water vapor pressure deficit.

This thesis aims to contribute to the understanding of the role of vegetation in the atmospheric NO2 cycle and to discuss the NO2 compensation point concentration (mcomp,NO2). To this end, the NO2 exchange between the atmosphere and spruce (Picea abies) at the leaf level was measured with a dynamic plant chamber system under laboratory and field conditions. Measurements took place during the EGER project (June-July 2008). Additionally, NO2 data collected on oak (Quercus robur) during the ECHO project (July 2003) were analyzed. The measuring system used allowed the simultaneous determination of NO, NO2, O3, CO2 and H2O exchange rates.
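The photostationary equilibrium mentioned above is commonly written as the Leighton relationship (quoted here in its textbook form, for orientation):

\[
j_{\mathrm{NO_2}}\,[\mathrm{NO_2}] \;=\; k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]\,[\mathrm{O_3}]
\qquad\Longrightarrow\qquad
[\mathrm{O_3}] \;=\; \frac{j_{\mathrm{NO_2}}\,[\mathrm{NO_2}]}{k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]},
\]

where $j_{\mathrm{NO_2}}$ is the NO2 photolysis frequency and $k_{\mathrm{NO+O_3}}$ the rate constant of the NO + O3 reaction.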
The calculation of the NO, NO2 and O3 fluxes is based on the generally small differences (∆mi) measured between the inlet and the outlet of the chamber. Consequently, a high accuracy and specificity of the analyzer is necessary. To achieve these requirements, a highly specific NO/NO2 analyzer was used and the whole measurement system was optimized for enduring measurement precision.

Data analysis yielded a significant mcomp,NO2 only if statistical significance of ∆mi was detected; consequently, the significance of ∆mi was used as a data quality criterion. Photochemical reactions of the NO-NO2-O3 triad in the volume of the dynamic plant chamber must be considered for the determination of the NO, NO2 and O3 exchange rates; otherwise the deposition velocity (vdep,NO2) and mcomp,NO2 will be overestimated. No significant mcomp,NO2 for spruce could be determined under laboratory conditions, but under field conditions mcomp,NO2 could be identified between 0.17 and 0.65 ppb and vdep,NO2 between 0.07 and 0.42 mm s-1. Analyzing the field data of oak, no NO2 compensation point concentration could be determined; vdep,NO2 ranged between 0.6 and 2.71 mm s-1. There is increasing indication that forests are mainly a sink for NO2 and that potential NO2 emissions are low. Only when assuming high NO soil emissions can more NO2 be formed by reaction with O3 than the plants are able to take up; under these circumstances forests can be a source of NO2.
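The flux and compensation point quantities are related by expressions of the following standard form for dynamic chambers (schematic; the formulation in the thesis, including the photochemical corrections discussed above, is more involved):

\[
F_i \;=\; \frac{Q}{A}\,\Delta m_i, \qquad
F_{\mathrm{NO_2}} \;=\; -\,v_{\mathrm{dep,NO_2}}\,\bigl(m_{\mathrm{NO_2}} - m_{\mathrm{comp,NO_2}}\bigr),
\]

with $Q$ the flow rate through the chamber, $A$ the enclosed leaf area and $\Delta m_i$ the inlet-outlet mixing ratio difference; the compensation point $m_{\mathrm{comp,NO_2}}$ is the ambient mixing ratio at which the net flux vanishes.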

Relevance:

30.00%

Publisher:

Abstract:

Rodents are highly useful models for studying physiological and pathophysiological processes in early development, because they are born in a relatively immature state. However, only few techniques are available to non-invasively monitor heart rate and respiratory rate in neonatal rodents without restraining or hindering access to the animal. Here we describe experimental procedures that allow the monitoring of heart rate by electrocardiography (ECG) and of breathing rate with a piezoelectric transducer (PZT) element without hindering access to the animal. These techniques can be easily installed and are used in the present study in unrestrained awake and anesthetized neonatal C57/Bl6 mice and Wistar rats between postnatal days 0 and 7. In line with previous reports on awake rodents, we demonstrate that heart rate in rats and mice increases during the first postnatal week. Respiratory frequency did not differ between the two species, but heart rate was significantly higher in mice than in rats. Furthermore, our data indicate that urethane, an agent that is widely used for anesthesia, induces hypoventilation in neonates, whilst heart rate remains unaffected at a dose of 1 g per kg body weight. Of note, hypoventilation induced by urethane was not detected in rats at postnatal day 0/1. To verify the detected hypoventilation we performed blood gas analyses and detected a respiratory acidosis, reflected by a lower pH and an elevated CO2 tension (pCO2), in both species upon urethane treatment. Furthermore, we found that the metabolism of urethane differs between P0/1 mice and rats and between P0/1 and P6/7 in both species. Our findings underline the usefulness of monitoring basic cardio-respiratory parameters in neonates during anesthesia. In addition, our study provides information on developmental changes in heart and breathing frequency in newborn mice and rats and on the effects of urethane in both species during the first postnatal week.