Abstract:
The main object of this thesis is the analysis and the quantization of spinning particle models which employ extended "one-dimensional supergravity" on the worldline, and their relation to the theory of higher spin fields (HS). In the first part of this work we have described the classical theory of massless spinning particles with an SO(N) extended supergravity multiplet on the worldline, in flat and, more generally, in maximally symmetric backgrounds. These (non)linear sigma models describe, upon quantization, the dynamics of particles with spin N/2. We have then analyzed carefully the quantization of spinning particles with SO(N) extended supergravity on the worldline, for every N and in every dimension D. The physical sector of the Hilbert space reveals an interesting geometrical structure: the generalized higher spin curvature (HSC). We have shown, in particular, that these models of spinning particles describe a subclass of HS fields whose equations of motion are conformally invariant at the free level; in D = 4 this subclass describes all massless representations of the Poincaré group. In the third part of this work we have considered the one-loop quantization of SO(N) spinning particle models by studying the corresponding partition function on the circle. After gauge fixing of the supergravity multiplet, the partition function reduces to an integral over the corresponding moduli space, which has been computed by using orthogonal polynomial techniques. Finally, we have extended our canonical analysis, described previously for flat space, to maximally symmetric target spaces (i.e. (A)dS backgrounds). The quantization of these models produces (A)dS HSC as the physical states of the Hilbert space; we have used an iterative procedure and Pochhammer functions to solve the differential Bianchi identity in maximally symmetric spaces. Motivated by the correspondence between SO(N) spinning particle models and HS gauge theory, and by the well-known difficulty one finds in constructing an interacting theory for fields with spin greater than two, we have used these one-dimensional supergravity models to study and extract information on HS. In the last part of this work we have constructed spinning particle models with sp(2) R-symmetry, coupled to Hyper-Kähler and Quaternionic-Kähler (QK) backgrounds.
Abstract:
During the last decade, advances in the field of sensor design and improved base materials have pushed the radiation hardness of the current silicon detector technology to impressive performance. It should allow operation of the tracking systems of the Large Hadron Collider (LHC) experiments at nominal luminosity (10³⁴ cm⁻²s⁻¹) for about 10 years. The current silicon detectors are unable to cope with such an environment. Silicon carbide (SiC), which has recently been recognized as potentially radiation hard, is now being studied. In this work the effect of high-energy neutron irradiation on 4H-SiC particle detectors was analyzed. Schottky and junction particle detectors were irradiated with 1 MeV neutrons up to a fluence of 10¹⁶ cm⁻². It is well known that the degradation of the detectors with irradiation, independently of the structure used for their realization, is caused by lattice defects, such as the creation of point-like defects, dopant deactivation and dead layer formation, and that a crucial aspect for the understanding of the defect kinetics at a microscopic level is the correct identification of the crystal defects in terms of their electrical activity. In order to clarify the defect kinetics, a thermal transient spectroscopy (DLTS and PICTS) analysis of different samples irradiated at increasing fluences was carried out. The defect evolution was correlated with the transport properties of the irradiated detector, always comparing with the un-irradiated one. The charge collection efficiency degradation of Schottky detectors induced by neutron irradiation was related to the increasing concentration of defects as a function of the neutron fluence.
Abstract:
A 3D BEM-FEM coupling model is used to study the dynamic behavior of piled foundations in elastic layered soils in the presence of a rigid bedrock. Piles are modelled by FEM as beams according to the Bernoulli hypothesis, and every layer of the soil is modelled by BEM as a continuum, semi-infinite, isotropic, homogeneous, linear, viscoelastic medium.
Abstract:
The motivation for the work presented in this thesis is to retrieve profile information for the atmospheric trace constituents nitrogen dioxide (NO2) and ozone (O3) in the lower troposphere from remote sensing measurements. The remote sensing technique used, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS), is a recent technique that represents a significant advance on the well-established DOAS, especially as concerns the study of tropospheric trace constituents. NO2 is an important trace gas in the lower troposphere because it is involved in the production of tropospheric ozone; ozone and nitrogen dioxide are key factors in determining air quality, with consequences, for example, for human health and the growth of vegetation. To understand the NO2 and ozone chemistry in more detail, not only the concentrations at the ground but also their vertical distribution must be known. In fact, the budget of nitrogen oxides and ozone in the atmosphere is determined both by local emissions and by non-local chemical and dynamical processes (i.e. diffusion and transport at various scales) that greatly affect their vertical and temporal distribution: a tool to resolve the vertical profile information is therefore really important. Useful measurement techniques for atmospheric trace species should fulfill at least two main requirements. First, they must be sufficiently sensitive to detect the species under consideration at their ambient concentration levels. Second, they must be specific, which means that the results of the measurement of a particular species must be neither positively nor negatively influenced by any other trace species simultaneously present in the probed volume of air. Air monitoring by spectroscopic techniques has proven to be a very useful tool to fulfill these requirements as well as a number of other important properties. During the last decades, many such instruments have been developed which are based on the absorption properties of the constituents in various regions of the electromagnetic spectrum, ranging from the far infrared to the ultraviolet. Among them, Differential Optical Absorption Spectroscopy (DOAS) has played an important role. DOAS is an established remote sensing technique for probing atmospheric trace gases, which identifies and quantifies the trace gases in the atmosphere by taking advantage of their molecular absorption structures in the near UV and visible wavelengths of the electromagnetic spectrum (from 0.25 μm to 0.75 μm). Passive DOAS, in particular, can detect the presence of a trace gas in terms of its integrated concentration over the atmospheric path from the sun to the receiver (the so-called slant column density). The receiver can be located at the ground, as well as on board an aircraft or a satellite platform. Passive DOAS therefore has a flexible measurement configuration that allows multiple applications. The ability to properly interpret passive DOAS measurements of atmospheric constituents depends crucially on how well the optical path of the light collected by the system is understood. This is because the final product of DOAS is the concentration of a particular species integrated along the path that radiation covers in the atmosphere. This path is not known a priori and can only be evaluated by Radiative Transfer Models (RTMs).
These models are used to calculate the so-called vertical column density of a given trace gas, which is obtained by dividing the measured slant column density by the so-called air mass factor, used to quantify the enhancement of the light path length within the absorber layers (the relation is sketched after this abstract). In the case of the standard DOAS set-up, in which radiation is collected along the vertical direction (zenith-sky DOAS), calculations of the air mass factor have been made using "simple" single-scattering radiative transfer models. This configuration has its highest sensitivity in the stratosphere, in particular during twilight. This is the result of the large enhancement in the stratospheric light path at dawn and dusk combined with a relatively short tropospheric path. In order to increase the sensitivity of the instrument towards tropospheric signals, measurements with the telescope pointing towards the horizon (off-axis DOAS) have to be performed. In these circumstances, the light path in the lower layers can become very long and necessitates the use of radiative transfer models that include multiple scattering and the full treatment of atmospheric sphericity and refraction. In this thesis, a recent development of the well-established DOAS technique is described, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS). MAX-DOAS consists in the simultaneous use of several off-axis directions near the horizon: using this configuration, not only is the sensitivity to tropospheric trace gases greatly improved, but vertical profile information can also be retrieved by combining the simultaneous off-axis measurements with sophisticated RTM calculations and inversion techniques. In particular, there is a need for an RTM capable of dealing with all the processes intervening along the light path, supporting all the DOAS geometries used, and treating multiple scattering events with the varying phase functions involved. To achieve these multiple goals, a statistical approach based on the Monte Carlo technique should be used. A Monte Carlo RTM generates an ensemble of random photon paths between the light source and the detector, and uses these paths to reconstruct a remote sensing measurement. Within the present study, the Monte Carlo radiative transfer model PROMSAR (PROcessing of Multi-Scattered Atmospheric Radiation) has been developed and used to correctly interpret the slant column densities obtained from MAX-DOAS measurements. In order to derive the vertical concentration profile of a trace gas from its slant column measurement, the AMF is only one part of the quantitative retrieval process. One indispensable requirement is a robust approach to invert the measurements and obtain the unknown concentrations, the air mass factors being known. For this purpose, in the present thesis, we have used the Chahine relaxation method. Ground-based Multiple AXis DOAS, combined with appropriate radiative transfer models and inversion techniques, is a promising tool for atmospheric studies in the lower troposphere and boundary layer, including the retrieval of profile information with a good degree of vertical resolution. This thesis has presented an application of this powerful, comprehensive tool to the study of a preserved natural Mediterranean area (the Castel Porziano Estate, located 20 km south-west of Rome) where pollution is transported from remote sources.
Application of this tool in densely populated or industrial areas is beginning to look particularly fruitful and represents an important subject for future studies.
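As a compact reminder of the relation described above, the standard DOAS bookkeeping can be written as follows (a generic form, with the wavelength and viewing-geometry dependence of the AMF left implicit; c denotes the trace-gas number concentration):

```latex
% Slant column density: concentration integrated along the actual
% (a priori unknown) light path; the AMF converts it into a vertical column.
\[
  \mathrm{SCD} = \int_{\text{light path}} c(s)\,\mathrm{d}s , \qquad
  \mathrm{VCD} = \int_{0}^{\infty} c(z)\,\mathrm{d}z , \qquad
  \mathrm{AMF} = \frac{\mathrm{SCD}}{\mathrm{VCD}}
  \;\;\Rightarrow\;\;
  \mathrm{VCD} = \frac{\mathrm{SCD}}{\mathrm{AMF}} .
\]
```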
Abstract:
During the previous 10 years, global R&D expenditure in the pharmaceuticals and biotechnology sector has steadily increased, without a corresponding increase in the output of new medicines. To address this situation, the biopharmaceutical industry's greatest need is to predict failures at the earliest possible stage of the drug development process. A major key to reducing failures in drug screenings is the development and use of preclinical models that are more predictive of efficacy and safety in clinical trials. Further, relevant animal models are needed to allow a wider testing of novel hypotheses. Key to this is developing, refining and validating complex animal models that directly link therapeutic targets to the phenotype of disease, allowing earlier prediction of human response to medicines and identification of safety biomarkers. Moreover, well-designed animal studies are essential to bridge the gap between tests in cell cultures and people. Zebrafish is emerging, complementary to other models, as a powerful system for cancer studies and drug discovery. We aim to investigate this research area by designing a new preclinical cancer model based on the in vivo imaging of zebrafish embryogenesis. Technological advances in imaging have made it feasible to acquire nondestructive in vivo images of fluorescently labeled structures, such as cell nuclei and membranes, throughout early zebrafish embryogenesis. This in vivo image-based investigation provides measurements of a large number of features and events at the cellular level, including nuclei movements, cell counting and mitosis detection, thereby enabling the estimation of more significant parameters such as the proliferation rate, which is highly relevant for investigating anticancer drug effects. In this work, we designed a standardized procedure for assessing drug activity at the cellular level in live zebrafish embryos. The procedure includes methodologies and tools that combine imaging and fully automated measurements of the embryonic cell proliferation rate. We achieved proliferation rate estimation through the automatic classification and density measurement of epithelial enveloping layer and deep layer cells. Automatic embryonic cell classification provides the basis for measuring the variability of relevant parameters, such as cell density, in different classes of cells and is aimed at the estimation of the efficacy and selectivity of anticancer drugs. Through these methodologies we were able to evaluate and measure in vivo the therapeutic potential and overall toxicity of the Dbait and Irinotecan anticancer molecules. Results achieved on these anticancer molecules are presented and discussed; furthermore, extensive accuracy measurements are provided to investigate the robustness of the proposed procedure. Altogether, these observations indicate that the zebrafish embryo can be a useful and cost-effective alternative to some mammalian models for the preclinical testing of anticancer drugs and might also provide, in the near future, opportunities to accelerate the process of drug discovery.
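As a purely illustrative sketch of the kind of read-out described above, the snippet below derives a proliferation rate from automated cell counts at two imaging time points under an exponential-growth assumption and compares a treated embryo with a control; the function name, counts, time interval and inhibition index are hypothetical and are not taken from the thesis procedure.

```python
import math

def proliferation_rate(n_t0: float, n_t1: float, dt_hours: float) -> float:
    """Exponential-growth proliferation rate (per hour) from two cell counts.

    n_t0, n_t1 : cell (or cell-density) estimates at the start/end of the interval,
                 e.g. from automated nucleus counting in the imaged embryo region.
    dt_hours   : elapsed imaging time between the two counts.
    """
    return math.log(n_t1 / n_t0) / dt_hours

# Hypothetical counts for a control and a drug-treated embryo (illustration only).
control_rate = proliferation_rate(n_t0=800, n_t1=1300, dt_hours=5.0)
treated_rate = proliferation_rate(n_t0=800, n_t1=950, dt_hours=5.0)

# A simple relative-inhibition index as one possible read-out of drug activity.
inhibition = 1.0 - treated_rate / control_rate
print(f"control rate = {control_rate:.3f} /h")
print(f"treated rate = {treated_rate:.3f} /h")
print(f"inhibition   = {inhibition:.1%}")
```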
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity related to academic blue-sky research and the system-oriented design deriving from the collaboration with European industry in the framework of European funded research projects. In this dissertation, the classical channel coding techniques that are traditionally applied at the physical layer find their application at upper layers, where the encoding units (symbols) are packets of bits and not just single bits, which explains why such upper layer coding techniques are usually referred to as packet layer coding (a toy packet-level example is sketched after this abstract). The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading, while it is less efficient against large-scale fading. This is mainly due to the limited time diversity inherent in the necessity of adopting a physical layer interleaver of reasonable size so as to avoid increasing the modem complexity and the latency of all services. Packet layer techniques, thanks to the longer codeword duration (each codeword is composed of several packets of bits), offer intrinsically longer protection against long fading events. Furthermore, being implemented at upper layers, packet layer techniques have the indisputable advantages of simpler implementations (very close to software implementations) and of selective applicability to different services, thus enabling a better match with the service requirements (e.g. latency constraints). Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards, like DVB-H, DVB-SH and DVB-RCS mobile, and 3GPP standards (MBMS) employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the performance evaluation of these techniques in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer (UL). In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework which is a useful system design tool for predicting the performance of the upper layer decoder. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal splitting of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks related to the adoption of packet layer codes is the large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called windowed erasure decoder). We analyze the performance of state-of-the-art LDPCCC when our decoder is adopted.
Finally, we propose a design rule which allows performance and latency to be traded off.
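To make the packet-level idea concrete, here is a deliberately minimal sketch of an erasure code operating on packets rather than bits: a single XOR parity packet protects a block of equal-length data packets against one erasure whose position is known. It is only a toy stand-in for the Reed-Solomon, LDPC and Fountain codes actually considered in the thesis.

```python
from functools import reduce
from typing import List, Optional

def xor_packets(packets: List[bytes]) -> bytes:
    """Bytewise XOR of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def encode(data_packets: List[bytes]) -> List[bytes]:
    """Append one parity packet (XOR of all data packets) to the source block."""
    return data_packets + [xor_packets(data_packets)]

def decode(received: List[Optional[bytes]]) -> List[bytes]:
    """Recover the source block when at most one packet of the block is erased (None)."""
    erased = [i for i, p in enumerate(received) if p is None]
    if len(erased) > 1:
        raise ValueError("single-parity code cannot repair more than one erasure")
    if erased:
        received[erased[0]] = xor_packets([p for p in received if p is not None])
    return received[:-1]  # drop the parity packet

# Example: a block of 4 data packets, the third one lost by the erasure channel.
block = [b"PKT0", b"PKT1", b"PKT2", b"PKT3"]
coded = encode(block)
coded[2] = None                      # erasure: packet lost, but its position is known
assert decode(coded) == block
```

Real packet layer FEC schemes add many repair packets and can recover several erasures per block, but the encode/repair structure is the same.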
Abstract:
The development of safe, high-energy and high-power electrochemical energy-conversion systems can be a response to the worldwide demand for clean and low-fuel-consuming transport. This thesis work, starting from basic studies on ionic liquid (IL) electrolytes and carbon electrodes and concluding with tests on large-size IL-based supercapacitor prototypes, demonstrated that the IL-based asymmetric configuration (AEDLC) is a powerful strategy to develop safe, high-energy supercapacitors that might compete with lithium-ion batteries in power-assist hybrid electric vehicles (HEVs). The increase of specific energy in EDLCs was achieved following three routes: i) the use of hydrophobic ionic liquids (ILs) as electrolytes; ii) the design and preparation of carbon electrode materials of tailored morphology and surface chemistry to feature a high capacitance response in IL; and iii) the asymmetric double-layer carbon supercapacitor configuration (AEDLC), which consists of assembling the supercapacitor with different carbon loadings at the two electrodes in order to exploit the wide electrochemical stability window (ESW) of the IL and to reach a high maximum cell voltage (Vmax). Among the various ILs investigated, N-methoxyethyl-N-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide (PYR1(2O1)TFSI) was selected because of its hydrophobicity and high thermal stability up to 350 °C, together with good conductivity and a wide ESW, exploitable over a wide temperature range, even below 0 °C. Because of these exceptional properties, PYR1(2O1)TFSI was used throughout the study to develop a large-size IL-based carbon supercapacitor prototype. This work also highlights that the use of ILs leads to different chemical-physical properties at the electrode/electrolyte interface with respect to those formed by conventional electrolytes. Indeed, the absence of solvent in ILs makes the properties of the interface not mediated by the solvent and, thus, the dielectric constant and double-layer thickness strictly depend on the chemistry of the IL ions. The study of carbon electrode materials highlights several factors that have to be taken into account when designing high-performing carbon electrodes in IL. The heat treatment in inert atmosphere of the activated carbon AC, which gave the ACT carbon featuring ca. 100 F/g in IL, demonstrated the importance of surface chemistry in the capacitive response of carbons in hydrophobic ILs. The tailored mesoporosity of the xerogel carbons is a key parameter to achieve a high capacitance response. The CO2-treated xerogel carbon X3a featured a high specific capacitance of 120 F/g in PYR14TFSI; however, because of its high pore volume, an excess of IL is required to fill the pores with respect to that necessary for the charge-discharge process. Further advances were achieved with electrodes based on the disordered template carbon DTC7, with a pore size distribution centred at 2.7 nm, which featured a notably high specific capacitance of 140 F/g in PYR14TFSI and a moderate pore volume (V(>1.5 nm) = 0.70 cm³/g). This thesis work demonstrated that by means of the asymmetric configuration (AEDLC) it was possible to reach cell voltages of up to 3.9 V. Indeed, IL-based AEDLCs with the X3a or ACT carbon electrodes exhibited specific energy and power of ca. 30 Wh/kg and 10 kW/kg, respectively.
The DTC7 carbon electrodes, featuring a capacitance response 20%-40% higher than those of X3a and ACT, respectively, enabled the development of a PYR14TFSI-based AEDLC with specific energy and power of 47 Wh/kg and 13 kW/kg at 60 °C with a Vmax of 3.9 V. Given the availability of the ACT carbon (obtained from a commercial material), the PYR1(2O1)TFSI-based AEDLCs assembled with ACT carbon electrodes were selected within the EU ILHYPOS project for the development of large-size prototypes. This study demonstrated that the PYR1(2O1)TFSI-based AEDLC can operate between -30 °C and +60 °C, and its cycling stability was proved at 60 °C over up to 27,000 cycles with a high Vmax of up to 3.8 V. Such an AEDLC was further investigated following the USABC and DOE FreedomCAR reference protocols for HEVs to evaluate its dynamic pulse-power and energy features. It was demonstrated that, with a Vmax of 3.7 V, the challenging energy and power targets stated by DOE for power-assist HEVs are satisfied at T > 30 °C, and the standards for the 12V-TSS, 42V-FSS and TPA 2s-pulse applications are satisfied at T > 0 °C, provided the ratio wmodule/wSC = 2 is accomplished, which, however, is a very demanding condition. Finally, suggestions for further advances in IL-based AEDLC performance were found. Particularly, given that the main contribution to the ESR is the electrode charging resistance, which in turn is affected by the ionic resistance in the pores, which is also modulated by pore length, the pore geometry is a key parameter in carbon design, not only because it defines the carbon surface but also because it can differentially "amplify" the effect of IL conductivity on the electrode charging-discharging process and, thus, on the supercapacitor time constant.
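The benefit of pushing Vmax up by means of the asymmetric configuration can be read off from the generic supercapacitor figures of merit below (standard textbook relations, not values or formulas quoted from the thesis; the normalization mass m depends on whether only the electrodes or the complete device is considered):

```latex
% Generic supercapacitor figures of merit: C_cell is the series combination of
% the two electrode capacitances, m the mass chosen for the normalization,
% ESR the equivalent series resistance.
\[
  E_{\mathrm{spec}} = \frac{1}{2}\,\frac{C_{\mathrm{cell}}\,V_{\mathrm{max}}^{2}}{m} ,
  \qquad
  P_{\mathrm{max}} = \frac{V_{\mathrm{max}}^{2}}{4\,\mathrm{ESR}\;m} .
\]
```

Both quantities grow quadratically with the maximum cell voltage, which is why exploiting the wide ESW of the IL through the AEDLC design pays off twice.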
Abstract:
The aim of this thesis was the production of porosity-graded piezoelectric ceramics for ultrasonic applications by tape casting and screen printing. The study and optimization of each single step of the tape casting process made it possible to produce flat and crack-free multilayers of Pb0.988(Zr0.52Ti0.48)0.976Nb0.024O3 (PZTN) of uniform and graded porosity. Multilayers with thickness ranging between 400 and 800 µm were produced by stacking optimized green layers with different amounts of pore former. The functionally graded materials showed porosity ranging between 10 and 30%, with piezoelectric properties suitable for the specific ultrasonic applications. Screen printing inks of PZTN for deposition onto four different substrates were studied and optimized. Thick films with thickness ranging between 4 and 20 µm were produced by tailoring the screen printing parameters and the number of depositions.
Abstract:
After a brief introduction to the development of magnetic applications, Chapters 2 and 3 explain the physical basics of the measurement methods, in particular scanning tunneling spectroscopy and Kerr magnetometry, as well as the experimental setup. Chapter 4 deals with the magnetic properties of quasi one-dimensional ferromagnetic nanostripes and monolayers prepared by self-assembly on a tungsten(110) single crystal with vicinal and flat surfaces. Here the temperature dependence of the magnetic quantities, such as remanence, saturation magnetization and susceptibility, as well as the effect of capping the system on the domain wall energy and anisotropy, are investigated. In addition, the coupling of parallel nanostripes is considered as a function of the stripe separation. In Chapter 5 the growth and morphology of Co monolayers on W(110) are investigated. The transition from pseudomorphic to close-packed growth in the monolayer is visualized by means of scanning tunneling spectroscopy, as are different stacking sequences in Co triple-layer systems. Atomically resolved scanning tunneling microscopy allows the exact atomic positions of the surface to be determined and compared with theoretical growth models. Chapter 6 addresses the investigation of two-dimensional binary Co-Fe and Fe-Mn alloys on W(110). With a preparation temperature of T = 520 K it is possible to grow atomically ordered Co-Fe alloy monolayers. A direct relationship between the magnetization and the local density of states as a function of the alloy composition is demonstrated.
Abstract:
During my research work I focused on identifying strategies that allow resources to be saved at the building level and on developing a method for the environmental assessment of these strategies. The underlying conviction is that we must move away from an anthropocentric view in which everything that surrounds us is merchandise and material at man's disposal, and enter a new era of balance between the Earth's resources and the activities that man carries out on the planet. I therefore addressed the theme of responsible building by investigating straw-bale and earth construction. I am convinced that industrial building has a very short future ahead of it and will inevitably give way to non-conventional techniques involving materials that are easy to source and to install. I am also convinced that the mere use of natural materials is no guarantee of reduced damage to the ecosystem. At the same time, I believe that a mere energy certification is not synonymous with sustainability. For this reason I evaluated non-conventional technologies with an LCA (Life Cycle Assessment) approach, examining the impacts related to the production and transport of the materials, the type of installation, and their possible end-of-life scenarios. I also examined the IMPACT damage-assessment method, identifying a shortcoming in the system, which does not include a damage category linked to changes in the hydrogeological conditions of the soil. The research was carried out through practical and experimental activities on non-conventional building sites and through research and study on LCA at ENEA in Bologna (Ing. Paolo Neri).
Abstract:
Ozone (O3) is an important oxidant and greenhouse gas in the atmosphere. While the highest concentrations are observed in the stratosphere, where they form the ozone layer that protects against harmful UV radiation, significant changes in the ozone concentration in the tropopause region can affect the Earth's climate. Furthermore, ozone is one of the main sources of the hydroxyl radical (OH) and thus has a decisive influence on the oxidizing capacity of the atmosphere. The convective transport of ozone and its precursors from regions near the Earth's surface into the free troposphere influences the budget of these species in the tropopause region. The data used in this study were obtained during the aircraft-based measurement campaigns GABRIEL 2005 (Suriname, South America) as well as HOOVER I 2006 and HOOVER II 2007 (both in Europe). With this data set, the ozone budget is investigated in the free, unpolluted background atmosphere and in the upper troposphere perturbed by deep convection. Based on the net ozone production rate (NOPR), calculated from in-situ measurements of O3, NO, OH, HO2 and the actinic flux (a generic form is sketched after this abstract), ozone tendencies in the unpolluted troposphere are derived for the measurement region and compared with simulations of the global chemistry-transport model MATCH-MPIC. Two case studies, in the tropics of South America and in the mid-latitudes of Europe, are used to quantify the impact of deep convection on the upper troposphere. The results show a clear tendency towards ozone production in the boundary layer at low and mid-latitudes, which for the tropical rainforest in the measurement region did not match the general expectation that this region should be characterized by ozone destruction. In the upper troposphere above about 7 km, a slight tendency towards ozone production is observed for both regions. The results show significant differences for the middle troposphere. While the tropics are characterized by a clear tendency towards ozone destruction in this region, the mid-latitudes show high photochemical activity but no such clear tendency. The high latitudes are characterized by a neutral troposphere with respect to the ozone tendency and show hardly any photochemical activity. A comparison of these results with the MATCH-MPIC model shows a basic agreement in the tendency towards ozone production or destruction over large parts of the measurement regions, but the absolute values are generally underestimated by the model. Significant differences between in-situ data and model simulations are identified in the boundary layer over the tropical rainforest. The influence of convection is characterized by a significantly enhanced NOPR. In this work, a NOPR with a median value of 0.20 ppbv h−1 is estimated in the tropics, a factor of 3.6 higher than in the unperturbed upper troposphere. In the mid-latitudes, the NO concentration, which is an order of magnitude higher, leads to a value of 1.89 ppbv h−1, corresponding to an enhancement by a factor of 6.5 compared to the unperturbed state. These results show, for both regions, an enhanced ozone production in the upper troposphere as a consequence of convective activity.
Deep convection is also a very effective mechanism for vertical transport from the boundary layer into the upper troposphere. The rapid uplift in convective clouds leads, for trace gases with sources at the Earth's surface, to an increase of their concentration in the upper troposphere. The highly soluble trace species formaldehyde (HCHO) and hydrogen peroxide (H2O2) are important precursors of HOx radicals. Because of their solubility, they are assumed to be efficiently scavenged in thunderstorm clouds. In the present work, a case study of deep convection within the HOOVER II project in summer 2007 is analyzed. On 19 July 2007, three initially isolated convective cells developed in the afternoon at the south-eastern edge of a mesoscale convective system moving in a north-easterly direction. Aircraft-based measurements in the outflow and inflow regions of one of these thunderstorm cells provide an excellent data set for investigating the impact of deep convection on the distribution of various trace gases in the upper troposphere. The comparison of the concentrations of carbon monoxide (CO) and methane (CH4) between the upper troposphere and the boundary layer indicates nearly undiluted transport of these long-lived species in the convective cell. The ratios are (0.94±0.04) for CO and (0.99±0.01) for CH4. For the soluble species HCHO and H2O2, this ratio in the outflow region is (0.55±0.09) and (0.61±0.08), respectively. This indicates that these species are not scavenged as efficiently as assumed. For a better understanding of the influence of convection on the budgets of these species in the upper troposphere, box-model studies of the contribution of photochemical production in the outflow region were performed as part of this work, using the measured species and photolysis frequencies as constraints. From the budget considerations for HCHO and H2O2, scavenging efficiencies of (67±24) % for HCHO and (41±18) % for H2O2 are estimated. The surprising result for H2O2 suggests that this molecule can be transported through a thunderstorm cloud much more efficiently than would be expected from its high solubility according to the Henry's law constant. The degassing of dissolved H2O2 upon freezing of a cloud droplet, i.e. a retention coefficient smaller than 1, is a possible mechanism contributing to the observed mixing ratio of this soluble species in the outflow region.
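For reference, a commonly used form of the net ozone production rate, consistent with the in-situ quantities listed above (O3, NO, OH, HO2 and the actinic flux), is sketched below; the exact formulation adopted in the thesis may include additional peroxy-radical terms:

```latex
% Net ozone production rate in a generic form: production via NO + HO2
% (plus analogous NO + RO2 terms), loss via O3 photolysis followed by
% O(1D) + H2O, and via reactions with OH and HO2.
\[
  \mathrm{NOPR} = P(\mathrm{O_3}) - L(\mathrm{O_3}),
\]
\[
  P(\mathrm{O_3}) \approx k_{\mathrm{HO_2+NO}}[\mathrm{HO_2}][\mathrm{NO}] , \qquad
  L(\mathrm{O_3}) \approx f_{\mathrm{H_2O}}\, j(\mathrm{O^1D})\,[\mathrm{O_3}]
    + k_{\mathrm{OH+O_3}}[\mathrm{OH}][\mathrm{O_3}]
    + k_{\mathrm{HO_2+O_3}}[\mathrm{HO_2}][\mathrm{O_3}] .
\]
```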
Abstract:
Self-assembled molecular structures were investigated on insulating substrate surfaces using non-contact atomic force microscopy. Both substrate preparation and molecule deposition took place under ultra-high vacuum conditions. First, C60 molecules were investigated on the TiO2 (110) surface. This surface exhibits parallel-running troughs at the nanometer scale, which strongly steer the assembly of the molecules. This is in contrast to the second investigated surface. The CaF2 (111) surface is atomically flat and the molecular assembly was observed to be far less affected by the surface. Fundamentally different island structures were observed compared to what is typically known. Based on extensive experimental studies and theoretical considerations, a comprehensive picture of the processes responsible for the island formation of C60 molecules on this insulating surface was developed. The key process for the emergence of the observed novel island structures was found to be the dewetting of molecules from the substrate. This new knowledge allows one to further understand and exploit self-assembly techniques for structure fabrication on insulating substrate surfaces. To alter island formation and island structure, C60 molecules were codeposited with a second molecule species (PTCDI and SubPc) on the CaF2 (111) surface. Depending on the order of deposition, quite different structures were observed to arise. Thus, these are the first steps towards more complex functional arrangements consisting of two molecule species on insulating surfaces.
Abstract:
The space environment has always been one of the most challenging for communications, both at the physical and at the network layer. Concerning the latter, the most common challenges are the lack of continuous network connectivity, very long delays and relatively frequent losses. Because of these problems, the normal TCP/IP suite protocols are hardly applicable. Moreover, in space scenarios reliability is fundamental. In fact, it is usually not tolerable to lose important information or to receive it with a very large delay because of a challenging transmission channel. In terrestrial protocols, such as TCP, reliability is obtained by means of an ARQ (Automatic Retransmission reQuest) method, which, however, does not perform well when there are long delays on the transmission channel. At the physical layer, Forward Error Correction (FEC) codes, based on the insertion of redundant information, are an alternative way to ensure reliability. On binary channels, when single bits are flipped because of channel noise, redundancy bits can be exploited to recover the original information. In the presence of binary erasure channels, where bits are not flipped but lost, redundancy can still be used to recover the original information. FEC codes designed for this purpose are usually called Erasure Codes (ECs). It is worth noting that ECs, primarily studied for binary channels, can also be used at upper layers, i.e. applied to packets instead of bits, offering a very interesting alternative to the usual ARQ methods, especially in the presence of long delays. A protocol created to add reliability to Delay-Tolerant Networking (DTN) networks is the Licklider Transmission Protocol (LTP), designed to obtain better performance on long-delay links. The aim of this thesis is the application of ECs to LTP.
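To illustrate why proactive packet-level erasure coding is attractive when the round-trip time is huge, the sketch below compares, for a hypothetical long-delay link, the probability that an ideal (MDS-like) erasure code delivers a block in a single pass with the number of rounds a selective-repeat ARQ scheme would need; the loss rate, block size and 20-minute one-way delay are illustrative assumptions, not figures from the thesis.

```python
from math import comb

def block_success_prob(k: int, r: int, p_loss: float) -> float:
    """Probability that at least k of the k+r transmitted packets survive an
    erasure channel with independent loss probability p_loss (ideal/MDS code)."""
    n = k + r
    return sum(comb(n, i) * (1 - p_loss) ** i * p_loss ** (n - i) for i in range(k, n + 1))

def expected_arq_rounds(k: int, p_loss: float) -> float:
    """Expected number of transmission rounds until every one of k packets has
    been received at least once, with selective retransmission each round."""
    rounds, t = 0.0, 0
    while True:
        t += 1
        # P(all k packets received by round t) with per-round loss probability p_loss
        p_done_by_t = (1 - p_loss ** t) ** k
        rounds += t * (p_done_by_t - (1 - p_loss ** (t - 1)) ** k)
        if p_done_by_t > 0.999999:
            return rounds

# Hypothetical deep-space-like link: 10 % packet loss, 20-minute one-way delay.
k, p_loss, one_way_min = 100, 0.10, 20.0
for r in (10, 20, 30):
    print(f"r={r:2d}  P(block decodable in one pass) = {block_success_prob(k, r, p_loss):.4f}")
rounds = expected_arq_rounds(k, p_loss)
print(f"ARQ: ~{rounds:.2f} rounds => ~{(2 * rounds - 1) * one_way_min:.0f} min to complete the block")
```

Each extra ARQ round costs a full round trip, so even a modest amount of proactive redundancy can remove most of the retransmission delay on such links.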
Abstract:
The present work investigates the structure and composition of the lowermost atmosphere during the PARADE measurement campaign (PArticles and RAdicals: Diel observations of the impact of urban and biogenic Emissions) at the Kleiner Feldberg in Germany in late summer 2011. For this purpose, measurements of basic meteorological quantities (temperature, humidity, pressure, wind speed and direction) are evaluated together with radiosonde and aircraft-based measurements of trace gases (carbon monoxide, carbon dioxide, ozone and particle number concentrations). The goal is to use these data to determine the thermodynamic and dynamic properties and their influence on the chemical composition of air masses in the planetary boundary layer. To this end, the radiosonde and aircraft measurements are combined with Lagrangian methods, distinguishing between purely kinematic models (LAGRANTO and FLEXTRA) and so-called particle dispersion models (FLEXPART). For the first time, a version of FLEXPART-COSMO driven by the meteorological analysis fields of the German Weather Service was used in this work. Among several known methods for determining the boundary layer height from radiosonde measurements, the bulk Richardson number method is used as the reference method, since it is an established method for both measurements and model analyses. With a tolerance of 125 m, agreement with the derived boundary layer height is found with at least three other methods in 95 % of the cases, which confirms the quality of the boundary layer height. The boundary layer height varies between 0 and 2000 m above ground during the campaign, with a high boundary layer observed after the passage of cold fronts and a low boundary layer under high-pressure influence and the associated subsidence under calm-wind conditions in the warm sector. A comparison between the boundary layer heights from radiosondes and from models (COSMO-DE, COSMO-EU, COSMO-7) shows only small differences of -6 to +12 % during the campaign at the Kleiner Feldberg. It can be shown, however, that systematic differences between the models (COSMO-7 and COSMO-EU) occur over larger simulation domains. In this work it becomes clear that the soil moisture, which is initialized differently in these two models, leads to different boundary layer heights. The consequence is systematic differences in the air mass origin and, in particular, in the emission sensitivity. Furthermore, local mixing between the boundary layer and the free troposphere can be determined. This is seen in the temporal change of the correlations between CO2 and O3 from the aircraft measurements and is corroborated by comparison with backward trajectories and radiosonde profiles. The entrainment of air masses into the boundary layer influences the chemical composition in the vertical and probably also at the ground. This experimental study confirms the relevance of entrainment processes from the free troposphere and the applicability of the correlation method for determining exchange and entrainment processes at this interface.
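For reference, the bulk Richardson number used as the reference method above is commonly defined as follows (a generic textbook form; the exact surface-level convention and critical value used in the thesis may differ slightly):

```latex
% Bulk Richardson number between the surface level z_s and height z:
% theta_v is the virtual potential temperature, u and v the horizontal wind
% components; the boundary layer top is taken as the lowest level where Ri_b
% exceeds a critical value (typically Ri_c ~ 0.25).
\[
  \mathrm{Ri}_b(z) =
    \frac{g\,\bigl(\theta_v(z) - \theta_v(z_s)\bigr)\,(z - z_s)}
         {\theta_v(z_s)\,\bigl(u(z)^{2} + v(z)^{2}\bigr)} .
\]
```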
Abstract:
The Greenland NGRIP ice core continuously covers the period from the present day back to 123 ka before present, which includes several thousand years of ice from the previous interglacial period, MIS 5e or the Eemian. In the glacial part of the core, annual layers can be identified from impurity records and visual stratigraphy, and stratigraphic layer counting has been performed back to 60 ka. In the deepest part of the core, however, the ice is close to the pressure melting point, the visual stratigraphy is dominated by crystal boundaries, and annual layering is not visible to the naked eye. In this study, we apply a newly developed setup for high-resolution ice core impurity analysis to produce continuous records of dust, sodium and ammonium concentrations as well as the conductivity of the melt water. We analyzed three 2.2 m sections of ice from the Eemian and the glacial inception. In all of the analyzed ice, annual layers can clearly be recognized, most prominently in the dust and conductivity profiles. Part of the samples is, however, contaminated with dust, most likely from drill liquid. It is interesting that the annual layering is preserved despite very active crystal growth and grain boundary migration in the deep and warm NGRIP ice. Based on annual layer counting of the new records, we determine a mean annual layer thickness close to 11 mm for all three sections, which, to first order, confirms the modeled NGRIP time scale (ss09sea). The counting does, however, suggest a duration of the climatically warmest part of the NGRIP record (MIS 5e) up to 1 ka longer than the model estimate. Our results suggest that stratigraphic layer counting is possible basically throughout the entire NGRIP ice core, provided sufficiently highly resolved profiles become available.
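As a rough order-of-magnitude check on the figures quoted above, a mean annual layer thickness of about 11 mm implies that each analyzed section spans roughly

```latex
% Number of annual layers per analyzed section (order of magnitude only,
% since the layer thickness varies around its 11 mm mean).
\[
  \frac{2.2\ \mathrm{m}}{\approx 11\ \mathrm{mm\,yr^{-1}}} \approx 200\ \text{annual layers per 2.2 m section}.
\]
```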