Abstract:
The objective of this work is to predict the temperature distribution of partially submerged umbilical cables under different operating and environmental conditions. The commercial code Fluent® was used to simulate the heat transfer and the air flow around part of a vertical umbilical cable near the air-water interface. A free-convective, three-dimensional turbulent flow in open-ended vertical annuli was solved. The influence of parameters such as the heat dissipation rate, wind velocity, air temperature and solar radiation was analyzed. The influence of a radiation shield consisting of a partially submerged cylindrical steel tube was also considered. The air flow and the buoyancy-driven convective heat transfer in the annular region between the steel tube and the umbilical cable were calculated using the standard k-ε turbulence model. The radiative heat transfer between the external surface of the umbilical and the radiation shield was calculated using the Discrete Ordinates model. The results indicate that a hot environment and intense solar radiation may degrade the performance of the umbilical cable in its dry portion.
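For orientation, the shielding effect of the concentric steel tube can be framed by the standard gray-body relation for radiation exchange between long concentric cylinders (a textbook relation, not a formula quoted from the paper), with subscript 1 denoting the cable surface and 2 the shield:

$$ q_{12} = \frac{\sigma A_1 \left(T_1^{4} - T_2^{4}\right)}{\dfrac{1}{\varepsilon_1} + \dfrac{1 - \varepsilon_2}{\varepsilon_2}\,\dfrac{r_1}{r_2}}, \qquad A_1 = 2\pi r_1 L. $$

Lowering either emissivity, or interposing the shield at all, reduces the net radiative load on the cable; this is the exchange that the Discrete Ordinates calculation resolves in detail.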
Abstract:
Gravitational landslides involving relatively small volumes (millions of m3) are very common; those affecting tens, hundreds or even thousands of km3 are not. These giant landslides, or mega-landslides, are especially important and frequent on oceanic islands, particularly during their early shield-building stages. They were discovered in the Hawaiian Islands, where they reach "prodigious" volumes of thousands of km3, but it is in the Canary Islands where, despite their smaller volume, they are particularly spectacular and where they have been best studied, both in their pre- and post-collapse stages on land and in the characteristics and extent of their avalanche deposits on the sea floor. Mega-landslides are not only very important processes in the development of oceanic islands and in their natural hazards; they also influence their petrological variability and provide important landscape resources in the form of spectacular valleys and calderas.
Abstract:
The aim of this final-degree project (TFG) is the development of a library that lets the user easily control a network of microcontrollers. The CAN bus was used as the underlying communication protocol; it provides a layer for error control, bandwidth configuration, priority management and message protocol. The result of the project is the TouCAN library, which is divided into two distinct parts: the microcontroller side and the supervisor side. Each part is developed in a separate TFG, the supervisor side being the subject of this one. The microcontroller side is built on the Arduino platform. In this part, the ability to connect the different devices of the microcontroller network to one another is developed, defining a communication protocol that supports synchronous and asynchronous communication between the devices of the network. To give the Arduino the ability to use the CAN bus protocol, a shield designed for that purpose is used. The goal of the supervisor is the integration of the microcontroller network with general-purpose devices, such as a personal computer, allowing control and monitoring tasks to be performed on the different embedded systems in the network. A GNU/Linux distribution was used as the operating system for developing the library. Communication between the supervisor device and the microcontroller network uses the serial port available on the Arduino platform.
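The abstract includes no code; as a minimal illustration of the supervisor side, the sketch below (Python with the pyserial package) opens the Arduino serial port and exchanges one framed request/response. The port name, baud rate, and the toy frame layout are assumptions for illustration, not the TouCAN protocol.

```python
# Minimal sketch of a supervisor-side exchange over the Arduino serial port.
# Assumptions: port name, baud rate, and the toy frame layout
# (1 command byte, 1 length byte, payload) are illustrative, not TouCAN's.
import serial  # pyserial

def request(port: serial.Serial, command: int, payload: bytes = b"") -> bytes:
    """Send one framed command and return the framed reply's payload."""
    frame = bytes([command, len(payload)]) + payload
    port.write(frame)
    header = port.read(2)            # reply: command byte + length byte
    if len(header) < 2:
        raise TimeoutError("no reply from microcontroller network")
    return port.read(header[1])      # payload of the announced length

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=2.0) as arduino:
        status = request(arduino, command=0x01)   # hypothetical "status" command
        print("node status:", status.hex())
```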
Abstract:
This work details the different stages of the development of the Clever Socket prototype, from analysis through design and implementation.
Clever Socket is a hardware and software bundle that makes it easy to switch the electronic appliances connected to it on and off over LAN and WAN communication networks. To achieve this, the device relies on the Arduino UNO electronic development board and the Arduino WiFi Shield, which provides its wireless network connection.
The device has four socket outlets controlled through relays, allowing up to four electronic devices to be managed simultaneously.
On the software side, three components are presented:
A low-level web service for managing the Arduino board. It provides communication with the board and reading and writing of its pins.
A web service specific to the Clever Socket device. It handles communication with the hardware prototype, enabling the management of the different operations carried out on the appliances connected to its sockets.
A web application. It acts as the interface between the user and the Clever Socket web service, allowing the device to be managed. The application's design is responsive, so it renders correctly on all kinds of devices, both mobile and desktop.
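The abstract names the services but documents no endpoints; purely as an illustrative sketch, the snippet below shows how a client might toggle one of the four sockets through a hypothetical REST route of the Clever Socket web service. The host, route, and JSON shape are invented for illustration.

```python
# Illustrative client for the Clever Socket web service.
# The host, route and JSON fields are hypothetical; the abstract does not
# document the actual API, only that a device-level web service exists.
import requests

BASE_URL = "http://clever-socket.local/api"   # assumed device address

def set_socket(socket_id: int, on: bool) -> dict:
    """Ask the device web service to switch one of its four sockets."""
    if socket_id not in range(1, 5):
        raise ValueError("Clever Socket exposes sockets 1-4")
    response = requests.post(
        f"{BASE_URL}/sockets/{socket_id}",
        json={"state": "on" if on else "off"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(set_socket(1, on=True))   # switch socket 1 on
```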
Abstract:
Several activities were conducted during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned a feasibility study for an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic sources of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve the acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this feature would probably be integrated into a common mixed-signal ASIC (Application-Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment on loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which was able to receive and execute commands issued from a PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis. As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant to collect charge and to shield the analog electronics from digital noise.
The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) beam line at CERN, Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which made it possible to store about 90 million events in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips of different thicknesses were tested during the test beam. Those thinned to 100 and 300 µm presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, giving good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple scattering effect into account.
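For reference, the two relations implicit in the last sentences are the binary-readout expectation for a pixel of a given pitch and the quadratic decomposition used to extract the intrinsic width from the measured residuals (the decomposition is the standard one, not numbers quoted from the thesis):

$$ \sigma_{\text{int}} \simeq \frac{\text{pitch}}{\sqrt{12}}, \qquad \sigma_{\text{residual}}^{2} = \sigma_{\text{int}}^{2} + \sigma_{\text{telescope}}^{2} + \sigma_{\text{MS}}^{2}. $$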
Abstract:
This PhD thesis discusses the rationale for the design and use of synthetic oligosaccharides in the development of glycoconjugate vaccines, and the role of physicochemical methods in the characterization of these vaccines. The study concerns two infectious diseases that represent a serious problem for national healthcare programs: human immunodeficiency virus (HIV) and Group A Streptococcus (GAS) infections. Both pathogens possess distinctive carbohydrate structures that have been described as suitable targets for vaccine design. The Group A Streptococcus cell membrane polysaccharide (GAS-PS) is an attractive vaccine antigen candidate based on its conserved, constant expression pattern and its ability to confer immunoprotection in a relevant mouse model. Analysis of the immunogenic response within at-risk populations suggests an inverse correlation between high anti-GAS-PS antibody titres and GAS infection cases. Recent studies show that a chemically synthesized core polysaccharide-based antigen may represent an antigenic structural determinant of the large polysaccharide. Based on GAS-PS structural analysis, the study evaluates the potential of a synthetic design approach to GAS vaccine development and compares the efficiency of synthetic antigens with that of the long, isolated GAS polysaccharide. Synthetic GAS-PS structural analogues were specifically designed and generated to explore the impact of antigen length and terminal residue composition. For the HIV-1 glycoantigens, the dense glycan shield on the surface of the envelope protein gp120 was chosen as a target. This shield masks conserved protein epitopes and facilitates virus spread via binding to glycan receptors on susceptible host cells. The broadly neutralizing monoclonal antibody 2G12 binds a cluster of high-mannose oligosaccharides on the gp120 subunit of the HIV-1 Env protein. This oligomannose epitope has been a target of synthetic vaccine development. The clustered nature of the 2G12 epitope suggested that multivalent antigen presentation is important for developing a carbohydrate-based vaccine candidate. I describe the development of neoglycoconjugates displaying clustered HIV-1-related oligomannose carbohydrates and their immunogenic properties.
Abstract:
Among the various reasons for the growing pervasiveness of the Internet in many market sectors entirely unrelated to ICT, one must certainly highlight the possibility of creating communication channels through which a system can be commanded and information of any kind received from it, whatever the distance separating the controlled and the controller. In this specific case, the application context is automotive: in collaboration with the Department of Electrical Engineering of the University of Bologna, the problem addressed was that of making remotely available the large amount of data that the various subsystems of an electric car exchange with one another, both data tied to the electric propulsion itself, such as the battery charge levels or the inverter temperature, and data of a mechanical nature, such as the engine speed. The goal is to let the user (be it the designer, the repair technician or simply the owner) monitor and supervise the state of the vehicle remotely through the various phases of its life: from the tests performed on the prototype in the laboratory, to its use on the road, to ordinary and extraordinary maintenance. The approach chosen was to collect and store all the necessary data in a centralized archive reachable via the Internet. The on-board processing system must be easy to integrate, hence small, and low-cost, since the production of many vehicles must be anticipated; moreover, it has well-defined tasks known in advance. Given this situation, an embedded system was chosen, that is, an electronic processing system designed to carry out a limited number of specific functions subject to timing and/or cost constraints. Devices of this kind are called “special purpose”, as opposed to general-utility systems called “general purpose” such as personal computers, precisely because of their ability to execute an action repeatedly at contained cost, through the right trade-off between dedicated hardware and software, called “firmware” in this case. Embedded systems have undergone a profound technological evolution over time, which has taken them from simple microcontrollers capable of limited computation to complex structures able to interface with a large number of external sensors and actuators as well as with many communication technologies. In the case at hand, the choice fell on the open-source Arduino platform; it consists of a printed circuit board, the Arduino board, which integrates an Atmel microcontroller programmed through a serial interface, and natively offers numerous features, such as digital and analog inputs and outputs and support for SPI, I2C and more; furthermore, to extend its possible uses, it can be connected to external electronic boards, called shields, designed for the most diverse applications, such as electric motor control, GPS, interfacing with field buses such as CAN, and network technologies such as Ethernet, Bluetooth, ZigBee, etc.
The hardware is open-source, i.e., the schematics are freely available and usable, as are most of the software and the documentation; this has allowed the framework to spread widely, bringing numerous advantages: lower cost, multi-platform development environments, a considerable amount of documentation and, above all, continuous evolution and updating of hardware and software. It was thus possible to interface with the vehicle's control unit, picking the required messages off the CAN bus and collecting all the values to be archived. Given the considerable amount of data to process, the system was split into two separate parts: a first node, called Master, is in charge of retrieving the parameters from the car, attaching the GPS data (speed, time and position) read at the moment of acquisition, and sending everything to a second node, called Slave, which creates a communication channel over the Internet to reach the database. The names Master and Slave reflect the choice made for the communication protocol between the two Arduino nodes, namely I2C, which provides serial communication between devices through the designation of one “master” and an arbitrary number of “slaves”. Splitting the tasks between two nodes distributes the workload, with clear advantages in terms of reliability and performance. The project was covered by two Master's theses; this one deals with the Slave device and the database. Since the goal is to access the database from anywhere, the Internet, easily reachable today from most of the world, was the natural choice. This meant that the technology chosen for the database is a web server that on one side collects the data coming from the car and on the other allows them to be consulted conveniently. It too was implemented with open-source software: it is a web application written in PHP that receives the data from the Slave device as HTTP GET or POST requests and saves them, suitably formatted, in a MySQL database. This requires, however, that in order to talk to the web server the Slave node implement all the layers of the Internet protocol stack. Two different shields therefore provide the link layer, available both wired and wireless, through the implementation of the Ethernet protocol in one case and of a GPRS connection in the other. On top of this sit the TCP/IP protocols, which carry the data received from the Master device to the database as HTTP messages. The thesis describes in depth the vehicle system to be monitored and the monitoring system; the firmware implementing the Slave functions with Ethernet and with GPRS technology; the web application and the database; finally, it presents the results of the simulations and of the field tests carried out in the DIE laboratory.
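The Slave firmware runs on Arduino, but the HTTP exchange it performs is easy to prototype in any language. The sketch below (Python, using the requests package) mimics a Slave upload: one sample of CAN-derived parameters plus the GPS fields sent to a PHP endpoint as an HTTP POST. The URL and field names are assumptions for illustration, not those of the thesis.

```python
# Mimic of the Slave node's upload: one HTTP POST carrying vehicle parameters
# plus the GPS fields (speed, time, position) attached by the Master node.
# The endpoint URL and field names are hypothetical.
import requests

ENDPOINT = "http://example.org/telemetry/store.php"   # assumed PHP web app

sample = {
    "battery_soc":   87.5,    # state of charge, %
    "inverter_temp": 41.2,    # degrees C
    "motor_rpm":     3120,
    "gps_speed":     48.3,    # km/h
    "gps_time":      "2013-05-21T14:03:07Z",
    "gps_lat":       44.4949,
    "gps_lon":       11.3426,
}

response = requests.post(ENDPOINT, data=sample, timeout=10)
response.raise_for_status()   # assume the web app replies 200 on a stored row
print("stored:", response.text)
```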
Abstract:
The Eifel volcanism is part of the Central European Volcanic Province (CEVP) and is located in the Rhenish Massif, close to the Rhine and Leine Grabens. The Quaternary Eifel volcanism appears to be related to mantle plume activity. However, the causes of the Tertiary Hocheifel volcanism remain debated. We present geochronological, geochemical and isotope data to assess the geotectonic setting of the Tertiary Eifel volcanism. Based on 40Ar/39Ar dating, we were able to identify two periods of Hocheifel activity: from 43.6 to 39.0 Ma and from 37.5 to 35.0 Ma. We also show that the pre-rifting volcanism in the northernmost Upper Rhine Graben (59 to 47 Ma) closely precedes the Hocheifel volcanic activity. In addition, the volcanism propagated from south to north within the older phase of the Hocheifel activity. At the time of Hocheifel volcanism, the tectonic activity in the Hocheifel was controlled by stress field conditions identical to those of the Upper Rhine Graben. Therefore, magma generation in the Hocheifel appears to have been caused by decompression due to Middle to Late Eocene extension. Our geochemical data indicate that the Hocheifel magmas were produced by partial melting of a garnet peridotite at 75-90 km depth. We also show that crustal contamination is minor, although the magmas erupted through relatively thick continental lithosphere. Sr, Nd and Pb isotopic compositions suggest that the source of the Hocheifel magmas is a mixture of depleted FOZO- or HIMU-like material and enriched EM2-like material. The Tertiary Hocheifel and the Quaternary Eifel lavas appear to share a common enriched end-member; however, their other sources are likely distinct. In addition, the Hocheifel lavas share a depleted component with the other Tertiary CEVP lavas. Although the Tertiary Hocheifel and the Quaternary Eifel lavas appear to originate from different sources, the potential involvement of a FOZO-like component would indicate a contribution of deep mantle material. Thus, on the basis of the geochemical and isotope data, we cannot rule out the involvement of plume-type material in the Hocheifel magmas. The Ko’olau Scientific Drilling Project (KSDP) was initiated to evaluate the long-term evolution of Ko’olau volcano and to obtain information about the Hawaiian mantle plume. High-precision Pb triple-spike data, as well as Sr and Nd isotope data, on KSDP lavas and Honolulu Volcanics (HVS) reveal compositional source variations during Ko’olau growth. Pb isotopic compositions indicate that at least three Pb end-members are present in Ko’olau lavas. Changes in the contributions of each component are recorded in the Pb, Sr and Nd isotope stratigraphy. The radiogenic component is present, in variable proportions, in all three stages of Ko’olau growth. It shows affinities with the least radiogenic “Kea-lo8” lavas present in Mauna Kea. The first unradiogenic component was present in the main-shield stage of Ko’olau growth, but its contribution decreased with time. It has EM1-type characteristics and corresponds to the “Ko’olau” component of the Hawaiian mantle plume. The second unradiogenic end-member, so far only sampled by Honolulu lavas, has isotopic characteristics similar to those of a depleted mantle. However, these are different from those of the recent Pacific lithosphere (EPR MORB), indicating that the HVS are not derived from a MORB-related source. We suggest, instead, that the HVS result from melting of plume material.
Thus the evolution of a single Hawaiian volcano records the geochemical and isotopic changes within the Hawaiian plume.
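The end-member arguments above rest on the standard two-component mixing relation for isotope ratios, given here for reference (the abstract itself quotes no equations). For components $a$ and $b$ with element concentrations $X$ and isotope ratios $R$, a mass fraction $f$ of component $a$ yields

$$ R_{\text{mix}} = \frac{f\,X_a R_a + (1-f)\,X_b R_b}{f\,X_a + (1-f)\,X_b}, $$

which traces a hyperbola in ratio-ratio space whose curvature depends on the concentration contrast $X_a/X_b$.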
Abstract:
This research comprised efforts in designing, assembling, and structurally and functionally characterizing supramolecular biofunctional architectures for optical biosensing applications. In the first part of the study, a class of interfaces based on the biotin-NeutrAvidin binding matrix for the quantitative control of enzyme surface coverage and activity was developed. Genetically modified β-lactamase was chosen as a model enzyme and attached to five different types of NeutrAvidin-functionalized chip surfaces through a biotinylated spacer. All matrices are suitable for achieving a controlled enzyme surface density. Data obtained by SPR are in excellent agreement with those derived from optical waveguide measurements. Among the various protein-binding strategies investigated in this study, differences in stiffness and order between alkanethiol-based SAMs and PEGylated surfaces were found to be very important. Matrix D, based on a Nb2O5 coating, could be satisfactorily regenerated. The surface-immobilized enzymes were found to be stable and sufficiently active for a catalytic activity assay. Many factors, such as the steric crowding effect of surface-attached enzymes, the electrostatic interaction between the negatively charged substrate (Nitrocefin) and the polycationic PLL-g-PEG/PEG-Biotin polymer, mass transport effects, and enzyme orientation, are shown to influence the kinetic parameters of the catalytic analysis. Furthermore, a home-built surface plasmon resonance (SPR) spectrometer and a commercial miniature fiber-optic absorbance spectrometer (FOAS) were combined into a set-up serving as affinity and catalytic biosensor, respectively. The parallel measurements offer the opportunity for on-line activity detection of surface-attached enzymes. The immobilized enzyme does not have to be in contact with the catalytic biosensor, and the SPR chip can easily be cleaned and reused. Additionally, with regard to the application of the FOAS, the integrated SPR technique allows quantitative control of the surface density of the enzyme, which is highly relevant for the enzymatic activity. Finally, the miniaturized portable FOAS devices can easily be combined as an add-on with many other in situ interfacial detection techniques, such as optical waveguide lightmode spectroscopy (OWLS), quartz crystal microbalance (QCM) measurements, or impedance spectroscopy (IS). Surface plasmon field-enhanced fluorescence spectroscopy (SPFS) allows an absolute determination of the intrinsic rate constants describing the true parameters that control interfacial hybridization. It thus also allows a study of the differences in surface coupling between OMCVD gold particles and planar metal films, presented in the second part. The multilayer growth process was found to proceed similarly to the way it occurs on planar metal substrates. In contrast to planar bulk metal surfaces, metal colloids exhibit a narrow UV-vis absorption band. This band is observed when the incident photon frequency is resonant with the collective oscillation of the conduction electrons and is known as the localized surface plasmon resonance (LSPR). LSPR excitation results in extremely large molar extinction coefficients, which are due to a combination of absorption and scattering. When considering metal-enhanced fluorescence, we expect the absorption to cause quenching and the scattering to cause enhancement.
Our further study will focus on developing a detection platform with larger gold particles, which should display a dominant scattering component and enhance the fluorescence signal. Furthermore, the results of sequence-specific detection of DNA hybridization based on OMCVD gold particles demonstrate the excellent application potential of this cheap, simple, and mild preparation protocol for gold fabrication. In the final chapter, the conformational changes of a commercial carboxymethyl dextran (CMD) substrate induced by pH and ionic strength variations were characterized in depth using surface plasmon resonance spectroscopy. The pH response of CMD is due to the changes in the electrostatics of the system between its protonated and deprotonated forms, while the ionic strength response is attributed to the charge-screening effect of the cations, which shield the charge of the carboxyl groups and prevent efficient electrostatic repulsion. Additional studies were performed using SPFS, for which the carboxymethyl groups were labeled with a fluorophore. The CMD matrices showed the typical pH and ionic strength responses, such as swelling at high pH and at low ionic strength. Furthermore, the effects of the surface charge and the crosslink density of the CMD matrix on the extent of the stimulus responses were investigated. The swelling/collapse ratio decreased with decreasing surface concentration of carboxyl groups and with increasing crosslink density. The study of the CMD responses to external and internal variables provides valuable background information for practical applications.
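As background to the kinetic parameters mentioned above (the abstract does not spell out the rate law), catalytic assays of surface-attached enzymes such as β-lactamase acting on Nitrocefin are conventionally analyzed in the Michaelis-Menten framework, whose apparent parameters the listed factors (crowding, electrostatics, mass transport, orientation) perturb:

$$ v = \frac{V_{\max}\,[S]}{K_M + [S]}, \qquad V_{\max} = k_{\text{cat}}\,\Gamma_E, $$

with $\Gamma_E$ the surface density of active enzyme, which is exactly the quantity the SPR channel of the combined set-up quantifies.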
Abstract:
The aim of this study is to evaluate pulmonary function in subjects with a diagnosis of Turner Syndrome followed at the Syndromology Ward of the Paediatric Clinic of the S.Orsola-Malpighi hospital. There are very few data in the medical literature about lung function in patients with the Turner syndrome genotype and phenotype. Since the thorax of these subjects has a peculiar anatomical shape (described as a “shield” or “inverted triangle”), we hypothesized that they could also have peculiar respiratory function. We also looked for possible correlations between pulmonary function and estroprogestinic replacement therapy and/or growth hormone (GH) replacement therapy. Materials and methods: we studied 48 patients with a diagnosis of Turner Syndrome; all underwent voluntary spirometry and, when able, plethysmography. Results: the pulmonary function parameters are slightly above the values predicted for age and sex, but slightly below them when corrected for each patient’s ideal height and weight, so we can conclude that pulmonary function in Turner Syndrome subjects is normal; there is no statistically significant correlation between pulmonary function and GH therapy; there is no statistically significant correlation between the duration of GH therapy and pulmonary function, except for Total Lung Capacity, which increases with the number of years of GH therapy; and there is no statistically significant correlation between pulmonary function and estroprogestinic replacement therapy.
Abstract:
In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active vetoing and shielding power of LVD, the idea being to evaluate the feasibility of hosting a dark matter detector in its innermost part, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a large number of neutrons near the core. The second conclusion is that if LVD is used as an active veto for muons, the neutron flux in the LVD-CF is reduced by a factor of 50, to the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for Monte Carlo simulations of neutron production in the heavy materials that are often used as shielding in low-background experiments.
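For reference, the quoted yields follow the usual normalization of muon-induced neutron production, per muon and per unit areal density of traversed material (the abstract leaves the per-muon normalization implicit):

$$ Y_n = \frac{N_n}{N_\mu\,\rho\,\langle L \rangle} \quad \left[\,\text{n}\,/\,(\mu\ \text{g cm}^{-2})\,\right], $$

where $N_n$ is the number of neutrons produced by $N_\mu$ muons with average track length $\langle L \rangle$ in a material of density $\rho$.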
Abstract:
Atmospheric neutrinos make it possible to test principles of relativity, such as Lorentz invariance and the weak equivalence principle. In some theories, small deviations from these principles can lead to measurable neutrino oscillations. In this work, the neutrino events recorded by the AMANDA detector are searched for such alternative oscillation effects. The AMANDA neutrino telescope is located at the geographic South Pole, embedded in the Antarctic ice sheet at a depth between 1500 m and 2000 m. AMANDA detects muon neutrinos via the Cherenkov light of neutrino-induced muons, from which the direction of the original neutrino's track can be reconstructed. From the AMANDA data of the years 2000 to 2003, 3401 neutrino-induced muon events were selected out of roughly seven billion recorded events, which consist mainly of the background of atmospheric muons. This data set was examined for alternative oscillation effects. No evidence for such effects was found. For maximal mixing angles, the oscillation parameters violating Lorentz invariance or the equivalence principle were constrained to $\Delta\beta\,(2|\Phi|\Delta\gamma) < 5.15 \times 10^{-27}$.
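For orientation, in the standard parameterization used for such searches (sketched here in natural units; conventions vary between analyses), velocity-type Lorentz-invariance violation or an equivalence-principle violation produces, for mixing angle $\xi$, an oscillation phase that grows with $E \cdot L$ rather than with the $L/E$ of mass-driven oscillations:

$$ P(\nu_\mu \to \nu_\tau) = \sin^2 2\xi \; \sin^2\!\left(\frac{\Delta\beta\, E\, L}{2}\right), \qquad \Delta\beta \;\leftrightarrow\; 2|\Phi|\Delta\gamma, $$

which is why atmospheric neutrinos, with their long baselines and high energies, can constrain $\Delta\beta$ down to the $10^{-27}$ level.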
Abstract:
Ocean Island Basalts (OIB) provide important information on the chemical and physical characteristics of their mantle sources. However, the geochemical composition of a generated magma is significantly affected by partial melting and/or subsequent fractional crystallization processes. In addition, the isotopic composition of an ascending magma may be modified during transport through the oceanic crust. The influence of these different processes on the chemical and isotopic composition of OIB from two different localities, Hawaii and Tubuai in the Pacific Ocean, is investigated here. In the first chapter, the Os-isotope variations in suites of lavas from Kohala Volcano, Hawaii, are examined to constrain the role of melt/crust interactions in the evolution of these lavas. Since the sensitivity of 187Os/188Os to any radiogenic contaminant depends strongly on the Os content of the melt, Os and other PGE variations are investigated first. This study reveals that the behavior of Os and the other PGE changes during Hawaiian magma differentiation. While PGE concentrations are relatively constant in lavas with relatively primitive compositions, all PGE contents strongly decrease in the melt as it evolves past ~8% MgO. This likely reflects sulfur saturation of the Hawaiian magma and the onset of sulfide fractionation at around 8% MgO. Kohala tholeiites with more than 8% MgO and rich in Os have homogeneous 187Os/188Os values, likely representing the mantle signature of Kohala lavas. However, Os isotopic ratios become more radiogenic with decreasing MgO and Os contents in the lavas, reflecting assimilation of local crustal material during fractional crystallization. Assimilation of less than 8% upper oceanic crust could have produced the most radiogenic Os-isotope ratios recorded in the shield lavas. However, these small amounts of upper-crust assimilation have only negligible effects on Sr and Nd isotopic ratios and are therefore not responsible for the Sr and Nd isotopic heterogeneities observed in Kohala lavas. In the second chapter, fractional crystallization and partial melting processes are constrained using major and trace element variations in the same suites of lavas from Kohala Volcano, Hawaii. This inverse modeling approach allows most of the trace element composition of the Hawaiian mantle source to be estimated. The calculated initial trace element pattern shows a slight depletion in concentrations from the LREE to the most incompatible elements, which indicates that the incompatible element enrichments described by the Hawaiian melt patterns are entirely produced by partial melting processes. The “Kea trend” signature of lavas from Kohala Volcano is also confirmed, with Kohala lavas having lower Sr/Nd and La/Th ratios than lavas from Mauna Loa Volcano. Finally, the magmatic evolution of Tubuai Island is investigated in the final chapter using trace element and Sr, Nd, Hf isotopic variations in mafic lava suites. The Sr, Nd and Hf isotopic data are homogeneous, typical of HIMU-type OIB, and confirm the cogenetic nature of the different mafic lavas from Tubuai Island. The trace element patterns show progressive enrichment of incompatible trace elements with increasing alkali content in the lavas, reflecting a progressive decrease in the degree of partial melting towards the later volcanic events.
In addition, this enrichment of incompatible trace elements is associated with a relative depletion of Rb, Ba, K, Nb, Ta and Ti in the lavas, which requires the presence of a small amount of residual phlogopite and of a Ti-bearing phase (ilmenite or rutile) during the formation of the younger analcitic and nephelinitic magmas.
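The inverse modeling mentioned for Kohala rests on standard melting relations; for orientation (the abstract quotes no equations), the batch-melting equation links the concentration of a trace element in the melt, $C_L$, to that in the source, $C_0$, through the melt fraction $F$ and the bulk partition coefficient $D$:

$$ \frac{C_L}{C_0} = \frac{1}{D + F\,(1 - D)}. $$

Inverting measured $C_L$ across lava suites with varying $F$ then constrains $C_0$, i.e., the mantle-source pattern.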
Abstract:
Aim: Previous studies revealed that diversification events in the western clade of the alpine Primula sect. Auricula were concentrated in the Quaternary cold periods. This implies that allopatric speciation in isolated glacial refugia was the most common mode of speciation. In the first part of the present dissertation, this hypothesis is further investigated by locating the refugial areas of two sister species, Primula marginata and P. latifolia, during the Last Glacial Maximum, 21,000 years ago. In the second part, the glacial and postglacial history of P. hirsuta and P. daonensis is investigated. Location: European Alps. Methods: Glacial refugia were located using species distribution models (SDMs) projected onto Last Glacial Maximum climate. These refugia are validated with geographic distribution patterns of intra-specific genetic diversity, rarity and variation. Results 1) Speciation: The glacial refugia of the sister taxa Primula marginata and P. latifolia were largely separated; only a small overlap zone exists at the southern margin of the former glacier in the Maritime Alps. This overlap zone is too small to indicate sympatric speciation. The largely separated glacial distributions of the two species rather confirm our hypothesis of allopatric speciation in isolated glacial refugia. Results 2) Glacial and postglacial history: Surprisingly, the modelled potential refugia of three out of four Primula species are situated within the former ice sheet; P. marginata is the exception. This indicates that peripheral and central nunataks played an important role in the glacial survival of P. latifolia, P. hirsuta and P. daonensis, while peripheral refugia outside the maximum extent of the glacier were crucial for P. marginata. For P. hirsuta and P. latifolia, the SDMs made it possible to exclude several hypothetical refugial areas that overlap with today's distribution as potential refugia for the species. For P. marginata, hypothetical refugial areas at the periphery of the former ice sheet that overlap with today's distribution were confirmed by the models. The results from the SDMs are confirmed by population genetic patterns in three out of four species. P. daonensis represents an exception, where the population genetic data contradict the SDMs. Main conclusions: Species distribution models provide species-specific scenarios of glacial distribution and postglacial re-colonization, which can be validated using population genetic analyses. This combined approach is useful and helps in understanding the complex processes that have led to the genetic and floristic patterns of biodiversity found today in the Alps.
Abstract:
Within the study area of Schleswig-Holstein, 39,712 topographic depressions were detected, using ESRI ArcMap 9.3 and 10.0. Data preparation was followed by further calculations in MATLAB R2010b. Each object was spatially intersected with its individual properties, including area, perimeter, coordinates (centroids), depth and maximum depth of the depression, and shape factors such as roundness, convexity and elongation. The methods presented aimed to answer three questions: Are negative landforms suitable for distinguishing and identifying landscape units and ice advances? Is there a link between depressions in the recent topography and deep geological structures? Can sinks of different origin be distinguished by their shape characteristics? The classification of the major landscape units is based on the assumption that young moraine areas, their outwash plains and old moraine areas can each be delimited by characteristic closed depressions such as dead-ice kettles, lakes, etc. Normally such depressions are rather rare in nature, but they are considered typical of former glacial landscapes. The aim was to differentiate the main geological units, ice advances and moraine areas of the last glaciations. A detection grid based on square cells was used for the analysis. The results show that using depressions alone to classify landscape units achieves overall accuracies of up to 71.4%, meaning that roughly three out of four detection cells can be assigned correctly. Young moraines, old moraines, periglacial outwash plains and Holocene areas can be distinguished from one another and correctly assigned with high confidence using the depressions. This shows that certain sink shapes are indeed typical of the respective units. The sinks detected in the first step were then spatially intersected with further geological information to investigate to what extent natural depressions are of purely glacial origin or whether their expression is also connected to deep geological structures. 25,349 (63.88%) of all sinks are smaller than 10,000 m², lie in young moraine areas, and can presumably be attributed to glacial and periglacial influences. 2,424 depressions lie within the areas of subglacial channels. 1,529 detected depressions lie within subsidence areas, of which 1,033 are located within the marshlands in the west. 919 large structures over 1 km in size along the North Sea can, among other things, be matched particularly well with compaction zones of Elsterian channels. 344 of these depressions are also associated with tunnel valleys in the subsurface. This parallelism of depressions and the tunnel valleys, which are in places more than 100 m deep, can be attributed to sediment compaction. A connection with the decomposition of postglacial organic material is also conceivable. In addition, negative landforms were detected within a distance of 10 km around the Miocene-active flanks of the Glückstadt Graben that show connections to near-surface fault structures. This is an indication of graben activity during and towards the end of the glaciation and during the Holocene. Many of these fault-related sinks are also associated with tunnel valleys.
Accordingly, three interacting processes are identified that can be linked to the formation of the depressions. One possible interpretation is that the eastern flank of the Glückstadt Graben responded to the load of the Elsterian ice sheet while, at the same time, subglacial drainage channels formed along the zones of weakness. During the warm stages these were largely filled with peat and unconsolidated sediments. The glacier advances of the late Weichselian reactivated the flanks, and the loose material was additionally excavated by glacial erosion, creating large lakes such as the Großer Plöner See. In total, 29 large depressions of 5 km or more were identified in Schleswig-Holstein that are at least partly connected to basin subsidence and activity of the graben flanks, or even originate from them. The final sub-study dealt with differentiating sinks by their potential genesis and with distinguishing natural from artificial depressions. For this purpose, a DEM covering a total area of 252 km² in the north of Lower Saxony was used. The results show that depressions of glacial origin have good roundness values, and their elongation and eccentricity also indicate rather compact shapes. Linear negative structures are often rivers or oxbows; they can be identified as Holocene structures. In contrast to the potentially natural sink shapes, artificially created depressions are rather angular or irregular and mostly do not tend towards compact shapes. Three main classes of topographic depressions could be identified and delimited from one another: potentially glacial sinks (dead-ice forms), rivers, side channels and oxbows, and artificial sinks. Classifying sinks by shape parameters is a useful instrument for distinguishing the different types and for excluding artificial sinks before processing in geological investigations. However, the results turned out to depend substantially on the resolution of the elevation model used.
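The shape factors named above are simple functions of area, perimeter and fitted axes. As a minimal sketch (the thesis used MATLAB R2010b; the formula conventions, such as roundness $4\pi A/P^2$, are the common ones and are assumed here rather than quoted from the text):

```python
# Common shape factors for a detected depression (closed polygon).
# Conventions assumed: roundness = 4*pi*A/P^2 (1.0 for a circle),
# convexity = convex-hull perimeter / perimeter (<= 1.0),
# elongation = minor axis / major axis of the bounding box (<= 1.0).
import math

def shape_factors(area: float, perimeter: float,
                  hull_perimeter: float,
                  bbox_width: float, bbox_height: float) -> dict:
    major, minor = max(bbox_width, bbox_height), min(bbox_width, bbox_height)
    return {
        "roundness": 4.0 * math.pi * area / perimeter ** 2,
        "convexity": hull_perimeter / perimeter,
        "elongation": minor / major,
    }

# A compact, near-circular kettle hole scores high on all three metrics;
# an angular artificial pit or a linear oxbow scores markedly lower.
print(shape_factors(area=7850.0, perimeter=320.0,
                    hull_perimeter=315.0, bbox_width=105.0, bbox_height=98.0))
```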