947 results for Compressed pile
Abstract:
[EN] The effectiveness and accuracy of the superposition method in assessing the dynamic stiffness and damping functions of embedded footings supported by vertical piles in homogeneous viscoelastic soil are addressed. To this end, the impedances of piled embedded footings are compared to those obtained by superposing the impedance functions of the corresponding pile groups and embedded footings treated separately.
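The superposition method compared above can be sketched numerically: the impedance (complex dynamic stiffness) of the full piled embedded footing is approximated by summing the separately computed impedances of the pile group and of the embedded footing. The impedance functions and coefficients below are hypothetical placeholders for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical impedance functions Z(omega) = k(omega) + i*omega*c(omega)
# for a pile group and an embedded footing treated separately.
def pile_group_impedance(omega):
    return (5.0e8 - 2.0e6 * omega**2) + 1j * omega * 4.0e6

def embedded_footing_impedance(omega):
    return (3.0e8 - 1.0e6 * omega**2) + 1j * omega * 2.5e6

omegas = np.linspace(0.0, 50.0, 6)  # excitation frequencies in rad/s
z_super = pile_group_impedance(omegas) + embedded_footing_impedance(omegas)

for w, z in zip(omegas, z_super):
    # Real part: dynamic stiffness; imaginary part / omega: damping coefficient.
    print(f"omega={w:5.1f}  K={z.real:.3e}  C={(z.imag / w) if w else float('nan'):.3e}")
```

The superposed functions would then be compared frequency by frequency against the impedances computed for the complete foundation.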
Abstract:
[EN] Different phenomena, such as soil consolidation, erosion, and scour beneath an embedded footing supported on piles, may lead to loss of contact between the soil and the underside of the pile cap. The importance of this separation for the dynamic stiffness and damping of the foundation is assessed in this work.
Abstract:
[EN] In this work, stiffness and damping functions of pile foundations with inclined end-bearing piles have been computed for square 2x2 and 3x3 pile groups embedded in a soft stratum overlying a rigid bedrock. The paper also investigates the influence that the assumptions of a perfectly rigid bedrock and of fixed boundary conditions at the pile tips have on the impedance functions.
Abstract:
Salt deposits characterize the subsurface of Tuzla (BiH) and have made it famous since ancient times. Archaeological discoveries demonstrate the presence of a Neolithic pile-dwelling settlement related to the existence of saltwater springs, which made much of the area a swampy ground. Since Roman times the town has been reported as "the City of Salt Deposits and Springs"; "tuz" is the Turkish word for salt, the name the Ottomans gave the settlement in the 15th century following their conquest of medieval Bosnia (Donia and Fine, 1994). Natural brine springs were found everywhere, and salt was evaporated by means of hot charcoals since pre-Roman times. This ancient use of salt was a small-scale exploitation compared to the massive salt production carried out during the 20th century by classical mining methods and especially by wild brine pumping. In the past, salt extraction was practised by tapping natural brine springs, while the modern technique relies on about 100 boreholes with pumps tapped into the natural underground brine runs at an average depth of 400-500 m. The mining operation changed the hydrogeological conditions, enabling the downward flow of fresh water and causing additional salt dissolution. This process has induced severe ground subsidence over the last 60 years, reaching up to 10 meters of sinking in the most affected area. Stress and strain in the overlying rocks induced the formation of numerous fractures over a conspicuous area (3 km²). Consequently, serious damage occurred to buildings and to infrastructure such as the water supply system, sewage networks and power lines. Downtown urban life was compromised by the destruction of more than 2000 buildings that collapsed or had to be demolished, causing the resettlement of about 15,000 inhabitants (Tatić, 1979).
Recently, salt extraction activities have been strongly reduced, but the underground water system is returning to its natural conditions, threatening to flood the most collapsed area. Over the last 60 years the local government developed a monitoring system for the phenomenon, collecting extensive data: geodetic measurements, amounts of brine pumped, piezometry, lithostratigraphy, the extent of the salt body and geotechnical parameters. A database was created within a scientific cooperation between the municipality of Tuzla and the city of Rotterdam (D.O.O. Mining Institute Tuzla, 2000). The scientific investigation presented in this dissertation has been financially supported by a cooperation project between the Municipality of Tuzla, the University of Bologna (CIRSA) and the Province of Ravenna. The University of Tuzla (RGGF) gave important scientific support, in particular on the geological and hydrogeological features. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are well understood in only a few areas (Gutierrez et al., 2008). The subject of this study is the collapse phenomenon occurring in the Tuzla area, with the aim of identifying and quantifying the several factors involved in the system and their correlations. The Tuzla subsidence phenomenon can be defined as a geohazard, which represents the consequence of an adverse combination of geological processes and ground conditions, precipitated by human activity, with the potential to cause harm (Rosenbaum and Culshaw, 2003). Where a hazard induces a risk to a vulnerable element, a risk management process is required. The individual factors involved in the subsidence of Tuzla can be considered as hazards. The final objective of this dissertation is a preliminary risk assessment procedure and guidelines, developed in order to quantify building vulnerability in relation to the overall geohazard that affects the town.
The available historical database, never fully processed before, has been analyzed by means of geographic information systems and mathematical interpolators (PART I). Modern geomatic applications have then been implemented to investigate the most relevant hazards in depth (PART II). In order to monitor and quantify the actual subsidence rates, geodetic GPS technologies were deployed and four survey campaigns were carried out, one per year. The subsidence-related fracture system was identified by means of field surveys and a mathematical interpretation of the sinking surface called curvature analysis. The comparison of mapped and predicted fractures led to a better comprehension of the problem. Results confirmed the reliability of fracture identification using curvature analysis applied to sinking data instead of topographic or seismic data. The evolution of urban changes was reconstructed by analyzing topographic maps and satellite imagery, identifying the most damaged areas. This part of the investigation was very important for the quantification of building vulnerability.
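The curvature analysis mentioned above, which locates likely fracture zones from the interpolated sinking surface, can be illustrated with a minimal sketch: compute second derivatives of a gridded subsidence surface and flag cells where the curvature magnitude is large. The synthetic bowl-shaped surface, grid and threshold are illustrative assumptions, not the thesis data.

```python
import numpy as np

# Synthetic subsidence surface: a smooth sinking bowl (metres) on a 100x100 grid.
x = np.linspace(-1.0, 1.0, 100)
X, Y = np.meshgrid(x, x)
subsidence = -10.0 * np.exp(-(X**2 + Y**2) / 0.2)  # up to ~10 m of sinking

# Second-order finite differences approximate the curvature of the surface.
g0, g1 = np.gradient(subsidence, x, x)   # first derivatives along both axes
g00, _ = np.gradient(g0, x, x)
_, g11 = np.gradient(g1, x, x)
laplacian = g00 + g11                    # a simple curvature proxy

# Flag cells whose curvature magnitude exceeds an (arbitrary) threshold:
# strongly curved zones are where tensile strain, hence fracturing, concentrates.
fracture_prone = np.abs(laplacian) > 0.5 * np.abs(laplacian).max()
print("flagged cells:", int(fracture_prone.sum()))
```

In the actual study the input grid would be the interpolated subsidence surface from the geodetic campaigns, and the flagged zones would be compared with fractures mapped in the field.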
Abstract:
The focus of the present work was the synthesis of conjugated oligomers and polymers of the phenylenevinylene type bearing electron-acceptor substituents, as well as the preparation of oligo(phenylenevinylene)s with reactive alkoxysilyl groups, which can be converted by hydrolysis and polycondensation into amorphous, film-forming materials with defined chromophores. The construction of oligo(phenylenevinylene)s (OPVs) and poly(phenylenevinylene)s (PPVs) with electron acceptors on the aromatic cores was carried out via the Heck reaction of substituted divinylaromatics with dibromoaromatics. To this end, a simple synthesis of divinylaromatics with electron-acceptor substituents was developed, based on the twofold vinylation of 1,4-dibromoaromatics with ethene at elevated pressure. OPVs have proven themselves as emitters in light-emitting diodes (LEDs); a central problem in the use of well-defined low-molecular-weight compounds is their tendency to crystallize. One strategy applied here to suppress recrystallization involves linking stilbenoid chromophores via a common silicon atom into three-dimensional compounds. Alternatively, linking defined chromophores with alkoxysilanes yields monomers that can be used to build comb polymers with a polysiloxane main chain, or siloxane networks, in order to obtain amorphous, film-forming materials. The tetrakis-OPV-silanes were prepared via Horner olefination of stilbenoid aldehydes with a tetrahedral phosphonate ester with a central Si atom. Stilbenoid chromophores were linked with alkoxysilanes to give polycondensable monomers via Heck reactions or cross-metathesis reactions. Linkage via flexible spacers was achieved by condensing oligostyrylbenzaldehydes with aminopropylethoxysilanes to Schiff bases and reducing these with cyanoborohydride to secondary amines.
The chromophores, OPVs or diaryloxadiazoles, bearing silicic acid ester groups can be converted by acidic hydrolysis and condensation into readily soluble, fluorescent oligomers, which can either be polymerized by ring opening or crosslinked into insoluble films.
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is deployed into the environment, the Wireless Sensor Network (WSN) is one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes that can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference on the physical phenomena sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, with the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must, however, be carefully minimized. Imaging can play an important role in sensing devices for ambient intelligence. Computer vision can, for instance, be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research.
In this thesis we pay attention to the realities of hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of the sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor, to reduce node intrusiveness. • Low power consumption, to reduce battery size and extend node lifetime. • Low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, with which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA+microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented. Featuring such intelligence, these nodes are able to cope with tasks such as recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can even enable continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited to WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller, Model Predictive Control (MPC), to improve system performance, outperforming naive power management policies.
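The energy-level-dependent trigger described above can be sketched as a simple policy: the PIR wakes the camera only when motion is detected, and the stored energy gates how aggressively the node responds. The thresholds and the node model below are illustrative assumptions, not the actual firmware or the MPC controller used in the thesis.

```python
from dataclasses import dataclass

@dataclass
class NodeState:
    battery_level: float  # 0.0 (empty) .. 1.0 (full)

def on_pir_event(state: NodeState, motion: bool) -> str:
    """Decide what to power up when the PIR fires, depending on stored energy.

    A minimal sketch of an energy-level-dependent trigger: the camera and
    radio are only activated when the battery can sustain them.
    """
    if not motion:
        return "sleep"
    if state.battery_level > 0.6:
        return "camera+radio"   # full processing and transmission
    if state.battery_level > 0.3:
        return "camera_only"    # capture and process locally, defer radio
    return "log_event"          # too little energy: just timestamp the event

print(on_pir_event(NodeState(0.8), motion=True))  # camera+radio
```

An MPC-based controller would replace the fixed thresholds with a policy that predicts future solar harvest and optimizes duty cycles over a receding horizon.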
Abstract:
The present work deals mainly with detection problems that arose in experiments on the chemistry of the transactinides with the fast liquid-liquid extraction system SISAK. In these experiments, liquid scintillation counting (LSC) is used as the detection method. Scintillation pulses are registered whose shapes are characteristic of the particle that caused them and must be distinguished. Using the analysis of the SISAK experiment on the chemistry of rutherfordium from November 1998 as an example, it was shown that conventional pulse-shape discrimination methods cannot filter out the alpha events originating from the decay of the transactinides. The reason is a high background caused primarily by beta/gamma particles, fission fragments and pile-ups. The availability of transient recorders opens up new possibilities for digital pulse-shape discrimination. In this work, the method of digital pulse-shape discrimination with artificial neural networks (PSD-NN) is presented for the first time. In the course of the analysis of the SISAK experiment of February 2000 it was shown that neural networks are able to classify pulse shapes correctly and automatically, yielding nearly background-free alpha liquid scintillation spectra. Advantages and disadvantages of the new method are discussed. It has thus become possible to characterize transactinide atoms unambiguously in SISAK experiments on the basis of their decay. The SISAK system can therefore be used in experiments studying the chemical behaviour of transactinides in the liquid phase.
Abstract:
The demand for hyperpolarized 3He in medicine and fundamental physics research has grown steadily over the last 10-15 years, both with regard to the available quantity and to the required degree of nuclear spin polarization. At the same time, solutions had to be found for polarization-preserving storage and transport, adapted to each application. As a result, this work presents a self-contained overall concept that can provide both the quantities required for clinical applications and the highest polarization for fundamental physics research. Several independent polarimetry methods gave mutually consistent results and, besides being further developed themselves, could be used for a reliable characterization of the new system as well as of the transport cells and boxes. The polarization is produced by metastability-exchange optical pumping at a pressure of 1 mbar. Without gas flow, values of P = 84% are reached; under flow, the achievable polarization drops to P ≈ 77%. The 3He can then be compressed to several bar largely without polarization losses and transported to the respective experiments. Through consistent further development of almost all components of the polarization unit presented here, a polarization of Pmax = 77% at the outlet of the apparatus can now be achieved at a flux of 0.8 bar·l/h. This scales linearly with the flux, so that at 3 bar·l/h the polarization is still about 60%. The improvements to the lasers, the optics, the compression unit, the intermediate storage cell and the gas purification carried out within this work were essential for reaching these polarizations. Besides the use of a new fiber laser system, the high gas purity and the long-lived compression unit are keys to this performance.
Since autumn 2001 the system has produced more than 2000 bar·l of highly polarized 3He, enabling numerous interdisciplinary experiments and investigations. Thanks to improvements to the transport boxes, previously available as prototypes, and to the extensive suppression of wall relaxation in the transport vessels based on new insights into its causes, polarization-preserving transport over large distances no longer poses a problem. In uncoated 1-litre flasks made of aluminosilicate glasses, storage times of T1 > 200 h are now routinely achieved. Within the European research project "Polarized Helium to Image the Lung", 70 bar·l of 3He were transported by plane to Sheffield (UK) in 19 deliveries and 100 bar·l to Copenhagen (DK) in 13 transports. In summary, it could be shown that the problems of producing 3He nuclear spin polarization, and of storing, transporting and using the polarized gas in clinical diagnostics and fundamental physics experiments, have largely been solved, and that the overall concept has created the preconditions for general applications in these fields.
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had been considered unfeasible before. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well.
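The idea of compressed storage for triangular matrices can be illustrated with the classical packed layout: an n x n lower-triangular matrix is stored in a flat array of n(n+1)/2 entries instead of n². The row-major packed indexing below is the standard textbook scheme, shown for illustration only; the thesis's GPU-friendly layout differs in its details.

```python
import numpy as np

def pack_lower(a: np.ndarray) -> np.ndarray:
    """Pack an n x n lower-triangular matrix into n*(n+1)//2 entries (row-major)."""
    n = a.shape[0]
    return a[np.tril_indices(n)]

def packed_index(i: int, j: int) -> int:
    """Flat index of element (i, j), with i >= j, in the row-major packed layout."""
    return i * (i + 1) // 2 + j

n = 4
L = np.tril(np.arange(1.0, n * n + 1).reshape(n, n))
packed = pack_lower(L)

assert packed.size == n * (n + 1) // 2
for i in range(n):
    for j in range(i + 1):
        assert packed[packed_index(i, j)] == L[i, j]
print("packed size:", packed.size)  # 10 entries instead of 16
```

Packed layouts halve memory traffic for triangular kernels; a GPU-friendly variant additionally arranges the entries so that neighbouring threads access coalesced memory.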
Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to the acceleration, it can meet real-time requirements with unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
Abstract:
In the present study, pterosaur skull constructions were analysed using a combined approach of finite element analysis (FEA) and static investigations, as well as by applying classical beam theory and lever mechanics. The study concentrates on the operating regime "bite", in which loads are distributed via the dentition or a keratinous rhamphotheca into the skull during jaw occlusion. As a first step, pterosaur tooth constructions were analysed. The different morphologies of the tooth constructions determine specific operational ranges in which the teeth perform best (i.e. offer the greatest resistance against failure). The incomplete enamel covering of pterosaur tooth constructions thereby leads to a reduction of strain and stress and to a greater lateral elasticity than a complete enamel cover would, permitting the development of tall, laterally compressed tooth constructions. Further stress absorption occurs in the periodontal membrane, although its mechanical properties cannot be clarified unambiguously. A three-dimensionally preserved skull of Anhanguera was chosen as a case study for the investigation of the skull constructions. CT scans were made to obtain information about the internal architecture, supplemented by thin sections of a rostrum of a second Anhanguera specimen. These showed that the rostrum can be approximated as a double-walled triangular tube with a large central vacuity and an average wall thickness of the bony layers of about 1 mm. On the basis of the CT scans, a stereolithograph of the skull of Anhanguera was made, on which the jaw adductor and abductor muscles were modelled, permitting muscle forces to be determined. The values were used for the lever mechanics, cantilever and space frame analyses. These studies and the FEA show that the jaw reaction forces are critical for the stability of the skull construction. The large jugal area ventral to the orbita and the inclined occipital region act as buttresses against these loads.
In contrast to the orbitotemporal region, which is subject to varying loading conditions, the loading pattern in the rostrum is less complex: here, mainly bending in the dorsal direction and torsion occur. The hollow rostrum leads to a reduction of skull weight while retaining high bending and torsional resistance. As for the Anhanguera skull construction, the skulls of those pterosaur taxa from which enough skull material is known to permit a reliable reconstruction were analysed, and FEA was carried out for five selected taxa. The comparison of the biomechanical behaviour of the different skull constructions reveals major transformational processes: elongation of the rostra, inclination of the occipital region, variation of tooth morphology, reduction of the dentition and replacement of teeth by a keratinous hook or rhamphotheca, fusion of the naris and antorbital fenestra, and the development of bony and soft-tissue crests. These processes are discussed with regard to their biomechanical effects during biting. Certain optional operational ranges for feeding are assigned to the different skull constructions, and previous hypotheses (e.g. skimming) are verified. Using the principle of economisation, these processes help to establish irreversible transformations and to define possible evolutionary pathways. The resulting constructional levels and the structural variations within these levels are interpreted in light of greater feeding efficiency and a reduction of bony mass combined with increased stability against the various loads. The biomechanically conclusive pathways are used for comparison with, and verification of, recent hypotheses on the phylogenetic systematics of pterosaurs.
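The trade-off behind the hollow rostrum, weight reduction at little cost in bending resistance, can be illustrated with a hedged beam-theory sketch. For simplicity a thin-walled circular tube is used here instead of the double-walled triangular section reconstructed for Anhanguera, and all dimensions are illustrative, not measured values.

```python
import math

def circular_section(r_out: float, r_in: float = 0.0):
    """Second moment of area and cross-sectional area of a (hollow) circle."""
    second_moment = math.pi * (r_out**4 - r_in**4) / 4.0
    area = math.pi * (r_out**2 - r_in**2)
    return second_moment, area

# Solid section vs hollow section with inner radius at 80% of the outer radius.
r = 0.02  # outer radius in metres (assumed)
I_solid, A_solid = circular_section(r)
I_hollow, A_hollow = circular_section(r, 0.8 * r)

# Bending stiffness scales with I, mass with cross-sectional area A.
print(f"stiffness retained: {I_hollow / I_solid:.0%}")  # ~59%
print(f"mass retained:      {A_hollow / A_solid:.0%}")  # ~36%
```

Because I depends on the fourth power of the radius while area depends on the square, hollowing out the rostrum sheds most of the mass while keeping most of the bending resistance, which is the effect the cantilever analysis quantifies.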
Abstract:
The purpose of this thesis is to investigate the strength and structure of the magnetized medium surrounding radio galaxies via observations of the Faraday effect. This study is based on an analysis of the polarization properties of radio galaxies selected to have a range of morphologies (elongated tails, or lobes with small axial ratios) and to be located in a variety of environments (from rich cluster cores to small groups). The targets include famous objects like M84 and M87. A key aspect of this work is the combination of accurate radio imaging with high-quality X-ray data for the gas surrounding the sources. Although the focus of this thesis is primarily observational, I developed analytical models and performed two- and three-dimensional numerical simulations of magnetic fields. The steps of the thesis are: (a) to analyze new and archival observations of Faraday rotation measure (RM) across radio galaxies and (b) to interpret these and existing RM images using sophisticated two- and three-dimensional Monte Carlo simulations. The approach has been to select a few bright, very extended and highly polarized radio galaxies. This is essential to obtain high signal-to-noise in polarization over large enough areas to allow the computation of spatial statistics such as the structure function (and hence the power spectrum) of the rotation measure, which requires a large number of independent measurements. New and archival Very Large Array observations of the target sources have been analyzed in combination with high-quality X-ray data from the Chandra, XMM-Newton and ROSAT satellites. The work has been carried out by making use of: 1) Analytical predictions of the RM structure functions to quantify the RM statistics and to constrain the power spectra of the RM and magnetic field. 2) Two-dimensional Monte Carlo simulations to address the effect of incomplete sampling of the RM distribution and so determine errors for the power spectra.
3) Methods to combine measurements of RM and depolarization in order to constrain the magnetic-field power spectrum on small scales. 4) Three-dimensional models of the group/cluster environments, including different magnetic field power spectra and gas density distributions. This thesis has shown that the magnetized medium surrounding radio galaxies appears more complicated than was apparent from earlier work. Three distinct types of magnetic-field structure are identified: an isotropic component with large-scale fluctuations, plausibly associated with the intergalactic medium not affected by the presence of a radio source; a well-ordered field draped around the front ends of the radio lobes and a field with small-scale fluctuations in rims of compressed gas surrounding the inner lobes, perhaps associated with a mixing layer.
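The structure function used throughout this analysis can be sketched for a two-dimensional RM map: S(r) = <[RM(x) - RM(x + r)]²>, averaged over all pixel pairs at separation r. The synthetic Gaussian map below is a stand-in for real RM data, and the averaging is restricted to horizontal and vertical lags for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
rm_map = rng.normal(0.0, 50.0, size=(64, 64))  # synthetic RM map in rad/m^2

def structure_function(rm: np.ndarray, max_lag: int) -> np.ndarray:
    """Second-order structure function S(r) = <[RM(x) - RM(x + r)]^2>,
    averaged here over horizontal and vertical lags only."""
    s = np.empty(max_lag)
    for r in range(1, max_lag + 1):
        dx = rm[:, r:] - rm[:, :-r]   # horizontal pixel pairs at lag r
        dy = rm[r:, :] - rm[:-r, :]   # vertical pixel pairs at lag r
        s[r - 1] = 0.5 * (np.mean(dx**2) + np.mean(dy**2))
    return s

s = structure_function(rm_map, max_lag=10)
# For uncorrelated noise of variance sigma^2, S(r) ~ 2 sigma^2 at all lags.
print(s / (2 * 50.0**2))
```

For a real map, the shape of S(r) versus lag (rather than the flat profile of pure noise) is what constrains the RM, and hence magnetic-field, power spectrum.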
Abstract:
The focus of this thesis was the in-situ application of the new analytical technique "GCxGC" in both the marine and continental boundary layer, as well as in the free troposphere. Biogenic and anthropogenic VOCs were analysed and used to characterise local chemistry at the individual measurement sites. The first part of the thesis work was the characterisation of a new set of columns that was to be used later in the field. To simplify identification, a time-of-flight mass spectrometer (TOF-MS) detector was coupled to the GCxGC. In the field the TOF-MS was replaced by a more robust and tractable flame ionisation detector (FID), which is more suitable for quantitative measurements. During this process, a variety of volatile organic compounds could be assigned to different environmental sources, e.g. plankton sources, eucalyptus forest or urban centers. In-situ measurements of biogenic and anthropogenic VOCs were conducted at the Meteorological Observatory Hohenpeissenberg (MOHP), Germany, applying a thermodesorption-GCxGC-FID system. The measured VOCs were compared to GC-MS measurements routinely conducted at the MOHP as well as to PTR-MS measurements. Furthermore, a compressed ambient air standard was measured on three different gas chromatographic instruments and the results were compared. With few exceptions, both the in-situ and the standard measurements revealed good agreement between the individual instruments. Diurnal cycles were observed, with differing patterns for the biogenic and the anthropogenic compounds. The variability-lifetime relationship of compounds with atmospheric lifetimes from a few hours to a few days in the presence of O3 and OH was examined. It revealed a weak but significant influence of chemistry on these short-lived VOCs at the site.
The relationship was also used to estimate the average OH radical concentration during the campaign, which was compared to in-situ OH measurements (1.7 x 10^6 molecules/cm^3, 0.071 ppt) for the first time. The OH concentrations obtained with this method, ranging from 3.5 to 6.5 x 10^5 molecules/cm^3 (0.015 to 0.027 ppt), represent an approximation of the average OH concentration influencing the discussed VOCs from emission to measurement. Based on these findings, the average concentration of nighttime NO3 radicals was estimated using the same approach and found to range from 2.2 to 5.0 x 10^8 molecules/cm^3 (9.2 to 21.0 ppt). During the MINATROC field campaign, in-situ ambient air measurements with the GCxGC-FID were conducted on Tenerife, Spain. Although the station is mainly situated in the free troposphere, local influences of anthropogenic and biogenic VOCs were observed. A strong dust event originating from Western Africa made it possible to compare the mixing ratios under normal and elevated dust loading in the atmosphere. The mixing ratios during the dust event were found to be lower. However, this could not be attributed to heterogeneous reactions, as the wind direction changed from northwesterly to southeasterly during the dust event.
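As a sanity check on the quoted values, the conversion between a radical number density and a mixing ratio in ppt can be sketched in a few lines of Python. The pressure and temperature below are assumptions chosen for a station near 1000 m a.s.l. (as MOHP is), not values taken from the thesis.

```python
K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def molecules_cm3_to_ppt(n_cm3, pressure_pa=97_700.0, temp_k=296.0):
    """Convert a number density (molecules/cm^3) to a mixing ratio in ppt.

    pressure_pa and temp_k are assumed ambient conditions for a mountain
    station near 1000 m a.s.l.; they are not parameters from the thesis.
    """
    # Total air number density from the ideal gas law, per cm^3
    n_air_cm3 = pressure_pa / (K_BOLTZMANN * temp_k) * 1e-6
    return n_cm3 / n_air_cm3 * 1e12

# The in-situ OH value quoted above: 1.7e6 molecules/cm^3 ~ 0.071 ppt
print(round(molecules_cm3_to_ppt(1.7e6), 3))
# The lower NO3 estimate: 2.2e8 molecules/cm^3 ~ 9.2 ppt
print(round(molecules_cm3_to_ppt(2.2e8), 1))
```

Under these assumed conditions the conversion reproduces the pairs of values given in the abstract to within rounding.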
Resumo:
In this thesis, we investigated the evaporation of sessile microdroplets on different solid substrates. Three major aspects were studied: the influence of surface hydrophilicity and heterogeneity on the evaporation dynamics for an insoluble solid substrate, the influence of external process parameters and intrinsic material properties on the microstructuring of soluble polymer substrates, and the influence of an increased area-to-volume ratio in a microfluidic capillary, when evaporation is hindered. In the first part, the evaporation dynamics of pure sessile water drops on smooth self-assembled monolayers (SAMs) of thiols or disulfides on gold on mica was studied. With increasing surface hydrophilicity the drop stayed pinned longer. Thus, the total evaporation time of a given initial drop volume was shorter, since the drop surface, through which the evaporation occurs, remains large for longer. Usually, for a single drop the volume decreased linearly with t^1.5, t being the evaporation time, as expected for a diffusion-controlled evaporation process. However, when we measured the total evaporation time, t_tot, for multiple droplets with different initial volumes, V0, we found a scaling of the form V0 = a·t_tot^b. The more hydrophilic the substrate, the more the scaling exponent b tended toward increased values, up to 1.6. This can be attributed to an increasing evaporation rate through a thin water layer in the vicinity of the drop. Simulations performed by F. Schönfeld at the IMM, Mainz, assuming a constant temperature at the substrate surface, excluded cooling of the droplet, and thus a decreased evaporation rate, as a reason for the different scaling exponent. In contrast, for a hairy surface, made of dialkyldisulfide SAMs with different chain lengths and a 1:1 mixture of hydrophilic and hydrophobic end groups (hydroxy versus methyl), the scaling exponent was found to be ~ 1.4. It increased to ~ 1.5 with increasing hydrophilicity.
A reason for this observation can only be speculated upon: in the case of longer hydrophobic alkyl chains, the formation of an air layer between substrate and drop might be favorable. Thus, the heat transport to the substrate might be reduced, leading to stronger cooling and thus a decreased evaporation rate. In the second part, the microstructuring of polystyrene surfaces by drops of toluene, a good solvent, was investigated. For this, a novel deposition technique was developed in which the drop is deposited with a syringe. The polymer substrate lies on a motorized table, which picks up the pendant drop by an upward motion until a liquid bridge is formed. A subsequent downward motion of the table after a variable delay, i.e. the contact time between drop and polymer, leads to the deposition of the droplet, which can then evaporate. The resulting microstructure was investigated as a function of the process parameters, i.e. the approach and retraction speeds of the substrate and the delay between them, and of the intrinsic material properties, i.e. the molar mass and the type of polymer/solvent system. The principal equivalence with microstructuring by the ink-jet technique was demonstrated. For a high approach and retraction speed of 9 mm/s and no delay between them, a concave microtopology was observed. In agreement with the literature, this can be explained by a flow of solvent and dissolved polymer to the rim of the pinned droplet, where polymer accumulates. This effect is analogous to the well-known formation of ring-like stains after the evaporation of coffee drops (coffee-stain effect). With decreasing retraction speed, down to 10 µm/s, the resulting surface topology changes from concave to convex. This can be explained by the increasing dissolution of polymer into the solvent drop prior to evaporation.
If the polymer concentration is high enough, gelation occurs instead of a flow to the rim and the shape of the convex droplet is preserved. With increasing delay time from 0 ms to 1 s, the depth of the concave microwells decreases from 4.6 µm to 3.2 µm. We attribute this to an additional flow inside the liquid bridge, which enhances polymer dissolution. However, a convex surface topology could not be obtained, since for longer delay times the polymer sticks to the tip of the syringe. Thus, by changing the delay time a fine-tuning of the concave structure is accomplished, while by changing the retraction speed a principal change of the microtopology can be achieved. Even if the pendant drop evaporates about 30 µm above the polymer surface without any contact (non-contact mode), concave structures were observed. Rim heights as high as 33 µm could be generated for exposure times of 20 min. The concave structure lay exclusively above the flat polymer surface outside the structure, even after drying. This shows that toluene is taken up permanently. The increase of the rim height, r_h, with increasing exposure time to the solvent vapor obeys a power law of the form r_h = r_h0·t^n, with n in the range 0.46 to 0.65. This hints at a non-Fickian swelling process. A detailed analysis showed that the rim height of the concave structure is modulated, unlike for drop deposition. This is due to local stress relaxation, initiated by the increasing toluene concentration in the extruded polymer surface. By altering the intrinsic material parameters, i.e. the polymer molar mass and the polymer/solvent combination, several types of microstructures could be formed. With increasing molar mass from 20.9 kDa to 1.44 MDa, the resulting microstructure changed from convex, to a structure with a dimple in the center, to concave, and finally to an irregular structure.
This observation can be explained if one assumes that the microstructuring is dominated by two opposing effects: a decreasing solubility with increasing polymer molar mass, but an increasing surface tension gradient leading to instabilities of Marangoni type. Thus, a polymer with a low molar mass, close to or below the entanglement limit, is subject to a high dissolution rate, which leads to fast gelation compared to the evaporation rate. This way a coffee-rim-like effect is eliminated early and a convex structure results. For high molar masses, the low dissolution rate and the low polymer diffusion might lead to increased surface tension gradients, and a typical local pile-up of polymer is found. For intermediate polymer masses around 200 kDa, the dissolution and evaporation rates are comparable and the typical concave microtopology is found. This interpretation was supported by a quantitative estimation of the diffusion coefficient and the evaporation rate. For a different polymer/solvent system, polyethylmethacrylate (PEMA)/ethyl acetate (EA), exclusively concave structures were found. Following the statements above, this can be interpreted in terms of a lower dissolution rate. At low molar masses the concentration of PEMA in EA most likely never reaches the gelation point; thus, a concave instead of a convex structure occurs. At the end of this section, the optical properties of such microstructures for a potential application as microlenses are studied with laser scanning confocal microscopy. In the third part, the droplet was confined in a glass microcapillary to avoid evaporation. Since here, due to an increased area-to-volume ratio, the surface properties of the liquid and the solid walls become important, the influence of the surface hydrophilicity of the wall on the interfacial tension between two immiscible liquid slugs was investigated. For this, a novel method for measuring the interfacial tension between the two liquids within the capillary was developed.
This technique was demonstrated by measuring the interfacial tensions between slugs of pure water and standard solvents. For toluene, n-hexane and chloroform, values of 36.2, 50.9 and 34.2 mN/m, respectively, were measured at 20 °C, in good agreement with literature data. For a slug of hexane in contact with a slug of water containing ethanol in a concentration range between 0 and 70 % (v/v), a difference of up to 6 mN/m was found when compared to commercial ring tensiometry. This discrepancy is still under debate.
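The scaling analyses described in this abstract (V0 = a·t_tot^b for total evaporation time, and likewise r_h = r_h0·t^n for vapor-induced swelling) amount to a linear least-squares fit in log-log space. A minimal sketch with synthetic data, generated here with b = 1.5 (the diffusion-limited value); no thesis data are used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (t_tot, V0) pairs with 1 % multiplicative noise; arbitrary units.
t_tot = np.linspace(10.0, 300.0, 20)
v0 = 0.02 * t_tot**1.5 * rng.normal(1.0, 0.01, t_tot.size)

# log V0 = log a + b * log t_tot  ->  ordinary least squares in log-log space
b, log_a = np.polyfit(np.log(t_tot), np.log(v0), 1)
print(f"fitted exponent b = {b:.2f}")  # recovers a value close to 1.5
```

The same fit applied to droplets on substrates of different hydrophilicity would expose the reported drift of the exponent from ~1.4 toward 1.6.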
Resumo:
The aim of this dissertation was to investigate the source areas and depositional environments of sedimentary rocks from selected areas of the Internal Hellenides of Greece. The sediments studied in northern Greece belong to the Pirgadikia and Vertiskos units of the Serbo-Macedonian Massif, to the Examili, Melissochori and Prinochori formations of the eastern Vardar Zone, and to the Makri Unit and Melia Formation of the eastern Circum-Rhodope Belt in Thrace. In the eastern Aegean, the focus was on the sediments of the island of Chios. The metamorphic grade of the rocks studied varies from lowermost greenschist facies to amphibolite facies. The stratigraphic age ranges from the Ordovician to the Cretaceous. To characterise the rocks and their source areas, whole-rock major- and trace-element contents were determined, mineral-chemical analyses were carried out, and detrital zircons were dated by U–Pb. In addition, biostratigraphic investigations to determine the depositional age were performed on selected samples. The results of this work are of great importance for palaeogeographic reconstructions of the Tethys. The most important results can be summarised as follows: The oldest sediments of northern Greece belong to the Pirgadikia Unit of the Serbo-Macedonian Massif. They are very mature, quartz-rich siliciclastic metasediments which, on the basis of their maturity and their detrital zircons, can be correlated with Ordovician overlap sequences from the northern margin of Gondwana. The metasediments of the Vertiskos Unit have a similar stratigraphic age but a different depositional environment. The age spectrum of their detrital zircons points to a source area in the region of NW Africa (Hun Superterrane). The rock association of the Vertiskos Unit is interpreted as part of an active continental margin sequence.
The oldest biostratigraphically dated sediments of Greece are Silurian to Carboniferous olistoliths from a late Palaeozoic turbidite-olistostrome unit on the island of Chios. The detrital zircon ages and the provenance analysis of the fossil-bearing olistoliths suggest that the clastic sediments of Chios received material from the Sakarya microcontinent in western Turkey and from facies equivalents of Palaeozoic rocks of the Istanbul Zone in northern Turkey and the Balkan region. During the Permo-Triassic, the Examili Formation of the eastern Vardar Zone was deposited in an intracontinental sedimentary basin close to the Vertiskos Unit. Carboniferous basement material was also supplied in subordinate amounts. In the Early to Middle Jurassic, the Melissochori Formation of the eastern Vardar Zone was deposited on the slope of a carbonate-bearing continental margin. Most of the detrital material came from a Permo-Carboniferous basement of volcanic origin, presumably from the Pelagonian Zone and/or the lower tectonic unit of the Rhodope Massif. The Makri Unit in Thrace is presumably of similar age to the Melissochori Formation; the two sedimentary successions are very similar. Most of the detrital material of the Makri Unit came from the basement of the Pelagonian Zone or equivalent rocks. During the Early Cretaceous, the Prinochori Formation of the eastern Vardar Zone was deposited in front of a heterogeneous nappe stack containing ophiolitic material as well as basement similar to that of the Vertiskos Unit. Also during the Cretaceous, the Melia Formation was deposited in Thrace, presumably in front of a metamorphic nappe stack with affinities to the Rhodope basement.
In summary, the subduction of part of the Palaeotethys and the subsequent accretion of microcontinents (terranes) derived from the northern margin of Gondwana near the southern active continental margin of Eurasia provided the geodynamic framework for the supply of detrital material to the sediments of the Internal Hellenides in the late Palaeozoic. The subsequent early Mesozoic rifting processes initiated the formation of Neotethyan ocean basins. Intra-oceanic subduction and the obduction of ophiolites characterised the Jurassic. The Late Jurassic and Early Cretaceous tectonic phase was sealed by the deposition of mid-Cretaceous limestones. The final closure of Neotethyan ocean basins in the region of the Internal Hellenides took place in the Late Cretaceous and Tertiary.
Resumo:
The aim of this work is to measure the stress inside a hard micro-object under extreme compression. To measure the internal stress, we compressed ruby spheres (α-Al2O3:Cr3+, 150 µm diameter) between two sapphire plates. The ruby fluorescence spectrum shifts to longer wavelengths under compression and can be related to the internal stress by a conversion coefficient. A confocal laser scanning microscope was used to excite and collect fluorescence at desired local spots inside the ruby sphere with a spatial resolution of about 1 µm^3. Under static external loads, the stress distribution within the center plane of the ruby sphere was measured directly for the first time. The result agreed with Hertz's law. The stress across the contact area showed a hemispherical profile. The measured contact radius was in accord with the calculation by Hertz's equation. Stress-load curves showed spike-like decreases after entering the non-elastic phase, indicating the formation and coalescence of microcracks, which led to relaxation of stress. In the vicinity of the contact area, luminescence spectra with multiple peaks were observed. This indicated the presence of domains of different stress, which were mechanically decoupled. Repeated loading cycles were applied to study the fatigue of ruby at the contact region. Progressive fatigue was observed when the load exceeded 1 N. As long as the load did not exceed 2 N, stress-load curves were still continuous and could be described by Hertz's law with a reduced Young's modulus. Once the load exceeded 2 N, periodic spike-like decreases of the stress could be observed, implying a "memory effect" under repeated loading cycles. Vibration loading at higher frequencies was applied by a piezo. Redistributions of intensity in the fluorescence spectra were observed and attributed to the repopulation of the micro-domains of different elasticity. Two stages under vibration loading were suggested.
In the first stage, continuous damage accumulated up to a certain limit, after which the second stage, i.e. breakage, followed in a discontinuous manner.
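The Hertzian relations the abstract compares against (contact radius and the hemispherical pressure profile) can be sketched as follows. The elastic constants used here (E ≈ 400 GPa, ν ≈ 0.23 for both ruby and sapphire) are generic textbook values assumed for illustration, not parameters taken from the thesis.

```python
import math

def hertz_contact(force_n, radius_m, e1_pa, nu1, e2_pa, nu2):
    """Hertz contact of an elastic sphere on a flat plate.

    Returns (contact radius a, mean contact pressure, peak pressure p0).
    Each sphere/plate contact of the compressed sphere is treated independently.
    """
    # Effective (reduced) modulus of the contact pair
    e_star = 1.0 / ((1.0 - nu1**2) / e1_pa + (1.0 - nu2**2) / e2_pa)
    # Contact radius for a sphere of radius R pressed on a flat: a^3 = 3FR/(4E*)
    a = (3.0 * force_n * radius_m / (4.0 * e_star)) ** (1.0 / 3.0)
    p_mean = force_n / (math.pi * a**2)
    p0 = 1.5 * p_mean  # the hemispherical pressure profile peaks at 1.5x the mean
    return a, p_mean, p0

# Ruby sphere (75 um radius) on sapphire under 1 N; assumed E = 400 GPa, nu = 0.23
a, p_mean, p0 = hertz_contact(1.0, 75e-6, 400e9, 0.23, 400e9, 0.23)
print(f"contact radius ~ {a * 1e6:.1f} um, peak pressure ~ {p0 / 1e9:.1f} GPa")
```

For comparison with a measured stress map, the full profile p(r) = p0·sqrt(1 - r²/a²) can then be evaluated across the contact area, which is the hemispherical shape the abstract refers to.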