11 results for Measurement and processing vibrations

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Abstract:

Radiometals play an important role in nuclear medicine as components of diagnostic and therapeutic agents. In the present work, the radiochemical aspects of the production and processing of very promising radiometals of the third group of the periodic table, namely radiogallium and the radiolanthanides, are investigated. The 68Ge/68Ga generator (68Ge, T½ = 270.8 d) provides a cyclotron-independent source of positron-emitting 68Ga (T½ = 68 min), which can be used for coordinative labelling. However, for the labelling of biomolecules via bifunctional chelators, particularly if legal aspects of radiopharmaceutical production are considered, the initially eluted 68Ga(III) needs to be pre-concentrated and purified. The first experimental chapter describes a system for the simple and efficient handling of 68Ge/68Ga generator eluates, with a cation-exchange micro-chromatography column as its main component. Chemical purification and volume concentration of 68Ga(III) are carried out in hydrochloric acid – acetone media. Finally, generator-produced 68Ga(III) is obtained with excellent radiochemical and chemical purity in a minimised volume, in a form directly applicable to the synthesis of 68Ga-labelled radiopharmaceuticals. For labelling with 68Ga(III), the somatostatin analogue DOTA-octreotides (DOTATOC, DOTANOC) are used. 68Ga-DOTATOC and 68Ga-DOTANOC were successfully used to diagnose human somatostatin receptor-expressing tumours with PET/CT. Additionally, the proposed method was adapted for the purification and medical utilisation of the cyclotron-produced SPECT gallium radionuclide 67Ga(III). The second experimental chapter discusses the diagnostic radiolanthanide 140Nd, produced by irradiation of macroscopic amounts of natural CeO2 and Pr2O3 in the natCe(3He,xn)140Nd and 141Pr(p,2n)140Nd nuclear reactions, respectively. With the 140Nd produced and processed in this way, an efficient 140Nd/140Pr radionuclide generator system has been developed and evaluated.
The principle of the radiochemical separation of the mother and daughter radiolanthanides is based on physical-chemical transitions (hot-atom effects) of 140Pr following the electron capture decay of 140Nd. The mother radionuclide 140Nd(III) is quantitatively absorbed on a solid-phase matrix in the chemical form of 140Nd-DOTA-conjugated complexes, while the daughter nuclide 140Pr is generated as an ionic species. With a very high elution yield and satisfactory chemical and radiolytic stability, the system is able to provide the short-lived positron-emitting radiolanthanide 140Pr for PET investigations. In the third experimental chapter, analogously to the physical-chemical transitions after the radioactive decay of 140Nd in 140Pr-DOTA, the rupture of the chemical bond between a radiolanthanide and the DOTA ligand after thermal neutron capture (Szilard-Chalmers effect) was evaluated for the production of relevant radiolanthanides with high specific activity at the TRIGA II Mainz nuclear reactor. A physical-chemical model was developed and first quantitative data are presented. As an example, 166Ho could be produced with a specific activity higher than its limiting value for TRIGA II Mainz, namely about 2 GBq/mg versus 0.9 GBq/mg. Since free 166Ho(III) is produced in situ, it does not form a 166Ho-DOTA complex and can therefore be separated from the inactive 165Ho-DOTA material. The analysis of the experimental data shows that radionuclides with half-life T½ < 64 h can be produced at the TRIGA II Mainz nuclear reactor with a specific activity higher than that achievable by irradiation of simple targets, e.g. oxides.
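The parent-daughter activity balance that such a generator exploits follows the Bateman equations. A minimal sketch (illustrative code, not from the thesis), using the 68Ge/68Ga half-lives quoted above:

```python
import math

def daughter_activity(A_parent0, t_half_parent, t_half_daughter, t):
    """Bateman relation for the daughter activity in a parent-daughter
    radionuclide generator, assuming no daughter is present at t = 0."""
    lp = math.log(2) / t_half_parent    # parent decay constant
    ld = math.log(2) / t_half_daughter  # daughter decay constant
    return A_parent0 * ld / (ld - lp) * (math.exp(-lp * t) - math.exp(-ld * t))

# 68Ge/68Ga: parent T1/2 = 270.8 d, daughter T1/2 = 68 min (all in minutes).
t_half_ge = 270.8 * 24 * 60
t_half_ga = 68.0
# A few daughter half-lives after elution the generator is nearly back in
# equilibrium (daughter activity close to the parent activity):
a = daughter_activity(1.0, t_half_ge, t_half_ga, 6 * t_half_ga)
```

The same relation explains why a 68Ge/68Ga generator can be eluted several times per day.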

Abstract:

Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. Knowledge of the spatial and temporal distribution of CCN in the atmosphere is essential to understand and describe the effects of aerosols in meteorological models. In this study, CCN properties were measured in polluted and pristine air of different continental regions, and the results were parameterized for efficient prediction of CCN concentrations.

The continuous-flow CCN counter used for size-resolved measurements of CCN efficiency spectra (activation curves) was calibrated with ammonium sulfate and sodium chloride aerosols for a wide range of water vapor supersaturations (S=0.068% to 1.27%). A comprehensive uncertainty analysis showed that the instrument calibration depends strongly on the applied particle generation techniques, Köhler model calculations, and water activity parameterizations (relative deviations in S up to 25%). Laboratory experiments and a comparison with other CCN instruments confirmed the high accuracy and precision of the calibration and measurement procedures developed and applied in this study.

The mean CCN number concentrations (NCCN,S) observed in polluted mega-city air and biomass burning smoke (Beijing and Pearl River Delta, China) ranged from 1000 cm−3 at S=0.068% to 16 000 cm−3 at S=1.27%, which is about two orders of magnitude higher than in pristine air at remote continental sites (Swiss Alps, Amazonian rainforest). Effective average hygroscopicity parameters, κ, describing the influence of chemical composition on the CCN activity of aerosol particles were derived from the measurement data. They varied in the range of 0.3±0.2, were size-dependent, and could be parameterized as a function of organic and inorganic aerosol mass fraction.
At low S (≤0.27%), substantial portions of externally mixed CCN-inactive particles with much lower hygroscopicity were observed in polluted air (fresh soot particles with κ≈0.01). Thus, the aerosol particle mixing state needs to be known for highly accurate predictions of NCCN,S. Nevertheless, the observed CCN number concentrations could be efficiently approximated using measured aerosol particle number size distributions and a simple κ-Köhler model with a single proxy for the effective average particle hygroscopicity. The relative deviations between observations and model predictions were on average less than 20% when a constant average value of κ=0.3 was used in conjunction with variable size distribution data. With a constant average size distribution, however, the deviations increased up to 100% and more. The measurement and model results demonstrate that the aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the measurement results and parameterizations presented in this study can be directly implemented in detailed process models as well as in large-scale atmospheric and climate models for efficient description of the CCN activity of atmospheric aerosols.
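The κ-Köhler scheme behind these predictions can be sketched in a few lines. This is a simplified illustration of the standard κ-Köhler approximation, not the exact parameterization of the study; the constants (surface tension, temperature) are assumed textbook values:

```python
import math

def critical_dry_diameter(kappa, s, T=298.15):
    """Smallest dry particle diameter activated at water-vapour
    supersaturation s (fraction, e.g. 0.0027 for S = 0.27%),
    from the kappa-Koehler approximation."""
    sigma = 0.072    # surface tension of water, N/m (assumed value)
    M_w = 0.018      # molar mass of water, kg/mol
    rho_w = 1000.0   # density of water, kg/m^3
    R = 8.314        # gas constant, J/(mol K)
    A = 4 * sigma * M_w / (R * T * rho_w)  # Kelvin parameter, m
    return (4 * A**3 / (27 * kappa * math.log(1 + s)**2)) ** (1.0 / 3.0)

# With the continental average kappa = 0.3 at S = 0.27%, particles larger
# than roughly 80-90 nm activate; integrating the measured number size
# distribution above this diameter then approximates NCCN,S.
d_c = critical_dry_diameter(0.3, 0.0027)
```

Lower hygroscopicity shifts the activation cut towards larger diameters, which is why externally mixed fresh soot (κ≈0.01) stays CCN-inactive at low S.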

Abstract:

We identified syntaxin 5 (Stx5), a protein involved in intracellular vesicle trafficking, as a novel interaction partner of the very low density lipoprotein receptor (VLDL-R), a member of the LDL-receptor family. In addition, we investigated the effect of Stx5 on VLDL-R maturation, trafficking and processing. Here, we demonstrated the mutual association of both proteins using several in vitro approaches. Furthermore, we detected a special maturation phenotype of VLDL-R resulting from Stx5 overexpression. We found that Stx5 prevented Golgi maturation of VLDL-R, but did not cause accumulation of the immature protein in the ER-to-Golgi compartments, the main expression sites of Stx5. Rather, abundantly present Stx5 was capable of translocating ER-/N-glycosylated VLDL-R to the plasma membrane; this translocation was insensitive to BFA treatment and to incubation at low temperature. Based on our findings, we postulate that Stx5 can directly bind to the C-terminal domain of VLDL-R, thereby influencing the receptor's glycosylation, trafficking and processing characteristics. We further suggest that Stx5, which is highly expressed in neurons along with VLDL-R, might play a role in modulating the receptor's physiology by participating in a novel, as yet undetermined alternative pathway bypassing the Golgi apparatus.

Abstract:

Mining and processing of metal ores are important causes of soil and groundwater contamination in many regions worldwide. Metal contamination poses a serious risk to the environment and human health, so its assessment in soils is an important task. A common approach to assessing the environmental risk that inorganic contaminations pose to soil and groundwater is the use of batch or column leaching tests; the suitability of such tests, however, is a controversial issue. In the first part of this work, the applicability and comparability of common leaching tests for the groundwater risk assessment of inorganic contamination are reviewed and critically discussed. Soil water sampling methods (the suction cup method and centrifugation) are addressed as an alternative to leaching tests. Reasons for the limited comparability of leaching test results are identified, and recommendations are given for the expedient application of leaching tests in groundwater risk assessment. Leaching tests are usually carried out in open contact with the atmosphere, disregarding possible changes in redox conditions. This can affect the original metal speciation and distribution, particularly when anoxic samples are investigated. The influence of sample storage on the leaching test results of sulfide-bearing anoxic material from a former flotation dump is investigated in a long-term study. Since the oxidation of the sulfide-bearing samples leads to a significant overestimation of metal release, a feasible modification of common leaching tests for anoxic material is proposed, in which oxidation is efficiently prevented.
A comparison of leaching test results with soil water analyses showed that the modified saturated soil extraction (SSE) is the only one of the tested leaching procedures that can be recommended for the assessment of current soil water concentrations at anoxic sites when direct investigation of the soil water is impossible for technical reasons. The vertical distribution and speciation of Zn and Pb in the flotation residues, as well as metal concentrations in soil water and plants, were investigated to evaluate the environmental risk arising from this site due to the release of metals. The variations in pH and inorganic C content show an acidification of the topsoil, with pH values down to 5.5 in the soil and a soil water pH of 6 at 1 m depth. This is due to the oxidation of sulfides and the depletion of carbonates. In the anoxic subsoil, pH conditions are still neutral and soil water collected with suction cups is in equilibrium with carbonate minerals. Results from extended X-ray absorption fine-structure (EXAFS) spectroscopy confirm that Zn is mainly bound in sphalerite in the subsoil and that weathering reactions lead to a redistribution of Zn in the topsoil. A loss of 35% Zn and S from the topsoil compared to the parent material with 10 g/kg Zn has been observed. According to sequential chemical extractions (SCE), 13% of the total Zn in the topsoil can be regarded as mobile or easily mobilizable. Zn concentrations of 10 mg/L were found in the soil water where the pH is acidic. Electron supply and the buffer capacity of the soil were identified as the main factors controlling Zn mobility and release to the groundwater. Variable Pb concentrations of up to 30 µg/L were observed in the soil water. In contrast to Zn, Pb is enriched in the mobile fraction of the oxidized topsoil by a factor of 2 compared to the subsoil with 2 g/kg Pb. 80% of the cation exchange capacity in the topsoil is occupied by Pb. Therefore, plant uptake and bioavailability are of major concern.
If the site is not protected from further acidification, a significant release of Zn, S, and Pb to the groundwater must be expected. The results of this study show that the assessment of metal release, especially from sulfide-bearing anoxic material, requires a thorough understanding of leaching mechanisms on the one hand and of the weathering processes that influence the speciation and mobility of metals on the other. Processes that may change redox and pH conditions in the future have to be addressed to enable sound decisions on soil and groundwater protection and remediation.
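The mass-balance figures quoted above can be combined into a small worked example. This is an illustrative back-of-the-envelope calculation, assuming the 35% Zn loss applies uniformly to the topsoil and that the SCE mobile share refers to the remaining topsoil inventory:

```python
# Worked numbers from the abstract (assumptions noted in the lead-in):
zn_parent = 10.0                        # g/kg Zn in the parent material
zn_topsoil = zn_parent * (1 - 0.35)     # topsoil after the 35% weathering loss
zn_mobile = zn_topsoil * 0.13           # mobile / easily mobilizable pool (SCE)
# zn_topsoil = 6.5 g/kg, zn_mobile ~ 0.85 g/kg
```

Roughly a gram of mobilizable Zn per kilogram of topsoil is what makes the acidification scenario a groundwater concern.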

Abstract:

One of the most precisely measured quantities in particle physics is the magnetic moment of the muon, which describes its coupling to an external magnetic field. It is expressed in the form of the anomalous magnetic moment of the muon a_mu=(g_mu-2)/2 and has been determined experimentally with a precision of 0.5 parts per million. The current direct measurement and the theoretical prediction of the standard model differ by more than 3.5 standard deviations. On the theory side, the contributions of QED and the weak interaction to a_mu can be calculated with very high precision in a perturbative approach. At low energies, however, perturbation theory cannot be used to determine the hadronic contribution a^had_mu. On the other hand, a^had_mu may be derived via a dispersion relation from the sum of measured cross sections of exclusive hadronic reactions. Decreasing the experimental uncertainty on these hadronic cross sections is of utmost importance for an improved standard model prediction of a_mu.

In addition to traditional energy scan experiments, the method of Initial State Radiation (ISR) is used to measure hadronic cross sections. This approach allows experiments at colliders running at a fixed centre-of-mass energy to access smaller effective energies by studying events which contain a high-energetic photon emitted from the initial electron or positron. Using the technique of ISR, the energy range from threshold up to 4.5 GeV can be accessed at Babar.

The cross section e+e- -> pi+pi- contributes approximately 70% of the hadronic part of the anomalous magnetic moment of the muon, a_mu^had. This important channel has been measured with a precision of better than 1%. Therefore, the leading contribution to the uncertainty of a_mu^had at present stems from the invariant mass region between 1 GeV and 2 GeV. In this energy range, the channels e+e- -> pi+pi-pi+pi- and e+e- -> pi+pi-pi0pi0 dominate the inclusive hadronic cross section.
The measurement of the process e+e- -> pi+pi-pi+pi- is presented in this thesis. This channel has previously been measured by Babar based on 25% of the total dataset. The new analysis includes a more detailed study of the background contamination from other ISR and non-radiative background reactions. In addition, sophisticated studies of the track reconstruction as well as of the photon efficiency difference between data and the simulation of the Babar detector are performed. With these auxiliary studies, a reduction of the systematic uncertainty from 5.0% to 2.4% in the peak region was achieved.

The pi+pi-pi+pi- final state has a rich internal structure. Hints are seen for the intermediate states rho(770)^0 f_2(1270), rho(770)^0 f_0(980), as well as a_1(1260)pi. In addition, the branching ratios BR(jpsi -> pi+pi-pi+pi-) and BR(psitwos -> jpsi pi+pi-) are extracted.
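The dispersion-integral step that turns such cross-section measurements into a contribution to a_mu can be sketched numerically. This is a schematic illustration only: it uses the high-energy approximation K(s) ≈ m_mu²/(3s) of the kernel, whereas real analyses use the exact kernel function and measured cross-section data:

```python
import math

def a_mu_had_lo(s_vals, sigma_vals, m_mu=0.1057):
    """Schematic leading-order hadronic-vacuum-polarization dispersion
    integral a_mu^had = (1 / 4 pi^3) * Int sigma_had(s) K(s) ds,
    evaluated with the trapezoidal rule. Uses the approximate kernel
    K(s) = m_mu^2 / (3 s), valid for s >> m_mu^2.
    Units: s in GeV^2, sigma in GeV^-2 (natural units)."""
    total = 0.0
    for i in range(len(s_vals) - 1):
        s0, s1 = s_vals[i], s_vals[i + 1]
        f0 = sigma_vals[i] * m_mu**2 / (3 * s0)
        f1 = sigma_vals[i + 1] * m_mu**2 / (3 * s1)
        total += 0.5 * (f0 + f1) * (s1 - s0)   # trapezoidal slice
    return total / (4 * math.pi**3)
```

Because the kernel weights the integrand by roughly 1/s, the low-mass region, and hence the two-pion and four-pion channels, dominates the result.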

Abstract:

Data sets describing the state of the earth's atmosphere are of great importance in the atmospheric sciences. Over the last decades, the quality and sheer amount of the available data increased significantly, resulting in a rising demand for new tools capable of handling and analysing these large, multidimensional sets of atmospheric data. The interdisciplinary work presented in this thesis covers the development and application of practical software tools and efficient algorithms from the field of computer science, with the goal of enabling atmospheric scientists to analyse and gain new insights from these large data sets. For this purpose, our tools combine novel techniques with well-established methods from different areas such as scientific visualization and data segmentation. In this thesis, three practical tools are presented. Two of these tools are software systems (Insight and IWAL) for different types of processing and interactive visualization of data; the third tool is an efficient algorithm for data segmentation implemented as part of Insight.

Insight is a toolkit for the interactive, three-dimensional visualization and processing of large sets of atmospheric data, originally developed as a testing environment for the novel segmentation algorithm. It provides a dynamic system for combining at runtime data from different sources, a variety of data processing algorithms, and several visualization techniques. Its modular architecture and flexible scripting support led to additional applications of the software, of which two examples are presented: the usage of Insight as a WMS (web map service) server, and the automatic production of image sequences for the visualization of cyclone simulations.
The core application of Insight is the provision of the novel segmentation algorithm for the efficient detection and tracking of 3D features in large sets of atmospheric data, as well as for the precise localization of the occurring genesis, lysis, merging and splitting events. Data segmentation usually leads to a significant reduction of the size of the considered data. This enables a practical visualization of the data, statistical analyses of the features and their events, and the manual or automatic detection of interesting situations for subsequent detailed investigation. The concepts of the novel algorithm, its technical realization, and several extensions for avoiding under- and over-segmentation are discussed. As example applications, this thesis covers the setup and results of the segmentation of upper-tropospheric jet streams and of cyclones as full 3D objects. Finally, IWAL is presented, a web application providing easy interactive access to meteorological data visualizations, primarily aimed at students. As a web application, it avoids the need to retrieve all input data sets and to install and operate complex visualization tools on a local machine. The main challenge in providing customizable visualizations to large numbers of simultaneous users was to find an acceptable trade-off between the available visualization options and the performance of the application. Besides the implementation details, benchmarks and the results of a user survey are presented.
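The labeling step at the heart of such segmentation can be illustrated with a toy two-dimensional flood fill. The thesis algorithm works on large 3D atmospheric fields and additionally tracks features and their genesis, lysis, merging and splitting over time; this sketch covers only threshold-based connected-component labeling:

```python
def segment(field, threshold):
    """Label 4-connected regions of a 2D field whose values reach the
    threshold. Returns the label grid and the number of features."""
    rows, cols = len(field), len(field[0])
    labels = [[0] * cols for _ in range(rows)]
    n_features = 0
    for r in range(rows):
        for c in range(cols):
            if field[r][c] >= threshold and labels[r][c] == 0:
                n_features += 1
                stack = [(r, c)]            # flood-fill one feature
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and field[y][x] >= threshold
                            and labels[y][x] == 0):
                        labels[y][x] = n_features
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, n_features

grid = [[0, 5, 5, 0],
        [0, 5, 0, 0],
        [0, 0, 0, 7],
        [0, 0, 7, 7]]
labels, n = segment(grid, 4)   # two separate features
```

Tracking then amounts to matching the labeled features of consecutive time steps, which is where merge and split events become visible.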

Abstract:

The complex nature of the nucleon-nucleon interaction and the wide range of systems covered by the roughly 3000 known nuclides lead to a multitude of effects observed in nuclear structure. Among the most prominent is the occurrence of shell closures at so-called "magic numbers", which are explained by the nuclear shell model. Although the shell model has been in service for several decades, it is still constantly extended and improved. For this process of extension, fine adjustment and verification, it is important to have experimental data on nuclear properties, especially at crucial points such as the vicinity of shell closures. This is the motivation for the work performed in this thesis: the measurement and analysis of the nuclear ground state properties of the isotopic chain 100-130Cd by collinear laser spectroscopy.

The experiment was conducted at ISOLDE/CERN using the collinear laser spectroscopy apparatus COLLAPS. It continues an earlier run on neutral atomic cadmium from A = 106 to A = 126 and extends the measurements to even more exotic species. The required gain in sensitivity is mainly achieved by using a radiofrequency cooler and buncher for background reduction and by using the strong 5s 2S1/2 → 5p 2P3/2 transition in singly ionized Cd. The latter requires a continuous-wave laser system with a wavelength of 214.6 nm, which was developed during this thesis. Fourth-harmonic generation of an infrared titanium-sapphire laser is achieved by two subsequent cavity-enhanced second-harmonic generations, producing deep-UV laser light of up to about 100 mW.

The acquired data on the Z = 48 Cd isotopes, which have one proton pair less than the Z = 50 shell closure at tin, cover the isotopes from N = 52 up to N = 82 and therefore almost the complete region between the neutron shell closures N = 50 and N = 82.
The isotope shifts and hyperfine structures of these isotopes have been recorded, and the magnetic dipole moments, electric quadrupole moments, spins and changes in mean square charge radii are extracted. The obtained data reveal, among other features, an extremely linear behaviour of the quadrupole moments of the I = 11/2− isomeric states and a parabolic development of the differences in mean square nuclear charge radii between ground and isomeric states. The development of charge radii between the shell closures is smooth, exhibits a regular odd-even staggering, and can be described and interpreted in the model of Zamick and Talmi.
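The velocity tuning that collinear laser spectroscopy relies on comes from the relativistic Doppler shift: accelerating the ions through a potential U shifts the laser frequency seen in their rest frame. A minimal sketch with illustrative beam parameters (the numbers are assumptions, not the experiment's actual settings):

```python
import math

def rest_frame_frequency(nu_lab, U, mass_u, collinear=True):
    """Laser frequency in the rest frame of a singly charged ion
    accelerated through U volts. collinear=True means laser and ion
    beam propagate in the same direction (red shift)."""
    amu_ev = 931.494e6                    # atomic mass unit in eV/c^2
    gamma = 1.0 + U / (mass_u * amu_ev)   # kinetic energy is e*U
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return nu_lab * gamma * ((1.0 - beta) if collinear else (1.0 + beta))

# For a mass-100 Cd+ ion at an assumed 30 kV: beta ~ 8e-4, so the 214.6 nm
# light is shifted by roughly one part in a thousand - enough to scan across
# hyperfine components by varying the acceleration voltage.
shift_factor = rest_frame_frequency(1.0, 30e3, 100.0)
```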

Abstract:

In this work, the parity-violating asymmetry in quasi-elastic electron-deuteron scattering at Q^2=0.23 (GeV/c)^2 was determined with a longitudinally polarized electron beam at an energy of 315 MeV. The measurement was performed at backward angles. The detector covered a polar scattering angle range between 140 and 150 deg. The target consisted of liquid deuterium in a target cell 23.4 cm in length. The measured parity-violating asymmetry is A_{PV}^d = (-20.11 pm 0.87_{stat} pm 1.03_{syst}), where the first error denotes the statistical and the second the systematic contribution. By combining this measurement with measurements of the parity-violating asymmetry in elastic electron-proton scattering at the same Q^2, both at forward and backward angles, the vector strange form factors as well as the effective isovector and isoscalar vector currents of the proton, which contain the electroweak radiative anapole corrections, can be determined. This work also comprises the determination of the asymmetries with a transversely polarized electron beam on both proton and deuteron targets at backward angles at momentum transfers of Q^2=0.10 (GeV/c)^2, Q^2=0.23 (GeV/c)^2 and Q^2=0.35 (GeV/c)^2. The asymmetries observed in the experiment are compared with theoretical calculations that include the imaginary part of the two-photon exchange amplitude.

Abstract:

Ozone (O3) is an important oxidant and greenhouse gas in the Earth's atmosphere. It influences the climate, air quality, human health and vegetation. Ecosystems such as forests are sinks for tropospheric ozone and will become more heterogeneous in the future as a result of storms, plant pests and changes in land use. These heterogeneities are expected to reduce the uptake of greenhouse gases and to cause significant feedbacks on the climate system. The atmosphere-biosphere exchange of ozone is governed by stomatal uptake, deposition on plant surfaces and soils, and chemical transformations. Understanding these processes and quantifying the ozone exchange for different ecosystems are prerequisites for inferring regional ozone fluxes from local measurements.

The eddy covariance method is used to measure vertical turbulent ozone fluxes. The use of closed-path eddy covariance systems based on fast chemiluminescence ozone sensors can introduce errors into the flux measurement. A direct comparison of ozone sensors mounted side by side provided insight into the factors that affect the accuracy of the measurements. Systematic differences between individual sensors and the influence of different inlet tube lengths were investigated by analysing frequency spectra and determining correction factors for the ozone fluxes.
The experimentally determined correction factors showed no significant difference from correction factors derived from theoretical transfer functions, confirming that the theoretically derived factors can be applied to correct ozone fluxes.

In summer 2011, measurements were carried out within the EGER (ExchanGE processes in mountainous Regions) project to contribute to a better understanding of atmosphere-biosphere ozone exchange in disturbed ecosystems. Ozone fluxes were measured on both sides of a forest edge separating a spruce forest from a windthrow. On the road-like clearing created by the storm "Kyrill" (2007), a secondary vegetation had developed that differed in phenology and leaf physiology from the originally dominant spruce forest. The mean nocturnal flux above the spruce forest was -6 to -7 nmol m-2 s-1 and decreased to -13 nmol m-2 s-1 around noon. The ozone fluxes showed a clear relationship with plant transpiration and CO2 uptake, indicating that during the day most of the ozone was taken up through the plant stomata. The relatively high nocturnal deposition was caused by non-stomatal processes. Throughout the day, deposition above the forest was about twice as high as above the clearing, a ratio that agreed with the ratio of the plant area index (PAI). The disturbance of the ecosystem thus reduced the capacity of the vegetation to act as a sink for tropospheric ozone. The pronounced difference between the ozone fluxes of the two vegetation types highlights the challenge of regionalizing ozone fluxes in heterogeneously forested areas.

The measured fluxes were furthermore compared with simulations carried out with the chemistry model MLC-CHEM.
To evaluate the model with respect to the calculation of ozone fluxes, measured and modelled fluxes from two positions in the EGER area were used. Although the magnitudes of the fluxes agreed, the results showed a significant difference between measured and modelled fluxes. Moreover, the difference depended clearly on relative humidity, decreasing with increasing humidity, which shows that the model requires further improvement before it can be used for comprehensive ozone flux studies.
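The core of the eddy covariance method mentioned above is a covariance of fast time series. A minimal sketch (real processing adds coordinate rotation, detrending, lag correction and the spectral transfer-function corrections discussed in the abstract):

```python
def eddy_covariance_flux(w, c):
    """Vertical turbulent flux F = mean(w' * c'), where w is the
    vertical wind speed (m/s), c the scalar concentration (e.g. ozone
    in nmol/m^3) and primes denote deviations from the mean.
    A negative flux means deposition (downward transport)."""
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    return sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / n
```

With ozone, downdrafts carrying high concentrations toward the surface yield a negative covariance, consistent with the deposition fluxes of -6 to -13 nmol m-2 s-1 reported above.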

Abstract:

Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, demands exceptionally fast algorithms in theory and practice. This thesis is therefore concerned with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks of time series make heavy use of these alignments, which motivates the need for fast implementations. This work is divided into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for the segmentation of data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It comprises a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, deployable on any parallel hardware that supports fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the UCR suite's lower-bound cascade for efficiently computing local alignment scores under Dynamic Time Warping. Both CUDA implementations enable computations one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization.
On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance against trends on the measurement axis and against uniform scaling on the time axis. In addition, an extension of GEM to multi-shape segmentation is discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

In the literature, the treatment of time series is usually restricted to real-valued measurement data. The third contribution is a unified method for handling Lie-group-valued time series. Building on this, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated. Furthermore, memory-efficient representations and group-compatible extensions of elastic measures are discussed.
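The z-normalized subsequence search that the first contribution accelerates can be stated in a few lines of brute-force Python. This sketch produces the same answers as the fast FFT/CUDA schemes, only much more slowly (and it assumes no window is constant):

```python
import math

def znorm(x):
    """Z-normalize a sequence to zero mean and unit variance."""
    n = len(x)
    mu = sum(x) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    return [(v - mu) / sigma for v in x]

def best_match(query, series):
    """Position and score of the best subsequence match under the
    z-normalized Euclidean distance (brute force)."""
    m = len(query)
    q = znorm(query)
    best_pos, best = -1, float("inf")
    for i in range(len(series) - m + 1):
        s = znorm(series[i:i + m])
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, s)))
        if d < best:
            best_pos, best = i, d
    return best_pos, best

# The window [2, 10, 4] is a scaled copy of the query, so it matches
# with distance ~0 despite the different amplitude:
pos, d = best_match([1.0, 5.0, 2.0], [0.0, 0.0, 2.0, 10.0, 4.0, 0.0, 0.0])
```

Z-normalizing every window is what makes the search invariant to offset and scale, and also what makes the naive loop expensive enough to justify the FFT- and CUDA-based formulations.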

Abstract:

A central goal of modern physics is the discovery of physics beyond the Standard Model. One of the most significant hints of New Physics can be seen in the anomalous magnetic moment of the muon - one of the most precisely measured quantities in modern physics and the main motivation of this work. This quantity is associated with the coupling of the muon, an elementary particle, to an external electromagnetic field and is defined as a = (g - 2)/2, where g is the gyromagnetic factor of the muon. The muon anomaly has been measured with a relative accuracy of 0.5·10-6. However, a difference of 3.6 standard deviations between the direct measurement and the Standard Model prediction can be observed. This could be a hint of the existence of New Physics. It is, however, not yet significant enough to claim an observation, and thus more precise measurements and calculations have to be performed.

The muon anomaly has three contributions; those from quantum electrodynamics and the weak interaction can be determined from perturbative calculations. This cannot be done for the hadronic contributions at low energies. The leading-order contribution - the hadronic vacuum polarization - can be computed via a dispersion integral, which takes as input hadronic cross section measurements from electron-positron annihilations. Hence, it is essential for a precise prediction of the muon anomaly to measure these hadronic cross sections, σ(e+e-→hadrons), with high accuracy. With a contribution of more than 70%, the final state containing two charged pions is the most important one in this context.

In this thesis, a new measurement of the σ(e+e-→π+π-) cross section and the pion form factor is performed with an accuracy of 0.9% in the dominant ρ(770) resonance region between 600 and 900 MeV at the BESIII experiment.
The two-pion contribution to the leading-order (LO) hadronic vacuum polarization part of (g - 2), obtained in this work from the BESIII result, is computed to be a(ππ,LO,600-900 MeV) = (368.2±2.5stat±3.3sys)·10-10. With the result presented in this thesis, we make an important contribution towards solving the (g - 2) puzzle.
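For the quoted result, the statistical and systematic uncertainties can be combined in quadrature (the standard choice when the two error sources are independent; all values in units of 10^-10):

```python
import math

# BESIII two-pion contribution from the abstract, in units of 1e-10:
a_pipi = 368.2
stat, sys = 2.5, 3.3
total_err = math.sqrt(stat**2 + sys**2)   # quadrature combination, ~4.1
```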