10 results for Extraction of premolars

in ArchiMeD - Elektronische Publikationen der Universität Mainz - Germany


Relevance:

90.00%

Publisher:

Abstract:

The fundamental aim of our investigation of the interaction of a polymer film with a nanoparticle is the extraction of information on the dynamics of the liquid from a single tracked particle. In this work two theoretical methods were used: a passive one, where the motion of the particle probes the dynamics of the liquid, and an active one, where perturbations are introduced into the system through the particle. In the first part of this investigation a thin polymer film on a substrate is studied using molecular dynamics simulations. The polymer is modeled via a 'bead-spring' model. The particle is spherical and unstructured and interacts with the monomers via a Lennard-Jones potential. The system is microcanonical, and simulations were performed for average temperatures between the glass transition temperature of the film and its dewetting temperature. It is shown that the stability of the nanoparticle on the polymer film in the absence of gravity depends strongly on the form of the chosen interaction potential between nanoparticle and polymer. The position of the tracked particle relative to the liquid-vapor interface of the polymer film reveals the glass transition of the latter. The velocity correlation function and the mean-square displacement of the particle show that it is caged when the temperature is close to the glass transition temperature. The analysis of the dynamics at long times shows the coupling of the nanoparticle to the center of mass of the polymer chains. The Stokes-Einstein formula, which relates the diffusion coefficient to the viscosity, permits the nanoparticle to be used as a probe for the determination of the bulk viscosity of the melt, the so-called 'microrheology'. It is shown that for low frequencies the result obtained using microrheology coincides with the results of the Rouse model applied to the polymer dynamics.
In the second part of this investigation the equations of linear hydrodynamics are solved for a nanoparticle oscillating above the film. It is shown that compressible liquids exhibit a mechanical response to external perturbations induced by the nanoparticle. The solutions show strong velocity and pressure profiles of the liquid near the interface, as well as a mechanical response of the liquid-vapor interface. The results obtained with these calculations can be employed for the interpretation of experimental results of non-contact AFM microscopy.
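The microrheology described above rests on the Stokes-Einstein relation between the tracer's diffusion coefficient and the melt viscosity. A minimal sketch of that relation; the particle size, temperature and diffusion coefficient below are purely illustrative, not values from the simulations:

```python
# Stokes-Einstein relation: eta = k_B * T / (6 * pi * D * R)
# Numbers are illustrative only.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def viscosity_from_diffusion(T, D, R):
    """Estimate melt viscosity (Pa*s) from the diffusion coefficient D (m^2/s)
    of a spherical tracer of hydrodynamic radius R (m) at temperature T (K)."""
    return K_B * T / (6.0 * math.pi * D * R)

# Example: a 5 nm particle diffusing at 1e-14 m^2/s at 400 K
eta = viscosity_from_diffusion(400.0, 1e-14, 5e-9)
print(f"estimated viscosity: {eta:.3g} Pa*s")
```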

Relevance:

90.00%

Publisher:

Abstract:

Phylogeography is a recent field of biological research that links phylogenetics to biogeography by deciphering the imprint that evolutionary history has left on the genetic structure of extant populations. During the cold phases of the successive ice ages, which have drastically shaped species’ distributions since the Pliocene, populations of numerous species were isolated in refugia, where many of them evolved into distinct genetic lineages. My dissertation deals with the phylogeography of the Woodland Ringlet (Erebia medusa [Denis and Schiffermüller] 1775) in Central and Eastern Europe. This Palaearctic butterfly species is currently distributed from central France and south-eastern Belgium over large parts of Central Europe and southern Siberia to the Pacific. It is absent from those parts of Europe with Mediterranean, oceanic and boreal climates. It was supposed to be a Siberian faunal element with a rather homogeneous population structure in Central Europe due to its postglacial expansion out of a single eastern refugium. An existing evolutionary scenario for the Woodland Ringlet in Central and Eastern Europe is based on nuclear data (allozymes). To test whether this is corroborated by organelle evolutionary history, I sequenced two mitochondrial markers (part of the cytochrome oxidase subunit one and the control region) for populations sampled over the same area. Phylogeography largely relies on the construction of networks of uniparentally inherited haplotypes that are compared to the geographic haplotype distribution using recently developed methods such as nested clade phylogeographic analysis (NCPA). Several ring-shaped ambiguities (loops) emerged from both haplotype networks in E. medusa. They can be attributed to recombination and homoplasy. Such loops usually prevent the straightforward extraction of the phylogeographic signal contained in a gene tree.
I developed several new approaches to extract phylogeographic information in the presence of loops, considering either homoplasy or recombination. This allowed me to deduce a consistent evolutionary history for the species from the mitochondrial data and also adds plausibility to the occurrence of recombination in E. medusa mitochondria. Although the control region is assumed to lack resolving power in other species, I found considerable genetic variation in this marker in E. medusa, which makes it a useful tool for phylogeographic studies. In combination with the allozyme data, the mitochondrial genome supports the following phylogeographic scenario for E. medusa in Europe: (i) a first vicariance, due to the onset of the Würm glaciation, led to the formation of several major lineages and is mirrored in the NCPA by restricted gene flow; (ii) later on, further vicariances led to the formation of two sub-lineages each in the Western and the Eastern lineage during the Last Glacial Maximum or Older Dryas; additionally, the NCPA supports a restriction of gene flow with isolation by distance; (iii) finally, vicariance resulted in two secondary sub-lineages in the area of Germany and, possibly, two further secondary sub-lineages in the Czech Republic. The last postglacial warming was accompanied by strong range expansions in most of the genetic lineages. The scenario expected for a presumably Siberian faunal element such as E. medusa is a continuous loss of genetic diversity during postglacial westward expansion. Hence, the pattern found in this thesis contradicts a typical Siberian origin of E. medusa. Instead, it corroborates the importance of multiple extra-Mediterranean refugia for the European fauna, as has recently been assumed for other continental species.

Relevance:

90.00%

Publisher:

Abstract:

Nuclear charge radii of short-lived isotopes can be probed in a nuclear-model-independent way via isotope shift measurements. For this purpose a novel technique was developed at GSI, Darmstadt. It combines two-photon laser spectroscopy in the 2s-3s electronic transition of lithium, resonance ionization, and detection via quadrupole mass spectrometry. In this way an accuracy of 5e-5, which is necessary for the extraction of nuclear charge radii, and an overall detection efficiency of 1e-4 are reached. This allowed an isotope shift measurement of Li-11 for the first time, at the TRIUMF facility in Vancouver. Additionally, uncertainties in the isotope shift for all other lithium isotopes were reduced by about a factor of four compared to previous measurements at GSI. The results were combined with recent theoretical mass shift calculations for three-electron systems, and root-mean-square nuclear charge radii of all lithium isotopes, particularly of the two-neutron halo nucleus Li-11, were determined. The obtained charge radii decrease continuously from Li-6 to Li-9, while a strong increase between Li-9 and Li-11 is observed. This is compared to the predictions of various nuclear models, and it is found that a multicluster model gives the best overall agreement. Within this model, the increase in charge radius between Li-9 and Li-11 is to a large extent caused by intrinsic excitation of the Li-9-like core, while the neutron-halo correlation contributes only to a small extent.
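The extraction step described above separates the measured isotope shift into a calculated mass shift and a field shift proportional to the change in the mean-square charge radius. A schematic sketch of that arithmetic; the function name and all numbers are placeholders for illustration, not the measured lithium values:

```python
# Schematic field-shift decomposition of the isotope shift:
#   delta_nu_exp = delta_nu_MS + F * delta_r2
# => delta_r2 = (delta_nu_exp - delta_nu_MS) / F
# All numbers below are placeholders, not the measured lithium values.

def delta_r2(delta_nu_exp, delta_nu_ms, field_shift_const):
    """Change in mean-square charge radius (fm^2) relative to the reference
    isotope, from the measured isotope shift (MHz), the calculated mass
    shift (MHz) and the field-shift constant F (MHz/fm^2)."""
    return (delta_nu_exp - delta_nu_ms) / field_shift_const

print(delta_r2(100.0, 97.0, 2.0))  # -> 1.5
```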

Relevance:

90.00%

Publisher:

Abstract:

Over the last three decades, remote sensing and GIS have become increasingly important in the geosciences as a means of improving conventional methods of data collection and map production. The present thesis deals with the application of remote sensing and geographic information systems (GIS) to geomorphological investigations. Above all, combining the two techniques has made it possible to record geomorphological landforms both in overview and in detail. Topographic and geological maps, satellite images and climate data serve as the basic data for this work. The thesis consists of six chapters. The first chapter gives a general overview of the study area, describing its morphological units, the climatic conditions (in particular the aridity indices of the coastal and mountain landscapes) and the settlement pattern. Chapter 2 deals with the regional geology and stratigraphy of the study area. An attempt is made to identify the main formations using ETM satellite imagery, applying the following methods: colour band composites, image rationing and so-called supervised classification. Chapter 3 contains a description of the structurally controlled surface forms in order to clarify the interaction between tectonics and geomorphological processes. It covers the various methods, for example image processing, used to interpret reliably the lineaments present in the mountain body; special filtering methods are applied to map the most important lineaments. Chapter 4 presents an attempt to derive the drainage network automatically from preprocessed SRTM satellite data. It is discussed in detail to what extent the quality of small-scale SRTM data is comparable to large-scale topographic maps in these processing steps.
Furthermore, hydrological parameters are determined through a qualitative and quantitative analysis of the discharge regime of individual wadis. The origin of the drainage systems is interpreted on the basis of geomorphological and geological evidence. Chapter 5 deals with estimating the hazard posed by episodic wadi floods. The probability of their annual occurrence, and of strong floods recurring at intervals of several years, is traced back historically to 1921. The role of rain-bearing depressions that develop over the Red Sea and can generate runoff is examined using the IDW method (Inverse Distance Weighted). Further rain-bearing weather situations are examined with the help of Meteosat infrared images. Particular attention is paid to the period 1990-1997, during which heavy rainfall events triggered wadi floods. Flood events and flood levels are determined from hydrographic data (gauge measurements). Land use and settlement structure in a wadi's catchment area are also taken into account. Chapter 6 addresses the different coastal forms on the western side of the Red Sea, for example erosional forms, depositional forms and submerged forms. The final part deals with the stratigraphy and dating of submarine terraces on coral reefs and compares them with similar terraces on the Egyptian Red Sea coast west and east of the Sinai Peninsula.
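The IDW method named above is a standard spatial interpolation scheme: each station contributes to the estimate with a weight that falls off as an inverse power of its distance. A generic sketch with a power of 2 and toy station data, not the thesis' rainfall records:

```python
# Inverse Distance Weighted (IDW) interpolation, generic sketch (power p = 2).
import math

def idw(stations, target, p=2.0):
    """stations: list of ((x, y), value); target: (x, y).
    Returns the inverse-distance-weighted mean; an exact hit on a station
    returns that station's value."""
    num = den = 0.0
    for (x, y), v in stations:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return v
        w = d ** -p
        num += w * v
        den += w
    return num / den

obs = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0)]
print(idw(obs, (0.5, 0.0)))  # midpoint, equal weights -> 15.0
```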

Relevance:

90.00%

Publisher:

Abstract:

The only nuclear-model-independent method for the determination of nuclear charge radii of short-lived radioactive isotopes is the measurement of the isotope shift. For light elements (Z < 10) extremely high accuracy in experiment and theory is required, and has so far only been reached for He and Li. The nuclear charge radii of the lightest elements are of great interest because these elements have isotopes which exhibit so-called halo nuclei. Such nuclei are characterized by a very exotic nuclear structure: they have a compact core and an area of less dense nuclear matter that extends far from this core. Examples of halo nuclei are 6^He, 8^He, 11^Li and 11^Be, the last of which is investigated in this thesis. Furthermore, these isotopes are of interest because up to now the nuclear structure can be calculated ab initio only for such few-nucleon systems. At the Institut für Kernchemie at the Johannes Gutenberg-Universität Mainz two approaches with different accuracy were developed, with the goal of measuring the isotope shifts between (7,10,11)^Be^+ and 9^Be^+ in the D1 line. The first approach is laser spectroscopy on laser-cooled Be^+ ions trapped in a linear Paul trap. The attainable accuracy should be of the order of a few 100 kHz. In this thesis two types of linear Paul traps were developed for this purpose. Moreover, the peripheral experimental setup was simulated and constructed. It allows the efficient deceleration of fast ions with an initial energy of 60 keV down to a few eV and their efficient transport into the ion trap. For one of the Paul traps ion trapping could already be demonstrated, while the optical detection of captured 9^Be^+ ions could not be completed, because the development work was delayed by the second approach. The second approach uses the technique of collinear laser spectroscopy, which has already been applied over the last 30 years to measure isotope shifts of plenty of heavier isotopes.
For light elements (Z < 10), it was so far not possible to reach the accuracy required to extract information about nuclear charge radii. The combination of collinear laser spectroscopy with the most modern methods of frequency metrology finally permitted the first-time determination of the nuclear charge radii of (7,10)^Be and the one-neutron halo nucleus 11^Be at the COLLAPS experiment at ISOLDE/CERN. In the course of the work reported in this thesis it was possible to measure the absolute transition frequencies and the isotope shifts in the D1 line for the Be isotopes mentioned above with an accuracy of better than 2 MHz. Combination with the most recent calculations of the mass effect allowed the extraction of the nuclear charge radii of (7,10,11)^Be with a relative accuracy of better than 1%. The nuclear charge radius decreases continuously from 7^Be to 10^Be and increases again for 11^Be. This result is compared with predictions of ab-initio nuclear models, which reproduce the observed trend. Particularly the "Green's Function Monte Carlo" and the "Fermionic Molecular Dynamics" models show very good agreement.

Relevance:

90.00%

Publisher:

Abstract:

The electric dipole response of neutron-rich nickel isotopes has been investigated using the LAND setup at GSI in Darmstadt (Germany). Relativistic secondary beams of 56−57Ni and 67−72Ni at approximately 500 AMeV have been generated using projectile fragmentation of stable ions on a 4 g/cm2 Be target and subsequent separation in the magnetic dipole fields of the FRagment Separator (FRS). After reaching the LAND setup in Cave C, the radioactive ions were excited electromagnetically in the electric field of a Pb target. The decay products have been measured in inverse kinematics using various detectors. Neutron-rich 67−69Ni isotopes decay by the emission of neutrons, which are detected in the LAND detector. The present analysis concentrates on the (gamma,n) and (gamma,2n) channels in these nuclei, since the proton and three-neutron thresholds are unlikely to be reached considering the virtual photon spectrum for nickel ions at 500 AMeV. A measurement of the stable 58Ni isotope is used as a benchmark to check the accuracy of the present results against previously published data. The measured (gamma,n) and (gamma,np) channels are compared with an inclusive photoneutron measurement by Fultz and coworkers, and the two are consistent within the respective errors. The measured excitation energy distributions of 67−69Ni contain a large portion of the Giant Dipole Resonance (GDR) strength predicted by the Thomas-Reiche-Kuhn energy-weighted sum rule, as well as a significant amount of low-lying E1 strength that cannot be attributed to the GDR alone. The GDR distribution parameters are calculated using well-established semi-empirical systematic models, providing the peak energies and widths. The GDR strength is extracted from the chi-square minimization of the model GDR against the measured data of the (gamma,2n) channel, thereby excluding any influence of possible low-lying strength.
The subtraction of the obtained GDR distribution from the total measured E1 strength provides the low-lying E1 strength distribution, which is attributed to the Pygmy Dipole Resonance (PDR). The extraction of the peak energy, width and strength is performed using a Gaussian function. The minimization of trial Gaussian distributions against the data does not converge towards a sharp minimum. Therefore, the results are presented as a chi-square distribution as a function of all three Gaussian parameters. Various predictions of PDR distributions exist, as well as a recent measurement of the 68Ni pygmy dipole resonance obtained by virtual photon scattering, to which the present pygmy dipole resonance distribution is also compared.
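The chi-square scan over all three Gaussian parameters described above can be illustrated with a toy brute-force grid search on synthetic data; the energies, errors and grid ranges below are made up for the sketch and are not the measured 67−69Ni spectra:

```python
# Chi-square minimization of a Gaussian over its three parameters,
# toy grid search on synthetic data.
import math

def gauss(E, A, E0, sigma):
    # Gaussian strength distribution
    return A * math.exp(-0.5 * ((E - E0) / sigma) ** 2)

def chi2(data, A, E0, sigma):
    # data: list of (E, strength, error)
    return sum(((s - gauss(E, A, E0, sigma)) / err) ** 2 for E, s, err in data)

# Synthetic "measured" points drawn from A=1.0, E0=9.0, sigma=1.0
data = [(E, gauss(E, 1.0, 9.0, 1.0), 0.1) for E in range(6, 13)]

# Brute-force scan over (A, E0, sigma)
best = min(
    ((A / 10, E0 / 10, s / 10)
     for A in range(5, 16) for E0 in range(80, 101) for s in range(5, 16)),
    key=lambda p: chi2(data, *p),
)
print(best)  # -> (1.0, 9.0, 1.0)
```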

Relevance:

90.00%

Publisher:

Abstract:

The aim of this dissertation is the experimental characterization and quantitative description of the hybridization of complementary nucleic acid strands with surface-bound capture molecules for the development of integrated biosensors. In contrast to solution-based methods, microarray substrates allow many nucleic acid combinations to be investigated in parallel. As a biologically relevant evaluation system, the actin gene, universally expressed in eukaryotes, from different plant species was used. This test system makes it possible to characterize closely related plant species on the basis of small differences in the gene sequence (SNPs). Building on this well-studied model of a house-keeping gene, a comprehensive microarray system was realized, consisting of short and long oligonucleotides (with incorporated LNA molecules), cDNAs, and DNA and RNA targets. This yielded a test system with high signal intensities, optimized for online measurement. Based on the results, the entire signal path from nucleic acid concentration to digital value was modelled. The insights into the kinetics and thermodynamics of hybridization gained from the development work and the experiments are summarized in three publications that form the backbone of this dissertation. The first publication describes how online measurement of kinetics and thermodynamics improves the reproducibility and specificity of microarray results compared with endpoint-based measurements on standard microarrays. Two algorithms were developed to evaluate the huge amounts of data: a reaction-kinetic modelling of the isotherms, and a description of the melting transition based on Fermi-Dirac statistics. These algorithms are described in the second publication.
Realizing the same sequences in chemically different nucleic acids (DNA, RNA and LNA) makes it possible to study defined differences in the conformation of the ribose ring and in the C5 methyl group of the pyrimidines. The competitive interaction of these different nucleic acids of identical sequence, and its effects on kinetics and thermodynamics, is the subject of the third publication. Beyond the molecular-biological and technological advances in sensing the hybridization reactions of surface-bound nucleic acid molecules, the automated evaluation and modelling of the resulting data volumes, and the improved quantitative description of the kinetics and thermodynamics of these reactions, the results contribute to a better understanding of the physico-chemical structure of the most elementary biological molecule and of its still incompletely understood specificity.
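The Fermi-Dirac description of the melting transition mentioned in the second publication amounts to a two-state sigmoid for the bound fraction as a function of temperature. A minimal sketch of that functional form; the melting temperature and width below are illustrative, not fitted values from the thesis:

```python
# Fermi-Dirac (two-state sigmoid) form for the melting transition of
# surface-bound duplexes: theta(T) = 1 / (1 + exp((T - Tm) / w)).
import math

def bound_fraction(T, Tm, w):
    """Fraction of duplexes still hybridized at temperature T;
    Tm is the melting temperature, w the transition width."""
    return 1.0 / (1.0 + math.exp((T - Tm) / w))

# At T = Tm exactly half of the strands are hybridized
print(bound_fraction(65.0, 65.0, 2.5))  # -> 0.5
```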

Relevance:

90.00%

Publisher:

Abstract:

Lattice Quantum Chromodynamics (LQCD) is the preferred tool for obtaining non-perturbative results from QCD in the low-energy regime. It has by now entered the era in which high-precision calculations for a number of phenomenologically relevant observables at the physical point, with dynamical quark degrees of freedom and controlled systematics, become feasible. Despite these successes there are still quantities where control of systematic effects is insufficient. The subject of this thesis is the exploration of the potential of today's state-of-the-art simulation algorithms for non-perturbatively $\mathcal{O}(a)$-improved Wilson fermions to produce reliable results in the chiral regime and at the physical point, both for zero and non-zero temperature. Important in this context is control over the chiral extrapolation. This thesis is concerned with two particular topics, namely the computation of hadronic form factors at zero temperature, and the properties of the phase transition in the chiral limit of two-flavour QCD.

The electromagnetic iso-vector form factor of the pion provides a platform to study systematic effects and the chiral extrapolation for observables connected to the structure of mesons (and baryons). Mesonic form factors are computationally simpler than their baryonic counterparts but share most of the systematic effects. This thesis contains a comprehensive study of the form factor in the regime of low momentum transfer $q^2$, where the form factor is connected to the charge radius of the pion. A particular emphasis is on the region very close to $q^2=0$, which has not been explored so far, neither in experiment nor in LQCD. The results for the form factor close the gap between the smallest spacelike $q^2$-value available so far and $q^2=0$, and reach an unprecedented accuracy with full control over the main systematic effects. This enables the model-independent extraction of the pion charge radius.
The results for the form factor and the charge radius are used to test chiral perturbation theory ($\chi$PT) and are thereby extrapolated to the physical point and the continuum. The final result in units of the hadronic radius $r_0$ is
$$ \left\langle r_\pi^2 \right\rangle^{\rm phys}/r_0^2 = 1.87 \: \left(^{+12}_{-10}\right)\left(^{+\:4}_{-15}\right) \quad \textnormal{or} \quad \left\langle r_\pi^2 \right\rangle^{\rm phys} = 0.473 \: \left(^{+30}_{-26}\right)\left(^{+10}_{-38}\right)(10) \: \textnormal{fm}^2 \;, $$
which agrees well with the results from other measurements in LQCD and experiment. Note that this is the first continuum-extrapolated result for the charge radius from LQCD which has been extracted from measurements of the form factor in the region of small $q^2$.

The order of the phase transition in the chiral limit of two-flavour QCD and the associated transition temperature are the last unknown features of the phase diagram at zero chemical potential. The two possible scenarios are a second-order transition in the $O(4)$ universality class or a first-order transition. Since direct simulations in the chiral limit are not possible, the transition can only be investigated by simulating at non-zero quark mass with a subsequent chiral extrapolation, guided by the universal scaling in the vicinity of the critical point. The thesis presents the setup and first results from a study on this topic. The study provides an ideal platform to test the potential and limits of today's simulation algorithms at finite temperature. The results from a first scan at a constant zero-temperature pion mass of about 290 MeV are promising, and it appears that simulations down to physical quark masses are feasible. Of particular relevance for the order of the chiral transition is the strength of the anomalous breaking of the $U_A(1)$ symmetry at the transition point.
It can be studied by looking at the degeneracies of the correlation functions in the scalar and pseudoscalar channels. For the temperature scan reported in this thesis the breaking is still pronounced in the transition region, and the symmetry becomes effectively restored only above $1.16\:T_C$. The thesis also provides an extensive outline of research perspectives and includes a generalisation of the standard multi-histogram method to explicitly $\beta$-dependent fermion actions.
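The connection between the form factor near $q^2=0$ and the charge radius used above is the slope relation $\langle r_\pi^2\rangle = -6\,\mathrm{d}F/\mathrm{d}Q^2|_{Q^2=0}$ (with spacelike $Q^2=-q^2$). A toy linear least-squares fit on synthetic points, not the lattice data; the input radius of 12.0 GeV^-2 is chosen purely for illustration:

```python
# Pion charge radius from the form-factor slope at Q^2 = 0,
# via a simple least-squares line through synthetic points.

def radius_from_formfactor(points):
    """points: list of (Q2, F) with spacelike Q2 = -q^2 > 0.
    Linear fit F = a + b*Q2 near Q2 = 0; returns <r^2> = -6*b
    (in GeV^-2 if Q2 is in GeV^2; multiply by (hbar*c)^2 for fm^2)."""
    n = len(points)
    sx = sum(q for q, _ in points)
    sy = sum(f for _, f in points)
    sxx = sum(q * q for q, _ in points)
    sxy = sum(q * f for q, f in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -6.0 * b

# Synthetic points from F = 1 - Q2 * r2 / 6 with r2 = 12.0 GeV^-2
pts = [(q2 / 10, 1.0 - (q2 / 10) * 12.0 / 6.0) for q2 in range(1, 6)]
print(round(radius_from_formfactor(pts), 6))  # -> 12.0
```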

Relevance:

90.00%

Publisher:

Abstract:

In this thesis the measurement of the effective weak mixing angle (wma) in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined upon the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp to Z/gamma* + X to ee + X, taking a total integrated luminosity of 4.8 fb^(-1) of data into account. The data was recorded at a proton-proton center-of-mass energy of sqrt(s) = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of the electromagnetic and weak forces. The higher-order corrections to wma are related to other SM parameters like the mass of the Higgs boson.

Because of the symmetric initial state of the colliding protons, there is no preferred forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen along the longitudinal boost of the electron-positron final state. As a consequence, events at low absolute rapidity have a higher chance of being assigned the opposite of the true direction relative to the reference axis. This effect, called dilution, is reduced when events at higher rapidities are used. It can be studied by including electrons and positrons in the forward regions of the ATLAS calorimeters (electrons and positrons are further referred to as electrons). To include the electrons from the forward region, the energy calibration for the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of a central and a forward electron together with the previously derived central electron energy calibration.
The uncertainty is shown to be dominated by the systematic variations.

The extraction of wma is performed using chi^2 tests, comparing the measured AFB distribution in data to a set of template distributions with varied values of wma. The templates are built with a forward-folding technique using modified generator-level samples and the official fully simulated signal sample with full detector simulation, particle reconstruction and identification. The analysis is performed in two different channels: pairs of central electrons, or one central and one forward electron. The results of the two channels are in good agreement and are the first measurements of wma at the Z resonance using electron final states in proton-proton collisions at sqrt(s) = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties resulting from the knowledge of the parton distribution functions (PDF) and the systematic uncertainties of the energy calibration.

The extracted results of wma are combined and yield a value of wma_comb = 0.2288 +- 0.0004 (stat.) +- 0.0009 (syst.) = 0.2288 +- 0.0010 (tot.). The measurements are compared to the results of previous measurements at the Z boson resonance. The deviation with respect to the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
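The counting definition of the forward-backward asymmetry underlying the above is AFB = (N_F - N_B) / (N_F + N_B), with forward/backward given by the sign of cos(theta*) relative to the boost-based reference axis. A minimal sketch with made-up cos(theta*) values:

```python
def afb(cos_thetas):
    """Counting definition of the forward-backward asymmetry."""
    n_f = sum(1 for c in cos_thetas if c > 0)  # forward: cos(theta*) > 0
    n_b = sum(1 for c in cos_thetas if c < 0)  # backward: cos(theta*) < 0
    return (n_f - n_b) / (n_f + n_b)

# 3 forward and 2 backward events -> AFB = (3 - 2) / 5 = 0.2
print(afb([0.3, 0.7, -0.2, 0.5, -0.6]))  # -> 0.2
```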

Relevance:

90.00%

Publisher:

Abstract:

Time series are ubiquitous. The acquisition and processing of continuously measured data is present in all areas of the natural sciences, medicine and finance. The enormous growth of recorded data volumes, whether from automated monitoring systems or integrated sensors, demands exceptionally fast algorithms in theory and practice. This thesis therefore deals with the efficient computation of subsequence alignments. Complex algorithms such as anomaly detection, motif queries or the unsupervised extraction of prototypical building blocks in time series make extensive use of these alignments, hence the need for fast implementations. The work is divided into three approaches that address this challenge: four alignment algorithms and their parallelization on CUDA-capable hardware, an algorithm for segmenting data streams, and a unified treatment of Lie-group-valued time series.

The first contribution is a complete CUDA port of the UCR suite, the world-leading implementation of subsequence alignment. It includes a new computation scheme for determining local alignment scores under the z-normalized Euclidean distance, deployable on any parallel hardware that supports fast Fourier transforms. Furthermore, we give a SIMT-compatible implementation of the UCR suite's lower-bound cascade for the efficient computation of local alignment scores under Dynamic Time Warping. Both CUDA implementations are one to two orders of magnitude faster than established methods.

Second, we investigate two linear-time approximations for the elastic alignment of subsequences. On the one hand, we treat a SIMT-compatible relaxation scheme for greedy DTW and its efficient CUDA parallelization.
On the other hand, we introduce a new local distance measure, the Gliding Elastic Match (GEM), which can be computed with the same asymptotic time complexity as greedy DTW but offers a complete relaxation of the penalty matrix. Further improvements include invariance against trends on the measurement axis and uniform scaling on the time axis. An extension of GEM to multi-shape segmentation is also discussed and evaluated on motion data. Both CUDA parallelizations achieve runtime improvements of up to two orders of magnitude.

The treatment of time series in the literature is usually restricted to real-valued measurements. The third contribution is a unified method for handling Lie-group-valued time series. Building on it, distance measures on the rotation group SO(3) and on the Euclidean group SE(3) are treated, and memory-efficient representations and group-compatible extensions of elastic measures are discussed.
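The core primitive of the UCR suite discussed above is the z-normalized Euclidean distance between a query and every window of the series. A naive O(n·m) reference sketch on toy data (the thesis computes these scores via FFT and lower-bound pruning on CUDA hardware; this is only the definition, not the optimized scheme):

```python
# z-normalized Euclidean distance subsequence search, naive reference.
import math

def znorm(x):
    """z-normalize a window (population std; constant windows left as zeros)."""
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / len(x)) or 1.0
    return [(v - m) / s for v in x]

def best_match(series, query):
    """Return (start index, distance) of the best-matching window."""
    q = znorm(query)
    best_d, best_i = float("inf"), -1
    for i in range(len(series) - len(query) + 1):
        w = znorm(series[i:i + len(query)])
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, w)))
        if d < best_d:
            best_d, best_i = d, i
    return best_i, best_d

# The query is a scaled/shifted copy of the bump at index 2, so the
# z-normalized distance there is (numerically) zero.
series = [0, 0, 1, 3, 5, 3, 1, 0, 0, 0]
idx, dist = best_match(series, [2, 6, 10, 6, 2])
print(idx, round(dist, 6))  # -> 2 0.0
```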