760 results for Lipschitz trivial


Abstract:

Nonlinear dynamics generalises statements about dynamical systems by abstracting from concrete systems. In engineering, by contrast, machines are very concrete, and treating the problems that arise with the methods of theoretical physics is not trivial. This thesis attempts to localise some of the difficulties of applying nonlinear theory in engineering. Using four classes of modelling approaches as examples, application interfaces are examined and systematised. The application of models built explicitly on known physical laws is limited by the number of degrees of freedom and the constraints of concrete systems. Such models nevertheless provide important hints at the variety of nonlinear phenomena and contribute to their understanding; they are therefore important for design practice. Typical nonlinear phenomena and their underlying mechanisms are presented and classified, and fundamental problems of the computability of analytically formulated models are considered. A second interface is offered by representations of system behaviour as superpositions of special functions which, for example, reflect symmetry properties of the system under consideration particularly clearly. Compared with the classical Fourier decomposition by frequency and phase, analysis by level of detail and position of wavelet functions offers important advantages for nonlinear, state-space-based data analysis. Many methods of nonlinear data analysis rest on metric properties of the dynamical systems. As a third group, topological methods are examined by way of contrast. The construction of simplices from time series by means of the time-delay method is the basis for a triangulation of the state spaces. Methods such as template analysis, which are based on embedding one-dimensional trajectories in R^3, cannot, however, simply be carried over to high-dimensional state manifolds. Finally, stochastic aspects are treated. Fluctuations of the system behaviour can stem from fluctuations of the initial values and/or from fluctuations of the system dynamics itself. Classifying a concrete application, however, presupposes a sound understanding of stochastic processes. The reconstruction of stochastic dynamics via a one-dimensional Fokker-Planck equation clearly exhibits the practical limits of such approaches.
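
The time-delay method mentioned above admits a very compact implementation. The following is a minimal sketch of the standard technique (an illustration, not code from the thesis), with hypothetical embedding dimension dim and delay tau:

    import numpy as np

    def delay_embed(x, dim, tau):
        # State vectors v_i = (x_i, x_{i+tau}, ..., x_{i+(dim-1)*tau}).
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    # Toy example: reconstruct a 3-dimensional state space from a noisy oscillation.
    t = np.linspace(0.0, 100.0, 5000)
    x = np.sin(t) + 0.01 * np.random.randn(t.size)
    states = delay_embed(x, dim=3, tau=25)   # shape (4950, 3)

The rows of the resulting array are the reconstructed state vectors; simplices for a triangulation of the state space can then be built from such points.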

Abstract:

The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the tools needed to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows large-scale symbolic applications to be developed in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems. Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
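
GiNaC itself is a C++ library; as a language-neutral illustration of the design principle claimed here (a symbolic manipulator built from many fine-grained objects, with expressions as trees), consider this deliberately tiny expression tree in Python. It is a toy sketch of the general idea, in no way GiNaC's actual API:

    class Expr:
        def __add__(self, o): return Add(self, o)
        def __mul__(self, o): return Mul(self, o)

    class Symbol(Expr):
        def __init__(self, name): self.name = name
        def eval(self, env): return env[self.name]

    class Num(Expr):
        def __init__(self, v): self.v = v
        def eval(self, env): return self.v

    class Add(Expr):
        def __init__(self, a, b): self.a, self.b = a, b
        def eval(self, env): return self.a.eval(env) + self.b.eval(env)

    class Mul(Expr):
        def __init__(self, a, b): self.a, self.b = a, b
        def eval(self, env): return self.a.eval(env) * self.b.eval(env)

    # Every subexpression is a small object; operations build new trees.
    x, y = Symbol("x"), Symbol("y")
    expr = x * x + Num(2) * y
    print(expr.eval({"x": 3.0, "y": 1.0}))   # 11.0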

Abstract:

This thesis investigates the orientational glass transition of disordered molecular crystals. The theoretical treatment is complicated by the anisotropy of the one-particle distribution function and of the pair functions. If one assumes a rigid lattice, the reciprocal space is in return restricted to the first Brillouin zone. The orientational glass transition is studied within the mode-coupling equations, which are derived for this purpose. Hard ellipsoids of revolution on a rigid sc lattice serve as the model. To compute the static tensorial structure factors, the Ornstein-Zernike (OZ) equation of molecular crystals is derived and solved self-consistently together with the Percus-Yevick (PY) approximation carried over from molecular liquids. In parallel, the structure factors are determined by MC simulations. The OZ equation of molecular crystals resembles that of liquids, but because of the rigid lattice the direct and total correlation functions appear only without constant parts in the angular variables, in contrast to the PY approximation. The anisotropy moreover introduces a non-trivial additional factor. OZ/PY structure factors and MC results agree well. The matrix elements of the density-density correlation function show three main behaviours: oscillatory, monotonic and irregular decay. Oscillations correspond to alternating density fluctuations, lead to maxima of the structure factors at the zone boundary, and occur for oblate and sufficiently wide prolate ellipsoids, more weakly also for thin, not too long prolate ones. The exponential monotonic decay occurs for all ellipsoids and leads to maxima of the structure factors at the zone centre, indicating a tendency towards nematic order. The OZ/PY theory is limited by diverging maxima of the structure factors. The mode-coupling equations of molecular crystals turn out to be very similar to those of molecular liquids; on a rigid lattice, however, only the matrix elements with l,l' > 0 play a role, and umklapp processes of reciprocal vectors occur. Here, too, the anisotropy introduces non-constant additional factors. Except for flat oblate ellipsoids, the mode-coupling glass line is determined by the divergence of the structure factors. For very long ellipsoids the structure factors must be extrapolated towards the divergence. Hence it is not the orientational cage effect that drives the glass transition, but fluctuations at a phase boundary. Near the spherical shape no reliable glass line can be fixed. The frozen critical density-density correlators show the oscillations of the static correlators only in a few cases, whereas the monotonic decay is mostly retained for long times. Consequently, the critical mode-coupling non-ergodicity parameters have weakened maxima at the zone centre, while the maxima at the zone boundary have mostly disappeared. The normalised non-ergodicity parameters display a wealth of behaviours, especially deeper in the glass.
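
For reference, the liquid-state forms from which the molecular-crystal versions are derived read, in standard notation (h the total and c the direct correlation function, g = h + 1, ρ the number density, β = 1/k_B T, u the pair potential):

$$ h(1,2) = c(1,2) + \rho \int c(1,3)\, h(3,2)\, d(3), \qquad c(r) = \left(1 - e^{\beta u(r)}\right) g(r). $$

On a rigid lattice the convolution integral becomes a lattice sum with wave vectors restricted to the first Brillouin zone and, as described above, the constant parts in the angular variables drop out of the correlation functions.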

Abstract:

Due to its high Curie temperature of 420 K and band structure calculations predicting 100% spin polarisation, Sr2FeMoO6 is a potential candidate for spintronic devices. However, the preparation of good quality thin films has proven to be a non-trivial task. Epitaxial Sr2FeMoO6 thin films were prepared by pulsed laser deposition on different substrates. Differing from previous reports, a post-deposition annealing step at low oxygen partial pressure (10^-5 mbar) was introduced, which enabled the fabrication of reproducible, high quality samples. The crystal structure and morphology of the thin films vary with the structural properties of the substrates. The close interrelation between the structural, magnetic and electronic properties of Sr2FeMoO6 was studied; a detailed evaluation of the results allowed us to extract valuable information on the microscopic nature of magnetism and charge transport. Smooth films with a mean roughness of about 2 nm have been achieved, which is a prerequisite for a possible inclusion of this material in future devices. In order to establish device-oriented sub-micron patterning as a standard technique, electron beam lithography and focussed ion beam etching facilities have been put into operation, and a detailed characterisation of these systems has been performed. To determine the technological prospects of new spintronics materials, the verification of a high spin polarisation is of vital interest. A popular technique for this task is point contact Andreev reflection (PCAR). Commonly, the charge transport in a transparent metal-superconductor contact of nanometer dimensions is attributed solely to coherent transport. If this condition is not fulfilled, inelastic processes in the constriction have to be considered. PCAR has been applied to Sr2FeMoO6 and the Heusler compound Co2Cr0.6Fe0.4Al. Systematic deviations between measured spectra and the standard models of PCAR have been observed. Existing approaches have therefore been generalised to include the influence of heating. With the extended model the measured data were successfully reproduced, but the analysis has revealed grave implications for the determination of the spin polarisation, which was found to break down completely in certain cases.
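
In the PCAR literature the measured conductance is commonly decomposed into an unpolarised and a fully spin-polarised channel, weighted by the transport spin polarisation P (the usual modified-BTK ansatz; the heating extension developed in this work generalises models of this type):

$$ G(V) = (1 - P)\, G_{\mathrm{u}}(V) + P\, G_{\mathrm{p}}(V), $$

where G_u and G_p follow from the Andreev reflection coefficients without and with the polarisation constraint, respectively.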

Abstract:

In this thesis a connection between triply factorised groups and nearrings is investigated. A group G is called triply factorised by its subgroups A, B, and M if G = AM = BM = AB, where M is normal in G and the intersections of A and B with M are trivial. There is a well-known connection between triply factorised groups and radical rings: if the adjoint group of a radical ring operates on its additive group, the semidirect product of those two groups is triply factorised. Conversely, if G = AM = BM = AB is a triply factorised group with abelian subgroups A, B, and M, then G can be constructed from a suitable radical ring, provided the intersection of A and B is trivial. In these triply factorised groups the normal subgroup M is always abelian. In this thesis the construction of triply factorised groups is generalised using nearrings instead of radical rings. Nearrings are a generalisation of rings in the sense that their additive groups need not be abelian and only one distributive law holds. Furthermore, it is shown that every triply factorised group G = AM = BM = AB can be constructed from a nearring if A and B intersect trivially. Moreover, the structure of nearrings is investigated in detail. Local nearrings are examined in particular, since they are important for the construction of triply factorised groups. Given an arbitrary p-group N, a method to construct a local nearring is presented such that the triply factorised group constructed from this nearring contains N as a subgroup of the normal subgroup M. Finally, all local nearrings with dihedral groups of units are classified. It turns out that these nearrings are always finite and their order does not exceed 16.
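
The radical-ring construction sketched above can be made concrete in a few lines. The following is a small computational sanity check (my illustration, not code from the thesis), assuming the standard recipe: for a radical ring R with circle operation a∘b = a + b + ab, form the semidirect product of the adjoint group (R,∘) with the additive group (R,+) via the action m ↦ m(1+b), and take A the complement, M the normal subgroup, and B the diagonal copy of the adjoint group. Here R = 2Z/8Z, a nil and hence radical ring:

    from itertools import product

    MOD = 8
    R = [0, 2, 4, 6]                  # the nil ring 2Z/8Z is a radical ring

    def circ(a, b):                   # adjoint operation a∘b = a + b + ab
        return (a + b + a * b) % MOD

    def mul(g, h):                    # semidirect product: (a,m)(b,n) = (a∘b, m(1+b)+n)
        (a, m), (b, n) = g, h
        return (circ(a, b), (m * (1 + b) + n) % MOD)

    G = set(product(R, R))            # the whole group, 16 elements
    M = {(0, m) for m in R}           # normal subgroup: additive group of R
    A = {(a, 0) for a in R}           # complement: adjoint group of R
    B = {(a, a) for a in R}           # diagonal copy of the adjoint group

    def prod_set(X, Y):
        return {mul(x, y) for x in X for y in Y}

    assert prod_set(A, M) == prod_set(B, M) == prod_set(A, B) == G
    assert A & M == B & M == A & B == {(0, 0)}
    print("G = AM = BM = AB, all pairwise intersections trivial")

All three factorisations and the triviality of the intersections can thus be checked exhaustively for this 16-element group.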

Abstract:

The main scope of my PhD is the reconstruction of the large-scale bivalve phylogeny on the basis of four mitochondrial genes, with samples taken from all major groups of the class. To my knowledge, it is the first attempt of such breadth in Bivalvia. I decided to focus on both ribosomal and protein-coding DNA sequences (two ribosomal RNA genes, 12s and 16s, and two protein-coding ones, cytochrome c oxidase I and cytochrome b), since both the literature and my preliminary results confirmed the importance of combined gene signals in resolving the evolutionary pathways of the group. Moreover, I wanted to propose a methodological pipeline that proved useful for obtaining robust results in bivalve phylogeny. Best-performing taxon sampling and alignment strategies were tested, and several data partitioning schemes and molecular evolution models were analyzed, demonstrating the importance of shaping and implementing non-trivial evolutionary models. In the line of a more rigorous approach to data analysis, I also proposed a new method to assess taxon sampling, developed from Clarke and Warwick statistics: taxon sampling is a major concern in phylogenetic studies, and incomplete, biased, or improper taxon assemblies can lead to misleading results in reconstructing evolutionary trees. Theoretical methods are already available to optimize taxon choice in phylogenetic analyses, but most involve some knowledge about the genetic relationships of the group of interest, or even a well-established phylogeny itself; such data are not always available in general phylogenetic applications. The method I propose measures the "phylogenetic representativeness" of a given sample or set of samples, and it is based entirely on the pre-existing available taxonomy of the ingroup, which is commonly known to investigators. Moreover, it also accounts for instability and discordance in taxonomies. A Python-based script suite, called PhyRe, has been developed to implement all analyses.
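
Clarke and Warwick's statistics, on which the "phylogenetic representativeness" measure builds, reduce in their simplest form to the average taxonomic distinctness Δ+: the mean path weight between all pairs of species in the pre-existing taxonomy. A minimal sketch (my illustration, not PhyRe itself), assuming each species is encoded as a tuple of ranks from most to least specific, with hypothetical per-rank weights:

    from itertools import combinations

    def pair_weight(sp1, sp2, weights):
        # Weight of the lowest taxonomic rank the two species share,
        # with ranks ordered from most to least specific.
        for rank, w in enumerate(weights):
            if sp1[rank] == sp2[rank]:
                return w
        return weights[-1] + 1.0      # joined only at the root of the taxonomy

    def avg_taxonomic_distinctness(species, weights):
        # Clarke & Warwick's Delta+: mean pairwise path weight.
        pairs = list(combinations(species, 2))
        return sum(pair_weight(a, b, weights) for a, b in pairs) / len(pairs)

    # Toy sample: (genus, family, order) for three bivalve species.
    sample = [("Mytilus", "Mytilidae", "Mytilida"),
              ("Perna",   "Mytilidae", "Mytilida"),
              ("Ostrea",  "Ostreidae", "Ostreida")]
    print(avg_taxonomic_distinctness(sample, weights=[1.0, 2.0, 3.0]))  # 3.33...

Comparing the Δ+ of a candidate sample against its distribution over random subsamples of the full taxonomy is the kind of test such a representativeness assessment rests on.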

Abstract:

This doctoral thesis is part of the agreement between ARPA_SIMC (the funding body), the Regional Civil Protection Agency and the Department of Earth and Geological-Environmental Sciences of the University of Bologna. The main objective is the determination of possible rainfall thresholds for landslide triggering in Emilia-Romagna that can be used as a forecasting support tool in the Civil Protection operations room. In such a complex geological setting, a traditional empirical approach is not sufficient to discriminate unambiguously between triggering and non-triggering meteorological events, and in general the data are too scattered to draw a statistically significant threshold. It was therefore decided to apply a rigorous Bayesian statistical approach, innovative in that it computes the probability of a landslide given a certain rainfall event, P(A|B), considering not only the rainfall that triggered landslides (i.e. the conditional probability of a certain rainfall event given the occurrence of a landslide, P(B|A)), but also the non-triggering rainfall (i.e. the prior probability of a rainfall event, P(B)). The Bayesian approach was applied to the time interval from 1939 to 2009. The resulting probability isolines minimise false alarms and are easily implemented in a regional warning system, but they may have limited predictive value for phenomena not represented in the historical dataset or occurring under anomalous conditions. Examples are shallow landslides evolving into debris flows, extremely rare in the last 70 years but recently increasing in frequency. This problem was addressed by testing the predictive variability of several physically based models developed for this purpose, among them X-SLIP (Montrasio et al., 1998), SHALSTAB (SHALlow STABility model, Montgomery & Dietrich, 1994), Iverson (2000), TRIGRS 1.0 (Baum et al., 2002) and TRIGRS 2.0 (Baum et al., 2008).
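
The relation underlying the approach is simply Bayes' theorem; with A the occurrence of a landslide and B a given rainfall event,

$$ P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}, $$

so the triggering rainfall enters through the likelihood P(B|A), the full rainfall record through P(B), and the landslide inventory frequency through the prior P(A).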

Abstract:

The aim of this work is to explore, within the framework of the presumably asymptotically safe Quantum Einstein Gravity, quantum corrections to black hole spacetimes, in particular in the case of rotating black holes. We have analysed this problem by exploiting the scale dependent Newton's constant implied by the renormalization group equation for the effective average action, and by introducing an appropriate "cutoff identification" which relates the renormalization scale to the geometry of the spacetime manifold. We used these two ingredients to "renormalization group improve" the classical Kerr metric that describes the spacetime generated by a rotating black hole. We have focused our investigation on four basic subjects of black hole physics; the main results can be summarized as follows. Concerning the critical surfaces, i.e. horizons and static limit surfaces, the improvement leads to a smooth deformation of the classical critical surfaces; their number remains unchanged. In relation to the Penrose process for energy extraction from black holes, we have found that there exists a non-trivial correlation between regions of negative energy states in the phase space of rotating test particles and configurations of the critical surfaces of the black hole. As for the vacuum energy-momentum tensor and the energy conditions, we have shown that no model with "normal" matter, in the sense of matter fulfilling the usual energy conditions, can simulate the quantum fluctuations described by the improved Kerr spacetime that we have derived. Finally, in the context of black hole thermodynamics, we have calculated the mass and angular momentum of the improved Kerr black hole, applying the standard Komar integrals; the results reflect the antiscreening character of the quantum fluctuations of the gravitational field. Furthermore, we calculated approximations to the entropy and the temperature of the improved Kerr black hole to leading order in the angular momentum. More generally, we have proven that the temperature can no longer be proportional to the surface gravity if an entropy-like state function is to exist.
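
For orientation, the scale dependence referred to above is commonly written in the interpolating form obtained from the flow of the effective average action (a standard expression in this literature, not necessarily the exact parametrisation used in the thesis):

$$ G(k) = \frac{G_0}{1 + \omega\, G_0\, k^2}, \qquad k(x) = \frac{\xi}{d(x)}, $$

where the cutoff identification ties the renormalization scale k to a diffeomorphism-invariant distance d(x) on the manifold; the improvement then consists in replacing G_0 by G(k(x)) in the classical Kerr metric.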

Abstract:

Among the scientific objectives addressed by the Radio Science Experiment hosted on board the ESA mission BepiColombo is the retrieval of the rotational state of planet Mercury. The estimation of the obliquity and of the libration amplitude has been shown to be fundamental for constraining the interior composition of Mercury. This is accomplished by the Mercury Orbiter Radio science Experiment (MORE) via a strict interaction among different payloads, which makes the experiment particularly challenging. The underlying idea consists in capturing images of the same landmark on the surface of the planet at different epochs, in order to observe the displacement of the identified features with respect to a nominal rotation, from which the rotational parameters can be estimated. Observations must be planned accurately in order to obtain image pairs carrying the highest information content for the subsequent estimation process; this is not a trivial task, especially in light of the several dynamical constraints involved. Another delicate issue is the pattern-matching process between image pairs, for which the lowest correlation errors are desired. The research activity was conducted in the frame of the MORE rotation experiment and addressed the design and implementation of an end-to-end simulator of the experiment, with the final objective of establishing an optimal science planning of the observations. The thesis illustrates the implementation of the individual modules forming the simulator, along with the simulations performed. Results obtained with the preliminary release of the optimization algorithm are presented; the software will be improved and refined in the future, also taking into account the developments of the mission.
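
The pattern-matching step between image pairs can be illustrated with normalized cross-correlation, the textbook similarity measure for landmark registration (a sketch of the principle, not the MORE pipeline itself):

    import numpy as np

    def ncc(patch, template):
        # Normalized cross-correlation of two equally sized image patches.
        p = patch - patch.mean()
        t = template - template.mean()
        return float((p * t).sum() / np.sqrt((p**2).sum() * (t**2).sum()))

    def best_match(image, template):
        # Exhaustive search: the offset maximizing the correlation score
        # gives the landmark displacement between the two epochs.
        H, W = template.shape
        scores = {(i, j): ncc(image[i:i+H, j:j+W], template)
                  for i in range(image.shape[0] - H + 1)
                  for j in range(image.shape[1] - W + 1)}
        return max(scores, key=scores.get)

Comparing the matched position against the one predicted by the nominal rotation model yields the feature displacement from which obliquity and libration amplitude are estimated.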

Abstract:

In this thesis we investigate some properties of one-dimensional quantum systems. From a theoretical point of view, quantum models in one dimension are particularly interesting because they are strongly interacting: particles cannot avoid each other in their motion, so collisions can never be ignored. Yet integrable models often yield new and non-trivial solutions which could not be found perturbatively. In this dissertation we focus on two important aspects of integrable one-dimensional models: their entanglement properties at equilibrium and their dynamical correlators after a quantum quench. The first part of the thesis is therefore devoted to the study of the entanglement entropy in one-dimensional integrable systems, with a special focus on the XYZ spin-1/2 chain, which, in addition to being integrable, is also an interacting model. We derive its Rényi entropies in the thermodynamic limit and analyse their behaviour in the different phases and for different values of the mass gap. In the second part of the thesis we study the dynamics of correlators after a quantum quench, which represent a powerful tool to measure how perturbations and signals propagate through a quantum chain. The emphasis is on the Transverse Field Ising Chain and the O(3) non-linear sigma model, both studied by means of a semi-classical approach. Moreover, in the last chapter we demonstrate a general result about the dynamics of correlation functions of local observables after a quantum quench in integrable systems. In particular, we show that if there are no long-range interactions in the final Hamiltonian, then the dynamics of the model (non-equal-time correlations) is described by the same statistical ensemble that describes its static properties (equal-time correlations).
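
For reference, two of the central objects mentioned above have compact standard definitions: the Rényi entropies of the reduced density matrix ρ_A of a subsystem, and the Transverse Field Ising Chain Hamiltonian (standard conventions, with J the exchange coupling and g the transverse field):

$$ S_n = \frac{1}{1-n}\,\log \operatorname{Tr} \rho_A^{\,n}, \qquad H = -J \sum_j \left( \sigma^x_j \sigma^x_{j+1} + g\,\sigma^z_j \right), $$

with the von Neumann entanglement entropy recovered in the limit n → 1.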

Abstract:

The heavy fermion compound UNi2Al3 exhibits the coexistence of superconductivity and magnetic order at low temperatures, stimulating speculations about a possible exotic Cooper-pairing interaction in this superconductor. However, the preparation of good quality bulk single crystals of UNi2Al3 has proven to be a non-trivial task due to metallurgical problems, which result in the formation of an UAl2 impurity phase and hence a strongly reduced sample purity. The present work concentrates on the preparation and characterization of UNi2Al3 single crystalline thin film samples and on the investigation of their electronic properties. The thin films were prepared in a molecular beam epitaxy (MBE) system: (100)-oriented epitaxial thin films of UNi2Al3 were grown on single crystalline YAlO3 substrates cut in the (010)- or (112)-direction. The high crystallographic quality of the samples was confirmed by several characterisation methods, such as X-ray analysis, RHEED and TEM. To study the magnetic structure of the epitaxial thin films, resonant magnetic x-ray scattering was employed; the magnetic order of the thin film samples, the formation of magnetic domains with different moment directions, and the magnetic correlation length are discussed. The electronic properties of the UNi2Al3 thin films in the normal and superconducting states were investigated by means of transport measurements. A pronounced anisotropy of the temperature dependent resistivity ρ(T) was observed. Moreover, it was found that the temperature of the resistive superconducting transition depends on the current direction, providing evidence for multiband superconductivity in UNi2Al3. The initial slope of the upper critical field H′c2(T) of the thin film samples suggests an unconventional spin-singlet superconducting state, as opposed to bulk single crystal data. To probe the superconducting gap of UNi2Al3 directly by means of tunneling spectroscopy, many planar junctions of different design were prepared, employing different techniques. Despite the junctions reaching the tunneling regime, no features of the superconducting density of states of UNi2Al3 were ever observed. It is assumed that the absence of UNi2Al3 gap features in the tunneling spectra was caused by imperfections of the tunneling contacts: the superconductivity of UNi2Al3 was probably suppressed in a degraded surface layer, resulting in tunneling into non-superconducting UNi2Al3. However, alternative explanations, such as intrinsic pair-breaking effects at the interface to the barrier, are also possible.

Abstract:

This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be indefinitely repeated, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
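
In modular-arithmetic formulations of cyclic scheduling, the precedence constraints mentioned above typically carry an explicit iteration offset: if activity i with duration p_i must precede activity j shifted by k_{ij} repetitions, the start times s_i, s_j and the period λ satisfy (the common textbook form, not necessarily the exact filtering constraint of the thesis)

$$ s_j \;\ge\; s_i + p_i - k_{ij}\,\lambda . $$

Since λ is itself a decision variable inferred from the scheduling decisions, the resulting model is non-linear, in contrast with approaches that fix the period and test feasibility.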

Abstract:

This thesis deals with algebraic cycles on complex abelian varieties of dimension 4. Its goal is to construct a non-trivial element in $\mathrm{Griff}^{3,2}(A^4)$, where $A^4$ denotes the generic abelian variety of dimension 4 with a polarisation of type $(1,2,2,2)$. The first three chapters review elementary definitions and notions and thus fix the notation. In them we recall elementary properties of the filtrations $F_S$ and $Z$ on the Chow groups defined by Saito (cf. [Sa0] and [Sa]). We also recall a relation, taken from [Mu], between the $F_S$-filtration and Beauville's decomposition of the Chow groups (cf. [Be2] and [DeMu]). The most important notions in this part are the higher Griffiths groups and the infinitesimal invariants of higher order. We then deal with generalised Prym varieties associated with $(2:1)$ coverings of curves: we give their construction and important geometric properties, and we compute the type of their polarisation. The chapter on moduli contains a result from [BCV] on the dominance of the map $p(3,2): \mathcal{R}(3,2) \longrightarrow \mathcal{A}_4(1,2,2,2)$. This result is relevant for us because it states that the generic abelian variety of dimension 4 with polarisation of type $(1,2,2,2)$ is a generalised Prym variety of a $(2:1)$ covering of a curve of genus 7 over a curve of genus 3. The second part of the dissertation is the work proper and is structured as follows. One chapter contains the construction of the degeneration of $A^4$: we construct a family $X \longrightarrow S$ of generalised Prym varieties such that the classifying map $S \longrightarrow \mathcal{A}_4(1,2,2,2)$ is dominant. Furthermore, a relative cycle $Y/S$ on $X/S$ is constructed, together with a subvariety $T \subset S$ for which an explicit description of the embedding $Y\vert_T \hookrightarrow X\vert_T$ can be given. The last and most important chapter contains the following: we prove that the infinitesimal invariant of second order $\delta_2(\alpha)$ of $\alpha$ is non-trivial, where $\alpha$ denotes the component of $Y$ in $CH^3_{(2)}(X/S)$ under the Beauville decomposition. With this, and with the help of the cohomological results of an earlier chapter, we can show that $0 \neq [\alpha] \in \mathrm{Griff}^{3,2}(X/S)$. We can refine this statement and show, as the main theorem, that for generic $s \in S$, $0 \neq [\alpha_s] \in \mathrm{Griff}^{3,2}(A^4)$, where $A^4$ is the generic abelian variety of dimension 4 with polarisation of type $(1,2,2,2)$.
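
For context, Beauville's decomposition referred to above splits the rational Chow groups of an abelian variety into eigenspaces of the multiplication maps $[n]$ (standard statement, cf. [Be2]):

$$ CH^p(A)_{\mathbb{Q}} = \bigoplus_{s} CH^p_{(s)}(A), \qquad CH^p_{(s)}(A) = \{\, x : [n]^* x = n^{2p-s}\, x \ \text{for all } n \in \mathbb{Z} \,\}, $$

which is the sense in which $\alpha$ is the component of $Y$ in $CH^3_{(2)}(X/S)$.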

Abstract:

The efficient emulation of a many-core architecture is a challenging task: each core could be emulated through a dedicated thread, and such threads would be interleaved on either a single-core or a multi-core processor, but the high number of context switches would result in unacceptable performance. To support this kind of application, the computational power of the GPU is exploited in order to schedule the emulation threads on the GPU cores. This presents a non-trivial divergence issue, since GPU computational power is offered through SIMD processing elements, which are forced to execute the same instruction synchronously on different memory portions. Thus, a new emulation technique is introduced in order to overcome this limitation: instead of providing a routine for each ISA opcode, the emulator mimics the behaviour of the micro-architecture level, where instructions are data that a unique routine takes as input. The new technique has been implemented and compared with the classic emulation approach, in order to investigate the feasibility of a hybrid solution.
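
The divergence-avoiding idea, one uniform routine that treats instructions as data, can be mimicked with SIMD-style predication: every lane computes all candidate results and the opcode selects among them, so no lane ever branches. A minimal numpy sketch of the principle (an illustration, not the emulator itself):

    import numpy as np

    # One decoded instruction per emulated core: opcode and operands are plain data.
    ops = np.array([0, 1, 0, 2])            # 0 = add, 1 = sub, 2 = mul
    a   = np.array([5, 5, 7, 3])
    b   = np.array([2, 2, 4, 3])

    # A single uniform routine: every lane evaluates every candidate in lockstep,
    # and the opcode selects the result arithmetically instead of branching.
    candidates = np.stack([a + b, a - b, a * b])
    result = np.choose(ops, candidates)
    print(result)                            # [ 7  3 11  9]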

Abstract:

The formation of a market price for an asset can be viewed as a superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable to the emergence of macroscopic properties in statistical physics, brought about by microscopic interactions between the system components involved. The distribution of price changes on financial markets differs markedly from a Gaussian. This leads to empirical peculiarities of the price process, among them, besides the scaling behaviour, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial-market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations of a time series is developed. With this methodology, significant evidence is found that typical behavioural patterns of market participants manifest themselves on short time scales: the reaction to a given price history is not purely random; rather, similar price histories provoke similar reactions. Starting from the study of complex correlations in financial time series, the question is addressed of which properties change at the transition from a positive trend to a negative one. An empirical quantification by rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over 9 orders of magnitude in time these properties are also independent of the analysed market: trends lasting only seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial bubbles and their collapse, since trends on small time scales occur far more frequently. In addition, a Monte Carlo based simulation of the financial market is analysed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in the financial-market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive methods, a substantial reduction of computing time is achieved by parallelisation on a graphics-card architecture. To demonstrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime advantages. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
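
GPU ports of the Ising model such as the one mentioned above typically rely on a checkerboard decomposition: sites of one colour have no nearest neighbours of the same colour, so all of them can be updated in parallel. A vectorized numpy sketch of one Metropolis sweep, illustrating the parallelization scheme rather than reproducing the thesis implementation:

    import numpy as np

    def metropolis_sweep(spins, beta, rng):
        # One checkerboard Metropolis sweep of the 2D Ising model (J = 1).
        L = spins.shape[0]
        ii, jj = np.indices((L, L))
        for colour in (0, 1):           # update black sites, then white sites
            mask = (ii + jj) % 2 == colour
            # Sum of the four nearest neighbours with periodic boundaries.
            nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
                  np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
            dE = 2.0 * spins * nn       # energy cost of flipping each spin
            flip = mask & (rng.random((L, L)) < np.exp(-beta * dE))
            spins[flip] *= -1
        return spins

    rng = np.random.default_rng(42)
    spins = rng.choice([-1, 1], size=(64, 64))
    for _ in range(100):
        metropolis_sweep(spins, beta=0.5, rng=rng)
    print(spins.mean())                 # magnetization per spin

On a GPU each lattice site of the active colour maps naturally onto one thread, which is what makes this decomposition profitable.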