969 results for Algebraic renormalization
Abstract:
The aim of this paper is to indicate how TOSCANA may be extended to allow graphical representations not only of concept lattices but also of concept graphs in the sense of Contextual Logic. The contextual-logic extension of TOSCANA requires the logical scaling of conceptual and relational scales, for which we propose the Peircean Algebraic Logic as reconstructed by R. W. Burch. As graphical representations we recommend, besides labelled line diagrams of concept lattices and Sowa's diagrams of conceptual graphs, particular information maps for utilizing background knowledge as much as possible. Our considerations are illustrated by a small information system about the domestic flights in Austria.
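For readers unfamiliar with the Formal Concept Analysis underlying TOSCANA, the sketch below shows the two derivation operators from which concept lattices are built; the tiny flight context is a hypothetical stand-in for the paper's information system, not its data.

```python
# A minimal sketch (not from the paper) of the derivation operators of
# Formal Concept Analysis. The flight context is a made-up stand-in for
# the paper's Austrian domestic-flights information system.

context = {                      # object -> set of attributes
    "VIE-GRZ": {"morning", "daily"},
    "VIE-INN": {"morning"},
    "VIE-SZG": {"daily"},
}

def intent(objects):
    """Attributes shared by all given objects."""
    if not objects:
        return set.union(*context.values())
    return set.intersection(*(context[o] for o in objects))

def extent(attrs):
    """Objects possessing all given attributes."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B;
# ordered by inclusion of extents, these pairs form the concept lattice.
A = extent({"daily"})
print(sorted(A), sorted(intent(A)))   # ['VIE-GRZ', 'VIE-SZG'] ['daily']
```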
Abstract:
The aim of this paper is the numerical treatment of a boundary value problem for the system of Stokes' equations. For this we extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has been used until now for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the system of Stokes' equations in two dimensions. The procedure is based on potential theoretical considerations in connection with a boundary integral equations method and consists of three approximation steps as follows. In a first step the unknown source density in the potential representation of the solution is replaced by approximate approximations. In a second step the decay behavior of the generating functions is used to gain a suitable approximation for the potential kernel, and in a third step Nyström's method leads to a linear algebraic system for the approximate source density. For every step a convergence analysis is established and corresponding error estimates are given.
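Since the method of approximate approximations may be unfamiliar, here is a minimal one-dimensional sketch of its basic Gaussian quasi-interpolant (our illustration under stated assumptions, not the paper's Stokes procedure); the parameter D = 2.0 and the test function are arbitrary choices.

```python
import numpy as np

# Maz'ya's "approximate approximations" in 1D: the quasi-interpolant
#   u_h(x) = (pi*D)**-0.5 * sum_m u(m*h) * exp(-(x - m*h)**2 / (D*h**2))
# converges like O(h^2) down to a small saturation error O(exp(-pi^2 D)),
# i.e. it approximates smooth functions without true convergence to zero.

def quasi_interpolant(u, h, D, x):
    m = np.arange(np.floor(x.min() / h) - 40, np.ceil(x.max() / h) + 41)
    nodes = m * h                                   # uniform grid h*m
    w = np.exp(-(x[:, None] - nodes) ** 2 / (D * h * h))
    return (np.pi * D) ** -0.5 * w @ u(nodes)

u = np.sin
x = np.linspace(0.0, 1.0, 5)
for h in (0.1, 0.05):
    err = np.abs(quasi_interpolant(u, h, D=2.0, x=x) - u(x)).max()
    print(h, err)   # error shrinks roughly 4x per halving of h,
                    # until the saturation level is reached
```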
Abstract:
The present dissertation is devoted to the construction of exact and approximate analytical solutions of the problem of light propagation in highly nonlinear media. It is demonstrated that for many experimental conditions the problem can be studied within the geometrical optics approximation with sufficient accuracy. Based on renormalization group symmetry analysis, exact analytical solutions of the eikonal equations with a higher-order refractive index are constructed. A new analytical approach to the construction of approximate solutions is suggested, and on its basis approximate solutions for various boundary conditions, nonlinear refractive indices and dimensions are constructed. Exact analytical expressions for the nonlinear self-focusing positions are deduced. On the basis of the obtained solutions a general rule for the single filament intensity is derived; it is demonstrated that the scaling law (the functional dependence of the self-focusing position on the peak beam intensity) is determined by the form of the nonlinear refractive index and not by the beam shape at the boundary. Comparisons of the obtained solutions with results of experiments and numerical simulations are discussed.
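For orientation, the geometrical-optics setting referred to above is the eikonal equation with an intensity-dependent refractive index; the generic expansion below illustrates what a higher-order index means and is not the dissertation's exact system.

```latex
% Generic geometrical-optics setting (an illustration, not the
% dissertation's exact equations): the eikonal \psi obeys
\[
  \left(\nabla\psi\right)^{2} = n^{2}(I), \qquad
  n(I) = n_{0} + n_{2} I + n_{4} I^{2} + \dots,
\]
% where keeping terms beyond the Kerr contribution n_2 I gives a
% "higher-order" refractive index, and rays run normal to the
% level surfaces \psi = const.
```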
Abstract:
This thesis addresses the question of how, within a family of abelian t-modules, the subfamily of uniformizable t-modules can be described. Abelian t-modules are higher-dimensional generalizations of Drinfeld modules over algebraic function fields. It is well known that Drinfeld modules in generic characteristic can be parametrized by analytic tori. This fact, however, carries over only to some t-modules, which are called uniformizable. The situation is somewhat analogous to the theory of elliptic curves, tori and abelian varieties over the complex numbers. To decide whether a t-module is uniformizable in this sense, one applies a criterion of Anderson concerning the rigid-analytic triviality of the associated t-motives. We apply this criterion to a family of two-dimensional t-modules of rank four depending on coefficients a, b, c, d, and arrive at the equivalent question of the convergence of certain recursively defined sequences. The convergence behavior of these sequences can be conveniently analyzed by means of Newton polygons. Finally, this approach yields simply stated conditions on the coefficients a, b, c, d that either guarantee uniformizability or rule it out.
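Since the convergence analysis rests on Newton polygons, a generic sketch of that computation may help: the polygon is the lower convex hull of the points (i, v_i) built from valuations. The thesis applies this to its recursively defined sequences; the toy values below are arbitrary.

```python
# Generic Newton-polygon computation: the lower convex hull of the
# points (i, v_i), where v_i is the valuation of the i-th coefficient.

def newton_polygon(vals):
    """Return the vertices of the lower convex hull of [(i, v_i)]."""
    pts = sorted(vals)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross <= 0:       # a lies on or above the segment o -> p: drop it
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

print(newton_polygon([(0, 3), (1, 1), (2, 2), (3, 0)]))
# [(0, 3), (1, 1), (3, 0)] -- slopes -2 and -1/2 read off the polygon
```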
Abstract:
The present thesis is about the inverse problem in differential Galois theory. Given a differential field, the inverse problem asks which linear algebraic groups can be realized as differential Galois groups of Picard-Vessiot extensions of this field. In this thesis we concentrate on the realization of the classical groups as differential Galois groups. We introduce a method for a very general realization of these groups: for the classical groups of Lie rank $l$ we present explicit linear differential equations whose coefficients are differential polynomials in $l$ differential indeterminates over an algebraically closed field of constants $C$, i.e. our differential ground field is purely differentially transcendental over the constants. For the groups of type $A_l$, $B_l$, $C_l$, $D_l$ and $G_2$ we managed to carry out these realizations at the same time in the spirit of Abhyankar's program 'Nice Equations for Nice Groups'. Here the choice of the defining matrix is important. We found that an educated choice of $l$ negative roots for the parametrization, together with the positive simple roots, leads to a nice differential equation and at the same time defines a sufficiently general element of the Lie algebra. Unfortunately, for the groups of type $F_4$ and $E_6$ the linear differential equations for such elements are of enormous length. Therefore, in the case of $F_4$ and $E_6$ we keep the defining matrix differential equation, which also has an easy and nice shape. The basic idea of the realization is to apply an upper and a lower bound criterion for the differential Galois group to our parameter equations and to show that both bounds coincide. An upper and a lower bound criterion can be found in the literature. Here we only use the upper bound, since the application of the lower bound criterion requires an important condition to be satisfied. If the differential ground field is $C_1$, e.g. $C(z)$ with the standard derivation, this condition is automatically satisfied; since our differential ground field is purely differentially transcendental over $C$, we have no information on whether it holds. The main part of this thesis is the development of an alternative lower bound criterion and its application. We introduce the specialization bound, which states that the differential Galois group of a specialization of the parameter equation is contained in the differential Galois group of the parameter equation. For its application we thus need a differential equation over $C(z)$ with given differential Galois group. A modification of a result of Mitschi and Singer yields such an equation over $C(z)$ up to differential conjugation, i.e. up to transformation to the required shape. The transformation of their equation into a specialization of our parameter equation is carried out for each of the above groups in the respective transformation lemma.
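As standard background for the statements above (general theory, not the thesis's specific equations), the objects involved are the following.

```latex
% A linear differential equation in matrix form over a differential
% field F with field of constants C,
\[
  Y' = A\,Y, \qquad A \in \mathfrak{gl}_{n}(F),
\]
% has a Picard-Vessiot extension E/F, and its differential Galois group
% G = \mathrm{Aut}_{\partial}(E/F) is a linear algebraic group over C;
% the inverse problem asks which such G arise for a given F.
```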
Abstract:
A large class of special functions consists of solutions of systems of linear difference and differential equations with polynomial coefficients. For a given function, these equations, considered as operator polynomials, generate a left ideal in a noncommutative algebra called an Ore algebra. Together with finitely many conditions, this ideal characterizes the function uniquely, so that Gröbner basis techniques can be applied. Many problems related to special functions which can be described by such ideals can be solved by eliminating appropriate noncommutative variables in these ideals. In this work, we mainly achieve the following: 1. We give an overview of the theoretical algebraic background as well as the algorithmic aspects of different methods using noncommutative Gröbner elimination techniques in Ore algebras in order to solve problems related to special functions. 2. We describe in detail algorithms which are based on Gröbner elimination techniques and perform the creative telescoping method for sums and integrals of special functions. 3. We investigate and compare these algorithms on illustrative examples carried out in the computer algebra system Maple. The objective of this investigation is to test how far noncommutative Gröbner elimination techniques can be applied efficiently to perform creative telescoping.
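As a concrete illustration of "operator polynomials in an Ore algebra", the sketch below implements multiplication in the shift Ore algebra with the commutation rule S·p(n) = p(n+1)·S; it is a toy model of the arithmetic underlying the Gröbner computations, not the thesis's Maple code.

```python
import sympy as sp

# The (noncommutative) shift Ore algebra C(n)<S> with S*p(n) = p(n+1)*S.
# An operator sum_i p_i(n) * S^i is stored as {i: p_i} with sympy coeffs.

n = sp.symbols('n')

def ore_mul(A, B):
    """Multiply two operators given as {power of S: coefficient in n}."""
    C = {}
    for i, p in A.items():
        for j, q in B.items():
            c = sp.expand(p * q.subs(n, n + i))   # push S^i past q(n)
            C[i + j] = sp.expand(C.get(i + j, 0) + c)
    return {k: v for k, v in C.items() if v != 0}

# The operator S - (n+1) annihilates f(n) = n!, since f(n+1) = (n+1) f(n).
L = {1: sp.Integer(1), 0: -(n + 1)}
print(ore_mul(L, L))
# {2: 1, 1: -2*n - 3, 0: n**2 + 2*n + 1}: L^2 again annihilates n!
```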
Abstract:
In algebraic cryptanalysis, modern cryptosystems are represented as polynomial, nonlinear systems of equations. Solving such systems is NP-hard; there is thus no algorithm that solves an arbitrary nonlinear system of equations in polynomial time. Nevertheless, the systems generated from modern cryptosystems carry a great deal of structure: with suitable modelling they are quadratic and sparse, hence far from arbitrary. For such systems there are special algorithms that find a solution. One example is the ElimLin algorithm, which iteratively simplifies the system using linear equations. Based on this algorithm, the dissertation presents a new solver for quadratic, sparse systems of equations and uses it to attack two symmetric cryptosystems. The techniques for modelling the ciphers are of crucial importance here, so new techniques for representing cryptosystems are developed. The idea for the model comes from cube attacks, which are particularly effective against stream ciphers. The thesis classifies their different variants and presents possible extensions. The resulting model, in turn, also extends successfully to block ciphers and to other scenarios, requiring only minor modifications.
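A toy sketch of the ElimLin idea over GF(2) follows (a simplified reading, not the dissertation's solver). An equation is a set of monomials and a monomial a frozenset of variable indices, with frozenset() as the constant 1, so that GF(2) addition is XOR and the field equations x² = x hold automatically. Full ElimLin also runs Gaussian elimination on the monomial matrix to expose hidden linear equations; this sketch only harvests equations that are already linear and substitutes them.

```python
def substitute(eq, v, rhs):
    """Replace variable v by the linear polynomial rhs (set of monomials)."""
    out = set()
    for m in eq:
        if v in m:
            for t in rhs:                     # distribute (m / x_v) * rhs
                out ^= {frozenset((m - {v}) | t)}
        else:
            out ^= {m}
    return out

def elimlin(eqs):
    eqs = [set(e) for e in eqs if e]
    progress = True
    while progress:
        progress = False
        for eq in eqs:
            monos = eq - {frozenset()}
            if monos and all(len(m) == 1 for m in monos):   # linear equation
                (v,) = next(iter(monos))                    # pick a variable
                rhs = eq ^ {frozenset({v})}                 # x_v = rest
                eqs = [substitute(e, v, rhs) for e in eqs if e is not eq]
                eqs = [e for e in eqs if e]                 # drop 0 = 0
                progress = True
                break
    return eqs

# x0*x1 + x2 + 1 = 0 and x0 + x2 = 0: eliminating the linear equation
# leaves a single quadratic relation in the remaining variables.
sys = [{frozenset({0, 1}), frozenset({2}), frozenset()},
       {frozenset({0}), frozenset({2})}]
print(elimlin(sys))
```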
Abstract:
Since no physical system can ever be completely isolated from its environment, the study of open quantum systems is pivotal to reliably and accurately control complex quantum systems. In practice, reliability of the control field needs to be confirmed via certification of the target evolution, while accuracy requires the derivation of high-fidelity control schemes in the presence of decoherence. In the first part of this thesis an algebraic framework is presented that determines the minimal requirements for the unique characterisation of arbitrary unitary gates in open quantum systems, independent of the particular physical implementation of the employed quantum device. To this end, a set of theorems is devised that can be used to assess whether a given set of input states on a quantum channel is sufficient to judge whether a desired unitary gate is realised. This makes it possible to determine the minimal input for such a task, which proves to be, quite remarkably, independent of system size. These results elucidate the fundamental limits regarding certification and tomography of open quantum systems. Combining these insights with state-of-the-art Monte Carlo process certification techniques permits a significant improvement in scaling when certifying arbitrary unitary gates. This improvement is not restricted to quantum information devices where the basic information carrier is the qubit but extends to systems where the fundamental informational entities can be of arbitrary dimensionality, the so-called qudits. The second part of this thesis concerns the impact of these findings from the point of view of Optimal Control Theory (OCT). OCT for quantum systems utilises concepts from engineering such as feedback and optimisation to engineer constructive and destructive interferences in order to steer a physical process in a desired direction. It turns out that the aforementioned mathematical findings allow novel optimisation functionals to be deduced that significantly reduce not only the memory required by numerical control algorithms but also the total CPU time required to obtain a certain fidelity for the optimised process. The thesis concludes by discussing two problems of fundamental interest in quantum information processing from the point of view of optimal control: the preparation of pure states and the implementation of unitary gates in open quantum systems. For both cases specific physical examples are considered: for the former, the vibrational cooling of molecules via optical pumping, and for the latter, a superconducting phase qudit implementation. In particular, it is illustrated how features of the environment can be exploited to reach the desired targets.
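As a numerical illustration of the certification task (a generic toy, not the thesis's framework or its minimal input sets), the sketch below compares the outputs of a noisy one-qubit channel with the action of a target unitary on a few probe states; the Kraus operators and probe choice are hypothetical example values.

```python
import numpy as np

# Judge a target unitary on a channel by comparing channel outputs with
# U|psi> for a handful of pure probe states.

d = 2
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)         # target: Hadamard gate
p = 0.02                                             # weak dephasing noise
K = [np.sqrt(1 - p) * U, np.sqrt(p) * np.diag([1, -1]) @ U]   # Kraus ops

def channel(rho):
    return sum(k @ rho @ k.conj().T for k in K)

probes = [np.array([1, 0]), np.array([0, 1]),
          np.array([1, 1]) / np.sqrt(2)]

for psi in probes:
    rho_out = channel(np.outer(psi, psi.conj()))
    target = U @ psi
    fid = np.real(target.conj() @ rho_out @ target)  # <U psi| E(rho) |U psi>
    print(f"fidelity = {fid:.4f}")                   # close to 1 for weak noise
```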
Abstract:
Mesh generation is an important step in many numerical methods. We present the “Hierarchical Graph Meshing” (HGM) method as a novel approach to mesh generation, based on algebraic graph theory. The HGM method can be used to systematically construct configurations exhibiting multiple hierarchies and complex symmetry characteristics. The hierarchical description of structures provided by the HGM method can be exploited to increase the efficiency of multiscale and multigrid methods. In this paper, the HGM method is employed for the systematic construction of super carbon nanotubes of arbitrary order, which present a pertinent example of structurally and geometrically complex, yet highly regular, structures. The HGM algorithm is computationally efficient and exhibits good scaling characteristics. In particular, it scales linearly for super carbon nanotube structures and works much faster than geometry-based methods employing neighborhood search algorithms. Its modular character makes it conducive to automatization. For the generation of a mesh, the information about the geometry of the structure in a given configuration is added in a way that relates geometric symmetries to structural symmetries. The intrinsically hierarchic description of the resulting mesh greatly reduces the effort of determining mesh hierarchies for multigrid and multiscale applications and helps to exploit symmetry-related methods in the mechanical analysis of complex structures.
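The HGM method itself is defined in the paper; as a generic illustration of how hierarchy can be kept explicit in graph-based mesh construction, the sketch below assembles a structured graph as a Cartesian product of two small graphs (our stand-in device, not the HGM algorithm), so that every node is addressed by a (coarse, fine) tuple that multigrid transfers can reuse.

```python
import networkx as nx

# Build a structured mesh graph as a product of a coarse- and a
# fine-level graph; node labels carry the hierarchy explicitly.

ring = nx.cycle_graph(6)          # coarse level: a ring of 6 cells
strip = nx.path_graph(4)          # fine level: a strip of 4 nodes
mesh = nx.cartesian_product(ring, strip)

print(mesh.number_of_nodes(), mesh.number_of_edges())   # 24 nodes, 42 edges
coarse_cell_0 = [v for v in mesh if v[0] == 0]          # one fibre of the hierarchy
print(coarse_cell_0)              # [(0, 0), (0, 1), (0, 2), (0, 3)]
```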
Abstract:
It is well known that two given systems of special functions can be identified by specifying a recurrence equation and a corresponding number of initial values, since from a computer algebra point of view this constitutes a normal form. This raises the interesting research question of identifying function systems that are given by their Rodrigues formula. Using Zeilberger's algorithm for holonomic function families, discovered in the 1990s, the Rodrigues formula can be converted algorithmically into a recurrence equation; if the function family is moreover hypergeometric, this can even be done efficiently. To apply Zeilberger's algorithm at all, one must first succeed in converting the Rodrigues formula into a sum. The present thesis describes the conversion of a Rodrigues formula into the above normal form completely for the continuous, the discrete and the q-discrete case. The procedure given by Almkvist and Zeilberger (1990) for the continuous case, where the n-th derivative occurring in the Rodrigues formula is turned into a complex contour integral via Cauchy's integral formula, takes the following form in the discrete case: the n-th power of the forward difference operator is rewritten as a sum. Generating the recurrence equation from this sum is then straightforward with the discrete Zeilberger algorithm. For the q-case it is shown how recurrence equations can be obtained from four different q-Rodrigues formulas, where first the n-th power of the respective q-operators is converted into a sum. Three of the four summation formulas were previously unknown; they were found experimentally and proved by induction. The q-Zeilberger algorithm then produces the desired recurrence equation from these sums. In practice it is advisable to use the fast Zeilberger algorithm, which outputs recurrence equations for definite sums over hypergeometric terms. The implementation in Maple is based on this version of the algorithm. It is therefore natural that all procedures presented here, which generate recurrence equations from continuous, discrete and q-discrete Rodrigues formulas, are tested exhaustively on the hypergeometric function families of the classical orthogonal polynomials, the classical discrete orthogonal polynomials and the q-Hahn class of the Askey-Wilson scheme. The test results are given in tabular form. A notable research result is that the procedure implemented for the q-case, which generates a recurrence equation from the Rodrigues formula, made it possible to prove that the Rodrigues formula for the Stieltjes-Wigert polynomials given in the standard reference Koekoek/Lesky/Swarttouw (2010) is incorrect. The correct Rodrigues formula was found experimentally and proved with the methods provided. It should be emphasized that, analogously, differential and difference equations were generated for identification in place of recurrence equations. As stated, a normal form for a holonomic function family includes the specification of the initial values. For the continuous case, extensive initial-value computations were carried out that have never appeared in this form in the literature.
In the discrete case, the Petkovšek-van Hoeij algorithm had to be used for the initial-value computation for the difference equation, in order to determine the hypergeometric solutions of the resulting recurrence equations. The thesis begins by presenting the fast Zeilberger algorithm in its continuous, discrete and q-discrete variants, which forms the foundation for the subsequent considerations; the differences between the q-Zeilberger algorithm and the discrete Zeilberger algorithm receive due attention. The practical implementation builds on the Zeilberger implementations in Maple from Koepf (1998/2014). Most of the implemented procedures are documented in the text. This provides a complete package of algorithms with which, for example, collections of formulas for hypergeometric function families whose Rodrigues formulas are known can be verified. At the same time, the describing recurrence equation can in future be generated for hypergeometric function classes not yet investigated, provided the Rodrigues formula is known.
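A small continuous-case illustration of the pipeline described above (standard material, not the thesis's Maple package): the Rodrigues formula of the Legendre polynomials, evaluated directly, satisfies the three-term recurrence that identifies the family.

```python
import sympy as sp

# Rodrigues formula P_n(x) = d^n/dx^n (x^2 - 1)^n / (2^n n!) and a check
# of the identifying recurrence
#   (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).

x = sp.symbols('x')

def legendre_rodrigues(n):
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(1, 5):
    lhs = (n + 1) * legendre_rodrigues(n + 1)
    rhs = (2*n + 1) * x * legendre_rodrigues(n) - n * legendre_rodrigues(n - 1)
    assert sp.expand(lhs - rhs) == 0
print("recurrence verified for n = 1..4")
```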
Abstract:
In recent years, researchers in artificial intelligence have become interested in replicating human physical reasoning abilities in computers. One of the most important skills in this area is predicting how physical systems will behave. This thesis discusses an implemented program that generates algebraic descriptions of how systems of rigid bodies evolve over time. The discussion of the program's design identifies a physical reasoning paradigm and knowledge representation approach based on mathematical model construction and algebraic reasoning. This paradigm offers several advantages over methods that have become popular in the field, and seems promising for reasoning about a wide variety of classical mechanics problems.
Abstract:
The algebraic-geometric structure of the simplex, known as Aitchison geometry, is used to look at the Dirichlet family of distributions from a new perspective. A classical Dirichlet density function is expressed with respect to the Lebesgue measure on real space. We propose here to replace this measure with the Aitchison measure on the simplex, and study some properties and characteristic measures of the resulting density.
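For reference, the simplex operations underlying Aitchison geometry can be sketched numerically as follows (standard definitions; the change of measure for the Dirichlet density itself is the subject of the paper).

```python
import numpy as np

# Closure, perturbation (the group operation of the simplex) and the
# centred log-ratio (clr) transform of Aitchison geometry.

def closure(x):
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    return closure(np.asarray(x) * np.asarray(y))

def clr(x):
    lx = np.log(closure(x))
    return lx - lx.mean()

x = closure([1.0, 2.0, 7.0])
print(perturb(x, [2.0, 1.0, 1.0]))   # perturbation by (2, 1, 1), reclosed
print(clr(x), clr(x).sum())          # clr images sum to 0
```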
Abstract:
The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data the scientific literature is again full of statistical treatments of this type of data by traditional methodologies. This is particularly true in environmental geochemistry, where besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, is illustrated.
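One standard route to such log-contrast values is a principal component analysis of clr-transformed compositions; the sketch below shows that route on made-up data (a stand-in, not the Chiavenna Valley measurements).

```python
import numpy as np

# Log-contrast scores via PCA of clr-transformed compositions, one
# standard reading of "simplicial principal component analysis".

rng = np.random.default_rng(0)
X = rng.lognormal(size=(50, 4))
X = X / X.sum(axis=1, keepdims=True)                     # closed compositions

L = np.log(X) - np.log(X).mean(axis=1, keepdims=True)    # clr transform
Lc = L - L.mean(axis=0)                                  # centre the sample
_, _, Vt = np.linalg.svd(Lc, full_matrices=False)

# Each principal direction v (with sum(v) = 0) defines a log-contrast
# sum_i v_i * log(x_i); its scores are candidate environmental indicators.
scores = Lc @ Vt[0]
print(Vt[0].round(3), Vt[0].sum().round(12))   # contrast vector, sums to ~0
print(scores[:5].round(3))
```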
Abstract:
Abstract based on that of the publication
Abstract:
Abstract based on that of the publication