21 results for Numerical algorithms
at Universitätsbibliothek Kassel, Universität Kassel, Germany
Abstract:
Hybrid simulation is a technique that combines experimental and numerical testing and has been used for the last few decades in the fields of aerospace, civil and mechanical engineering. During this time, most of the research has focused on developing algorithms and the necessary technology, including but not limited to error minimisation techniques, phase-lag compensation and faster hydraulic cylinders. However, one of the main shortcomings of hybrid simulation that has prevented its widespread use is the size of the numerical models and the effect that higher frequencies may have on the stability and accuracy of the simulation. The first chapter of this document provides an overview of the hybrid simulation method and the different hybrid simulation schemes, along with the corresponding time integration algorithms, that are most commonly used in this field. The scope of this thesis is presented in more detail in chapter 2: a substructure algorithm, the Substep Force Feedback (Subfeed), is adapted to fulfil the necessary requirements in terms of speed. The effects of more complex models on the Subfeed are also studied in detail, and the improvements made are validated experimentally. Chapters 3 and 4 detail the methodologies used to accomplish these objectives, listing the different case studies and detailing the hardware and software used to validate them experimentally. The third chapter contains a brief introduction to a project, the DFG Subshake, whose data have been used as a starting point for the developments shown later in this thesis. The results obtained are presented in chapters 5 and 6, the first focusing on purely numerical simulations and the second oriented towards a more practical application, including experimental real-time hybrid simulation tests with large numerical models.
Following the discussion of the developments in this thesis is a list of hardware and software requirements that must be met in order to apply the methods described in this document; it can be found in chapter 7. The last chapter of this thesis, chapter 8, focuses on the conclusions and achievements extracted from the results, namely: the adaptation of the hybrid simulation algorithm Subfeed for use with large numerical models, the study of the effect of high frequencies on the substructure algorithm, and experimental real-time hybrid simulation tests with vibrating subsystems using large numerical models and shake tables. A brief discussion of possible future research activities can also be found in the concluding chapter.
Abstract:
This article is concerned with the numerical simulation of flows at low Mach numbers which are subject to the gravitational force and strong heat sources. As a specific example for such flows, a fire event in a car tunnel will be considered in detail. The low Mach flow is treated with a preconditioning technique allowing the computation of unsteady flows, while the source terms for gravitation and heat are incorporated via operator splitting. It is shown that a first order discretization in space is not able to compute the buoyancy forces properly on reasonable grids. The feasibility of the method is demonstrated on several test cases.
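The operator-splitting treatment of the source terms can be illustrated on a scalar model problem: the flow part and the source part are advanced in separate substeps. The following minimal sketch uses the toy ODE u' = -a*u + s with arbitrarily chosen coefficients (this is only an illustration of Lie splitting, not the tunnel-fire solver itself), and checks the expected first-order accuracy:

```python
import math

def lie_splitting_step(u, dt, a=1.0, s=0.5):
    """One Lie splitting step for du/dt = -a*u + s.

    Substep 1: decay part du/dt = -a*u, solved exactly.
    Substep 2: source part du/dt = s, solved exactly.
    (Coefficients a, s are illustrative placeholders.)
    """
    u = u * math.exp(-a * dt)  # flow/decay substep
    u = u + s * dt             # source substep
    return u

def integrate(u0, t_end, n_steps):
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = lie_splitting_step(u, dt)
    return u

def exact(t, a=1.0, s=0.5, u0=1.0):
    return (u0 - s / a) * math.exp(-a * t) + s / a

# First-order accuracy: halving dt roughly halves the error.
e1 = abs(integrate(1.0, 1.0, 50) - exact(1.0))
e2 = abs(integrate(1.0, 1.0, 100) - exact(1.0))
```

Doubling the number of steps roughly halves the error, as expected of a first-order splitting.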
Abstract:
We develop several algorithms for computations in Galois extensions of p-adic fields. Our algorithms are based on existing algorithms for number fields and are exact in the sense that we do not need to consider approximations to p-adic numbers. As an application we describe an algorithmic approach to prove or disprove various conjectures for local and global epsilon constants.
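Exact computation with p-adic numbers can be illustrated with a standard technique, Hensel lifting (shown here as a generic illustration, not necessarily one of the paper's algorithms): a square root of 2 in Z_7 is computed to any finite precision using only exact integer arithmetic, with no approximation of the p-adic number.

```python
def hensel_lift_sqrt(a, p, k):
    """Lift a square root of `a` modulo p to a root modulo p**k by
    Newton/Hensel iteration.  Requires p odd and `a` a nonzero quadratic
    residue mod p.  All arithmetic is exact integer arithmetic."""
    # find a root mod p by brute force (fine for small p)
    r = next(x for x in range(p) if (x * x - a) % p == 0)
    modulus = p
    while modulus < p ** k:
        # precision doubles each iteration, capped at p**k
        modulus = min(modulus * modulus, p ** k)
        # Newton step r <- (r + a/r) / 2, carried out exactly mod `modulus`
        r = (r + a * pow(r, -1, modulus)) * pow(2, -1, modulus) % modulus
    return r
```

For example, `hensel_lift_sqrt(2, 7, 5)` returns an integer r with r^2 ≡ 2 (mod 7^5).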
Abstract:
Data mining is the process of summarizing information from large amounts of raw data. It is one of the key technologies in many areas of economy, science, administration and the internet. In this report we introduce an approach to utilizing evolutionary algorithms to breed fuzzy classifier systems. This approach was applied as part of a structured procedure by the students Achler, Göb and Voigtmann as a contribution to the 2006 Data-Mining-Cup contest, yielding encouragingly positive results.
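The evolutionary loop underlying such an approach can be sketched minimally as follows. Everything here is illustrative: toy labelled data and a one-parameter threshold rule stand in for the real fuzzy classifier systems and contest data.

```python
import random

random.seed(0)

# Toy labelled data (hypothetical stand-in for the contest records):
# 1-D feature x, class label y = 1 for large x.
data = [(x / 10.0, int(x / 10.0 > 0.42)) for x in range(11)]

def accuracy(threshold):
    """Fitness: fraction of points classified correctly by a
    single threshold rule (a one-rule 'classifier')."""
    return sum(int(x > threshold) == y for x, y in data) / len(data)

def evolve(pop_size=20, generations=30, sigma=0.1):
    """Breed thresholds: truncation selection plus Gaussian mutation."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=accuracy, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = [p + random.gauss(0, sigma)   # mutate copies of parents
                    for p in random.choices(parents, k=pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=accuracy)

best = evolve()
```

A real fuzzy classifier system would evolve rule bases and membership functions instead of a single scalar, but the select-mutate-evaluate cycle is the same.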
Abstract:
The process of developing software that takes advantage of multiple processors is commonly referred to as parallel programming. For various reasons, this process is much harder than the sequential case. For decades, parallel programming has been a problem for a small niche only: engineers working on parallelizing mostly numerical applications in High Performance Computing. This has changed with the advent of multi-core processors in mainstream computer architectures. Parallel programming nowadays becomes a problem for a much larger group of developers. The main objective of this thesis was to find ways to make parallel programming easier for them. Different aims were identified in order to reach this objective: research the state of the art of parallel programming today, improve the education of software developers on the topic, and provide programmers with powerful abstractions to make their work easier. To reach these aims, several key steps were taken. To start with, a survey was conducted among parallel programmers to find out about the state of the art. More than 250 people participated, yielding results about the parallel programming systems and languages in use, as well as about common problems with these systems. Furthermore, a study was conducted in university classes on parallel programming. It resulted in a list of frequently made mistakes that were analyzed and used to create a programmers' checklist to avoid them in the future. For programmers' education, an online resource was set up to collect experiences and knowledge in the field of parallel programming, called the Parawiki. Another key step in this direction was the creation of the Thinking Parallel weblog, where more than 50,000 readers to date have read essays on the topic. For the third aim (powerful abstractions), it was decided to concentrate on one parallel programming system: OpenMP. Its ease of use and high level of abstraction were the most important reasons for this decision.
Two different research directions were pursued. The first one resulted in a parallel library called AthenaMP. It contains so-called generic components, derived from design patterns for parallel programming. These include functionality to enhance the locks provided by OpenMP, to perform operations on large amounts of data (data-parallel programming), and to enable the implementation of irregular algorithms using task pools. AthenaMP itself serves a triple role: the components are well-documented and can be used directly in programs, it enables developers to study the source code and learn from it, and it is possible for compiler writers to use it as a testing ground for their OpenMP compilers. The second research direction was targeted at changing the OpenMP specification to make the system more powerful. The main contributions here were a proposal to enable thread-cancellation and a proposal to avoid busy waiting. Both were implemented in a research compiler, shown to be useful in example applications, and proposed to the OpenMP Language Committee.
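The task-pool component mentioned above supports irregular algorithms whose work items are only discovered at run time. A conceptual sketch of the pattern (in Python, using a shared queue; AthenaMP itself provides this as a C++/OpenMP component, so this is only the idea, not its API):

```python
import queue
import threading

def run_task_pool(initial_tasks, worker_count=4):
    """Minimal task-pool pattern: workers repeatedly take a task from a
    shared queue, and a task may enqueue further tasks.  The termination
    check is simplified: a worker that sees an empty queue retires, which
    is safe here because any worker still holding a task re-queues its
    subtasks before it can exit, so no work is lost."""
    tasks = queue.Queue()
    results = []
    lock = threading.Lock()
    for t in initial_tasks:
        tasks.put(t)

    def worker():
        while True:
            try:
                n = tasks.get_nowait()
            except queue.Empty:
                return
            if n > 1:                 # irregular: a task spawns subtasks
                tasks.put(n // 2)
                tasks.put(n - n // 2)
            else:
                with lock:            # collect leaf results thread-safely
                    results.append(n)
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

leaves = run_task_pool([10])  # 10 is recursively split into unit tasks
```

Here a task of size n splits itself into two subtasks until only unit tasks remain, mimicking the run-time work discovery of irregular algorithms.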
Abstract:
This work is concerned with finite volume methods for flows at low Mach numbers which are subject to buoyancy and heat sources. As a particular application, fires in car tunnels are considered. To extend the scheme for compressible flow into the low Mach number regime, a preconditioning technique is used and a stability result for it is proven. The source terms for gravity and heat are incorporated using operator splitting, and the resulting method is analyzed.
Abstract:
We consider a first order implicit time stepping procedure (Euler scheme) for the non-stationary Stokes equations in smoothly bounded domains of R^3. Using energy estimates we can prove optimal convergence properties in the Sobolev spaces H^m(G) (m = 0, 1, 2) uniformly in time, provided that the solution of the Stokes equations has a certain degree of regularity. For the solution of the resulting Stokes resolvent boundary value problems we use a representation in the form of hydrodynamical volume and boundary layer potentials, where the unknown source densities of the latter can be determined from uniquely solvable systems of boundary integral equations. For the numerical computation of the potentials and the solution of the boundary integral equations, a boundary element method of collocation type is used. Some simulations of a model problem are carried out and illustrate the efficiency of the method.
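The first-order convergence and the unconditional stability of the implicit Euler scheme can be checked on the scalar model problem u' = λu (chosen here purely as an illustrative stand-in for the Stokes system):

```python
import math

def implicit_euler(lam, u0, t_end, n_steps):
    """Implicit (backward) Euler for the scalar model problem
    u'(t) = lam * u(t):  u_{k+1} = u_k + dt * lam * u_{k+1},
    i.e.  u_{k+1} = u_k / (1 - dt * lam)."""
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = u / (1.0 - dt * lam)
    return u

lam, u0, t_end = -2.0, 1.0, 1.0
exact = u0 * math.exp(lam * t_end)

# First order: doubling the step count roughly halves the error.
err_coarse = abs(implicit_euler(lam, u0, t_end, 40) - exact)
err_fine = abs(implicit_euler(lam, u0, t_end, 80) - exact)
```

The scheme also remains bounded for stiff decay (e.g. λ = -100 with only 5 steps), in contrast to explicit Euler, which would blow up there.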
Abstract:
A fully relativistic four-component Dirac-Fock-Slater program for diatomics, with numerically given AOs as basis functions, is presented. We discuss the errors due to the finite basis set and due to the influence of the negative-energy solutions of the Dirac Hamiltonian. The negative continuum contributions are found to be very small.
Abstract:
In this report, we discuss the application of global optimization and Evolutionary Computation to distributed systems. To this end, we selected and classified numerous publications, giving an insight into the wide variety of optimization problems which arise in distributed systems. Some interesting approaches from different areas are discussed in greater detail with the use of illustrative examples.
Abstract:
While most data analysis and decision support tools use numerical aspects of the data, Conceptual Information Systems focus on their conceptual structure. This paper discusses how both approaches can be combined.
Abstract:
Distributed systems are among the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Among others, sensor networks (distributed systems composed of tiny computational devices with scarce resources) have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches to the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the wanted global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators.
This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases, was superior to the other representations.
Abstract:
Let $N/K$ be a Galois extension of number fields with Galois group $G$ such that $N$ has a place with full decomposition group. This thesis is concerned with algorithms that, for the given case $N/K$, (numerically) verify the equivariant Tamagawa number conjecture of Burns and Flach for the pair $(h^0(\mathrm{Spec}(N)), \mathbb{Z}[G])$. Roughly speaking, in this special case the equivariant Tamagawa number conjecture (henceforth ETNC) relates values of Artin $L$-series at the absolutely irreducible characters of $G$ to an Euler characteristic which, in this case, can be constructed by means of a so-called Tate sequence. Under the assumptions 1. there is a place $v$ of $N$ with full decomposition group, and 2. every irreducible character $\chi$ of $G$ satisfies one of the following conditions: (a) $\chi$ is abelian, or (b) $\chi(G) \subset \mathbb{Q}$ and $\chi$ is an integral linear combination of induced trivial characters; an algorithm is developed that completely proves the ETNC for every case $N/\mathbb{Q}$. Assumption 1 makes it possible to implement an idea of Chinburg ([Chi89]) for the algorithmic computation of Tate sequences. Among other things, it was also necessary to compute local fundamental classes. For the at most tamely ramified case we developed an algorithm for this, likewise based on ideas of Chinburg ([Chi85]) that go back to work of Serre [Ser]. For extensions that are not tamely ramified we use the algorithm developed by Debeerst ([Deb11]), which is also based on Serre's work. Assumption 2 is needed to compute quotients of $L$-values and regulators exactly. This is possible because in the case of abelian characters we can fall back on the theory of cyclotomic units, and in case (b) on the analytic class number formula for intermediate fields. Without assumption 2,
the algorithms still provide, for every case $N/K$, a numerical verification up to the precision of the computation. We implemented the algorithm for numerical verification for $A_4$-extensions of $\mathbb{Q}$ in the computer algebra system MAGMA and numerically verified the equivariant Tamagawa number conjecture for 27 extensions.