906 results for numerical computation


Relevance: 20.00%

Abstract:

Let $N/K$ be a Galois extension of number fields with Galois group $G$ such that $N$ contains a place with full decomposition group. This thesis is concerned with algorithms that, for a given example $N/K$, verify (numerically) the equivariant Tamagawa number conjecture of Burns and Flach for the pair $(h^0(\mathrm{Spec}(N)), \mathbb{Z}[G])$. Roughly speaking, in this special case the equivariant Tamagawa number conjecture (ETNC in what follows) relates values of Artin $L$-series at the absolutely irreducible characters of $G$ to an Euler characteristic, which in this case can be constructed with the help of a so-called Tate sequence. Under the hypotheses that (1) there is a place $v$ of $N$ with full decomposition group, and (2) every irreducible character $\chi$ of $G$ satisfies one of the following conditions: (a) $\chi$ is abelian, or (b) $\chi(G) \subset \mathbb{Q}$ and $\chi$ is an integral linear combination of characters induced from trivial characters, an algorithm is developed that proves ETNC completely for every example $N/\mathbb{Q}$. Hypothesis (1) makes it possible to implement an idea of Chinburg ([Chi89]) for the algorithmic computation of Tate sequences. For this it was also necessary, among other things, to compute local fundamental classes. In the at most tamely ramified case we developed an algorithm for this purpose, likewise based on ideas of Chinburg ([Chi85]), which in turn go back to work of Serre [Ser]. For extensions that are not tamely ramified we use the algorithm developed by Debeerst ([Deb11]), which also rests on Serre's work. Hypothesis (2) is needed to compute the quotients of $L$-values and regulators exactly. This succeeds because in the case of abelian characters we can fall back on the theory of cyclotomic units, and in case (b) on the analytic class number formula for intermediate fields. Without hypothesis (2) the algorithms still provide, for every example $N/K$, a numerical verification up to computational precision. We implemented the algorithm for numerical verification for $A_4$-extensions of $\mathbb{Q}$ in the computer algebra system MAGMA and numerically verified the equivariant Tamagawa number conjecture for 27 extensions.
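For reference, the analytic class number formula invoked in case (b) is the standard one; in common notation (not necessarily that of the thesis), for a number field $F$ with $r_1$ real and $r_2$ complex places, class number $h_F$, regulator $R_F$, $w_F$ roots of unity, and discriminant $d_F$:

$$\lim_{s \to 1} (s-1)\,\zeta_F(s) = \frac{2^{r_1} (2\pi)^{r_2}\, h_F R_F}{w_F \sqrt{|d_F|}}.$$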

Relevance: 20.00%

Abstract:

We consider numerical methods for the compressible, time-dependent Navier-Stokes equations, discussing the spatial discretization by finite volume and discontinuous Galerkin methods, the time integration by time-adaptive implicit Runge-Kutta and Rosenbrock methods, and the solution of the resulting nonlinear and linear systems of equations by preconditioned Jacobian-free Newton-Krylov methods, as well as multigrid methods. As applications, thermal fluid-structure interaction and other unsteady flow problems are considered. The text is aimed at both mathematicians and engineers.
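To illustrate the Jacobian-free Newton-Krylov idea mentioned above, here is a minimal Python/SciPy sketch; the residual function F, the finite-difference step eps, and the toy system are illustrative assumptions standing in for a discretized flow problem, not the text's own solver:

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def newton_jfnk(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
        """Solve F(u) = 0 without ever forming the Jacobian explicitly."""
        u = u0.copy()
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            # Jacobian-vector product approximated by a finite difference:
            # J(u) v ~ (F(u + eps*v) - F(u)) / eps
            matvec = lambda v: (F(u + eps * v) - r) / eps
            J = LinearOperator((u.size, u.size), matvec=matvec)
            du, _ = gmres(J, -r)      # inner Krylov solve, Jacobian-free
            u = u + du
        return u

    # Toy usage: a small nonlinear system standing in for a discretized PDE.
    print(newton_jfnk(lambda u: u**3 + u - 1.0, np.zeros(4)))

An implicit Runge-Kutta or Rosenbrock stage would supply F as the stage residual; the point is that only residual evaluations, never an assembled Jacobian, are required.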

Relevance: 20.00%

Abstract:

The ongoing depletion of the coastal aquifer in the Gaza strip due to groundwater overexploitation has led to seawater intrusion, which has become a serious problem in Gaza, as seawater has invaded many sections along the coastal shoreline. As a first step toward addressing the problem, an artificial neural network (ANN) model is applied as a new approach and an attractive tool to study and predict groundwater levels without requiring physically based hydrologic parameters, to improve the understanding of the complex groundwater system, and to show the effects of hydrologic, meteorological and anthropogenic impacts on groundwater conditions. Prediction of the future behaviour of the seawater intrusion process in the Gaza aquifer is thus of crucial importance to safeguard the already scarce groundwater resources in the region. In this study the coupled three-dimensional groundwater flow and density-dependent solute transport model SEAWAT, as implemented in Visual MODFLOW, is applied to the Gaza coastal aquifer system to simulate the location and the dynamics of the saltwater-freshwater interface in the aquifer over the period 2000-2010. Very good agreement between simulated and observed TDS salinities is obtained, with correlation coefficients of 0.902 and 0.883 for the steady-state and transient calibrations, respectively. After successful calibration of the solute transport model, simulations of future management scenarios for the Gaza aquifer are carried out, in order to get a more comprehensive view of the effects of the artificial recharge planned in the Gaza strip for some time to forestall, or even remedy, the presently existing adverse aquifer conditions, namely low groundwater heads and high salinity, by the end of the target simulation period, year 2040. To that end, numerous management schemes are examined to sustain the groundwater system and to control the salinity distribution within the target period 2011-2040. In the first, pessimistic scenario, it is assumed that pumping from the aquifer continues to increase in the near future to meet the rising water demand, and that there is no further recharge to the aquifer beyond what is provided by natural precipitation. The second, optimistic scenario assumes that treated surficial wastewater can be used as a source of additional artificial recharge to the aquifer, which, in principle, should not only lead to an increased sustainable yield of the latter, but could, in the best of all cases, even revert some of the adverse present-day conditions in the aquifer, i.e., seawater intrusion. This scenario is examined in three different cases, which differ in the locations and extents of the injection fields for the treated wastewater. The results obtained with the first (do-nothing) scenario indicate that there will be ongoing negative impacts on the aquifer, such as a higher propensity for strong seawater intrusion into the Gaza aquifer. This scenario illustrates that, compared with the 2010 situation of the baseline model, at the end of the simulation period, year 2040, the amount of saltwater intrusion into the coastal aquifer will have increased by about 35%, and the salinity by 34%. In contrast, all three cases of the second (artificial recharge) scenario group can partly revert the present seawater intrusion.
From the water budget point of view, compared with the first (do-nothing) scenario, by year 2040 the water added to the aquifer by artificial recharge will reduce the amount of water entering the aquifer by seawater intrusion by 81, 77 and 72% for the three recharge cases, respectively. Meanwhile, the salinity in the Gaza aquifer will decrease by 15, 32 and 26% for the three cases, respectively.
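As a hedged illustration of the ANN approach described above, here is a minimal Python/scikit-learn sketch; the feature set (rainfall, abstraction, lagged head), network size, and synthetic data are assumptions for the sketch, not the study's actual inputs:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    # Hypothetical monthly inputs: rainfall [mm], abstraction [m^3/day],
    # and the previous month's groundwater level [m].
    X = np.column_stack([
        rng.gamma(2.0, 20.0, n),      # rainfall
        rng.uniform(1e3, 5e3, n),     # pumping
        rng.normal(-3.0, 1.0, n),     # lagged head
    ])
    # Synthetic target: head rises with recharge and falls with pumping.
    y = X[:, 2] + 0.002 * X[:, 0] - 4e-4 * X[:, 1] + rng.normal(0, 0.1, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,),
                                       max_iter=2000, random_state=0))
    model.fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))

The appeal noted in the abstract is visible here: the network maps observable drivers directly to heads, with no aquifer parameters (transmissivity, storativity) required.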

Relevance: 20.00%

Abstract:

We are currently at the cusp of a revolution in quantum technology that relies not just on the passive use of quantum effects, but on their active control. At the forefront of this revolution is the implementation of a quantum computer. Encoding information in quantum states as "qubits" makes it possible to use entanglement and quantum superposition to perform calculations that are infeasible on classical computers. The fundamental challenge in the realization of quantum computers is to avoid decoherence – the loss of quantum properties – due to unwanted interaction with the environment. This thesis addresses the problem of implementing entangling two-qubit quantum gates that are robust with respect to both decoherence and classical noise. It covers three aspects: the use of efficient numerical tools for the simulation and optimal control of open and closed quantum systems, the role of advanced optimization functionals in facilitating robustness, and the application of these techniques to two of the leading implementations of quantum computation, trapped atoms and superconducting circuits. After a review of the theoretical and numerical foundations, the central part of the thesis starts with the idea of using ensemble optimization to achieve robustness with respect to both classical fluctuations in the system parameters and decoherence. For the example of a controlled phase gate implemented with trapped Rydberg atoms, this approach is demonstrated to yield a gate that is at least one order of magnitude more robust than the best known analytic scheme. Moreover, this robustness is maintained even for gate durations significantly shorter than those obtained in the analytic scheme. Superconducting circuits are a particularly promising architecture for the implementation of a quantum computer; their flexibility is demonstrated by performing optimizations for both diagonal and non-diagonal quantum gates. In order to achieve robustness with respect to decoherence, it is essential to implement quantum gates in the shortest possible time. This may be facilitated by using an optimization functional that targets an arbitrary perfect entangler, based on a geometric theory of two-qubit gates. For the example of superconducting qubits, it is shown that this approach leads to significantly shorter gate durations, higher fidelities, and faster convergence than optimization towards specific two-qubit gates. Performing optimization in Liouville space, in order to properly take decoherence into account, poses significant numerical challenges, since the dimension of Liouville space scales quadratically with that of Hilbert space. However, it can be shown that for a unitary target the optimization only requires the propagation of at most three states, instead of a full basis of Liouville space. Both for the example of trapped Rydberg atoms and for superconducting qubits, the successful optimization of quantum gates is demonstrated at a significantly lower numerical cost than was previously thought possible. Together, the results of this thesis point towards a comprehensive framework for the optimization of robust quantum gates, paving the way for the future realization of quantum computers.
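To make the ensemble idea concrete, here is a minimal Python sketch: a candidate control is scored by its average infidelity over an ensemble of Hamiltonians with fluctuating parameters, and an optimizer would then minimize this ensemble-averaged error over the control. The single-qubit toy model, the detuning-noise range, and the unit gate duration are illustrative assumptions, not the two-qubit systems treated in the thesis:

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0, 1], [1, 0]], dtype=complex)    # drive term
    Z = np.array([[1, 0], [0, -1]], dtype=complex)   # detuning (noise) term
    target = expm(-1j * (np.pi / 2) * X)             # desired gate

    def fidelity(U, V):
        d = U.shape[0]
        return abs(np.trace(U.conj().T @ V) / d) ** 2

    def ensemble_error(amplitude, deltas):
        """Average infidelity over an ensemble of detuning errors."""
        errors = []
        for delta in deltas:
            H = amplitude * X + delta * Z            # fluctuating Hamiltonian
            U = expm(-1j * H)                        # evolution for unit time
            errors.append(1.0 - fidelity(target, U))
        return np.mean(errors)

    deltas = np.linspace(-0.05, 0.05, 11)            # sampled classical noise
    print(ensemble_error(np.pi / 2, deltas))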

Relevance: 20.00%

Abstract:

A foundational model of concurrency is developed in this thesis. We examine issues in the design of parallel systems and show why the actor model is suitable for exploiting large-scale parallelism. Concurrency in actors is constrained only by the availability of hardware resources and by the logical dependence inherent in the computation. Unlike dataflow and functional programming, however, actors are dynamically reconfigurable and can model shared resources with changing local state. Concurrency is spawned in actors using asynchronous message-passing, pipelining, and the dynamic creation of actors. This thesis deals with some central issues in distributed computing. Specifically, problems of divergence and deadlock are addressed. For example, actors permit dynamic deadlock detection and removal. The problem of divergence is contained because independent transactions can execute concurrently and potentially infinite processes are nevertheless available for interaction.
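A toy sketch of these actor primitives in Python, with threads standing in for actor processes; the counter behavior is an invented example, not the thesis's formal model. Each actor encapsulates local state, receives messages asynchronously through a mailbox, and new actors can be created dynamically at runtime:

    import threading
    import queue
    import time

    class Actor:
        def __init__(self, behavior):
            self.mailbox = queue.Queue()
            self.behavior = behavior          # defines the response to messages
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            """Asynchronous message passing: never blocks on the receiver."""
            self.mailbox.put(msg)

        def _run(self):
            while True:                       # process one message at a time
                self.behavior(self, self.mailbox.get())

    def counter(actor, msg):
        # A shared resource with changing local state; a behavior may also
        # create new actors dynamically by calling Actor(...) here.
        actor.count = getattr(actor, "count", 0) + 1
        if msg == "report":
            print("messages received:", actor.count)

    c = Actor(counter)                        # dynamic actor creation
    for m in ["a", "b", "report"]:
        c.send(m)
    time.sleep(0.1)                           # let the mailbox drain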

Relevance: 20.00%

Abstract:

The Scheme86 and the HP Precision architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface. The latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.

Relevance: 20.00%

Abstract:

This thesis takes an interdisciplinary approach to the study of color vision, focusing on the phenomenon of color constancy formulated as a computational problem. The primary contributions of the thesis are (1) the demonstration of a formal framework for lightness algorithms; (2) the derivation of a new lightness algorithm based on regularization theory; (3) the synthesis of an adaptive lightness algorithm using "learning" techniques; (4) the development of an image segmentation algorithm that uses luminance and color information to mark material boundaries; and (5) an experimental investigation into the cues that human observers use to judge the color of the illuminant. Other computational approaches to color are reviewed and some of their links to psychophysics and physiology are explored.

Relevance: 20.00%

Abstract:

KAM is a computer program that can automatically plan, monitor, and interpret numerical experiments with Hamiltonian systems with two degrees of freedom. The program has recently helped solve an open problem in hydrodynamics. Unlike other approaches to qualitative reasoning about physical system dynamics, KAM embodies a significant amount of knowledge about nonlinear dynamics. KAM's ability to control numerical experiments arises from the fact that it not only produces pictures for us to see, but also looks at (sic – in its mind's eye) the pictures it draws to guide its own actions. KAM is organized in three semantic levels: orbit recognition, phase space searching, and parameter space searching. Within each level, spatial properties and relationships that are not explicitly represented in the initial representation are extracted by iteratively applying three operations: (1) aggregation, (2) partition, and (3) classification.

Relevance: 20.00%

Abstract:

The dataflow model of computation exposes and exploits parallelism in programs without requiring programmer annotation; however, instruction-level dataflow is too fine-grained to be efficient on general-purpose processors. A popular solution is to develop a "hybrid" model of computation where regions of dataflow graphs are combined into sequential blocks of code. I have implemented such a system to allow the J-Machine to run Id programs, leaving a large amount of parallelism exposed, such as among loop iterations. I describe this system and provide an analysis of its strengths and weaknesses and those of the J-Machine, along with ideas for improvement.

Relevance: 20.00%

Abstract:

The Kineticist's Workbench is a program that simulates chemical reaction mechanisms by predicting, generating, and interpreting numerical data. Prior to simulation, it analyzes a given mechanism to predict that mechanism's behavior; it then simulates the mechanism numerically; and afterward, it interprets and summarizes the data it has generated. In performing these tasks, the Workbench uses a variety of techniques: graph-theoretic algorithms (for analyzing mechanisms), traditional numerical simulation methods, and algorithms that examine simulation results and reinterpret them in qualitative terms. The Workbench thus serves as a prototype for a new class of scientific computational tools – tools that provide symbiotic collaborations between qualitative and quantitative methods.

Relevance: 20.00%

Abstract:

Discontinuities in the solutions of systems of conservation laws are widely considered one of the main difficulties in numerical simulation. A numerical method is proposed for solving these partial differential equations with discontinuities in the solution. The method is able to track these sharp discontinuities or interfaces while still fully maintaining the conservation property. The motion of the front is obtained by solving a Riemann problem based on the state values on both of its sides, which are reconstructed using a weighted essentially non-oscillatory (WENO) scheme. The propagation of the front is coupled with the evaluation of "dynamic" numerical fluxes. Some numerical tests in 1D and preliminary results in 2D are presented.
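For a flavor of the interface-flux step, here is a much-simplified Python sketch: a first-order Godunov flux obtained from the exact Riemann solution of Burgers' equation. The scalar model problem, the first-order (non-WENO) reconstruction, and the grid parameters are illustrative assumptions; the paper's method uses WENO reconstruction and tracks the front explicitly:

    import numpy as np

    def godunov_flux_burgers(ul, ur):
        """Exact Riemann solution for u_t + (u^2/2)_x = 0 at an interface."""
        if ul > ur:                   # shock: take the flux from the faster side
            s = 0.5 * (ul + ur)       # Rankine-Hugoniot shock speed
            return 0.5 * ul**2 if s > 0 else 0.5 * ur**2
        if ul > 0:                    # right-moving rarefaction
            return 0.5 * ul**2
        if ur < 0:                    # left-moving rarefaction
            return 0.5 * ur**2
        return 0.0                    # sonic point inside the fan

    # One conservative finite-volume update on a periodic grid.
    x = np.linspace(0, 1, 200, endpoint=False)
    u = np.sin(2 * np.pi * x)
    dx, dt = x[1] - x[0], 0.002
    f = np.array([godunov_flux_burgers(u[i], u[(i + 1) % len(u)])
                  for i in range(len(u))])
    u -= dt / dx * (f - np.roll(f, 1))   # fluxes telescope across cells

Because the update is written entirely in terms of interface fluxes that telescope across cells, the discrete conservation property is maintained exactly, which is the property the abstract emphasizes.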

Relevance: 20.00%

Abstract:

We study the preconditioning of the symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method that we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and hence the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments have been performed on LP test problems from the NETLIB suite to demonstrate the potential of the preconditioning method discussed.
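For context, the augmented system in question typically has the standard saddle-point form below, written in common interior-point notation that may differ from the paper's ($A$ is the constraint matrix, $X$ and $S$ are diagonal matrices of the primal variables and dual slacks):

$$\begin{pmatrix} -\Theta^{-1} & A^{\mathsf T} \\ A & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} r_c \\ r_b \end{pmatrix}, \qquad \Theta = X S^{-1},$$

a symmetric indefinite matrix whose $2 \times 2$ block structure is exactly what a block preconditioner of the kind described can exploit.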

Relevance: 20.00%

Abstract:

Electroosmotic flow is a convenient mechanism for transporting polar fluid in a microfluidic device. The flow is generated through the application of an external electric field that acts on the free charges that exist in a thin Debye layer at the channel walls. The charge on the wall is due to the chemistry of the solid-fluid interface, and it can vary along the channel, e.g. due to modification of the wall. This investigation focuses on the simulation of the electroosmotic flow (EOF) profile in a cylindrical microchannel with a step change in zeta potential. The modified Navier-Stokes equation governing the velocity field and a non-linear two-dimensional Poisson-Boltzmann equation governing the electrical double-layer (EDL) field distribution are solved numerically using a finite control-volume method. Continuity of flow rate and electric current is enforced, resulting in a non-uniform electric field and pressure gradient distribution along the channel. At the junction of the step change in zeta potential, a parabolic velocity distribution, more typical of a pressure-driven flow profile, is obtained.
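For reference, the Poisson-Boltzmann equation for the EDL potential $\psi$ takes the following standard form for a symmetric electrolyte; the notation here is the conventional one and may differ from the paper's ($n_0$ is the bulk ion concentration, $z$ the valence, $e$ the elementary charge, $\varepsilon$ the permittivity, $k_B T$ the thermal energy):

$$\nabla^2 \psi = \frac{2 z e n_0}{\varepsilon} \sinh\!\left(\frac{z e \psi}{k_B T}\right).$$

Far from any step in zeta potential, the plug velocity outside the Debye layer is the classical Helmholtz-Smoluchowski value $u = -\varepsilon \zeta E_x / \mu$, where $\mu$ is the viscosity and $E_x$ the axial field; the step change in $\zeta$ is what forces the internally generated pressure gradient and the parabolic profile described above.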

Relevance: 20.00%

Abstract:

The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions $p^2$, $2pq$, and $q^2$ respectively, where $p$ is the allele frequency of A, and $q = 1-p$. Many statistical tests are used to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample scope for the use of graphics in HWE testing, in particular the ternary plot. Nowadays, many genetic studies use genetic markers known as single nucleotide polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. In this way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to attractive graphical representations in which large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
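As a small illustration of the classical chi-square test named above, here is a Python sketch rather than the R implementation the abstract refers to; the genotype counts in the usage line are hypothetical:

    from scipy.stats import chi2

    def hwe_chisq(n_AA, n_AB, n_BB):
        """Chi-square test for HWE at a bi-allelic locus (no continuity correction)."""
        n = n_AA + n_AB + n_BB
        p = (2 * n_AA + n_AB) / (2 * n)       # estimated allele frequency of A
        q = 1 - p
        expected = (n * p**2, 2 * n * p * q, n * q**2)
        observed = (n_AA, n_AB, n_BB)
        stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
        # 1 degree of freedom: 3 genotype classes, minus 1 for the total,
        # minus 1 for the estimated allele frequency.
        return stat, chi2.sf(stat, df=1)

    print(hwe_chisq(298, 489, 213))           # -> (statistic, p-value)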