966 results for Implicit finite difference approximation scheme
Abstract:
This thesis deals with the development and combination of various numerical methods and their application to problems of strongly correlated electron systems. Such materials exhibit many interesting physical properties, such as superconductivity and magnetic order, and play an important role in technological applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the last decades, much insight has already been gained from the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes; one of the strongest restrictions is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperature and finite interaction strength.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the prevailing theory of strongly correlated systems, for which conventional band-structure calculations fail. One main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this thesis exhibits the same superior scaling with inverse temperature as BSS-QMC. We investigate the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent issue is the neglect of non-local interactions in DMFT. To address it, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model, and the KLM. The results for the different models differ strongly: while non-local correlations play an important role in the two-dimensional (anisotropic) model, in the paramagnetic phase the momentum dependence of the self-energy is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. This special structure of the self-energy in momentum space can be very useful for the classification of electronic correlation effects and opens the way for the development of new schemes beyond the limits of DMFT.
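For orientation, the starting point of a BSS-QMC calculation can be summarized in its textbook form (a standard identity for the Hubbard model, not the thesis's specific low-temperature scheme): after a Trotter decomposition of the imaginary-time interval β into L slices of width Δτ and a discrete Hubbard-Stratonovich transformation, the partition function becomes a sum over Ising fields weighted by fermion determinants.

```latex
% Standard BSS determinant weight for the Hubbard model (textbook form):
Z \;\propto\; \sum_{\{s_{i\ell}=\pm 1\}} \prod_{\sigma=\uparrow,\downarrow}
  \det\!\left[\mathbb{1} + B^{\sigma}_{L} B^{\sigma}_{L-1} \cdots B^{\sigma}_{1}\right],
\qquad
B^{\sigma}_{\ell} = e^{-\Delta\tau K}\, e^{\sigma \lambda\, \mathrm{diag}(s_{\ell})},
\quad \cosh\lambda = e^{\Delta\tau U/2},
```

where K is the hopping matrix and U the on-site interaction; the low-temperature efficiency discussed above hinges on how stably the matrix products are evaluated as β grows.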
Abstract:
Liquids and gases form a vital part of nature. Many of these are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless the force law is approximated. The Peterlin approximation replaces the length of the spring by the length of the average spring; consequently, a macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.
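As a schematic of the system described above (illustrative form only; the precise stress and relaxation terms follow from the Peterlin closure and are specified in the work itself):

```latex
% Schematic diffusive Peterlin-type system (illustrative, our notation):
\nabla \cdot u = 0, \qquad
\partial_t u + (u \cdot \nabla) u - \nu \Delta u + \nabla p = \nabla \cdot \tau(C),
\qquad
\partial_t C + (u \cdot \nabla) C - (\nabla u)\, C - C\, (\nabla u)^{T}
  = \varepsilon \Delta C + R(C),
```

where C is the symmetric positive definite conformation tensor, τ(C) the elastic stress and R(C) the relaxation term determined by the Peterlin closure, and ε > 0 the diffusion coefficient whose presence is what the diffusive analysis exploits.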
Abstract:
For (H₂O)ₙ clusters with n = 1–10, we used a scheme combining molecular dynamics sampling with high-level ab initio calculations to locate the global minimum and many low-lying local minima for each cluster size. For each isomer, we extrapolated the RI-MP2 energies to their complete basis set limit, included a CCSD(T) correction computed in a smaller basis set, and added finite-temperature corrections within the rigid-rotor-harmonic-oscillator (RRHO) model using scaled and unscaled harmonic vibrational frequencies. The vibrational scaling factors were determined specifically for water clusters by comparing harmonic frequencies with VPT2 fundamental frequencies. We find the CCSD(T) correction to the RI-MP2 binding energy to be small (<1%) but still important in determining accurate conformational energies. Anharmonic corrections are found to be non-negligible; they do not alter the energetic ordering of isomers, but they do lower the free energies of formation of the water clusters by as much as 4 kcal/mol at 298.15 K.
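Schematically, the composite energies described above combine as follows (our notation, reconstructed from the abstract):

```latex
% Composite binding free energy (schematic):
E_{\mathrm{bind}} \;\approx\; E^{\mathrm{CBS}}_{\mathrm{RI\text{-}MP2}}
  + \left( E^{\mathrm{small}}_{\mathrm{CCSD(T)}} - E^{\mathrm{small}}_{\mathrm{MP2}} \right),
\qquad
G(T) \;\approx\; E_{\mathrm{bind}} + \Delta G_{\mathrm{RRHO}}(T),
```

with ΔG_RRHO(T) evaluated from the scaled or unscaled harmonic frequencies at T = 298.15 K.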
Abstract:
When particle flux is regulated by multiple factors, such as particle supply and varying transport rates, it is important to identify the respective dominant regimes. We extend the well-studied totally asymmetric simple exclusion process to investigate the interplay between a controlled entrance and a local defect site. The model mimics cellular transport phenomena, where there is typically a finite particle pool and nonuniform hopping rates arising from biochemical kinetics. Our simulations reveal regions where, despite an increasing particle supply, the current remains constant while particles redistribute in the system. Exploiting a domain-wall approach with a mean-field approximation, we provide a theoretical foundation for our findings. The results for the steady-state current and density profiles provide quantitative insights into the regulation of transcription and translation in bacterial protein synthesis.
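A minimal simulation sketch of such a model is given below; the pool-dependent entrance rule, the defect rate, and the system size are illustrative placeholders, not the parameters studied here.

```python
# Minimal TASEP sketch with a finite particle pool (controlled entrance)
# and one slow "defect" site. All rates and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L, sweeps = 200, 20_000
pool, pool_scale = 50, 50.0      # finite reservoir feeding the entrance
defect, q = L // 2, 0.3          # defect site hops with rate q < 1
alpha0, beta = 0.8, 1.0          # bare entrance rate, exit rate
site = np.zeros(L, dtype=int)

for _ in range(sweeps * L):
    i = rng.integers(-1, L)                    # -1 encodes an entrance attempt
    if i == -1 and site[0] == 0:
        # entrance rate grows with the remaining pool (one simple choice)
        if rng.random() < alpha0 * pool / (pool + pool_scale):
            site[0], pool = 1, pool - 1
    elif i == L - 1 and site[i] == 1:
        if rng.random() < beta:
            site[i], pool = 0, pool + 1        # exiting particles are recycled
    elif 0 <= i < L - 1 and site[i] == 1 and site[i + 1] == 0:
        if rng.random() < (q if i == defect else 1.0):
            site[i], site[i + 1] = 0, 1        # bulk hop, slower at the defect

print("bulk density:", site.mean())
```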
Abstract:
The analysis of Komendant's design of the Kimbell Art Museum was carried out in order to determine the effectiveness of the ring beams, edge beams, and prestressing in the shells of the roof system. Finite element analysis was not available to Komendant or other engineers of the time to aid them in design and analysis; thus, the use of this tool helped to form a new perspective on the Kimbell Art Museum and to analyze the engineer's work. To carry out the finite element analysis of the Kimbell Art Museum, the ADINA finite element analysis software was utilized. Eight finite element models (FEM-1 through FEM-8) of increasing complexity were created. The results of the most realistic model, FEM-8, which included ring beams, edge beams, and prestressing, were compared to Komendant's calculations. The maximum deflection at the crown of the mid-span surface, -0.1739 in. in FEM-8, was found to be larger than Komendant's deflection in the design documents before the loss in prestressing force (-0.152 in.) but smaller than his prediction after the loss in prestressing force (-0.3814 in.). Komendant predicted a larger longitudinal stress of -903 psi at the crown (vs. -797 psi in FEM-8) and 37 psi at the edge (vs. -347 psi in FEM-8). Considering the concrete strength of 5000 psi, the difference in results is not significant. From the analysis it was determined that both FEM-5, which included prestressing and fixed rings, and FEM-8 could be successfully and effectively implemented in practice. Prestressing was used in both models and thus served as the main contribution to efficiency. FEM-5 showed that ring and edge beams can be avoided; however, an architect might find them more aesthetically appropriate than rigid walls.
Abstract:
Negative biases in implicit self-evaluation are thought to be detrimental to subjective well-being and have been linked to various psychological disorders, including depression. An understanding of the neural processes underlying implicit self-evaluation in healthy subjects could provide a basis for the investigation of negative biases in depressed patients, the development of differential psychotherapeutic interventions, and the estimation of relapse risk in remitted patients. We thus studied the brain processes linked to implicit self-evaluation in 25 healthy subjects using event-related potential (ERP) recording during a self-relevant Implicit Association Test (sIAT). Consistent with a positive implicit self-evaluation in healthy subjects, they responded significantly faster to the congruent (self-positive mapping) than to the incongruent sIAT condition (self-negative mapping). Our main finding was a topographical ERP difference in a time window between 600 and 700 ms, whereas no significant differences between congruent and incongruent conditions were observed in earlier time windows. This suggests that biases in implicit self-evaluation are reflected only indirectly, in the additional recruitment of control processes needed to override the positive implicit self-evaluation of healthy subjects in the incongruent sIAT condition. Brain activations linked to these control processes can thus serve as an indirect measure for estimating biases in implicit self-evaluation. The sIAT paradigm, combined with ERP, could therefore permit the tracking of the neural processes underlying implicit self-evaluation in depressed patients during psychotherapy.
Abstract:
An extrusion die is used to continuously produce parts with a constant cross section, such as sheets, pipes, tire components, and more complex shapes such as window seals. When polymers are used, the die is fed by a screw extruder, which melts, mixes, and pressurizes the material through the rotation of either a single or a double screw. The polymer can then be continuously forced through the die, producing a long part in the shape of the die outlet, which is cut to the desired length. Generally, the primary target of a well-designed die is to produce a uniform outlet velocity without excessively raising the pressure required to extrude the polymer through the die. Other properties, such as temperature uniformity and residence time, are also important but are not directly considered in this work. Designing dies for optimal outlet velocity variation using simple analytical equations is feasible for basic die geometries or simple channels. Due to the complexity of die geometries and of polymer material properties, however, the design of complex dies by analytical methods is difficult, and iterative methods must be used instead; an automated iterative method is therefore desired for die optimization. To automate the design and optimization of an extrusion die, two issues must be dealt with. The first is how to generate a new mesh for each iteration. In this work, this is approached by modifying a Parasolid file that describes a CAD part; this file is then used in a commercial meshing software. Skewing the initial mesh to produce a new geometry was also employed as a second option. The second issue is an optimization problem in the presence of noise stemming from variations in the mesh and cumulative truncation errors. In this work, a simplex method and a modified trust region method were employed for the automated optimization of die geometries. For the trust region method, a discrete derivative and a BFGS Hessian approximation were used. To deal with the noise in the objective function, the trust region method was modified to automatically adjust the discrete derivative step size and the trust region based on changes in noise and function contour. Generally, the uniformity of the velocity at the exit of the extrusion die can be improved by increasing the resistance across the die, but this is limited by the pressure capabilities of the extruder. In the optimization, a penalty factor that increases exponentially beyond the pressure limit is applied. This penalty can be applied in two different ways: the first applies it only to designs that exceed the pressure limit, the second to designs both above and below the pressure limit. Both of these methods were tested and compared in this work.
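A sketch of the two penalty variants is given below; the uniformity measure, pressure limit, and growth constant are placeholders, not the values used in this work.

```python
# Sketch of the exponential pressure penalty; all constants are
# illustrative placeholders, not the values used in the work.
import numpy as np

P_LIMIT = 30.0e6  # extruder pressure capability, Pa (assumed)

def objective(outlet_velocity, die_pressure, one_sided=True):
    # velocity non-uniformity: coefficient of variation at the die exit
    uniformity = np.std(outlet_velocity) / np.mean(outlet_velocity)
    rel = (die_pressure - P_LIMIT) / P_LIMIT
    if one_sided:
        # variant 1: penalize only designs exceeding the pressure limit
        penalty = np.expm1(10.0 * max(rel, 0.0))
    else:
        # variant 2: penalty grows smoothly through the limit from below
        penalty = np.exp(10.0 * rel)
    return uniformity + penalty
```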
Abstract:
This dissertation concerns the intersection of three areas of discrete mathematics: finite geometries, design theory, and coding theory. The central theme is the power of finite geometry designs, which are constructed from the points and t-dimensional subspaces of a projective or affine geometry. We use these designs to construct and analyze combinatorial objects which inherit their best properties from these geometric structures. A central question in the study of finite geometry designs is Hamada’s conjecture, which proposes that finite geometry designs are the unique designs with minimum p-rank among all designs with the same parameters. In this dissertation, we will examine several questions related to Hamada’s conjecture, including the existence of counterexamples. We will also study the applicability of certain decoding methods to known counterexamples. We begin by constructing an infinite family of counterexamples to Hamada’s conjecture. These designs are the first infinite class of counterexamples for the affine case of Hamada’s conjecture. We further demonstrate how these designs, along with the projective polarity designs of Jungnickel and Tonchev, admit majority-logic decoding schemes. The codes obtained from these polarity designs attain error-correcting performance which is, in certain cases, equal to that of the finite geometry designs from which they are derived. This further demonstrates the highly geometric structure maintained by these designs. Finite geometries also help us construct several types of quantum error-correcting codes. We use relatives of finite geometry designs to construct infinite families of q-ary quantum stabilizer codes. We also construct entanglement-assisted quantum error-correcting codes (EAQECCs) which admit a particularly efficient and effective error-correcting scheme, while also providing the first general method for constructing these quantum codes with known parameters and desirable properties. Finite geometry designs are used to give exceptional examples of these codes.
Abstract:
We consider the problem of approximating the 3D scan of a real object through an affine combination of examples. Common approaches depend either on the explicit estimation of point-to-point correspondences or on 2-dimensional projections of the target mesh; both present drawbacks. We follow an approach similar to [IF03] by representing the target via an implicit function, whose values at the vertices of the approximation are used to define a robust cost function. The problem is approached in two steps, by approximating first a coarse implicit representation of the whole target, and then finer, local ones; the local approximations are then merged together with a Poisson-based method. We report the results of applying our method on a subset of 3D scans from the Face Recognition Grand Challenge v.1.0.
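In symbols, the cost has roughly the following shape (our notation; F, ρ, and the affine constraint are schematic):

```latex
% Schematic cost for fitting an affine combination of example meshes:
\min_{\alpha}\; \sum_{i} \rho\!\Big( F\big( \textstyle\sum_{j} \alpha_j\, v_i^{(j)} \big) \Big),
\qquad \textstyle\sum_{j} \alpha_j = 1,
```

where v_i^{(j)} is the i-th vertex of example mesh j, F is the target's implicit function (zero on the scanned surface), and ρ a robust kernel that damps the influence of outlier vertices.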
Abstract:
The Implicit Association Test (IAT) had already gained the status of a prominent assessment procedure before its psychometric properties and underlying task structure were understood. The present critique addresses five major problems that arise when the IAT is used for diagnostic inferences: (1) the asymmetry of causal and diagnostic inferences; (2) the viability of the underlying association model; (3) the lack of a testable model underlying IAT-based inferences; (4) the difficulties of interpreting difference scores; and (5) the susceptibility of the IAT to deliberate faking and strategic processing. Based on a theoretical reflection of these issues, and a comprehensive survey of published IAT studies, it is concluded that a number of uncontrolled factors can produce (or reduce) significant IAT scores independently of the personality attribute that is supposed to be captured by the IAT procedure.
Abstract:
When considering NLO corrections to thermal particle production in the “relativistic” regime, in which the invariant mass squared of the produced particle is K² ~ (πT)², the production rate can be expressed as a sum of a few universal “master” spectral functions. Taking the most complicated 2-loop master as an example, a general strategy for obtaining a convergent 2-dimensional integral representation is suggested. The analysis applies to both bosonic and fermionic statistics, and shows that for this master the non-relativistic approximation is only accurate for K² ~ (8πT)², whereas the zero-momentum approximation works surprisingly well. Once the simpler masters have been similarly resolved, NLO results for quantities such as the right-handed neutrino production rate from a Standard Model plasma or the dilepton production rate from a QCD plasma can be assembled for K² ~ (πT)².
Abstract:
Stable oxygen isotope composition of atmospheric precipitation (δ¹⁸Oₚ) was scrutinized from 39 stations distributed over Switzerland and its border zone. Monthly amount-weighted δ¹⁸Oₚ values averaged over the 1995–2000 period showed the expected strong linear altitude dependence (−0.15 to −0.22‰ per 100 m) only during the summer season (May–September). Steeper gradients (~ −0.56 to −0.60‰ per 100 m) were observed for winter months over a low elevation belt, while hardly any altitudinal difference was seen for high elevation stations. This dichotomous pattern could be explained by the characteristically shallower vertical atmospheric mixing height during the winter season and provides empirical evidence for recently simulated effects of stratified atmospheric flow on orographic precipitation isotopic ratios. This helps explain "anomalous" deflected altitudinal water isotope profiles reported from many other high-relief regions. Grids and isotope distribution maps of the monthly δ¹⁸Oₚ have been calculated over the study region for 1995–1996. The adopted interpolation method took into account both the variable mixing heights and the seasonal difference in the isotopic lapse rate and combined them with residual kriging. The presented data set allows a point estimation of δ¹⁸Oₚ with monthly resolution. According to test calculations executed on subsets, this two-year data set can be extended back to 1992 with maintained fidelity and, with a reduced station subset, even back to 1983 at the expense of faded reliability of the derived δ¹⁸Oₚ estimates, mainly in the eastern part of Switzerland. Before 1983, reliable results can only be expected for the Swiss Plateau, since important stations representing eastern and south-western Switzerland were not yet in operation.
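In code terms, the interpolation described above has roughly the following structure; the gradients echo the values quoted in the abstract, while the mixing-height rule and the inverse-distance weighting (standing in for residual kriging) are simplifications.

```python
# Schematic of the interpolation: a seasonal lapse-rate model predicts
# d18O from elevation, and station residuals are interpolated and added
# back. Inverse-distance weighting stands in for the kriging step; the
# mixing-height rule and all constants are illustrative.
def lapse_model(elev_m, month, mixing_height_m):
    if 5 <= month <= 9:                        # summer: one gradient throughout
        return -0.18 * elev_m / 100.0          # per mil per 100 m
    if elev_m < mixing_height_m:               # winter, below the mixed layer
        return -0.58 * elev_m / 100.0
    return -0.58 * mixing_height_m / 100.0     # winter, above: nearly flat

def interpolate(stations, grid, month, mixing_height_m):
    # stations: (x, y, elev, observed d18O); grid points: (x, y, elev)
    resid = [(x, y, d - lapse_model(z, month, mixing_height_m))
             for x, y, z, d in stations]
    out = []
    for x, y, z in grid:
        w = [1.0 / ((x - xi) ** 2 + (y - yi) ** 2 + 1e-9) for xi, yi, _ in resid]
        r = sum(wi * ri for wi, (_, _, ri) in zip(w, resid)) / sum(w)
        out.append(lapse_model(z, month, mixing_height_m) + r)
    return out
```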
Abstract:
A large deviations type approximation to the probability of ruin within a finite time for the compound Poisson risk process perturbed by diffusion is derived. This approximation is based on the saddlepoint method and generalizes the approximation for the non-perturbed risk process by Barndorff-Nielsen and Schmidli (Scand Actuar J 1995(2):169–186, 1995). An importance sampling approximation to this probability of ruin is also provided. Numerical illustrations assess the accuracy of the saddlepoint approximation using importance sampling as a benchmark. The relative deviations between saddlepoint approximation and importance sampling are very small, even for extremely small probabilities of ruin. The saddlepoint approximation is however substantially faster to compute.
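For context, the quantity being approximated can be estimated by crude Monte Carlo as below (plain simulation with an Euler discretization, not the importance-sampling scheme of the paper; all parameter values are illustrative):

```python
# Crude Monte Carlo for the finite-time ruin probability of a compound
# Poisson risk process perturbed by diffusion (Euler discretization).
# Plain estimator only -- for very small ruin probabilities this is
# exactly where importance sampling becomes necessary.
import numpy as np

rng = np.random.default_rng(1)
u, c, lam, sigma, T = 10.0, 1.5, 1.0, 0.5, 10.0   # illustrative values
n_paths, n_steps = 20_000, 500
dt = T / n_steps

ruined = 0
for _ in range(n_paths):
    x = u                                          # initial surplus
    for _ in range(n_steps):
        claims = rng.exponential(1.0, rng.poisson(lam * dt)).sum()
        x += c * dt - claims + sigma * np.sqrt(dt) * rng.normal()
        if x < 0:                                  # ruin before time T
            ruined += 1
            break
print("P(ruin before T) ~", ruined / n_paths)
```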
Abstract:
This paper presents some ideas about a new neural network architecture that can be compared to a Taylor analysis when dealing with patterns. The architecture is based on linear activation functions with an axo-axonic structure. A biological axo-axonic connection between two neurons is one in which the weight of the connection is given by the output of a third neuron. This idea can be implemented in so-called Enhanced Neural Networks, in which two Multilayer Perceptrons are used: the first one outputs the weights that the second MLP uses to compute the desired output. This kind of neural network has universal approximation properties even with linear activation functions. There is a clear difference between cooperative and competitive strategies. The former are based on swarm colonies, in which all individuals share their knowledge about the goal in order to pass such information to other individuals and reach the optimum solution. The latter are based on genetic models, that is, individuals can die and new individuals are created by combining the information of living ones, or on molecular/cellular behaviour, passing information from one structure to another. A swarm-based model is applied to obtain the neural network, training the net with a Particle Swarm algorithm.
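A minimal sketch of the axo-axonic idea is shown below; single linear layers stand in for the two MLPs, the sizes are arbitrary, and the Particle Swarm training loop is omitted.

```python
# Minimal sketch of an "axo-axonic" enhanced network: a first network
# outputs the weights that a second, linear network then applies to
# the same input. Shapes and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out = 4, 1

# weight-generating network (a single linear layer for brevity)
W1 = rng.normal(size=(d_in + 1, (d_in + 1) * d_out))

def forward(x):
    xb = np.append(x, 1.0)                    # input with bias term
    w2 = (xb @ W1).reshape(d_in + 1, d_out)   # input-dependent weights
    return xb @ w2                            # second (linear) network

# Because w2 is itself affine in x, the composed map contains products
# of input components, which is what invites the comparison with a
# truncated Taylor expansion.
print(forward(np.ones(d_in)))
```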
Abstract:
We introduce a second-order in time modified Lagrange–Galerkin (MLG) method for the time-dependent incompressible Navier–Stokes equations. The main ingredient of the new method is the scheme proposed to calculate more efficiently the Galerkin projection of the functions transported along the characteristic curves of the transport operator. We present error estimates for velocity and pressure in the framework of mixed finite elements when either the mini-element or the P2/P1 Taylor–Hood element is used.
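For orientation, a first-order version of the characteristics discretization reads as follows (the method above is a second-order variant of this idea; the notation is ours):

```latex
% First-order Lagrange-Galerkin discretization of the material derivative:
\frac{D u}{D t}\Big|_{t_{n+1}} \;\approx\;
  \frac{u^{n+1}(x) - u^{n}\!\big(X^{n}(x)\big)}{\Delta t},
\qquad
X^{n}(x) = x - \Delta t\, u^{n}(x),
```

where X^n(x) approximates the foot of the characteristic curve through x, and the Galerkin projection of u^n ∘ X^n is the quantity the MLG scheme is designed to compute more efficiently.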