966 results for "Implicit finite difference approximation scheme"
Abstract:
The nucleon spectral function in nuclear matter fulfills an energy-weighted sum rule. Comparing two different realistic potentials, these sum rules are studied for Green's functions that are derived self-consistently within the T-matrix approximation at finite temperature.
Abstract:
Bulk and single-particle properties of hot hyperonic matter are studied within the Brueckner-Hartree-Fock approximation extended to finite temperature. The bare interaction in the nucleon sector is the Argonne V18 potential supplemented with an effective three-body force to reproduce the saturation properties of nuclear matter. The modern Nijmegen NSC97e potential is employed for the hyperon-nucleon and hyperon-hyperon interactions. The effect of temperature on the in-medium effective interaction is found to be, in general, very small, and the single-particle potentials differ by at most 25% for temperatures in the range from 0 to 60 MeV. The bulk properties of infinite baryonic matter, either isospin-symmetric nuclear matter or a beta-stable composition that includes a nonzero fraction of hyperons, are obtained. It is found that the presence of hyperons can modify the thermodynamical properties of the system in a non-negligible way.
Abstract:
In [4], Guillard and Viozat propose a finite volume method, based on a preconditioning technique, for the simulation of inviscid steady as well as unsteady flows at low Mach numbers. The scheme satisfies the results of a single-scale asymptotic analysis in a discrete sense and has the advantage that it can be derived by a slight modification of the dissipation term within the numerical flux function. Unfortunately, numerical experiments show that the preconditioned approach combined with an explicit time integration scheme becomes unstable unless the time step Δt satisfies the requirement to be O(M²) as the Mach number M tends to zero, whereas the corresponding standard method remains stable up to Δt = O(M), M → 0, as follows from the well-known CFL condition. We present a comprehensive mathematical substantiation of this numerical phenomenon by means of a von Neumann stability analysis, which reveals that, in contrast to the standard approach, the dissipation matrix of the preconditioned numerical flux function possesses an eigenvalue growing like M⁻² as M tends to zero, thus shrinking the stability region of the explicit scheme. We present statements for both the standard preconditioner used by Guillard and Viozat [4] and the more general one due to Turkel [21]. The theoretical results are afterwards confirmed by numerical experiments.
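The stability argument can be illustrated with a textbook von Neumann analysis. The sketch below uses a hypothetical first-order upwind discretisation of linear advection (not the preconditioned flux of [4]): it evaluates the amplification factor over the Fourier modes and recovers the classical CFL bound on the time step.

```python
import numpy as np

def amplification_factor(nu, theta):
    # Von Neumann symbol g(theta) of the first-order upwind scheme for
    # linear advection u_t + a u_x = 0 with CFL number nu = a*dt/dx:
    #   g(theta) = 1 - nu * (1 - exp(-i*theta))
    return 1.0 - nu * (1.0 - np.exp(-1j * theta))

def max_growth(nu, samples=720):
    # Largest |g| over the resolvable Fourier modes; the explicit
    # scheme is stable if and only if this does not exceed 1.
    thetas = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    return float(np.max(np.abs(amplification_factor(nu, thetas))))
```

For this model scheme max_growth stays at or below 1 for CFL numbers up to 1 and exceeds 1 beyond, which is the kind of stability-region statement the analysis in the paper makes for the preconditioned flux.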
Abstract:
The many-electron aspect is taken into account in single-particle-like formulations, either in the Hartree-Fock approximation or by including the electron-electron correlations through density functional theory. Since the physics of electronic systems (atoms, molecules, clusters, condensed matter, plasmas) is relativistic, I have employed the relativistic four-spinor Dirac theory from the outset; more recently, however (and this will be the main advance in the relativistic description contributed by my doctoral work), I have implemented a likewise fully relativistic two-spinor theory based on the so-called minimax principle. The following is a brief description of my dissertation: a substantial gain in efficiency in relativistic four-spinor Dirac calculations was achieved through novel singular coordinate transformations, so that even for the superheavy Th2 179+ the highest solution accuracies were obtained with moderate computational effort, leading to two further interesting publications (see list of publications). Although this already made relativistic calculations of molecules and clusters far more efficient, such calculations remained orders of magnitude more expensive than corresponding non-relativistic ones. The latter treat the actual (relativistic) behaviour of electronic systems only approximately, but the lighter the atoms involved (small nuclear charge Z), the better the approximation. I therefore searched for a new formalism that exploits this fact as far as possible while still describing the physics correctly relativistically. This is achieved by a two-spinor-based minimax principle: systems of light atoms can now be described fully relativistically almost as efficiently as non-relativistically, which naturally raises great hopes for accurate (i.e. relativistic) calculations. A first fundamental publication resulted (see list of publications).
The accuracy for strongly relativistic systems such as Th2 179+ is similar to, or slightly better than, that of the four-spinor Dirac formulation. The advantages of the new formulation, however, go decisively further: A. The new minimax formulation of the Dirac equation is free of spurious states and has no positronic contaminations. B. The computational effort is greatly reduced, since only a third of the matrix elements need to be computed compared with the four-spinor case, and all matrix dimensions are smaller by a factor of two. C. Numerically, the new formulation behaves as well as the non-relativistic Schrödinger equation (although it is an exact formulation and not an approximation of the Dirac equation) and therefore has better convergence properties than the four-spinor approach. In particular, the error weighting (singular and smooth parts) is different in the two-spinor formulation and shows the same good extrapolation properties as the non-relativistic Schrödinger equation. The extension of the range of application of the (relativistic) two-spinor approach has already been carried out successfully in FEM Dirac-Fock-Slater calculations, with CO and N2 as two examples. Further extensions are readily possible; see the minimax LCAO approximation.
Abstract:
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes consisting of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for determining the optimal number of components.
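The decoupling described above can be sketched with a toy alternation between fitting local models and reassigning points to the model that approximates them best. The two-line setting, the median-split initialisation, and the fixed iteration count are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def fit_piecewise_lines(x, y, iters=10):
    # Alternate between (1) refitting a local linear model on each class
    # and (2) reassigning every point to the model with the smallest
    # residual. Initialisation by a simple spatial split is an assumption.
    labels = (x > np.median(x)).astype(int)
    coef = [None, None]
    for _ in range(iters):
        for k in range(2):
            mask = labels == k
            if mask.sum() >= 2:
                coef[k] = np.polyfit(x[mask], y[mask], 1)
        resid = np.stack([np.abs(np.polyval(c, x) - y) for c in coef])
        labels = resid.argmin(axis=0)
    return coef, labels
```

On data generated from two distinct lines, the alternation converges to the two local models and the induced segmentation in a few iterations.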
Abstract:
With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual speech animation system called Mary 101. Two types of experiments were performed: (a) distinguishing visually between real and synthetic image sequences of the same utterances ("Turing tests"), and (b) gauging visual speech recognition by comparing lip-reading performance on the real and synthetic image sequences of the same utterances ("intelligibility tests"). Subjects who were presented randomly with either real or synthetic image sequences could not tell the synthetic from the real sequences above chance level. The same subjects, when asked to lip-read the utterances from the same image sequences, recognized speech from real image sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at the levels reported in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head, but that additional effort is required to improve the animation for lip-reading purposes such as rehabilitation and language learning. The two tasks can also be considered as explicit and implicit perceptual discrimination tasks. In the explicit task (a), each stimulus is classified directly as a synthetic or real image sequence by detecting a possible difference between the synthetic and the real image sequences. The implicit task (b) consists of a comparison between visual recognition of speech in real and synthetic image sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image sequences than explicit perceptual discrimination.
Abstract:
In the static field limit, the vibrational hyperpolarizability consists of two contributions: (1) the shift in the equilibrium geometry (known as nuclear relaxation), and (2) the change in the shape of the potential energy surface (known as curvature). Simple finite field methods have previously been developed for evaluating these static field contributions, and also for determining the effect of nuclear relaxation on dynamic vibrational hyperpolarizabilities in the infinite frequency approximation. In this paper the finite field approach is extended to include, within the infinite frequency approximation, the effect of curvature on the major dynamic nonlinear optical processes.
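The finite field idea itself can be sketched numerically: properties are extracted as central-difference derivatives of the energy with respect to an applied field. The one-dimensional model expansion and the step size below are illustrative assumptions used only to exercise the formulas.

```python
def finite_field_derivatives(energy, h=1e-3):
    # Central finite-field differences for a one-dimensional model,
    # using the expansion E(F) = E0 - mu*F - (alpha/2)*F^2 - (beta/6)*F^3:
    #   mu = -dE/dF, alpha = -d2E/dF2, beta = -d3E/dF3.
    # `energy` is any callable E(F); the field step h is illustrative.
    mu = -(energy(h) - energy(-h)) / (2.0 * h)
    alpha = -(energy(h) - 2.0 * energy(0.0) + energy(-h)) / h**2
    beta = -(energy(2 * h) - 2 * energy(h)
             + 2 * energy(-h) - energy(-2 * h)) / (2.0 * h**3)
    return mu, alpha, beta
```

Applied to a model energy with known expansion coefficients, the differences recover the dipole moment, polarizability, and first hyperpolarizability of the model to finite-difference accuracy.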
Abstract:
Electrical property derivative expressions are presented for the nuclear relaxation contribution to static and dynamic (infinite frequency approximation) nonlinear optical properties. For CF4 and SF6, as opposed to HF and CH4, a term that is quadratic in the vibrational anharmonicity (and not previously evaluated for any molecule) makes an important contribution to the static second vibrational hyperpolarizability. A comparison between calculated and experimental values for the difference between the (anisotropic) Kerr effect and electric-field-induced second-harmonic generation shows that, at the Hartree-Fock level, the nuclear relaxation/infinite frequency approximation gives the correct trend (in the series CH4, CF4, SF6) but is of the order of 50% too small.
Abstract:
In this paper a cell-by-cell anisotropic adaptive mesh technique is added to an existing staggered-mesh Lagrange-plus-remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically, and a hierarchical data structure is employed. An efficient computational method is proposed which solves only on the finest level of resolution that exists for each part of the domain, with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.
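A minimal sketch of such a hierarchical structure: a quadrilateral cell that can be subdivided isotropically (four children) or anisotropically (two children in one direction), with the solver operating only on the leaves. This is an illustrative data-structure sketch, not the paper's implementation.

```python
class Cell:
    # Axis-aligned quadrilateral cell stored hierarchically; children
    # are created by isotropic or anisotropic subdivision.
    def __init__(self, x0, y0, x1, y1):
        self.bounds = (x0, y0, x1, y1)
        self.children = []

    def split(self, mode):
        x0, y0, x1, y1 = self.bounds
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        if mode == "iso":        # four children
            boxes = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                     (x0, ym, xm, y1), (xm, ym, x1, y1)]
        elif mode == "x":        # halve in x only
            boxes = [(x0, y0, xm, y1), (xm, y0, x1, y1)]
        else:                    # halve in y only
            boxes = [(x0, y0, x1, ym), (x0, ym, x1, y1)]
        self.children = [Cell(*b) for b in boxes]

    def leaves(self):
        # Finest-level cells for each part of the domain, on which a
        # solver of the kind described above would operate.
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

Mixed isotropic and anisotropic splits produce leaves of different sizes, with hanging nodes implied wherever neighbouring leaves differ in refinement level.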
Abstract:
We consider the problem of scattering of a time-harmonic acoustic incident plane wave by a sound-soft convex polygon. For standard boundary or finite element methods with a piecewise polynomial approximation space, the computational cost required to achieve a prescribed level of accuracy grows linearly with respect to the frequency of the incident wave. Recently Chandler-Wilde and Langdon proposed a novel Galerkin boundary element method for this problem for which, by incorporating the products of plane wave basis functions with piecewise polynomials supported on a graded mesh into the approximation space, they were able to demonstrate that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency. Here we propose a related collocation method, using the same approximation space, for which we demonstrate via numerical experiments a convergence rate identical to that achieved with the Galerkin scheme, but with a substantially reduced computational cost.
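A graded mesh of the kind used in the approximation space can be sketched as points clustered polynomially toward an endpoint (where the corner singularity sits); the grading exponent here is an illustrative choice, not the value analysed in the work cited.

```python
import numpy as np

def graded_mesh(n, length=1.0, q=3.0):
    # Mesh points on [0, length] graded toward 0: x_j = length * (j/n)**q.
    # Larger q concentrates more points near the corner at 0.
    j = np.arange(n + 1)
    return length * (j / n) ** q
```

The resulting element widths increase monotonically away from the corner, so resolution is spent where the solution varies most rapidly.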
Abstract:
The scattering of small amplitude water waves by a finite array of locally axisymmetric structures is considered. Regions of varying quiescent depth are included and their axisymmetric nature, together with a mild-slope approximation, permits an adaptation of well-known interaction theory which ultimately reduces the problem to a simple numerical calculation. Numerical results are given and effects due to regions of varying depth on wave loading and free-surface elevation are presented.
Abstract:
Simulations of the global atmosphere for weather and climate forecasting require fast and accurate solutions, so operational models use high-order finite differences on regular structured grids. This precludes the use of local refinement; techniques allowing local refinement are either expensive (e.g. high-order finite element techniques) or have reduced accuracy at changes in resolution (e.g. unstructured finite volume with linear differencing). We present solutions of the shallow-water equations for westerly flow over a mid-latitude mountain from a finite-volume model written using OpenFOAM. A second/third-order accurate differencing scheme is applied on arbitrarily unstructured meshes made up of various shapes and refinement patterns. The results are as accurate as equivalent-resolution spectral methods. Using lower-order differencing reduces accuracy at a refinement pattern, which allows errors from refinement of the mountain to accumulate and reduces the global accuracy over a 15-day simulation. We have therefore introduced a scheme which fits a 2D cubic polynomial approximately on a stencil around each cell. With this scheme, refinement of the mountain improves the accuracy after a 15-day simulation. This is a more severe test of local mesh refinement for global simulations than has previously been presented, but a realistic one if these techniques are to be used operationally. These efficient, high-order schemes may make it possible for local mesh refinement to be used by weather and climate forecast models.
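The cubic-fit step can be sketched as an ordinary least-squares fit of the ten monomials x^i y^j with i + j <= 3 to cell-centre values on a stencil; the stencil shape and unweighted least-squares formulation below are illustrative, not the paper's exact scheme.

```python
import numpy as np

def cubic_fit_2d(points, values):
    # Least-squares fit of a 2-D cubic p(x, y) = sum_{i+j<=3} c_ij x^i y^j
    # to values at stencil points; with more points than the 10 terms,
    # the fit is approximate in the least-squares sense.
    terms = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
    A = np.array([[x**i * y**j for i, j in terms] for x, y in points])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return terms, coeffs

def evaluate(terms, coeffs, x, y):
    # Evaluate the fitted polynomial at (x, y).
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coeffs))
```

When the underlying field actually is a cubic, the fit reproduces it on the stencil, which is the property that restores high-order accuracy across refinement transitions.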
Abstract:
Under the bond scheme, a pre-determined series of payments would compensate farmers for lost revenues resulting from policy change. Unlike the Single Payment Scheme, payments would be fully decoupled: recipients would not have to retain farmland or remain in agriculture. If vested in a paper asset, the guaranteed, unencumbered income stream would be similar to that from a government bond. Recipients could exchange this for a capital sum reflecting the net present value of future payments and reinvest in other business ventures, either on- or off-farm. With a finite, declining flow of payments, budget expenditure would fall, releasing funds for other uses.
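The capital sum is simply the discounted value of the remaining payments. A minimal sketch, with the discount rate and the declining payment profile as illustrative assumptions:

```python
def bond_value(payments, rate):
    # Net present value of a finite stream of annual payments, i.e. the
    # capital sum a recipient could take in place of the income stream;
    # payment t is discounted by (1 + rate)**t.
    return sum(p / (1.0 + rate) ** t for t, p in enumerate(payments, start=1))
```

With a positive discount rate, the capital sum is below the undiscounted total of the payments, and a declining profile front-loads most of the value into the early years.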
Abstract:
Negative correlations between task performance in dynamic control tasks and verbalizable knowledge, as assessed by a post-task questionnaire, have been interpreted as dissociations that indicate two antagonistic modes of learning, one being “explicit”, the other “implicit”. This paper views the control tasks as finite-state automata and offers an alternative interpretation of these negative correlations. It is argued that “good controllers” observe fewer different state transitions and, consequently, can answer fewer post-task questions about system transitions than can “bad controllers”. Two experiments demonstrate the validity of the argument by showing the predicted negative relationship between control performance and the number of explored state transitions, and the predicted positive relationship between the number of explored state transitions and questionnaire scores. However, the experiments also elucidate important boundary conditions for the critical effects. We discuss the implications of these findings, and of other problems arising from the process control paradigm, for conclusions about implicit versus explicit learning processes.
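The argument can be made concrete on a toy automaton: a controller that holds the system at a target state observes few distinct transitions, while an exploratory controller observes many. The three-state system and the two policies below are illustrative assumptions, not the tasks used in the experiments.

```python
import random

def explored_transitions(transition, start, policy, steps, seed=0):
    # Count the distinct state transitions a controller observes while
    # driving a finite-state automaton; `transition` maps
    # (state, action) -> next state, and `policy` chooses an action.
    rng = random.Random(seed)
    state, seen = start, set()
    for _ in range(steps):
        action = policy(state, rng)
        nxt = transition[(state, action)]
        seen.add((state, action, nxt))
        state = nxt
    return len(seen)

# Three-state automaton: 'stay' keeps the state, 'move' cycles it.
AUTOMATON = {(s, a): (s if a == "stay" else (s + 1) % 3)
             for s in range(3) for a in ("stay", "move")}
```

A "good controller" that always stays observes a single transition, while a random explorer observes up to all six, mirroring the predicted link between control performance and the number of explored transitions.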
Abstract:
Two experiments examined the claim for distinct implicit and explicit learning modes in the artificial grammar-learning task (Reber, 1967, 1989). Subjects initially attempted to memorize strings of letters generated by a finite-state grammar and then classified new grammatical and nongrammatical strings. Experiment 1 showed that subjects' assessment of isolated parts of strings was sufficient to account for their classification performance but that the rules elicited in free report were not sufficient. Experiment 2 showed that performing a concurrent random number generation task under different priorities interfered with free report and classification performance equally. Furthermore, giving different groups of subjects incidental or intentional learning instructions did not affect classification or free report.
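String generation from a finite-state grammar of the kind used in such experiments can be sketched as a random walk over the grammar's arcs, emitting a letter per arc. The small grammar below is a stand-in, not Reber's (1967) original.

```python
import random

# Arcs from each state carry a letter and a successor state.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 3)],
    3: [("X", 1), ("S", 4), ("V", 4)],
}

def generate_strings(grammar, start, final, n, max_len=12, seed=0):
    # Walk randomly from the start state to the final state, emitting the
    # letter on each traversed arc; walks exceeding max_len are discarded
    # and retried, so every returned string is grammatical.
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        state, s = start, ""
        while state != final and len(s) < max_len:
            letter, state = rng.choice(grammar[state])
            s += letter
        if state == final:
            out.append(s)
    return out
```

Grammatical strings produced this way share characteristic fragments (permissible bigrams and trigrams), which is what makes part-based classification of new strings possible.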