954 results for Hellmann-Feynman theorem
Abstract:
Optical properties of a two-dimensional square-lattice photonic crystal are systematically investigated within the partial bandgap through analysis of anisotropic characteristics and numerical simulations of field patterns. Using the plane-wave expansion method and the Hellmann-Feynman theorem, the relationships between the incident and refracted angles for both phase and group velocities are calculated to analyze light propagation from air into the photonic crystal. Three kinds of flat-slab focusing are summarized and demonstrated by numerical simulations using the multiple scattering method. (c) 2007 Optical Society of America
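In this setting the Hellmann-Feynman theorem yields the group velocity directly from the plane-wave eigenproblem. Schematically, writing the Bloch eigenproblem as \(\hat\Theta_{\mathbf k}\,\mathbf u_{n\mathbf k} = (\omega_n^2(\mathbf k)/c^2)\,\mathbf u_{n\mathbf k}\) for normalized eigenmodes (notation assumed here, not taken from the abstract),
\[
\mathbf v_g = \nabla_{\mathbf k}\,\omega_n(\mathbf k)
= \frac{c^2}{2\,\omega_n(\mathbf k)}\,
\big\langle \mathbf u_{n\mathbf k} \big|\, \nabla_{\mathbf k}\hat\Theta_{\mathbf k} \,\big| \mathbf u_{n\mathbf k} \big\rangle ,
\]
which is the quantity whose direction is compared with that of the phase velocity \(\omega_n(\mathbf k)\,\mathbf k/|\mathbf k|^2\) in the refraction analysis.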
Abstract:
We describe an empirical, self-consistent, orthogonal tight-binding model for zirconia, which allows for the polarizability of the anions at dipole and quadrupole levels and for crystal-field splitting of the cation d orbitals. This is achieved by mixing the orbitals of different symmetry on a site with coupling coefficients driven by the Coulomb potentials up to octupole level. The additional forces on atoms due to the self-consistency and polarizabilities are exactly obtained by straightforward electrostatics, by analogy with the Hellmann-Feynman theorem as applied in first-principles calculations. The model correctly orders the zero-temperature energies of all zirconia polymorphs. The Zr-O matrix elements of the Hamiltonian, which measure covalency, make a greater contribution than the polarizability to the energy differences between phases. Results for elastic constants of the cubic and tetragonal phases and phonon frequencies of the cubic phase are also presented and compared with some experimental data and first-principles calculations. We suggest that the model will be useful for studying finite-temperature effects by means of molecular dynamics.
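For reference, the analogy invoked is with the standard Hellmann-Feynman force: for a normalized eigenstate \(|\Psi\rangle\) of a Hamiltonian depending parametrically on the nuclear coordinates \(\mathbf R_I\),
\[
\mathbf F_I = -\frac{\partial E}{\partial \mathbf R_I}
= -\Big\langle \Psi \Big|\, \frac{\partial \hat H}{\partial \mathbf R_I} \,\Big| \Psi \Big\rangle ,
\]
so that, once the self-consistent charges and multipoles are determined, the additional forces reduce to classical electrostatic sums.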
Abstract:
Various modern nucleon-nucleon (NN) potentials yield a very accurate fit to the nucleon-nucleon scattering phase shifts. The differences between these interactions in describing properties of nuclear matter are investigated. Various contributions to the total energy are evaluated employing the Hellmann-Feynman theorem. Special attention is paid to the two-nucleon correlation functions derived from these interactions. Differences in the predictions of the various interactions can be traced back to the inclusion of nonlocal terms.
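Such a decomposition is typically obtained (a standard device, sketched here rather than quoted from the paper) by scaling the term of interest, \(H(\lambda) = H - V_i + \lambda V_i\), and applying the theorem at \(\lambda = 1\):
\[
\langle V_i \rangle
= \frac{\partial E(\lambda)}{\partial \lambda}\bigg|_{\lambda=1}
= \Big\langle \Psi(\lambda) \Big|\, \frac{\partial H(\lambda)}{\partial \lambda} \,\Big| \Psi(\lambda) \Big\rangle\bigg|_{\lambda=1},
\]
so each piece of the NN interaction can be assigned its share of the correlated binding energy.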
Abstract:
The force constants of H2 and Li2 are evaluated from their extended Hartree-Fock wavefunctions by polynomial fits of the force curves. It is suggested that, for incomplete multiconfiguration Hartree-Fock wavefunctions, force constants calculated from the energy derivatives are numerically more accurate than those obtained from the derivatives of the Hellmann-Feynman forces. It is observed that the electrons relax during the nuclear vibrations in such a fashion as to facilitate the nuclear motions.
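The two routes being compared can be stated compactly (notation assumed here): with \(E(R)\) the electronic energy and \(F_{\mathrm{HF}}(R)\) the Hellmann-Feynman force along the internuclear axis,
\[
k_e = \frac{d^2 E}{dR^2}\bigg|_{R_e}
\qquad \text{versus} \qquad
k_e = -\,\frac{d F_{\mathrm{HF}}}{dR}\bigg|_{R_e},
\]
expressions that coincide only for exact (or fully variationally relaxed) wavefunctions; for truncated multiconfiguration expansions the energy-derivative route is the more reliable one, as reported here.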
Abstract:
Part I
Several approximate Hartree-Fock SCF wavefunctions for the ground electronic state of the water molecule have been obtained using an increasing number of multicenter s, p, and d Slater-type atomic orbitals as basis sets. The predicted charge distribution has been extensively tested at each stage by calculating the electric dipole moment, molecular quadrupole moment, diamagnetic shielding, Hellmann-Feynman forces, and electric field gradients at both the hydrogen and the oxygen nuclei. It was found that a carefully optimized minimal basis set suffices to describe the electronic charge distribution adequately except in the vicinity of the oxygen nucleus. Our calculations indicate, for example, that the correct prediction of the field gradient at this nucleus requires a more flexible linear combination of p-orbitals centered on this nucleus than that in the minimal basis set. Theoretical values for the molecular octopole moment components are also reported.
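The Hellmann-Feynman forces used as one of these tests reduce, by the electrostatic theorem, to classical fields evaluated at the nuclei; in atomic units, with \(\rho(\mathbf r)\) the electronic density (a standard result, stated here for orientation),
\[
\mathbf F_A = Z_A \int \rho(\mathbf r)\,
\frac{\mathbf r - \mathbf R_A}{|\mathbf r - \mathbf R_A|^3}\, d^3 r
\;+\; Z_A \sum_{B \neq A} Z_B\,
\frac{\mathbf R_A - \mathbf R_B}{|\mathbf R_A - \mathbf R_B|^3},
\]
so any deficiency of the basis in the vicinity of a nucleus shows up directly in the computed force, just as it does in the field gradient.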
Part II
The perturbation-variational theory of R. M. Pitzer for nuclear spin-spin coupling constants is applied to the HD molecule. The zero-order molecular orbital is described in terms of a single 1s Slater-type basis function centered on each nucleus. The first-order molecular orbital is expressed in terms of these two functions plus one singular basis function each of the types e^{-r}/r and e^{-r} ln r centered on one of the nuclei. The new kinds of molecular integrals were evaluated to high accuracy using numerical and analytical means. The value of the HD spin-spin coupling constant calculated with this near-minimal set of basis functions is J_HD = +96.6 cps. This represents an improvement over the previous calculated value of +120 cps obtained without using the logarithmic basis function, but it is still considerably off in magnitude compared with the experimental measurement of J_HD = +43.0 ± 0.5 cps.
Abstract:
The altered spontaneous emission of an emitter near an arbitrary body can be elucidated using an energy balance of the electromagnetic field. From a classical point of view it is trivial to show that the field scattered back from any body should alter the emission of the source. But it is not at all apparent that the total radiative and non-radiative decay in an arbitrary body can add to the vacuum decay rate of the emitter, i.e., an increase of emission equal to what the body absorbs and radiates in all directions. This gives us an opportunity to revisit two other elegant classical ideas of the past, the optical theorem and the Wheeler-Feynman absorber theory of radiation. It also provides alternative perspectives on the Purcell effect and generalizes many of its manifestations, both enhancement and inhibition of emission. When the optical density of states of a body or a material is difficult to resolve (in a complex geometry or a highly inhomogeneous volume), such a generalization offers new directions to solutions. (c) 2012 Elsevier Ltd. All rights reserved.
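The optical theorem referred to states that the total power removed from an incident beam, by absorption and scattering together, is fixed by the forward-scattering amplitude; in its scalar form (quoted for orientation, not from the paper),
\[
\sigma_{\mathrm{ext}} = \sigma_{\mathrm{abs}} + \sigma_{\mathrm{sca}}
= \frac{4\pi}{k}\,\operatorname{Im} f(0),
\]
which is precisely the kind of energy bookkeeping applied here to the emitter-body system.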
Abstract:
We present both analytical and numerical results on the position of partition function zeros in the complex magnetic field plane of the q=2 state (Ising) and the q=3 state Potts model defined on φ^3 Feynman diagrams (thin random graphs). Our analytic results are based on the ideas of destructive interference of coexisting phases and low-temperature expansions. For the case of the Ising model, an argument based on a symmetry of the saddle point equations leads us to a nonperturbative proof that the Yang-Lee zeros are located on the unit circle, although no circle theorem is known in this case of random graphs. For the q=3 state Potts model, our perturbative results indicate that the Yang-Lee zeros lie outside the unit circle. Both analytic results are confirmed by finite-lattice numerical calculations.
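For orientation (standard background, not quoted from the paper): the Yang-Lee zeros are the roots of the partition function regarded as a polynomial in the fugacity-like variable \(z = e^{-2\beta h}\),
\[
Z_N(z) \;\propto\; \prod_{i=1}^{N} (z - z_i),
\]
and the classic Lee-Yang circle theorem places these zeros on the unit circle for the ferromagnetic Ising model on a fixed graph; the nonperturbative argument of this paper recovers the same statement for the thin-random-graph ensemble, where no such theorem is available.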
Abstract:
This thesis is concerned with calculations in manifestly Lorentz-invariant baryon chiral perturbation theory beyond order D=4. We investigate two different methods. The first approach consists of the inclusion of additional particles besides pions and nucleons as explicit degrees of freedom. This results in the resummation of an infinite number of higher-order terms which contribute to higher-order low-energy constants in the standard formulation. In this thesis the nucleon axial, induced pseudoscalar, and pion-nucleon form factors are investigated. They are first calculated in the standard approach up to order D=4. Next, the inclusion of the axial-vector meson a_1(1260) is considered. We find three diagrams with an axial-vector meson which are relevant to the form factors. Due to the applied renormalization scheme, however, the contributions of the two loop diagrams vanish and only a tree diagram contributes explicitly. The appearing coupling constant is fitted to experimental data of the axial form factor. The inclusion of the axial-vector meson results in an improved description of the axial form factor for higher values of momentum transfer. The contributions to the induced pseudoscalar form factor, however, are negligible for the considered momentum transfer, and the axial-vector meson does not contribute to the pion-nucleon form factor. The second method consists in the explicit calculation of higher-order diagrams. This thesis describes the applied renormalization scheme and shows that all symmetries and the power counting are preserved. As an application we determine the nucleon mass up to order D=6 which includes the evaluation of two-loop diagrams. This is the first complete calculation in manifestly Lorentz-invariant baryon chiral perturbation theory at the two-loop level. The numerical contributions of the terms of order D=5 and D=6 are estimated, and we investigate their pion-mass dependence. Furthermore, the higher-order terms of the nucleon sigma term are determined with the help of the Feynman-Hellmann theorem.
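The Feynman-Hellmann relation used for the sigma term is, in its standard form (notation assumed here),
\[
\sigma_{\pi N} = \hat m\,\frac{\partial m_N}{\partial \hat m}
\;\simeq\; m_\pi^2\,\frac{\partial m_N}{\partial m_\pi^2},
\]
with \(\hat m\) the average light-quark mass, so the higher-order pieces of \(\sigma_{\pi N}\) follow directly from the pion-mass dependence of the nucleon mass computed to order D=6.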
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, partially applying graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. Due to the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate for every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to be present in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
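For orientation, the parametric representation referred to reads, up to conventional prefactors, for a graph with n internal edges, L loops, propagator powers \(\nu_j\) and \(\nu = \sum_j \nu_j\),
\[
I = \frac{\Gamma(\nu - L D/2)}{\prod_{j=1}^{n}\Gamma(\nu_j)}
\int_{x_j \ge 0} d^n x\;
\delta\Big(1 - \sum_{j=1}^{n} x_j\Big)
\Big(\prod_{j=1}^{n} x_j^{\nu_j - 1}\Big)\,
\frac{\mathcal{U}^{\,\nu - (L+1)D/2}}{\mathcal{F}^{\,\nu - L D/2}},
\]
with \(\mathcal U\) and \(\mathcal F\) the first and second Symanzik polynomials; the sector decomposition algorithm disentangles the singularities of this integrand on the coordinate hyperplanes \(x_j = 0\).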
Abstract:
The thesis presents a probabilistic approach to the theory of semigroups of operators, with particular attention to Markov and Feller semigroups. The first goal of this work is the proof of the fundamental Feynman-Kac formula, which gives the solution of certain parabolic Cauchy problems in terms of the expected value of the initial condition evaluated along the associated stochastic diffusion process. The second goal is the characterization of the principal eigenvalue of the generator of a semigroup with Markov transition probability function, and of second-order elliptic operators with real coefficients that are not necessarily self-adjoint. The thesis is divided into three chapters. In the first chapter we study Brownian motion and some of its main properties, stochastic processes, the stochastic integral and the Itô formula, in order to finally arrive, in the last section, at the proof of the Feynman-Kac formula. The second chapter is devoted to the probabilistic approach to semigroup theory, and it is here that we introduce Markov and Feller semigroups. Special emphasis is given to the Feller semigroup associated with Brownian motion. The third and last chapter is divided into two sections. In the first one we present the abstract characterization of the principal eigenvalue of the infinitesimal generator of a semigroup of operators acting on continuous functions over a compact metric space. In the second section this approach is used to study the principal eigenvalue of elliptic partial differential operators with real coefficients. At the end, in the appendix, we gather in more detail some of the technical results used in the thesis. Appendix A is devoted to the Sion minimax theorem, while in Appendix B we prove the Chernoff product formula for not necessarily self-adjoint operators.
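One standard form of the formula in question, with Brownian motion \(B_t\) as the underlying diffusion and a bounded potential \(V\) (notation assumed here): the function
\[
u(t,x) = \mathbb{E}_x\!\left[\exp\!\Big(-\!\int_0^t V(B_s)\,ds\Big)\, f(B_t)\right]
\]
solves the parabolic Cauchy problem \(\partial_t u = \tfrac12 \Delta u - V u\) with \(u(0,\cdot) = f\), which is the type of representation established in the first chapter.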
Abstract:
Volume measurements are useful in many branches of science and medicine. They are usually accomplished by acquiring a sequence of cross-sectional images through the object using an appropriate scanning modality, for example x-ray computed tomography (CT), magnetic resonance (MR) or ultrasound (US). In the cases of CT and MR, a dividing cubes algorithm can be used to describe the surface as a triangle mesh. However, such algorithms are not suitable for US data, especially when the image sequence is multiplanar (as it usually is). This problem may be overcome by manually tracing regions of interest (ROIs) on the registered multiplanar images and connecting the points into a triangular mesh. In this paper we describe and evaluate a new discrete form of Gauss’ theorem which enables the calculation of the volume of any enclosed surface described by a triangular mesh. The volume is calculated by summing the vector product of the centroid, area and normal of each surface triangle. The algorithm was tested on computer-generated objects, US-scanned balloons, livers and kidneys and CT-scanned clay rocks. The results, expressed as the mean percentage difference ± one standard deviation, were 1.2 ± 2.3, 5.5 ± 4.7, 3.0 ± 3.2 and −1.2 ± 3.2% for balloons, livers, kidneys and rocks respectively. The results compare favourably with other volume estimation methods such as planimetry and tetrahedral decomposition.
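A minimal sketch of the summation described above (illustrative only; the array layout and orientation conventions are assumptions, not taken from the paper):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently oriented triangular mesh,
    computed as (1/3) * sum over triangles of centroid . (area * normal),
    i.e. a discrete form of the divergence (Gauss') theorem."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    centroids = (v0 + v1 + v2) / 3.0
    area_normals = 0.5 * np.cross(v1 - v0, v2 - v0)  # area times outward normal
    return np.einsum('ij,ij->i', centroids, area_normals).sum() / 3.0

# A negative result indicates inward-facing (flipped) triangle normals.
```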
Abstract:
This paper discusses how fundamentals of number theory, such as unique prime factorization and the greatest common divisor, can be made accessible to secondary school students through spreadsheets. In addition, the three basic multiplicative functions of number theory are defined and illustrated through a spreadsheet environment. Primes are defined simply as those natural numbers with just two divisors. One focus of the paper is to show the ease with which spreadsheets can be used to introduce students to some basics of elementary number theory. Complete instructions are given to build a spreadsheet to enable the user to input a positive integer, either with a slider or manually, and see the prime decomposition. The spreadsheet environment allows students to observe patterns, gain structural insight, form and test conjectures, and solve problems in elementary number theory.
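As a point of comparison outside the spreadsheet environment, the same trial-division idea can be written in a few lines (an illustrative sketch, not part of the paper):

```python
def prime_factorization(n):
    """Prime decomposition by trial division -- the same divisibility
    checks a spreadsheet column would perform, collected into a dict."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_factorization(360))  # {2: 3, 3: 2, 5: 1}
```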
Abstract:
This article lays down the foundations of the renormalization group (RG) approach for differential equations characterized by multiple scales. The renormalization of constants through an elimination process and the subsequent derivation of the amplitude equation [Chen, Phys. Rev. E 54, 376 (1996)] are given a rigorous but not abstract mathematical form whose justification is based on the implicit function theorem. Developing the theoretical framework that underlies the RG approach leads to a systematization of the renormalization process and to the derivation of explicit closed-form expressions for the amplitude equations that can be carried out with symbolic computation for both linear and nonlinear scalar differential equations and first order systems but independently of their particular forms. Certain nonlinear singular perturbation problems are considered that illustrate the formalism and recover well-known results from the literature as special cases. © 2008 American Institute of Physics.
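As a textbook-style illustration of the kind of amplitude equation meant here (not an example taken from the article): for the weakly damped oscillator \(\ddot y + \epsilon \dot y + y = 0\), the naive expansion \(y = A\cos(t+\phi) + O(\epsilon)\) develops secular terms, and renormalizing the amplitude yields
\[
\frac{dA}{dt} = -\frac{\epsilon}{2}\,A,
\]
so that \(y(t) \approx A_0\, e^{-\epsilon t/2}\cos(t + \phi)\), recovering the slow decay that the naive expansion misses.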