945 results for Feynman diagram
Abstract:
Regarding the Pauli principle in quantum field theory and in many-body quantum mechanics, Feynman advocated that Pauli's exclusion principle can be completely ignored in intermediate states of perturbation theory. He observed that all virtual processes (of the same order) that violate the Pauli principle cancel out. Feynman accordingly introduced a prescription: disregard the Pauli principle in all intermediate processes. This ingenious trick is of crucial importance in the Feynman diagram technique. We show, however, an example in which Feynman's prescription fails. This casts doubt on the general validity of Feynman's prescription. [S1050-2947(99)04604-1]
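The cancellation mechanism invoked here can be illustrated schematically with standard second-order perturbation theory (a hedged sketch, not taken from the paper): when an intermediate state |n⟩ violates the exclusion principle, fermion antisymmetry forces a direct and an exchange contribution of the same order to enter with opposite signs,

```latex
% Schematic second-order cancellation for a Pauli-forbidden
% intermediate state |n> (illustrative notation only):
A^{(2)}_{\text{forbidden}}
  = \underbrace{\frac{\langle f|V|n\rangle\langle n|V|i\rangle}{E_i - E_n}}_{\text{direct}}
  \;+\;
    \underbrace{\left(-\,\frac{\langle f|V|n\rangle\langle n|V|i\rangle}{E_i - E_n}\right)}_{\text{exchange}}
  = 0 .
```

Since such forbidden contributions sum to zero, they may be freely included in intermediate sums; this is the content of the prescription whose general validity the paper questions.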
Abstract:
We present both analytical and numerical results on the position of partition function zeros in the complex magnetic-field plane of the q=2-state (Ising) and the q=3-state Potts models defined on φ³ Feynman diagrams (thin random graphs). Our analytic results are based on the ideas of destructive interference of coexisting phases and on low-temperature expansions. For the Ising model, an argument based on a symmetry of the saddle-point equations leads us to a nonperturbative proof that the Yang-Lee zeros are located on the unit circle, although no circle theorem is known in this case of random graphs. For the q=3-state Potts model, our perturbative results indicate that the Yang-Lee zeros lie outside the unit circle. Both analytic results are confirmed by finite-lattice numerical calculations.
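For context, in the Yang-Lee setting referred to above the partition function of an N-spin Ising model in a complex field h is a polynomial in the fugacity (standard background, not a result of the paper):

```latex
% Partition function as a polynomial in the fugacity z:
z = e^{-2\beta h}, \qquad
Z(z) \propto \sum_{n=0}^{N} p_n\, z^{n}, \qquad p_n \ge 0 .
% The Lee-Yang circle theorem (proved for ferromagnetic couplings
% on ordinary lattices) places all zeros of Z on |z| = 1, i.e. at
% purely imaginary h.
```

The point of the abstract is that no such theorem is available for thin random graphs, yet the saddle-point symmetry argument recovers the |z| = 1 locus nonperturbatively for q = 2.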
Abstract:
We investigate the effect of different forms of relativistic spin coupling of constituent quarks on the nucleon electromagnetic form factors. The four-dimensional integrations in the two-loop Feynman diagram are reduced to the null plane, such that the light-front wave function is introduced in the computation of the form factors. The neutron charge form factor is very sensitive to the choice of spin-coupling scheme, once its magnetic moment is fitted to the experimental value. The scalar coupling between two quarks is preferred by the neutron data, while a reasonable fit of the proton magnetic moment is maintained. (C) 2000 Elsevier Science B.V.
Abstract:
Different mathematical methods have been applied to obtain the analytic result for the massless triangle Feynman diagram, yielding a sum of four linearly independent (LI) hypergeometric functions of two variables, F₄. This result is not physically acceptable when it is embedded in higher loops, because all four hypergeometric functions in the triangle result have the same region of convergence, and further integration means going outside those regions of convergence. We could go outside those regions by using the well-known analytic continuation formulas obeyed by F₄, but there are at least two ways to do this. Which is the correct one? Whichever continuation one uses, it reduces the number of F₄ functions from four to three. This reduction in the number of hypergeometric functions can be understood by taking into account the fundamental physical constraint imposed by the conservation of the momenta flowing along the three legs of the diagram. With this, the number of overall LI functions that enter the most general solution must reduce accordingly. It remains to determine which set of three LI solutions needs to be taken. To determine the exact structure and content of the analytic solution for the three-point function that can be embedded in higher loops, we use the analogy that exists between Feynman diagrams and electric circuit networks, in which the electric current flowing in the network plays the role of the momentum flowing in the lines of a Feynman diagram. This analogy is employed to define exactly which three of the four hypergeometric functions are relevant to the analytic solution for the Feynman diagram. The analogy is built on the equivalence between electric resistance networks of the Y and Δ types in which a conserved current flows. The equivalence is established via the theorem of minimum energy dissipation within circuits having these structures.
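The Y-Δ equivalence invoked above is the standard resistor-network transformation. In a common labelling (chosen here for illustration), with R_a, R_b, R_c the Δ resistances and R_1, R_2, R_3 the Y resistances attached to the corresponding nodes (R_a opposite node 1, and so on):

```latex
% Delta -> Y transformation of a three-terminal resistor network:
R_1 = \frac{R_b R_c}{R_a + R_b + R_c}, \quad
R_2 = \frac{R_a R_c}{R_a + R_b + R_c}, \quad
R_3 = \frac{R_a R_b}{R_a + R_b + R_c}.
```

The equivalence holds for any conserved current fed into the three terminals, which is what mirrors momentum conservation along the three legs of the triangle diagram.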
Abstract:
This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electro-production and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of 2 spin-independent (scalar) and 4 spin-dependent (vector) generalized polarizabilities (GP). In analogy to classical electrodynamics the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams - 100 one-loop diagrams need to be calculated - several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulations of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present the results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
Abstract:
In this thesis we study, at the perturbative level, correlation functions of Wilson loops (and local operators) and their relations to localization, integrability, and other quantities of interest such as the cusp anomalous dimension and the Bremsstrahlung function. First we consider a general class of 1/8 BPS Wilson loops and chiral primaries in N=4 super Yang-Mills theory. We perform explicit two-loop computations, for a particular but still rather general configuration, that confirm the elegant results expected from the localization procedure. Notably, we find full consistency with the multi-matrix model averages obtained from 2D Yang-Mills theory on the sphere, when interacting diagrams do not cancel and contribute non-trivially to the final answer. We also discuss the near-BPS expansion of the generalized cusp anomalous dimension with L units of R-charge. Integrability provides an exact solution, obtained by solving a general TBA equation in the appropriate limit; we propose here an alternative method based on supersymmetric localization. The basic idea is to relate the computation to the vacuum expectation value of certain 1/8 BPS Wilson loops with local operator insertions along the contour. These observables also localize on a two-dimensional gauge theory on S^2, opening the possibility of exact calculations. As a test of our proposal, we reproduce the leading Lüscher correction at weak coupling to the generalized cusp anomalous dimension. This result is also checked against a genuine Feynman diagram approach in N=4 super Yang-Mills theory. Finally, we study the cusp anomalous dimension in N=6 ABJ(M) theory, identifying a scaling limit in which the ladder diagrams dominate. The resummation is encoded in a Bethe-Salpeter equation that is mapped to a Schrödinger problem, exactly solvable due to the surprising supersymmetry of the effective Hamiltonian. In the ABJ case the solution implies the diagonalization of the U(N) and U(M) building blocks, suggesting the existence of two independent cusp anomalous dimensions and an unexpected exponentiation structure for the related Wilson loops.
Abstract:
One of the main difficulties in studying quantum field theory in the perturbative regime is the calculation of D-dimensional Feynman integrals. In general, one introduces the so-called Feynman parameters and, associated with them, cumbersome parametric integrals. Solving these integrals beyond the one-loop level can be a difficult task. The negative-dimensional integration method (NDIM) is a technique whereby such a problem is dramatically reduced. We present the calculation of two-loop integrals in three different cases: scalar integrals with three different masses, massless integrals with arbitrary tensor rank, and integrals with N insertions of a two-loop diagram.
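The core identity behind NDIM can be sketched from the D-dimensional Gaussian integral (a standard illustration of the method's starting point, not the specific two-loop cases treated here). Expanding the left-hand side in powers of α and matching term by term against the right-hand side,

```latex
\int d^{D}q\; e^{-\alpha q^{2}} = \left(\frac{\pi}{\alpha}\right)^{D/2}
\;\Longrightarrow\;
\sum_{n=0}^{\infty} \frac{(-\alpha)^{n}}{n!} \int d^{D}q\,(q^{2})^{n}
 = \pi^{D/2}\,\alpha^{-D/2},
% which is consistent only if
\int d^{D}q\,(q^{2})^{n} = (-1)^{n}\, n!\,\pi^{D/2}\,\delta_{n+D/2,\,0}.
```

The integral of a polynomial integrand is thus nonvanishing only for n = -D/2, a condition that can be met only for negative D; one integrates polynomial (negative-power propagator) integrands in negative dimensions and analytically continues the result back to positive D.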
Abstract:
The negative-dimensional integration method (NDIM) is revealing itself to be a very useful technique for computing massless and/or massive Feynman integrals, covariant and noncovariant alike. Up until now, however, the illustrative calculations done using this method have mostly been covariant scalar integrals without numerator factors. We show here how integrals with tensorial structures can also be handled straightforwardly and easily. Moreover, in contrast to the absence of significant features in the usual approach, here NDIM also allows us to come across surprising, unsuspected bonuses. Toward this end, we present two alternative ways of working out the integrals and illustrate them by taking the simplest Feynman integrals in this category that emerge in the computation of a standard one-loop self-energy diagram. One of the novel and heretofore unsuspected bonuses is that there are degeneracies in the way one can express the final result for the Feynman integral in question.
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to higher accuracy directly translates into including higher-order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process at higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems.
Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates network space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and on area traffic state estimation utilizing the MFD concept have been reported. The key requirement for a well-defined MFD is the homogeneity of the area-wide traffic condition, which is not universally expected in the real world. For the practical application of the MFD concept, several researchers have identified the influencing factors for network homogeneity. However, they did not explicitly take drivers' behaviour under real-time information provision into account, which has a significant impact on the shape of the MFD. This research aims to demonstrate the impact of drivers' route choice behaviour on network performance by employing the MFD as a measurement. A microscopic simulation is chosen as the experimental platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, as well as by taking different route choice parameters, various scenarios are simulated in order to investigate how drivers' adaptation to traffic congestion influences the network performance and the MFD shape. This study confirmed the impact of information provision on the MFD shape and highlighted the significance of the route choice parameter setting as an influencing factor in MFD analysis.
Abstract:
The existence of the Macroscopic Fundamental Diagram (MFD), which relates space-mean density and flow, has been shown in urban networks under homogeneous traffic conditions. Since the MFD represents area-wide network traffic performance, studies on perimeter control strategies and on area traffic state estimation utilizing the MFD concept have been reported. One of the key requirements for a well-defined MFD is the homogeneity of the area-wide traffic condition, with links of similar properties, which is not universally expected in the real world. For the practical application of the MFD concept, several researchers have identified the influencing factors for network homogeneity. However, they did not explicitly take the impact of drivers' behaviour and information provision into account, which has a significant impact on simulation outputs. This research aims to demonstrate the effect of dynamic information provision on network performance by employing the MFD as a measurement. A microscopic simulation, AIMSUN, is chosen as the experimental platform. By changing the ratio of en-route informed drivers to pre-trip informed drivers, different scenarios are simulated in order to investigate how drivers' adaptation to traffic congestion influences the network performance with respect to the MFD shape as well as other indicators, such as total travel time. This study confirmed the impact of information provision on the MFD shape and demonstrated the usefulness of the MFD for measuring the benefit of dynamic information provision.