982 results for integrable field theories
Abstract:
In this dissertation we present a mathematically and conceptually rigorous quantization method for the interaction-free scalar field. We begin with some important aspects of the theory of distributions and a few points of Lorentzian geometry. The remainder of the work is divided into two parts: in the first, we study wave equations on globally hyperbolic Lorentzian manifolds and present the concept of fundamental solutions in the context of local equations. We then progressively construct fundamental solutions for the wave operator starting from the Riesz distribution. Once a solution of the wave equation is established in a neighbourhood of a point of the manifold, we proceed to construct a global solution by extending the Cauchy problem to the whole manifold, whereupon the fundamental solutions give way to Green operators through the introduction of a boundary condition. In the last part of the work, we present a minimum of the theory of categories and functors in order to use this formalism in the construction of a second-quantization functor between the category of globally hyperbolic Lorentzian manifolds and the category of nets of C*-algebras satisfying the Haag-Kastler axioms. Finally, we return to the particular case of the free quantum scalar field.
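As a pointer to the central objects in the abstract above, the defining properties of the retarded and advanced Green operators for a wave operator on a globally hyperbolic manifold can be sketched as follows (standard definitions from the literature this line of work builds on, written in our own notation rather than the dissertation's):

```latex
% For a normally hyperbolic operator P (e.g. \Box + m^2) on a globally
% hyperbolic Lorentzian manifold M, the retarded/advanced Green
% operators G^{\pm} act on compactly supported sections and satisfy
P \circ G^{\pm} = \mathrm{id}, \qquad G^{\pm} \circ P = \mathrm{id},
\qquad \operatorname{supp}\!\left(G^{\pm}u\right) \subseteq
  J^{\pm}\!\left(\operatorname{supp} u\right),
% where J^{\pm}(A) is the causal future/past of A.  The causal
% propagator G = G^{+} - G^{-} supplies the symplectic structure from
% which the C*-algebraic quantization is then built.
```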
Abstract:
In the Monte Carlo simulation of both lattice field theories and models of statistical mechanics, identities satisfied by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
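To make the variance-reduction idea concrete, here is a minimal, hypothetical sketch of a control variate on a toy Monte Carlo estimator; the observable and the control are invented for illustration and stand in for an exact identity such as a Schwinger-Dyson equation (this is not the paper's Ising-model code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: estimate E[f(X)] for X ~ N(0, 1).  The control variate
# g(X) = X^2 has an exactly known mean, E[g] = 1, playing the role of
# an identity whose exact value is known analytically.
x = rng.normal(size=100_000)
f = np.exp(-x**2)   # observable with unknown mean
g = x**2            # control variate with known mean 1.0

# Near-optimal coefficient c* = Cov(f, g) / Var(g), estimated from the
# same samples -- essentially free in CPU time.
c = np.cov(f, g)[0, 1] / np.var(g)

resid = f - c * (g - 1.0)   # same expectation as f, smaller variance
n = f.size
print(f"naive    mean: {f.mean():.5f} +/- {f.std(ddof=1)/np.sqrt(n):.5f}")
print(f"improved mean: {resid.mean():.5f} +/- {resid.std(ddof=1)/np.sqrt(n):.5f}")
```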
Abstract:
Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean field theories work poorly. This review will concentrate on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation which, although not the formalism chosen in the original proposal of the former, is much clearer. We will not attempt to be comprehensive; rather, our objective will be to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, in most cases they cannot be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems besides density functional theory (DFT) approaches. We proceed by discussing to what extent model Hamiltonians may be solved exactly, and we describe the Lanczos approach. We shall also describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, whose novelty is to use a mean field (Hartree–Fock) determinant as the seed of the Lanczos process (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when including long-range interactions, how crucial is it to include in the Hamiltonian the electronic charge that compensates the ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)? The performance of CI will be checked on small molecules. The electronic structure of azulene and fused azulene will be used to illustrate several aspects of the method. As regards graphene, several questions will be considered: (i) paramagnetic versus antiferromagnetic solutions, (ii) forbidden gap versus dot size, (iii) graphene nano-ribbons, and (iv) optical properties.
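For reference, the two model Hamiltonians under discussion take the following second-quantised forms (textbook expressions; the parametrisation of the long-range matrix elements V_ij varies between applications and is left unspecified here):

```latex
% Hubbard model: nearest-neighbour hopping t plus on-site repulsion U.
H_{\mathrm{Hub}} = -t \sum_{\langle i,j\rangle,\sigma}
    \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow}

% PPP model: adds long-range Coulomb interactions between site
% charges; the (n_i - 1) factors measure the electronic charge
% relative to the compensating ion charge -- the point at stake in
% question (i) of the abstract.
H_{\mathrm{PPP}} = H_{\mathrm{Hub}}
  + \tfrac{1}{2} \sum_{i \neq j} V_{ij}\,(n_i - 1)(n_j - 1)
```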
Abstract:
We apply the projected Gross-Pitaevskii equation (PGPE) formalism to the experimental problem of the shift in critical temperature T_c of a harmonically confined Bose gas, as reported in Gerbier et al., Phys. Rev. Lett. 92, 030405 (2004). The PGPE method includes critical fluctuations, and we find that the results differ from various mean-field theories and are in the best agreement with the experimental data. To unequivocally observe beyond-mean-field effects, however, the experimental precision must either improve by an order of magnitude, or more strongly interacting systems must be considered. This is the first application of a classical field method to make quantitative comparison with experiment.
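For orientation, the PGPE has the schematic form below, as commonly written in the classical-field literature (the cutoff convention shown is an assumption on our part, not taken from the paper):

```latex
% Gross-Pitaevskii evolution restricted by the projector P onto
% single-particle modes below an energy cutoff, so that only the
% highly occupied, classical region is evolved and critical
% fluctuations are retained:
i\hbar\,\frac{\partial \psi}{\partial t}
  = \mathcal{P}\!\left\{ \left( -\frac{\hbar^{2}}{2m}\nabla^{2}
      + V(\mathbf{r}) + g\,\lvert \psi \rvert^{2} \right) \psi \right\},
\qquad g = \frac{4\pi\hbar^{2} a}{m}.
```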
Abstract:
The conventional mechanism of fermion mass generation in the Standard Model involves Spontaneous Symmetry Breaking (SSB). In this thesis, we study an alternate mechanism for the generation of fermion masses that does not require SSB, in the context of lattice field theories. Being inherently strongly coupled, this mechanism requires a non-perturbative treatment, such as the lattice approach.
In order to explore this mechanism, we study a simple lattice model with a four-fermion interaction that has massless fermions at weak couplings and massive fermions at strong couplings, but without any spontaneous symmetry breaking. Prior work on this type of mass generation mechanism in 4D was done long ago using either mean-field theory or Monte Carlo calculations on small lattices. In this thesis, we have developed a new computational approach that enables us to perform large-scale quantum Monte Carlo calculations to study the phase structure of this theory. In 4D, our results confirm prior results, but differ in some quantitative details of the phase diagram. In contrast, in 3D, we discover a new second-order critical point using calculations on lattices of size up to $60^3$. Such large-scale calculations are unprecedented. The presence of the critical point implies the existence of an alternate mechanism of fermion mass generation without any SSB, which could be of interest in continuum quantum field theory.
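Schematically, models of this type pair a massless kinetic term with an on-site four-fermion coupling; the action below is only our illustrative guess at the general shape, not the precise (reduced staggered) lattice action used in the thesis:

```latex
% Fermions are massless at weak coupling U; at strong U the four-point
% coupling generates a mass gap without any fermion-bilinear
% condensate, i.e. without spontaneous symmetry breaking.
S = \sum_{x,y} \bar{\psi}(x)\, D(x,y)\, \psi(y)
  \;-\; U \sum_{x} \psi_{1}(x)\, \psi_{2}(x)\, \psi_{3}(x)\, \psi_{4}(x)
```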
Abstract:
We classify the N = 4 supersymmetric AdS(5) backgrounds that arise as solutions of five-dimensional N = 4 gauged supergravity. We express our results in terms of the allowed embedding tensor components and identify the structure of the associated gauge groups. We show that the moduli space of these AdS vacua is of the form SU(1, m)/ (U(1) x SU(m)) and discuss our results regarding holographically dual N = 2 SCFTs and their conformal manifolds.
Abstract:
The existence of genuinely non-geometric backgrounds, i.e. ones without geometric dual, is an important question in string theory. In this paper we examine this question from a sigma model perspective. First we construct a particular class of Courant algebroids as protobialgebroids with all types of geometric and non-geometric fluxes. For such structures we apply the mathematical result that any Courant algebroid gives rise to a 3D topological sigma model of the AKSZ type and we discuss the corresponding 2D field theories. It is found that these models are always geometric, even when both 2-form and 2-vector fields are neither vanishing nor inverse of one another. Taking a further step, we suggest an extended class of 3D sigma models, whose world volume is embedded in phase space, which allow for genuinely non-geometric backgrounds. Adopting the doubled formalism such models can be related to double field theory, albeit from a world sheet perspective.
Abstract:
We present the measurement of a rare Standard Model process, pp → W±γγ, in the leptonic decays of the W±. The measurement is made with 19.4 fb^-1 of 8 TeV data collected in 2012 by the CMS experiment. The measured cross section is consistent with the Standard Model prediction and has a significance of 2.9σ. Limits are placed on dimension-8 Effective Field Theories of anomalous Quartic Gauge Couplings. The analysis is particularly sensitive to the fT,0 coupling, and a 95% confidence limit is placed at −35.9 < fT,0/Λ^4 < 36.7 TeV^−4. Studies of the pp → Zγγ process are also presented. The Zγγ signal is in close agreement with the Standard Model and has a significance of 5.9σ.
Abstract:
This work was carried out with the aim of providing a complete view of leadership theories, conceiving of leadership as a process, in order to examine the various ways it is applied in contemporary organizations. The topic is approached from the organizational perspective, an equally complex world, without ignoring its importance in other spheres such as education, politics, or the governance of the state. Its focus relates to the academic programme of which it is the culmination, and it is framed within the constitutional perspective of the Colombian Political Charter, which recognizes the capital importance of economic activity and private initiative in the creation of enterprises. The various visions of leadership have been applied in different ways in contemporary organizations and have produced diverse results. Today it is not possible to imagine an organization that has not defined its form of leadership; consequently, a multitude of theories converge in the business field, and it cannot be claimed that any single one of them ensures adequate management and the fulfilment of an organization's mission objectives. For this reason, leadership has come to be conceived as a complex function, in a world where organizations themselves are characterized not only by the complexity of their actions and their structure, but also because this characteristic belongs to the world of globalization as well. Organizations, conceived metaphorically as machines, manage to reshape their structures as they interact with others in the globalized world. Adapting to changing circumstances makes organizations conglomerates in permanent dynamism and evolution. In this context it can be said that leadership is also complex, and that transformational leadership is the approach that comes closest to the sense of this complexity.
Abstract:
We analyze the integrability properties of models defined on the symmetric space SU(2)/U(1) in 3+1 dimensions, using a recently proposed approach for integrable theories in any dimension. We point out the key ingredients for a theory to possess an infinite number of local conservation laws, and discuss classes of models with this property. We propose a (3+1)-dimensional, relativistically invariant field theory possessing a toroidal soliton solution carrying one unit of topological charge given by the Hopf map. The construction of the action is guided by the requirement that the energy of static configurations should be scale invariant. The solution is constructed exactly. The model possesses an infinite number of local conserved currents. The method is also applied to the Skyrme-Faddeev model, and integrable submodels are proposed. (C) 1999 Elsevier B.V. All rights reserved.
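The scale-invariance requirement on the static energy can be made concrete with a Derrick-type scaling argument (a standard observation, sketched here in our notation; the fractional power is one way models of this kind satisfy it):

```latex
% Under a spatial rescaling x -> \lambda x in d = 3, a static-energy
% term containing n powers of first derivatives scales as
E_{n}[\phi_{\lambda}] = \lambda^{\,3-n}\, E_{n}[\phi],
% so scale invariance requires an effective n = 3.  For example, a
% quartic combination H_{\mu\nu}H^{\mu\nu} (four derivatives) raised
% to the power 3/4 gives a scale-invariant static energy, evading
% Derrick's theorem and allowing stable Hopf solitons.
```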
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
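To make the selection step concrete, here is a small, hypothetical sketch of the noise-free greedy EC2 criterion on deterministic hypothesis predictions, reconstructed from the published description of EC2 rather than taken from the thesis; all names and the data layout are ours:

```python
import numpy as np

def expected_edge_cut(prior, classes, predictions, test):
    """Expected prior-weight of hypothesis-pair 'edges' cut by `test`.

    prior:       (H,) prior probability of each hypothesis
    classes:     (H,) theory/class label of each hypothesis
    predictions: (H, T) outcome each hypothesis predicts for each test
    """
    def cross_class_weight(w):
        # Total weight of edges linking hypotheses in *different* classes.
        class_sums = np.array([w[classes == c].sum()
                               for c in np.unique(classes)])
        return (w.sum() ** 2 - (class_sums ** 2).sum()) / 2.0

    before = cross_class_weight(prior)
    score = 0.0
    for outcome in np.unique(predictions[:, test]):
        consistent = predictions[:, test] == outcome
        p_outcome = prior[consistent].sum()
        # Hypotheses inconsistent with the outcome are eliminated; the
        # edges they touched are 'cut'.
        score += p_outcome * (before - cross_class_weight(prior * consistent))
    return score

def choose_next_test(prior, classes, predictions):
    """Greedy EC2 step: pick the test cutting the most expected weight."""
    scores = [expected_edge_cut(prior, classes, predictions, t)
              for t in range(predictions.shape[1])]
    return int(np.argmax(scores))
```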
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
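For reference, the discount functions being compared take the following standard forms (textbook parametrisations; the thesis's exact conventions, e.g. its (α, β) notation for the quasi-hyperbolic model, may differ):

```latex
D_{\mathrm{exponential}}(t) = \delta^{t}
\qquad
D_{\mathrm{hyperbolic}}(t) = \frac{1}{1 + kt}
\qquad
D_{\mathrm{quasi\text{-}hyperbolic}}(t) =
  \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases}
\qquad
D_{\mathrm{generalized\text{-}hyperbolic}}(t) = (1 + \alpha t)^{-\beta/\alpha}
```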
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute would increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
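The loss-averse utility behind these predictions can be written in the standard reference-dependent form (a textbook prospect-theory value function shown for illustration; the exact specification estimated in the thesis may differ):

```latex
% Value of an outcome x relative to a reference point r; \lambda > 1
% makes losses loom larger than equal-sized gains, so a price that
% returns to its pre-discount level is coded as a loss.
v(x \mid r) =
\begin{cases}
  (x - r)^{\alpha} & x \ge r \\
  -\lambda\,(r - x)^{\beta} & x < r
\end{cases}
\qquad \lambda > 1.
```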
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.