6 results for Supersymmetric Effective Theories in CaltechTHESIS
Abstract:
Since the discovery of D-branes as non-perturbative, dynamical objects in string theory, various configurations of branes in type IIA/B string theory and M-theory have been considered in order to study their low-energy dynamics, which is described by supersymmetric quantum field theories.
One example of such a construction is based on the description of Seiberg-Witten curves of four-dimensional N = 2 supersymmetric gauge theories as branes in type IIA string theory and M-theory. This enables us to study the gauge theories in strongly coupled regimes. Spectral networks are another tool for utilizing branes to study non-perturbative regimes of two- and four-dimensional supersymmetric theories. Using the spectral networks of a Seiberg-Witten theory, we can find its BPS spectrum, which is protected from quantum corrections by supersymmetry, as well as the BPS spectrum of a related two-dimensional N = (2,2) theory whose (twisted) superpotential is determined by the Seiberg-Witten curve. When the perturbative description of such a theory is not known, the spectrum obtained via spectral networks is a useful piece of information. In this thesis we illustrate these ideas with examples of the use of Seiberg-Witten curves and spectral networks to understand various two- and four-dimensional supersymmetric theories.
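As a point of reference (the canonical example, not a construction specific to this thesis), the Seiberg-Witten curve of pure SU(2) gauge theory, its Seiberg-Witten differential, and the BPS central charge take the following form, up to conventions:

```latex
% Pure SU(2) Seiberg-Witten theory (conventions vary between references);
% \Lambda is the dynamical scale and u the Coulomb-branch parameter.
\Lambda^2 \left( z + \frac{1}{z} \right) = x^2 - u ,
\qquad
\lambda_{\mathrm{SW}} = x \, \frac{dz}{z} ,
\qquad
Z_\gamma = \oint_\gamma \lambda_{\mathrm{SW}} ,
\qquad
M_{\mathrm{BPS}} = |Z_\gamma| .
```

The BPS spectrum consists of those charges γ for which a state saturating the bound M ≥ |Z_γ| exists; spectral networks determine which γ these are at each point of the Coulomb branch.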
First, we examine how the geometry of a Seiberg-Witten curve serves as a useful tool for identifying various limits of the parameters of the Seiberg-Witten theory, including Argyres-Seiberg duality and Argyres-Douglas fixed points. Next, we consider the low-energy limit of a two-dimensional N = (2,2) supersymmetric theory from an M-theory brane configuration whose (twisted) superpotential is determined by the geometry of the branes. We show that, when the two-dimensional theory flows to its infrared fixed point, particular cases realize Kazama-Suzuki coset models. We also study the BPS spectrum of an Argyres-Douglas-type superconformal field theory on the Coulomb branch by using its spectral networks. We provide strong evidence for the equivalence of superconformal field theories obtained from different string-theoretic constructions by comparing their BPS spectra.
Abstract:
The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N = 1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation, but only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N > 1 theories have been written using constrained superfields, and no solutions of the nonlinear constraint equations were known.
In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N = 1 Yang-Mills theory in four dimensions.
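For orientation, the familiar N = 1 case (in standard textbook conventions, which the thesis may modify) illustrates what "constrained" and "unconstrained" mean here: the gauge-covariant spinor derivatives obey nonlinear constraints whose standard solution introduces an unconstrained prepotential V.

```latex
% N = 1 super-Yang-Mills: constraints on the gauge-covariant spinor
% derivatives and their standard solution (chiral representation) in terms
% of the unconstrained prepotential V; signs and factors are convention-dependent.
\{\nabla_\alpha, \nabla_\beta\} = 0 , \qquad
\{\bar\nabla_{\dot\alpha}, \bar\nabla_{\dot\beta}\} = 0 , \qquad
\{\nabla_\alpha, \bar\nabla_{\dot\beta}\} = -2i \, \nabla_{\alpha\dot\beta} ,
\\[4pt]
\nabla_\alpha = e^{-V} D_\alpha \, e^{V} , \qquad
\bar\nabla_{\dot\alpha} = \bar D_{\dot\alpha} , \qquad
W_\alpha = -\tfrac{1}{4} \, \bar D^2 \!\left( e^{-V} D_\alpha \, e^{V} \right) .
```

Expanding e^V in powers of V is what makes this unconstrained form usable in perturbative quantum calculations.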
N = 2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N = 2 extended superspace. We construct a superfield calculus for six-dimensional superspace and show that the known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N = 2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.
Abstract:
This thesis describes simple extensions of the standard model with new sources of baryon number violation but no proton decay. The motivation for constructing such theories comes from the failure of the standard model to explain the generation of the baryon asymmetry of the universe, and from the absence of experimental evidence for proton decay. However, the lack of any direct evidence for baryon number violation in general puts strong bounds on the naturalness of some of those models and favors theories with suppressed baryon number violation below the TeV scale. The initial part of the thesis concentrates on models containing new scalars responsible for baryon number breaking. A model with new color-sextet scalars is analyzed in detail. Apart from generating the cosmological baryon number, it yields nontrivial predictions for neutron-antineutron oscillations, the electric dipole moment of the neutron, and neutral meson mixing. The second model discussed in the thesis contains a new scalar leptoquark. Although this model predicts mainly lepton flavor violation and a nonzero electric dipole moment of the electron, it includes, in its original form, baryon-number-violating nonrenormalizable dimension-five operators that trigger proton decay. Imposing an appropriate discrete symmetry forbids such operators. Finally, a supersymmetric model with gauged baryon and lepton numbers is proposed. It provides a natural explanation for proton stability and predicts lepton number violating processes below the supersymmetry breaking scale, which can be tested at the Large Hadron Collider. The dark matter candidate in this model carries baryon number and can be searched for in direct detection experiments as well. The thesis concludes by constructing and briefly discussing a minimal extension of the standard model with gauged baryon, lepton, and flavor symmetries.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and higher model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments, in which subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive testing, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next test to run. BROAD selects tests using the Equivalence Class Edge Cutting (EC2) criterion. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees relative to the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
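A minimal sketch of one greedy EC2 step may clarify the criterion (illustrative only: the array layout, the noiseless outcome model, and the function names are our assumptions, not the thesis's implementation, which also handles noisy responses and uses lazy evaluation):

```python
import numpy as np

def ec2_score(posterior, theory_of, outcome_table, test):
    """Expected weight of cross-theory "edges" cut by running `test`.

    posterior:     (H,) current belief over hypotheses (parameterized theories)
    theory_of:     (H,) integer theory label of each hypothesis
    outcome_table: (T, H) outcome each hypothesis predicts for each test
                   (noiseless model, for simplicity)
    """
    # EC2 places an edge between every pair of hypotheses belonging to
    # different theories, weighted by the product of their beliefs.
    cross = theory_of[:, None] != theory_of[None, :]
    edge_w = np.outer(posterior, posterior) * cross
    total_w = edge_w.sum()

    expected_cut = 0.0
    for o in np.unique(outcome_table[test]):
        consistent = outcome_table[test] == o        # hypotheses surviving outcome o
        p_o = posterior[consistent].sum()            # predictive probability of o
        surviving = edge_w[np.ix_(consistent, consistent)].sum()
        expected_cut += p_o * (total_w - surviving)  # edge weight cut by observing o
    return expected_cut

def next_test(posterior, theory_of, outcome_table):
    """Greedy EC2 choice: the test whose expected cut edge weight is maximal."""
    scores = [ec2_score(posterior, theory_of, outcome_table, t)
              for t in range(outcome_table.shape[0])]
    return int(np.argmax(scores))
```

Adaptive submodularity is what makes this one-step greedy rule provably competitive with the Bayes-optimal sequence, and it also licenses the lazy re-evaluation of scores behind the accelerated version.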
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, two "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and for hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
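For reference, common parameterizations of the competing discount functions D(t) are shown below (standard textbook forms; the thesis's (α, β) notation may map onto these differently):

```latex
% Exponential, hyperbolic, quasi-hyperbolic ("present bias"),
% and generalized-hyperbolic discount functions:
D_{\mathrm{exp}}(t) = \delta^{t} , \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t} , \qquad
D_{\mathrm{qh}}(t) =
  \begin{cases} 1 , & t = 0 \\ \beta \, \delta^{t} , & t > 0 \end{cases} , \qquad
D_{\mathrm{gh}}(t) = (1 + \alpha t)^{-\beta/\alpha} .
```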
In all of these models, the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity; even more importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction by fitting a discrete choice model with a loss-averse utility function to data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
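A minimal sketch of such a loss-averse discrete choice (logit) demand model, under assumptions of our own (the piecewise gain-loss term around a reference price and all parameter names are illustrative, not the thesis's exact specification):

```python
import numpy as np

def choice_probs(prices, ref_prices, alpha, lam, eta):
    """Multinomial-logit choice probabilities with gain-loss utility.

    prices, ref_prices: (J,) current and reference prices of J items
    alpha: price sensitivity;  lam > 1: loss-aversion coefficient;
    eta:   weight on the gain-loss term
    """
    gap = ref_prices - prices                        # > 0: a discount ("gain")
    # Losses (price above reference) loom larger than gains by the factor lam.
    gain_loss = np.where(gap >= 0, eta * gap, eta * lam * gap)
    v = -alpha * prices + gain_loss                  # deterministic utility
    ev = np.exp(v - v.max())                         # numerically stable softmax
    return ev / ev.sum()
```

In a specification of this shape, removing a discount pushes the item's price above its reference point, and the loss term shifts demand toward close substitutes, which is the pattern described above.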
In future work, BROAD can be applied to test other behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data and encourage combined lab-field experiments.
Abstract:
Topological phases of matter have been a major part of condensed matter physics research since the discovery of the quantum Hall effect in the 1980s. Recently, much of this research has focused on systems of free fermions, such as the integer quantum Hall effect, the quantum spin Hall effect, and topological insulators. Though these free-fermion systems can play host to a variety of interesting phenomena, the physics of interacting topological phases is even richer. Unfortunately, there is a shortage of theoretical tools that can be used to approach interacting problems. In this thesis I will discuss progress in using two different numerical techniques to study topological phases.
Recently, much research on topological phases has focused on phases made up of bosons. Unlike fermions, free bosons form a condensate, so interactions are vital if bosons are to realize a topological phase. Since these phases are difficult to study, much of our understanding comes from exactly solvable models, such as Kitaev's toric code and the Levin-Wen and Walker-Wang models. However, we may want to study systems for which such exactly solvable models are not available. In this thesis I present a series of models which are not exactly solvable but which can be studied in sign-free Monte Carlo simulations. The models work by binding charges to point topological defects. They can be used to realize interacting bosonic versions of the quantum Hall effect in 2D and of the topological insulator in 3D. Effective field theories of "integer" (non-fractionalized) versions of these phases were available in the literature, but our models also allow for the construction of fractional phases. We can measure a number of properties of the bulk and surface of these phases.
Few interacting topological phases have been realized experimentally, but there is one very important exception: the fractional quantum Hall effect (FQHE). Though the fractional quantum Hall effect was discovered over 30 years ago, it can still produce novel phenomena. Of much recent interest is the existence of non-Abelian anyons in FQHE systems. Though it is possible to construct wave functions that realize such particles, whether these wave functions are the ground state is a difficult quantitative question that must be answered numerically. In this thesis I describe progress in using a density-matrix renormalization group algorithm to study a bilayer system thought to host non-Abelian anyons. We find phase diagrams in terms of experimentally relevant parameters, and also find evidence for a non-Abelian phase known as the "interlayer Pfaffian".
Abstract:
Theoretical and experimental studies were conducted to investigate wave-induced oscillations in an arbitrary-shaped harbor of constant depth connected to the open sea.
A theory termed the "arbitrary shaped harbor" theory is developed. The solution of the Helmholtz equation, $\nabla^2 f + k^2 f = 0$, is formulated as an integral equation; an approximate method is employed to solve the integral equation by converting it to a matrix equation. The final solution is obtained by equating, at the harbor entrance, the wave amplitude and its normal derivative obtained from the solutions for the regions outside and inside the harbor.
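The integral-equation step can be sketched with the standard boundary-integral identity for the two-dimensional Helmholtz equation (given here for orientation; normalizations and sign conventions vary, and the thesis's treatment of the entrance region is more involved):

```latex
% Two-dimensional free-space Green's function and boundary-integral
% representation; H_0^{(1)} is the Hankel function of the first kind.
G(\mathbf{x};\mathbf{x}_0) = \frac{i}{4} \, H_0^{(1)}\!\big(k \, |\mathbf{x}-\mathbf{x}_0|\big) ,
\qquad
\tfrac{1}{2} f(\mathbf{x}_0) = \oint_{\partial D}
\left[ f \, \frac{\partial G}{\partial n} - G \, \frac{\partial f}{\partial n} \right] ds ,
\quad \mathbf{x}_0 \in \partial D .
```

Discretizing the boundary into N segments on which f is taken constant converts this identity into an N × N matrix equation, which is the approximate method referred to above.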
Two special theories called the circular harbor theory and the rectangular harbor theory are also developed. The coordinates inside a circular and a rectangular harbor are separable; therefore, the solution for the region inside these harbors is obtained by the method of separation of variables. For the solution in the open-sea region, the same method is used as that employed for the arbitrary shaped harbor theory. The final solution is also obtained by a matching procedure similar to that used for the arbitrary shaped harbor theory. These two special theories provide a useful analytical check on the arbitrary shaped harbor theory.
Experiments were conducted to verify the theories in a wave basin 15 ft wide by 31 ft long with an effective system of wave energy dissipators mounted along the boundary to simulate the open-sea condition.
Four harbors were investigated theoretically and experimentally: circular harbors with a 10° opening and a 60° opening, a rectangular harbor, and a model of the East and West Basins of Long Beach Harbor located in Long Beach, California.
Theoretical solutions for these four harbors were obtained using the arbitrary shaped harbor theory; for the circular harbors and the rectangular harbor, solutions were also obtained using the two special theories. In each case, the theories agree well with the experimental data.
It is found that: (1) the resonant frequencies for a specific harbor are predicted correctly by the theory, although the amplification factors at resonance are somewhat larger than those found experimentally; (2) for the circular harbors, as the width of the harbor entrance increases, the amplification at resonance decreases but the wave-number bandwidth at resonance increases; (3) each peak in the curve of entrance velocity vs. incident wave period corresponds to a distinct mode of resonant oscillation inside the harbor, so the velocity at the harbor entrance appears to be a good indicator of resonance in harbors of complicated shape; (4) the present theory can be applied with confidence to prototype harbors with relatively uniform depth and reflective interior boundaries.