Abstract:
Chapter 1 introduces the basic tools and machinery used in this thesis. Most of the definitions used throughout are given there, together with a basic survey of topics in graph theory and design theory pertinent to the problems studied. In Chapter 2, we are concerned with the study of fixed block configuration group divisible designs, GDD(n, m, k; λ1, λ2). We study those GDDs in which each block has configuration (s, t), that is, GDDs in which each block has exactly s points from one of the two groups and t points from the other. Chapter 2 begins with an overview of previous results and constructions for small group sizes and block sizes 3, 4, and 5. It is largely devoted to presenting constructions and results about GDDs with two groups and block size 6. We show that the necessary conditions are sufficient for the existence of GDD(n, 2, 6; λ1, λ2)s with fixed block configuration (3, 3). For configuration (1, 5), we give minimal or near-minimal index constructions for all group sizes n ≥ 5 except n = 10, 15, 160, or 190. For configuration (2, 4), we provide constructions for several families of GDD(n, 2, 6; λ1, λ2)s. Chapter 3 addresses the characterization of (3, r)-regular graphs. We begin by providing previous results on the well-studied class of (2, r)-regular graphs and some results on the structure of large (t, r)-regular graphs. In Chapter 3, we completely characterize all (3, 1)-regular and (3, 2)-regular graphs, as well as sharpen existing bounds on the order of large (3, r)-regular graphs of a certain form for r ≥ 3. Finally, the appendix gives computational data resulting from Sage and C programs used to generate (3, 3)-regular graphs on fewer than 10 vertices.
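To make the block-configuration condition concrete, here is a minimal sketch (not from the thesis; the function name and interface are hypothetical) that checks whether a candidate design on two groups is a GDD with constant indices λ1, λ2 and fixed block configuration (s, t):

```python
# Hypothetical checker for a two-group GDD with fixed block configuration (s, t):
# every pair of points from the same group must lie in exactly lambda1 blocks,
# every cross-group pair in exactly lambda2 blocks, and every block must meet
# one group in s points and the other in t points.
from collections import Counter
from itertools import combinations

def check_gdd(group1, group2, blocks, s, t):
    """Return (lambda1, lambda2) if blocks form such a GDD, else None."""
    g1, g2 = set(group1), set(group2)
    pair_counts = Counter()
    for block in blocks:
        b = set(block)
        a, c = len(b & g1), len(b & g2)
        if {a, c} != {s, t}:                     # configuration (s, t) violated
            return None
        for pair in combinations(sorted(b), 2):
            pair_counts[pair] += 1
    within = {pair_counts[p] for grp in (g1, g2)
              for p in combinations(sorted(grp), 2)}
    across = {pair_counts[tuple(sorted((x, y)))] for x in g1 for y in g2}
    return (within.pop(), across.pop()) if len(within) == len(across) == 1 else None

# Toy usage: all cross pairs of two 3-point groups form a GDD(3, 2, 2; 0, 1)
# with configuration (1, 1): each cross pair occurs once, same-group pairs never.
g1, g2 = [1, 2, 3], [4, 5, 6]
print(check_gdd(g1, g2, [(x, y) for x in g1 for y in g2], 1, 1))  # -> (0, 1)
```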
Abstract:
Stochastic models for three-dimensional particles have many applications in the applied sciences. Lévy-based particle models are a flexible approach to particle modelling: the structure of the random particles is given by a kernel smoothing of a Lévy basis. The models are easy to simulate, but statistical inference procedures for them have not yet received much attention in the literature. The kernel is not always identifiable, and we suggest one approach to remedy this problem. We propose a method to draw inference about the kernel from data of the kind often used in local stereology, and we study the performance of our approach in a simulation study.
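As a concrete illustration of the kernel-smoothing construction, the following toy sketch simulates a planar star-shaped particle whose radial function is a kernel smoothing of a Lévy basis on the circle; the gamma basis and Gaussian-type kernel are illustrative assumptions, not the choices studied in the paper.

```python
# Toy 2D Levy-based particle: the radial function R(u) of a star-shaped
# particle is a kernel smoothing of a Levy basis Z on the circle,
#   R(u) = sum_v k(u, v) dZ(v),
# discretised here over m directions. Basis and kernel choices are illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)
m = 360                                          # angular grid on the circle
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
dZ = rng.gamma(shape=0.1, scale=1.0, size=m)     # independent gamma increments

def kernel(u, v, kappa=8.0):
    """Smoothing weight as a function of the wrapped angular distance u - v."""
    d = np.angle(np.exp(1j * (u - v)))           # distance wrapped to (-pi, pi]
    return np.exp(-0.5 * (kappa * d) ** 2)

R = np.array([np.sum(kernel(u, theta) * dZ) for u in theta])
x, y = R * np.cos(theta), R * np.sin(theta)      # boundary of one realisation
```

Because R(u) sums nonnegative increments with positive weights, the simulated boundary is automatically positive; as the abstract notes, however, the kernel is not always identifiable from such realisations.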
Abstract:
By using observables that depend only on charged particles (tracks), one can efficiently suppress pileup contamination at the LHC. Such measurements are not infrared safe in perturbation theory, so any calculation of track-based observables must account for hadronization effects. We develop a formalism to perform these calculations in QCD by matching partonic cross sections onto new nonperturbative objects called track functions, which absorb infrared divergences. The track function T_i(x) describes the energy fraction x of a hard parton i that is converted into charged hadrons. We give a field-theoretic definition of the track function and derive its renormalization group evolution, which is in excellent agreement with the Pythia parton shower. We then perform a next-to-leading-order calculation of the total energy fraction of charged particles in e+e− → hadrons. To demonstrate the implications of our framework for the LHC, we match the Pythia parton shower onto a set of track functions to describe the track mass distribution in Higgs plus one jet events. We also show how to reduce smearing due to hadronization fluctuations by measuring dimensionless track-based ratios.
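Schematically, and with normalisation details suppressed, the matching described above takes the following form at leading order for e+e− → q q̄; this is a hedged sketch consistent with the abstract, not quoted from the paper.

```latex
% Track functions T_i(x) are normalised distributions for the fraction x of a
% parton's energy carried by charged hadrons; at LO the charged-energy
% fraction \bar{x} in e+e- -> q qbar is a convolution of two track functions.
\int_0^1 \mathrm{d}x \; T_i(x) = 1 , \qquad
\frac{1}{\sigma}\,\frac{\mathrm{d}\sigma}{\mathrm{d}\bar{x}}\bigg|_{\rm LO}
  = \int_0^1 \mathrm{d}x_1 \int_0^1 \mathrm{d}x_2 \;
    T_q(x_1)\, T_{\bar q}(x_2)\;
    \delta\Big(\bar{x} - \tfrac{1}{2}(x_1 + x_2)\Big) .
```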
Abstract:
We revisit the SU(3)-invariant sector of N = 8 supergravity with dyonic SO(8) gaugings. Using the embedding tensor formalism, we obtain analytic expressions for the scalar potential, superpotential(s), and fermion mass terms as functions of the electromagnetic phase ω and the scalars in the theory. Equipped with these results, we explore non-supersymmetric AdS critical points at ω ≠ 0 for which perturbative stability could not be analysed before. The ω-dependent superpotential is then used to derive first-order flow equations and obtain new BPS domain-wall solutions at ω ≠ 0. We numerically study steepest-descent paths motivated by the (conjectured) RG flows.
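For orientation, first-order flow equations of this kind have the generic fake-supergravity form sketched below, written in one common 4D normalisation (kinetic term −½ g_ij ∂φ^i ∂φ^j), which may differ from the paper's own conventions.

```latex
% Flat domain wall ds^2 = e^{2A(r)} \eta_{\mu\nu} dx^\mu dx^\nu + dr^2
% driven by a superpotential W(\phi); normalisation is schematic.
\frac{\mathrm{d}\phi^i}{\mathrm{d}r} = \pm\, 2\, g^{ij}\, \partial_j W \,,
\qquad
\frac{\mathrm{d}A}{\mathrm{d}r} = \mp\, W \,,
\qquad
V = 2\, g^{ij}\, \partial_i W \, \partial_j W \;-\; 3\, W^2 \,.
```

At critical points of W the gradient term drops out and V = −3W², the standard AdS₄ relation; solutions of such first-order equations give BPS domain walls of the type the abstract refers to.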
Abstract:
Who in the European Union drives the process of pursuing bilateral trade negotiations? In contrast to societal explanations, this article develops a novel argument as to how the European Commission manages the process and uses its position in strategic ways to pursue its interests. Rooted in principal–agent theory, the article discusses agent preferences and theorizes the conditions under which the agent sets specific focal points and interacts strategically with principals and third parties. The argument is discussed with case-study evidence drawn from the first trade agreement concluded and ratified since the Commission announced its new strategy in 2006: the EU–South Korea trade agreement.
Abstract:
Logical theories for representing knowledge are often plagued by the so-called Logical Omniscience Problem. The problem stems from the clash between the desire to model rational agents, which should be capable of simple logical inferences, and the fact that any logical inference, however complex, almost inevitably consists of inference steps that are simple enough. This contradiction points to the fruitlessness of trying to solve the Logical Omniscience Problem qualitatively if the rationality of agents is to be maintained. We provide a quantitative solution to the problem that is compatible with two important facets of a reasoning agent: rationality and resource-boundedness. More precisely, we provide a test for logical omniscience in a given formal theory of knowledge. The quantitative measures we use are inspired by complexity theory. We illustrate our framework with a number of examples, ranging from the traditional implicit representation of knowledge in modal logic to the language of justification logic, which is capable of spelling out the internal inference process. We use these examples to divide representations of knowledge into logically omniscient and not logically omniscient, thus trying to determine how much information about the reasoning process needs to be present in a theory to avoid logical omniscience.