30 results for KRONECKER SUM in CaltechTHESIS
Abstract:
We perform a measurement of the direct CP asymmetry A_CP in b → sγ decays, and of the difference between A_CP for neutral and charged B mesons, ΔA_{X_sγ}, using 429 inverse femtobarns of data recorded at the Υ(4S) resonance with the BABAR detector. B mesons are reconstructed in 16 exclusive final states. Particle identification uses an algorithm based on Error Correcting Output Codes with an exhaustive matrix. Background rejection and best-candidate selection are performed with two decision-tree-based classifiers. We find A_CP = (1.73 ± 1.93 ± 1.02)% and ΔA_{X_sγ} = (4.97 ± 3.90 ± 1.45)%, where the uncertainties are statistical and systematic, respectively. Based on the measured value of ΔA_{X_sγ}, we determine a 90% confidence interval for Im(C_8g/C_7γ), where C_7γ and C_8g are Wilson coefficients for New Physics amplitudes: −1.64 < Im(C_8g/C_7γ) < 6.52.
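For reference, the direct CP asymmetry and the neutral-charged difference quoted above are conventionally defined as follows (a standard definition; the exact sign and ordering conventions are fixed in the thesis itself):

```latex
A_{CP} \;=\; \frac{\Gamma(\bar{B}\to X_s\gamma)\;-\;\Gamma(B\to X_{\bar{s}}\gamma)}
                  {\Gamma(\bar{B}\to X_s\gamma)\;+\;\Gamma(B\to X_{\bar{s}}\gamma)},
\qquad
\Delta A_{X_s\gamma} \;=\; A_{CP}(B^0/\bar{B}^0)\;-\;A_{CP}(B^\pm).
```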
Abstract:
The spin-dependent cross sections, σ^T_{1/2} and σ^T_{3/2}, and asymmetries, A∥ and A⊥, for 3He have been measured at Jefferson Lab's Hall A facility. The inclusive scattering process 3He(e,e′)X was measured for initial beam energies ranging from 0.86 to 5.1 GeV, at a scattering angle of 15.5°. The data include measurements from the quasielastic peak, the resonance region, and the deep inelastic regime. An approximation for the extended Gerasimov-Drell-Hearn integral is presented at four-momentum transfer Q^2 between 0.2 and 1.0 GeV^2.
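For context, at Q^2 = 0 the extended integral reduces to the real-photon Gerasimov-Drell-Hearn sum rule (a standard result; the finite-Q^2 definition follows the conventions used in the thesis):

```latex
\int_{\nu_{\mathrm{thr}}}^{\infty}
\frac{\sigma_{3/2}(\nu)-\sigma_{1/2}(\nu)}{\nu}\,d\nu
\;=\;
\frac{2\pi^{2}\alpha\,\kappa^{2}}{M^{2}},
```

where ν is the photon energy, κ the anomalous magnetic moment, and M the target mass; at finite Q^2 the photoabsorption cross sections are replaced by their virtual-photon counterparts σ_{1/2}(ν, Q^2) and σ_{3/2}(ν, Q^2).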
Also presented are results on the performance of the polarized 3He target. Polarization of 3He was achieved by the process of spin-exchange collisions with optically pumped rubidium vapor. The 3He polarization was monitored using the NMR technique of adiabatic fast passage (AFP). The average target polarization was approximately 35% and was determined to have a systematic uncertainty of roughly ±4% relative.
Abstract:
The electron diffraction investigation of the following compounds has been carried out: sulfur, sulfur nitride, realgar, arsenic trisulfide, spiropentane, dimethyltrisulfide, cis and trans lewisite, methylal, and ethylene glycol.
The crystal structures of the following salts have been determined by x-ray diffraction: silver molybdate and hydrazinium dichloride.
Suggested revisions of the covalent radii for B, Si, P, Ge, As, Sn, Sb, and Pb have been made, and values for the covalent radii of Al, Ga, In, Tl, and Bi have been proposed.
The Schomaker-Stevenson revision of the additivity rule for single covalent bond distances has been used in conjunction with the revised radii. Agreement with experiment is in general better with the revised radii than with the former radii and additivity.
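For reference, the Schomaker-Stevenson rule referred to here modifies simple additivity of covalent radii by an electronegativity correction (standard form; the coefficient quoted is the commonly cited value, not necessarily the one adopted in the thesis):

```latex
d(A\!-\!B) \;=\; r_A + r_B \;-\; \beta\,\lvert x_A - x_B\rvert,
\qquad \beta \approx 0.09\ \text{\AA},
```

where r_A and r_B are covalent radii and x_A and x_B are electronegativities.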
The principle of ionic bond character in addition to that present in a normal covalent bond has been applied to the observed structures of numerous molecules. It leads to a method of interpretation which is at least as consistent as the theory of multiple bond formation.
The revision of the additivity rule has been extended to double bonds. An encouraging beginning along these lines has been made, but additional experimental data are needed for clarification.
Abstract:
In Part I, a method for finding solutions of certain diffusive dispersive nonlinear evolution equations is introduced. The method consists of a straightforward iteration procedure, applied to the equation as it stands (in most cases), which can be carried out to all terms, followed by a summation of the resulting infinite series, sometimes directly and other times in terms of traces of inverses of operators in an appropriate space.
We first illustrate our method with Burgers' and Thomas' equations, and show how it quickly leads to the Cole-Hopf transformation, which is known to linearize these equations.
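As a concrete illustration of the kind of linearization meant here (a standard result, not the thesis's own derivation), the Cole-Hopf substitution turns Burgers' equation into the heat equation:

```latex
u_t + u\,u_x = \nu\,u_{xx},
\qquad
u = -2\nu\,\frac{\varphi_x}{\varphi}
\;\;\Longrightarrow\;\;
\varphi_t = \nu\,\varphi_{xx}.
```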
We also apply this method to the Korteweg-de Vries, nonlinear (cubic) Schrödinger, sine-Gordon, modified KdV and Boussinesq equations. In all these cases the multisoliton solutions are easily obtained and new expressions for some of them follow. More generally, we show that the Marchenko integral equations, together with the inverse problem that originates them, follow naturally from our expressions.
Only solutions that are small in some sense (i.e., they tend to zero as the independent variable goes to ∞) are covered by our methods. However, by studying the effect of writing the initial iterate $u_1 = u_1(x,t)$ as a sum $u_1 = \tilde{u}_1 + \tilde{\tilde{u}}_1$ when we know the solution which results if $u_1 = \tilde{u}_1$, we are led to expressions that describe the interaction of two arbitrary solutions, only one of which is small. This should not be confused with Bäcklund transformations and is more in the direction of performing the inverse scattering over an arbitrary “base” solution. Thus we are able to write expressions for the interaction of a cnoidal wave with a multisoliton in the case of the KdV equation; these expressions are somewhat different from the ones obtained by Wahlquist (1976). Similarly, we find multi-dark-pulse solutions and solutions describing the interaction of envelope solitons with a uniform wave train in the case of the Schrödinger equation.
Other equations tractable by our method are presented. These include the self-induced transparency, reduced Maxwell-Bloch, and two-dimensional nonlinear Schrödinger equations. Higher-order and matrix-valued equations with nonscalar dispersion functions are also presented.
In Part II, the second Painlevé transcendent is treated in conjunction with the similarity solutions of the Korteweg-de Vries equation and the modified Korteweg-de Vries equation.
Abstract:
Let l be any odd prime, and ζ a primitive l-th root of unity. Let C_l be the l-Sylow subgroup of the ideal class group of Q(ζ). The Teichmüller character w : Z_l → Z^*_l is given by w(x) ≡ x (mod l), where w(x) is an (l-1)-st root of unity and x ∈ Z_l. Under the action of this character, C_l decomposes as a direct sum of C_l^((i)), where C_l^((i)) is the eigenspace corresponding to w^i. Let the order of C_l^((3)) be l^(h_3). The main result of this thesis is the following: for every n ≥ max(1, h_3), the equation x^(l^n) + y^(l^n) + z^(l^n) = 0 has no integral solutions (x,y,z) with l ∤ xyz. The same result is also proven with n ≥ max(1, h_5), under the assumption that C_l^((5)) is a cyclic group of order l^(h_5). Applications of the methods used to prove the above results to the second case of Fermat's Last Theorem and to a Fermat-like equation in four variables are given.
The proof uses a series of ideas of H.S. Vandiver ([V1],[V2]) along with a theorem of M. Kurihara [Ku] and some consequences of the proof of Iwasawa's main conjecture for cyclotomic fields by B. Mazur and A. Wiles [MW]. In [V1] Vandiver claimed that the first case of Fermat's Last Theorem held for l if l did not divide the class number h^+ of the maximal real subfield of Q(e^(2πi/l)). The crucial gap in Vandiver's attempted proof that has been known to experts is explained, and complete proofs of all the results used from his papers are given.
Abstract:
Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.
Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.
The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.
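A minimal sketch of demixing example (i) as a convex program, assuming the NumPy and cvxpy packages are available; the random orthogonal basis Q stands in for the random orientation model, and all names below are illustrative rather than taken from the thesis:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, k = 200, 10

# Random orthogonal basis: a stand-in for the random orientation model.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Ground truth: x0 sparse in the identity basis, c0 sparse in the basis Q.
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
c0 = np.zeros(n); c0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = x0 + Q @ c0                                   # observed superposition

# Convex demixing: minimize the sum of l1 norms subject to the data constraint.
x, c = cp.Variable(n), cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(c)), [x + Q @ c == b])
prob.solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x0) / np.linalg.norm(x0))
```

When the two sparsity levels are small enough that the corresponding statistical dimensions sum to less than the ambient dimension n, recovery typically succeeds, in line with the phase transition described above.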
Abstract:
In this thesis, we test the electroweak sector of the Standard Model of particle physics through the measurements of the cross section of the simultaneous production of the neutral weak boson Z and photon γ, and the limits on the anomalous Zγγ and ZZγ triple gauge couplings h3 and h4 with the Z decaying to leptons (electrons and muons). We analyze events collected in proton-proton collisions at center of mass energy of sqrt(s) = 7 TeV corresponding to an integrated luminosity of 5.0 inverse femtobarn. The analyzed events were recorded by the Compact Muon Solenoid detector at the Large Hadron Collider in 2011.
The production cross section has been measured for hard photons with transverse momentum greater than 15 GeV that are separated from the final-state leptons in the eta-phi plane by Delta R greater than 0.7, for which the sum of the transverse energy of hadrons in a cone of Delta R less than 0.3 around the photon, divided by the transverse energy of the photon, is less than 0.5, and with the invariant mass of the dilepton system greater than 50 GeV. The measured cross section is 5.33 +/- 0.08 (stat.) +/- 0.25 (syst.) +/- 0.12 (lumi.) picobarn. This is compatible with the Standard Model prediction, which includes next-to-leading-order QCD contributions: 5.45 +/- 0.27 picobarn.
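As a quick arithmetic check (not part of the quoted analysis), adding the three quoted uncertainties in quadrature gives

```latex
\sigma_{\mathrm{tot}} \;=\; \sqrt{0.08^{2}+0.25^{2}+0.12^{2}}\ \mathrm{pb}\;\approx\;0.29\ \mathrm{pb},
```

so the measured 5.33 pb and the predicted 5.45 +/- 0.27 pb agree to well within one combined standard deviation.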
The measured 95% confidence-level upper limits on the absolute values of the anomalous couplings h3 and h4 are 0.01 and 8.8E-5 for the Zγγ interactions, and 8.6E-3 and 8.0E-5 for the ZZγ interactions. These values are also compatible with the Standard Model, in which these couplings vanish at tree level. They extend the sensitivity of the 2012 results from the ATLAS collaboration, based on 1.02 inverse femtobarn of data, by a factor of 2.4 to 3.1.
Abstract:
The connections between convexity and submodularity are explored, for purposes of minimizing and learning submodular set functions.
First, we develop a novel method for minimizing a particular class of submodular functions, which can be expressed as a sum of concave functions composed with modular functions. The basic algorithm uses an accelerated first order method applied to a smoothed version of its convex extension. The smoothing algorithm is particularly novel as it allows us to treat general concave potentials without needing to construct a piecewise linear approximation as with graph-based techniques.
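The convex extension referred to here is presumably the Lovász extension; the sketch below evaluates it for a generic submodular set function using the standard sorted-marginal formula (the toy coverage function and all names are illustrative, not the thesis's code):

```python
import numpy as np

def lovasz_extension(F, w):
    """Evaluate the Lovász extension of a set function F (with F(empty) = 0)
    at w in R^n: sort coordinates in decreasing order and sum weighted marginals."""
    order = np.argsort(-np.asarray(w, dtype=float))
    value, prev_set, F_prev = 0.0, set(), F(set())
    for i in order:
        cur_set = prev_set | {int(i)}
        F_cur = F(cur_set)
        value += w[int(i)] * (F_cur - F_prev)   # marginal gain weighted by w_i
        prev_set, F_prev = cur_set, F_cur
    return value

# Toy submodular function: coverage of a small universe by three sets.
covers = [{0, 1}, {1, 2}, {2, 3}]
F = lambda S: len(set().union(*(covers[i] for i in S))) if S else 0

print(lovasz_extension(F, [0.3, 0.9, 0.1]))   # 2.2
```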
Second, we derive the general conditions under which it is possible to find a minimizer of a submodular function via a convex problem. This provides a framework for developing submodular minimization algorithms. The framework is then used to develop several algorithms that can be run in a distributed fashion. This is particularly useful for applications where the submodular objective function consists of a sum of many terms, each term dependent on a small part of a large data set.
Lastly, we approach the problem of learning set functions from an unorthodox perspective---sparse reconstruction. We demonstrate an explicit connection between the problem of learning set functions from random evaluations and that of recovering sparse signals. Based on the observation that the Fourier transform for set functions satisfies exactly the conditions needed for sparse reconstruction algorithms to work, we examine several function classes for which uniform reconstruction is possible.
Abstract:
This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be the incidence matrix of edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find a general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.
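As a small computational illustration of the diagonal forms in question (assuming SymPy's smith_normal_form helper; the toy matrix below is the vertex-versus-triangle inclusion matrix of K_4, not one of the N_t(H) matrices treated in the thesis):

```python
from itertools import combinations
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

vertices = list(range(4))
triangles = list(combinations(vertices, 3))   # the 4 triangles of K_4

# Inclusion matrix: rows = vertices, columns = triangles,
# entry 1 if the vertex lies in the triangle.
N = Matrix(len(vertices), len(triangles),
           lambda i, j: int(vertices[i] in triangles[j]))

# Smith normal form over the integers: diag(1, 1, 1, 3) for this matrix.
print(smith_normal_form(N, domain=ZZ))
```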
One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.
Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.
Abstract:
With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a big concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze common speed scaling algorithms in both the worst-case model and the stochastic model to address some fundamental issues in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload within a data center. We develop an online algorithm that makes a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, taking into account the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delay. We propose algorithms that jointly optimize routing and provisioning in an online manner. Motivated by these online decision problems, we then study a general class of online problems called "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and to gain a more fundamental understanding of general online decision problems.
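A minimal sketch of the smoothed online convex optimization setting with a naive memoryless baseline that greedily trades off the current hitting cost against the switching cost; this toy is illustrative only and is not the algorithm developed in the thesis:

```python
import numpy as np

def greedy_soco(hitting_costs, beta, x0=0.0, grid=None):
    """hitting_costs: list of callables f_t(x); beta: weight on the movement
    (switching) cost |x_t - x_{t-1}|.  Naive baseline: at each step pick x_t
    minimizing f_t(x) + beta * |x - x_{t-1}| over a fixed grid."""
    grid = np.linspace(0.0, 10.0, 201) if grid is None else grid
    x_prev, total, trajectory = x0, 0.0, []
    for f in hitting_costs:
        step_costs = [f(x) + beta * abs(x - x_prev) for x in grid]
        x_t = float(grid[int(np.argmin(step_costs))])
        total += f(x_t) + beta * abs(x_t - x_prev)
        trajectory.append(x_t)
        x_prev = x_t
    return trajectory, total

# Toy workload: number of active servers tracking a time-varying demand d_t.
demand = [3, 5, 8, 8, 2, 1, 6]
costs = [lambda x, d=d: (x - d) ** 2 for d in demand]   # convex hitting costs
print(greedy_soco(costs, beta=2.0))
```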
Abstract:
This thesis presents two different forms of the Born approximations for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small amplitude heterogeneities superimposed over a slowly varying background. The first method is related to frequency-wavenumber migration methods. It is shown to properly recover two independent acoustic parameters within the bandpass of the source time function of the experiment for contrasts of about 5 percent from data generated using an exact theory for flat interfaces. The independent determination of two parameters is shown to depend on the angle coverage of the medium. For surface data, the impedance profile is well recovered.
The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters, obtained after applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a before-stack Kirchhoff migration and is therefore readily applicable to seismic data. Several examples of reconstruction for simple point scatterer models are performed. Recovery of the amplitudes of the anomalies is improved with successive iterations. Iterations also improve the sharpness of the images.
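For reference, a minimal sketch of the Davidon-Fletcher-Powell quasi-Newton iteration mentioned above, with a simple backtracking line search; the toy quadratic objective is a placeholder, not the seismic misfit functional used in the thesis:

```python
import numpy as np

def dfp_minimize(f, grad, x0, iters=100, tol=1e-8):
    """Quasi-Newton minimization with the DFP update of the inverse Hessian."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                      # approximation to the inverse Hessian
    g = grad(x)
    for _ in range(iters):
        p = -H @ g                          # search direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):   # Armijo backtracking
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        y = g_new - g
        Hy = H @ y
        # DFP update: H <- H + s s^T/(s^T y) - (H y)(H y)^T/(y^T H y)
        # (for the convex toy problem below, s^T y > 0, so the update is well defined)
        H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

# Toy quadratic misfit with minimum at (1, -2).
A = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * (x - [1.0, -2.0]) @ A @ (x - [1.0, -2.0])
grad = lambda x: A @ (x - [1.0, -2.0])
print(dfp_minimize(f, grad, [0.0, 0.0]))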
The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter the sum, corresponding to P-P, P-S, S-P and S-S pairs of incident-scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers. Scattered rays propagate as if in the background medium, with no interaction with the scatterers. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.
Abstract:
We report measurements of the proton form factors, G^p_E and G^p_M, extracted from elastic electron scattering in the range 1 ≤ Q^2 ≤ 3 (GeV/c)^2, with uncertainties of <15% in G^p_E and <3% in G^p_M. The results for G^p_E are somewhat larger than indicated by most theoretical parameterizations. The ratio of Pauli and Dirac form factors, Q^2(F^p_2/F^p_1), is lower in value and shows less Q^2 dependence than these parameterizations indicate. Comparisons are made to theoretical models, including those based on perturbative QCD, vector-meson dominance, QCD sum rules, and diquark constituents of the proton. A global extraction of the form factors, including previous elastic scattering measurements, is also presented.
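For reference, the Sachs and Dirac-Pauli form factors quoted here are related in the standard way (with F^p_2 normalized to the anomalous moment at Q^2 = 0):

```latex
G_E^{p} = F_1^{p} - \tau F_2^{p},
\qquad
G_M^{p} = F_1^{p} + F_2^{p},
\qquad
\tau = \frac{Q^{2}}{4M_p^{2}},
```

so the ratio Q^2(F^p_2/F^p_1) discussed above follows directly from the extracted G^p_E and G^p_M.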
Abstract:
We examine voting situations in which individuals have incomplete information about each other's true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.
Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple plurality electoral systems. We find that, in general, the candidates' optimal policies in such an electoral system vary greatly depending on their objective function. We provide several examples, as well as a genericity result which states that almost all such electoral systems (with respect to the distributions of voter behavior) exhibit different incentives for candidates who seek to maximize expected vote share and for those who seek to maximize probability of victory.
In Chapter 3, we adopt a random utility maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters play a Quantal Response Equilibrium (McKelvey and Palfrey, 1995, 1998). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point that maximizes the sum of voters' utility functions). In two-candidate contests we find that this equilibrium is unique.
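For illustration, the logit form of quantal response used in this special case assigns choice probabilities proportional to exponentiated payoffs; the sketch below is the textbook rule with made-up utilities, not code from the thesis:

```python
import numpy as np

def logit_response(utilities, lam):
    """Logit quantal response: P(action j) proportional to exp(lam * u_j).
    lam -> infinity recovers exact best response; lam = 0 gives uniform mixing."""
    u = np.asarray(utilities, dtype=float)
    z = np.exp(lam * (u - u.max()))        # subtract max for numerical stability
    return z / z.sum()

# A voter's (hypothetical) utilities for three candidates.
print(logit_response([1.0, 0.4, 0.2], lam=3.0))
```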
Finally, in Chapter 4, we attempt the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and the notion of Quantal Response Equilibrium and offer possible extensions of our framework.
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are studied here in detail, up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process it can be carried out with the aid of the Reduce algebra manipulation computer program.
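For context (a standard result quoted here, not derived in the abstract), the lowest-order three-photon rate to which these corrections apply is the Ore-Powell rate,

```latex
\Gamma_{0}(\mathrm{o\text{-}Ps}\to 3\gamma)
= \frac{2(\pi^{2}-9)}{9\pi}\,\alpha^{6}\,\frac{m_{e}c^{2}}{\hbar}
\approx 7.2\ \mu\mathrm{s}^{-1},
```

corresponding to a lifetime of roughly 140 ns, with the corrections in question entering at relative order α.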
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
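A sketch of the transformation alluded to here, as it is usually presented in the linearly solvable (path-integral) control literature; the precise structural assumptions in the thesis may differ. With dynamics dx = (f(x) + G(x)u) dt + B(x) dω, state cost q(x), control cost (1/2) uᵀRu, and the assumption λ G R⁻¹ Gᵀ = B Bᵀ, the substitution Ψ = exp(−V/λ) removes the nonlinearity from the stationary HJB:

```latex
0 = q + f^{\mathsf T}\nabla V
    - \tfrac{1}{2}\,\nabla V^{\mathsf T} G R^{-1} G^{\mathsf T}\nabla V
    + \tfrac{1}{2}\operatorname{tr}\!\bigl(B B^{\mathsf T}\nabla^{2} V\bigr)
\;\;\xrightarrow{\;\Psi \,=\, e^{-V/\lambda}\;}\;\;
0 = -\frac{q}{\lambda}\,\Psi + f^{\mathsf T}\nabla\Psi
    + \tfrac{1}{2}\operatorname{tr}\!\bigl(B B^{\mathsf T}\nabla^{2}\Psi\bigr).
```

Discretizing this linear PDE yields the Markov Decision Process analogue mentioned above.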
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.