930 results for Conformal invariance
Abstract:
There are currently two competing models of our universe. One is Big Bang cosmology with inflation; the other is the cyclic model with an ekpyrotic phase in each cycle. This paper is divided into two main parts according to these two models. In the first part, we quantify the potentially observable effects of a small violation of translational invariance during inflation, as characterized by the presence of a preferred point, line, or plane. We explore the imprint such a violation would leave on the cosmic microwave background anisotropy, and provide explicit formulas for the expected amplitudes $\langle a_{lm}a_{l'm'}^*\rangle$ of the spherical-harmonic coefficients. We then provide a model and study the two-point correlation of a massless scalar (the inflaton) when the stress tensor contains the energy density from an infinitely long straight cosmic string in addition to a cosmological constant. Finally, we discuss whether inflation can be reconciled with Liouville's theorem as far as the fine-tuning problem is concerned. In the second part, we identify several problems in the cyclic/ekpyrotic cosmology. First, the quantum-to-classical transition would not happen during an ekpyrotic phase, even for superhorizon modes, and therefore the fluctuations cannot be interpreted as classical. This implies that the prediction of a scale-free power spectrum in the ekpyrotic/cyclic universe model requires further scrutiny. Second, we find that the usual mechanism for solving fine-tuning problems is not compatible with an eternal universe containing infinitely many cycles in both directions of time. Therefore, all fine-tuning problems, including the flatness problem, still call for an explanation in any generic cyclic model.
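As a schematic illustration of the covariance structure referred to above (the notation is illustrative only, not the thesis's explicit result), a small violation of translational invariance generically shows up as off-diagonal couplings between spherical-harmonic coefficients:

```latex
% Statistically isotropic, translation-invariant baseline
\[
  \langle a_{lm}\, a_{l'm'}^{*} \rangle \;=\; C_{l}\,\delta_{ll'}\,\delta_{mm'} .
\]
% Schematic form with a weak violation characterized by a small parameter
% \epsilon and a coupling matrix \Delta set by the preferred point, line,
% or plane (both symbols are placeholders, not the thesis's notation):
\[
  \langle a_{lm}\, a_{l'm'}^{*} \rangle \;=\; C_{l}\,\delta_{ll'}\,\delta_{mm'}
  \;+\; \epsilon\,\Delta_{lm,\,l'm'} , \qquad |\epsilon| \ll 1 .
\]
```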
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at the threshold for annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
Fundamental studies of magnetic alignment of highly anisotropic mesostructures can enable the clean-room-free fabrication of flexible, array-based solar and electronic devices, in which preferential orientation of nano- or microwire-type objects is desired. In this study, ensembles of 100-micron-long Si microwires with ferromagnetic Ni and Co coatings are oriented vertically in the presence of magnetic fields. The degree of vertical alignment and the threshold field strength depend on geometric factors, such as microwire length and ferromagnetic coating thickness, as well as interfacial interactions, which are modulated by varying solvent and substrate surface chemistry. Microwire ensembles with vertical alignment over 97% within 10 degrees of normal, as measured by X-ray diffraction, are achieved over square-centimeter-scale areas and set into flexible polymer films. A force balance model has been developed as a predictive tool for magnetic alignment, incorporating magnetic torque and empirically derived surface adhesion parameters. As supported by these calculations, microwires are shown to detach from the surface and align vertically in the presence of magnetic fields on the order of 100 gauss. Microwires aligned in this manner are set into a polydimethylsiloxane film, where they retain their vertical alignment after the field has been removed and can subsequently be used as a flexible solar absorber layer. Finally, these microwire arrays can be protected for use in electrochemical cells by the conformal deposition of a graphene layer.
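A minimal torque-balance sketch of the kind of force balance model described above follows; the geometry, saturation magnetization, and adhesion torque values are assumptions chosen only to illustrate the order-of-magnitude estimate, not the thesis's fitted parameters.

```python
# Minimal torque-balance sketch: the wire rotates upright once the magnetic
# torque on its ferromagnetic shell exceeds a lumped surface-adhesion torque.
# All numerical values below are illustrative assumptions.
import math

def threshold_field_tesla(length_m, wire_radius_m, coat_thickness_m,
                          sat_magnetization_A_per_m, adhesion_torque_Nm):
    """Field at which the magnetic torque (approximated as m*B, with the moment
    m taken as saturation magnetization times the coating shell volume) equals
    the assumed adhesion torque."""
    shell_volume = math.pi * length_m * (
        (wire_radius_m + coat_thickness_m) ** 2 - wire_radius_m ** 2)
    moment = sat_magnetization_A_per_m * shell_volume   # A*m^2
    return adhesion_torque_Nm / moment                  # tesla

# Example: 100-um wire, 1.5-um radius, 300-nm Ni shell, Ni Ms ~ 4.9e5 A/m,
# and an assumed adhesion torque of 1.5e-12 N*m.
B_th = threshold_field_tesla(100e-6, 1.5e-6, 300e-9, 4.9e5, 1.5e-12)
print(f"threshold field ~ {B_th * 1e4:.0f} gauss")      # 1 tesla = 1e4 gauss
```

With these assumed numbers the estimate comes out near 100 gauss, the scale quoted in the abstract; the point of the sketch is the structure of the balance, not the specific values.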
Abstract:
We study some aspects of conformal field theory, wormhole physics and two-dimensional random surfaces. In spite of being rather different, these topics serve as examples of the issues that are involved, both at high and low energy scales, in formulating a quantum theory of gravity. In conformal field theory we show that fusion and braiding properties can be used to determine the operator product coefficients of the non-diagonal Wess-Zumino-Witten models. In wormhole physics we show how Coleman's proposed probability distribution would result in wormholes determining the value of θQCD. We attempt such a calculation and find the most probable value of θQCD to be π. This hints at a potential conflict with nature. In random surfaces we explore the behaviour of conformal field theories coupled to gravity and calculate some partition functions and correlation functions. Our results throw some light on the transition that is believed to occur when the central charge of the matter theory gets larger than one.
Abstract:
We present a complete system for Spectral Cauchy characteristic extraction (Spectral CCE). Implemented in C++ within the Spectral Einstein Code (SpEC), the method employs numerous innovative algorithms to efficiently calculate the Bondi strain, news, and flux.
Spectral CCE was envisioned to ensure physically accurate gravitational waveforms computed for the Laser Interferometer Gravitational-Wave Observatory (LIGO) and similar experiments, while working toward a template bank with more than a thousand waveforms to span the binary black hole (BBH) problem's seven-dimensional parameter space.
The Bondi strain, news, and flux are physical quantities central to efforts to understand and detect astrophysical gravitational wave sources within the Simulating eXtreme Spacetimes (SXS) collaboration, with the ultimate aim of providing the first strong-field probe of the Einstein field equations.
In a series of included papers, we demonstrate stability, convergence, and gauge invariance. We also demonstrate agreement between Spectral CCE and the legacy Pitt null code, while achieving a factor of 200 improvement in computational efficiency.
Spectral CCE represents a significant computational advance. It is the foundation upon which further capability will be built, specifically enabling the complete calculation of junk-free, gauge-free, and physically valid waveform data on the fly within SpEC.
Abstract:
I. Crossing transformations constitute a group of permutations under which the scattering amplitude is invariant. Using Mandelstam's analyticity, we decompose the amplitude into irreducible representations of this group. The usual quantum numbers, such as isospin or SU(3), are "crossing-invariant". Thus no higher symmetry is generated by crossing itself. However, elimination of certain quantum numbers in intermediate states is not crossing-invariant, and higher symmetries have to be introduced to make it possible. The current literature on exchange degeneracy is a manifestation of this statement. To exemplify the application of our analysis, we show how, starting with SU(3) invariance, one can use crossing and the absence of exotic channels to derive the quark-model picture of the tensor nonet. No detailed dynamical input is used.
II. A dispersion relation calculation of the real parts of forward π±p and K±p scattering amplitudes is carried out under the assumption of constant total cross sections in the Serpukhov energy range. Comparisons with existing experimental results, as well as predictions for future high-energy experiments, are presented and discussed. Electromagnetic effects are found to be too small to account for the expected difference between the π-p and π+p total cross sections at higher energies.
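For readers unfamiliar with the method, the generic ingredients of such a calculation are the optical theorem and a subtracted principal-value dispersion integral; the schematic form below omits the pole terms, crossing structure, and subtraction constants specific to πN and KN scattering.

```latex
% Optical theorem: the imaginary part of the forward amplitude is fixed by
% the total cross section,
\[
  \operatorname{Im} f(E) \;=\; \frac{k}{4\pi}\,\sigma_{\mathrm{tot}}(E) ,
\]
% and the real part then follows from a once-subtracted principal-value
% dispersion integral over the physical cut (schematic form only):
\[
  \operatorname{Re} f(E) \;=\; \operatorname{Re} f(E_{0})
  \;+\; \frac{E - E_{0}}{\pi}\,
  \mathrm{P}\!\!\int_{E_{\mathrm{th}}}^{\infty}
  \frac{\operatorname{Im} f(E')}{(E' - E)(E' - E_{0})}\,\mathrm{d}E' .
\]
```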
Abstract:
In this thesis we are concerned with finding representations of the algebra of SU(3) vector and axial-vector charge densities at infinite momentum (the "current algebra") to describe the mesons, idealizing the real continua of multiparticle states as a series of discrete resonances of zero width. Such representations would describe the masses and quantum numbers of the mesons, the shapes of their Regge trajectories, their electromagnetic and weak form factors, and (approximately, through the PCAC hypothesis) pion emission or absorption amplitudes.
We assume that the mesons have internal degrees of freedom equivalent to being made of two quarks (one an antiquark) and look for models in which the mass is SU(3)-independent and the current is a sum of contributions from the individual quarks. Requiring that the current algebra, as well as conditions of relativistic invariance, be satisfied turns out to be very restrictive, and, in fact, no model has been found which satisfies all requirements and gives a reasonable mass spectrum. We show that using more general mass and current operators but keeping the same internal degrees of freedom will not make the problem any more solvable. In particular, in order for any two-quark solution to exist it must be possible to solve the "factorized SU(2) problem," in which the currents are isospin currents and are carried by only one of the component quarks (as in the K meson and its excited states).
In the free-quark model the currents at infinite momentum are found using a manifestly covariant formalism and are shown to satisfy the current algebra, but the mass spectrum is unrealistic. We then consider a pair of quarks bound by a potential, finding the current as a power series in 1/m, where m is the quark mass. Here it is found impossible to satisfy the algebra and relativistic invariance with the type of potential tried, because the current contributions from the two quarks do not commute with each other to order 1/m³. However, it may be possible to solve the factorized SU(2) problem with this model.
The factorized problem can be solved exactly in the case where all mesons have the same mass, using a covariant formulation in terms of an internal Lorentz group. For a more realistic, nondegenerate mass there is difficulty in covariantly solving even the factorized problem; one model is described which almost works but appears to require particles of spacelike 4-momentum, which seem unphysical.
Although the search for a completely satisfactory model has been unsuccessful, the techniques used here might eventually reveal a working model. There is also a possibility of satisfying a weaker form of the current algebra with existing models.
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop.

Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem.

Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
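A toy sketch of the low-rank separation idea mentioned near the end of the preceding abstract, written here as a generic robust-PCA-style nuclear-norm program; the variable names, the sparse stand-in for the local component, and the regularization weight are illustrative assumptions, not the thesis's formulation.

```python
# Toy separation of a low-rank "global" component from a structured "local"
# component via nuclear-norm minimization.  Requires numpy and cvxpy.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, r = 40, 3
global_part = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low rank
local_part = np.zeros((n, n))
local_part[:5, :5] = rng.standard_normal((5, 5))   # small, full-rank local block
M = global_part + local_part                       # observed combined data matrix

G = cp.Variable((n, n))
L = cp.Variable((n, n))
lam = 0.2                                          # illustrative trade-off weight
problem = cp.Problem(
    cp.Minimize(cp.normNuc(G) + lam * cp.sum(cp.abs(L))),  # rank vs. structure
    [G + L == M])
problem.solve()

# Ideally only about r singular values of the recovered global part are appreciable.
print(np.round(np.linalg.svd(G.value, compute_uv=False)[:6], 3))
```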
Abstract:
The time distribution of the decays of an initially pure K° beam into π+π-π° has been analyzed to determine the complex parameter W (also known as η+-° and as x + iy). The K° beam was produced in a brass target by the interactions of a 2.85 GeV/c π- beam, which was generated on an internal target in the Lawrence Radiation Laboratory (LRL) Bevatron. The counters and hodoscopes in the apparatus selected events with a neutral K° produced in the brass target, two charged secondaries passing through a magnet spectrometer, and a γ-ray shower in a shower hodoscope.
From the 275,000 apparatus triggers, 148 K → π+π-π° events were isolated by requiring a γ-ray shower in the optical shower chambers and a two-prong vee in the optical spark chambers. The backgrounds were further reduced by reconstructing the momenta of the two charged secondaries and applying kinematic constraints.
The best fit to the final sample of 148 events, distributed between 0.3 and 7.0 K_S lifetimes, gives:
Re W = -0.05 ± 0.17
Im W = +0.39 +0.35/-0.37
This result is consistent with both CPT invariance (Re W = 0) and CP invariance (W = 0). Backgrounds are estimated to be less than 10%, and systematic effects are estimated to be negligible.
An analysis of the present data on CP violation in this decay mode and other K° decay modes yields an estimate of the phase of ε of 45.3 ± 2.3 degrees. This result is consistent with the superweak theories of CP violation, which predict the phase of ε to be 43°. This estimate is in turn used to predict the phase of η°° to be 48.0 ± 7.9 degrees, a substantial improvement on presently available measurements. The largest error in this analysis comes from the present limits on W from the world average of recent experiments. The K → πμν mode produces the next-largest error. Therefore, further experimentation in these modes would be useful.
Abstract:
We have measured differential cross sections for the two-body photodisintegration of Helium-3, γ + He3 → p + d, between incident photon energies of 200 and 600 MeV, and for center-of-mass angles between 30° and 150°. Both final-state particles were detected in arrays of wire spark chambers and scintillation counters; the high-momentum particle was analyzed in a magnet spectrometer. The results are interpreted in terms of amplitudes to produce the Δ(1236) resonance in an intermediate state, as well as non-resonant amplitudes. This experiment, together with an (unfinished) experiment on the inverse reaction, p + d → He3 + γ, will provide a reciprocity test of time-reversal invariance.
Abstract:
We study the entanglement in a chain of harmonic oscillators driven out of equilibrium by preparing the two sides of the system at different temperatures and subsequently joining them together. The steady state is constructed explicitly and the logarithmic negativity is calculated between two adjacent segments of the chain. We find that, for low temperatures, the steady-state entanglement is a sum of contributions pertaining to left- and right-moving excitations emitted from the two reservoirs. In turn, the steady-state entanglement is a simple average of the Gibbs-state values, and thus its scaling can be obtained from conformal field theory. A similar averaging behaviour is observed during the entire time evolution. As a particular case, we also discuss a local quench where both sides of the chain are initialized in their respective ground states.
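Schematically, the low-temperature averaging statement made above can be written as follows (the notation is illustrative, not the paper's):

```latex
% Steady-state logarithmic negativity between two adjacent segments,
% expressed as the mean of the corresponding Gibbs-state values at the
% two reservoir temperatures T_L and T_R:
\[
  \mathcal{E}_{\mathrm{ss}} \;\simeq\;
  \tfrac{1}{2}\!\left[\,\mathcal{E}_{\mathrm{Gibbs}}(T_{L})
  \;+\; \mathcal{E}_{\mathrm{Gibbs}}(T_{R})\,\right],
\]
% so its scaling with segment size follows from the conformal-field-theory
% result for thermal states.
```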
Abstract:
Most assessments of fish stocks use some measure of the reproductive potential of a population, such as spawning biomass. However, the correlation between spawning biomass and reproductive potential is not always strong, and it likely is weakest in the tropics and subtropics, where species tend to exhibit indeterminate fecundity and release eggs in batches over a protracted spawning season. In such cases, computing annual reproductive output requires estimates of batch fecundity and the annual number of batches—the latter subject to spawning frequency and duration of spawning season. Batch fecundity is commonly measured by age (or size), but these other variables are not. Without the relevant data, the annual number of batches is assumed to be invariant across age. We reviewed the literature and found that this default assumption lacks empirical support because both spawning duration and spawning frequency generally increase with age or size. We demonstrate effects of this assumption on measures of reproductive value and spawning potential ratio, a metric commonly used to gauge stock status. Model applications showed substantial sensitivity to age dependence in the annual number of batches. If the annual number of batches increases with age but is incorrectly assumed to be constant, stock assessment models would tend to overestimate the biological reference points used for setting harvest rates. This study underscores the need to better understand the age- or size-dependent contrast in the annual number of batches, and we conclude that, for species without evidence to support invariance, the default assumption should be replaced with one that accounts for age- or size-dependence.
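A toy sketch of the sensitivity discussed above: annual reproductive output at age is the product of batch fecundity and the annual number of batches, so assuming the latter is constant when it actually increases with age skews any per-recruit measure built from it. All schedules and parameter values below are hypothetical.

```python
# Toy comparison of lifetime (per-recruit) egg production under a constant
# versus an age-increasing annual number of batches.  All values hypothetical.
import numpy as np

ages = np.arange(1, 11)                       # ages 1-10
survivorship = 0.8 ** (ages - 1)              # assumed survival-to-age schedule
batch_fecundity = 1e4 * ages ** 1.5           # assumed increase with age/size

batches_constant = np.full(ages.shape, 20.0)  # default assumption: invariant
batches_age_dep = 10.0 + 3.0 * ages           # alternative: increases with age

def eggs_per_recruit(batches):
    """Lifetime egg production per recruit for a given batch schedule."""
    return float(np.sum(survivorship * batch_fecundity * batches))

ratio = eggs_per_recruit(batches_age_dep) / eggs_per_recruit(batches_constant)
print(f"age-dependent vs constant batch schedule: {ratio:.2f}x egg production")
```

A spawning-potential-ratio calculation would compare this per-recruit quantity with and without fishing mortality, so a mismatch of this kind propagates directly into the biological reference points mentioned above.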
Abstract:
In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. In particular there are three areas of novelty: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation, learnt offline, to generalize in the presence of extreme illumination changes; (ii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve invariance to unseen head poses; and (iii) we introduce an accurate video sequence "reillumination" algorithm to achieve robustness to face motion patterns in video. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our system consistently demonstrated a nearly perfect recognition rate (over 99.7%), significantly outperforming state-of-the-art commercial software and methods from the literature.
Abstract:
The long-term goal of our work is to enable rapid prototyping design optimization to take place on geometries of arbitrary size in the spirit of a real-time computer game. In recent papers we have reported the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver and post-processing, all within a single piece of software, and all implemented in parallel with commodity PC clusters as the target. This work has shown that it is possible to eliminate all serial bottlenecks from the CFD Process. This paper reports further progress towards our goal; in particular, we report on the generation of viscous layer meshes to bridge the body to the flow across the cut-cells. The Level Set formulation, which underpins the geometry representation, is used as a natural mechanism to allow rapid construction of conformal layer meshes. The guiding principle is to construct the mesh which most closely approximates the body but remains solvable. This apparently novel approach is described and examples given.
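As a minimal illustration of why a signed-distance Level Set makes conformal layer construction natural (the geometry, spacing, and grid below are assumptions, not the paper's implementation), the k-th layer front is simply the iso-contour phi = k * delta of the distance field:

```python
# Minimal 2-D sketch: offset iso-contours of a signed-distance Level Set give
# the fronts of successive viscous layers around the body.  Illustrative only.
import numpy as np

# Signed distance to a circular "body" of radius 0.3 on a Cartesian grid
# (phi < 0 inside the body, phi > 0 in the flow domain).
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x)
phi = np.hypot(X, Y) - 0.3

n_layers, delta = 5, 0.02          # assumed layer count and spacing
h = x[1] - x[0]                    # grid spacing
for k in range(1, n_layers + 1):
    # Cells whose level-set value lies within half a cell of the k-th offset
    # approximate the k-th layer front, conformal to the body by construction.
    front = np.abs(phi - k * delta) < 0.5 * h
    print(f"layer {k}: offset {k * delta:.3f}, {int(front.sum())} cells tagged")
```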
Abstract:
The application of automated design optimization to real-world, complex geometry problems is a significant challenge, especially if the topology is not known a priori, as in turbine internal cooling. The long-term goal of our work is to focus on an end-to-end integration of the whole CFD Process, from solid model through meshing, solving and post-processing, to enable this type of design optimization to become viable and practical. In recent papers we have reported the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver, post-processing and geometry editing, all within a single piece of software, and all implemented in parallel with commodity PC clusters as the target. The cut-cells which characterize the approach are eliminated by exporting a body-conformal mesh guided by the underpinning Level Set. This paper extends this work still further with a simple scoping study showing how the basic functionality can be scripted and automated and then used as the basis for automated optimization of a generic gas turbine cooling geometry.