272 results for Canonical momenta
Abstract:
Different mathematical methods have been applied to obtain the analytic result for the massless triangle Feynman diagram, yielding a sum of four linearly independent (LI) hypergeometric functions of two variables, F4. This result is not physically acceptable when embedded in higher loops, because all four hypergeometric functions in the triangle result have the same region of convergence, and further integration means going outside that region. We could go outside it by using the well-known analytic continuation formulas obeyed by F4, but there are at least two ways of doing so. Which is the correct one? Whichever continuation is used, it reduces the number of F4 functions from four to three. This reduction can be understood by taking into account the fundamental physical constraint imposed by the conservation of the momenta flowing along the three legs of the diagram: the number of overall LI functions entering the most general solution must reduce accordingly. It remains to determine which set of three LI solutions is to be taken. To determine the exact structure and content of the analytic solution for the three-point function that can be embedded in higher loops, we use the analogy between Feynman diagrams and electric circuit networks, in which the electric current flowing in the network plays the role of the momentum flowing in the lines of a Feynman diagram. This analogy is employed to define exactly which three of the four hypergeometric functions are relevant to the analytic solution for the Feynman diagram. The analogy is built on the equivalence between electric resistance networks of Y and Delta types in which a conserved current flows, an equivalence established via the theorem of minimum energy dissipation in circuits with these structures.
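The Y-Delta equivalence invoked in this abstract has a standard closed form in circuit theory; for reference (textbook material, not an excerpt from the paper), the Delta resistances in terms of the Y resistances R1, R2, R3 are:

```latex
% Standard Y-to-Delta transformation. R_1, R_2, R_3 are the Y (star)
% branch resistances; R_a is the Delta side opposite node 1, etc.
\[
  R_a = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_1}, \qquad
  R_b = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_2}, \qquad
  R_c = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_3}.
\]
```

The two networks present the same resistance between every pair of terminals, so a conserved current cannot distinguish them; this is what licenses the mapping between Y-shaped and triangle-shaped momentum flows.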
Abstract:
The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a 'blind injection challenge' similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data were analysed by GW detection algorithms, and six of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these systems. We find that the strong degeneracy between the mass ratio and the BHs' angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10 M☉ + 10 M☉ (50 M☉ + 50 M☉) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ~15% for 50 M☉ + 50 M☉ BBH coalescences with nearly maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.
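The "sensitive distance" quoted above is the radius of a sphere whose volume equals the detection-weighted volume of the injection campaign. A minimal sketch of how such a figure can be estimated from a Monte Carlo injection set, assuming injections uniform in volume and a simple found/missed classification (function name and numbers are illustrative, not taken from the NINJA-2 analysis):

```python
import numpy as np

def sensitive_distance(found, d_max_mpc):
    """Toy volume-weighted average sensitive distance of a search.

    `found` is a boolean array, one entry per injection. Assuming the
    injections were distributed uniformly in volume out to d_max_mpc,
    the found fraction estimates the volume-averaged efficiency.
    """
    efficiency = np.mean(found)                      # fraction of injections recovered
    v_surveyed = (4.0 / 3.0) * np.pi * d_max_mpc**3  # volume covered by the campaign
    v_sensitive = efficiency * v_surveyed            # detection-weighted volume
    return (3.0 * v_sensitive / (4.0 * np.pi)) ** (1.0 / 3.0)  # equivalent radius

# Illustrative usage with synthetic injections (numbers are made up):
rng = np.random.default_rng(0)
d = 1000.0 * rng.random(100_000) ** (1.0 / 3.0)      # uniform in volume out to 1000 Mpc
found = d < 300.0                                    # toy detector: perfect within 300 Mpc
print(f"sensitive distance ~ {sensitive_distance(found, 1000.0):.0f} Mpc")
```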
Abstract:
A description is provided of the software algorithms developed for the CMS tracker, both for reconstructing charged-particle trajectories in proton-proton interactions and for using the resulting tracks to estimate the positions of the LHC luminous region and of individual primary-interaction vertices. Despite the very hostile environment at the LHC, the performance obtained with these algorithms is found to be excellent. For tt̄ events under typical 2011 pileup conditions, the average track-reconstruction efficiency for promptly produced charged particles with transverse momenta of pT > 0.9 GeV is 94% for pseudorapidities of |η| < 0.9 and 85% for 0.9 < |η| < 2.5. The inefficiency is caused mainly by hadrons that undergo nuclear interactions in the tracker material. For isolated muons, the corresponding efficiencies are essentially 100%. For isolated muons of pT = 100 GeV emitted at |η| < 1.4, the resolutions are approximately 2.8% in pT and, respectively, 10 μm and 30 μm in the transverse and longitudinal impact parameters. The position resolution achieved for reconstructed primary vertices that correspond to interesting pp collisions is 10-12 μm in each of the three spatial dimensions. The tracking and vertexing software is fast and flexible, and easily adaptable to other functions, such as fast tracking for the trigger, or dedicated tracking for electrons that takes into account bremsstrahlung.
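As a toy illustration of the vertex-position estimation mentioned above (the production CMS algorithms are considerably more sophisticated; this sketch only shows an inverse-variance-weighted average of hypothetical track positions, with all names and numbers made up):

```python
import numpy as np

def toy_vertex_z(track_z0, track_sigma_z0):
    """Toy primary-vertex z estimate: inverse-variance weighted mean of
    the tracks' longitudinal impact points. Not the CMS algorithm, just
    the simplest ingredient of vertex position estimation."""
    z = np.asarray(track_z0, dtype=float)
    w = 1.0 / np.asarray(track_sigma_z0, dtype=float) ** 2
    z_vtx = np.sum(w * z) / np.sum(w)        # weighted mean position
    sigma_vtx = np.sqrt(1.0 / np.sum(w))     # uncertainty of the mean
    return z_vtx, sigma_vtx

# Illustrative usage with synthetic tracks (units: cm):
rng = np.random.default_rng(1)
sigmas = np.full(40, 0.003)                  # ~30 micron z0 resolution per track
z0s = rng.normal(0.05, sigmas)               # 40 tracks from a vertex at z = 0.05 cm
print(toy_vertex_z(z0s, sigmas))             # recovers z to a few microns
```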
Abstract:
Let ε be a commutative ring with identity and let P ∈ ε[x] be a polynomial. In the present paper we consider digit representations in the residue class ring ε[x]/(P). In particular, we are interested in the question whether each A ∈ ε[x]/(P) can be represented modulo P in the form e_0 + e_1 x + ... + e_h x^h, where the e_i ∈ ε[x]/(P) are taken from a fixed finite set of digits. This concept generalizes both canonical number systems and digit systems over finite fields. Because we do not assume that 0 is an element of the digit set, and because P need not be monic, several new phenomena occur in this context.
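The canonical number systems this concept generalizes admit a very small concrete instance: base −2 representation of the integers, i.e., the digit system for Z ≅ Z[x]/(x + 2) with digit set {0, 1}. A minimal sketch (illustrative, not code from the paper):

```python
def negabase_digits(a, base=-2):
    """Digit expansion of the integer a in a negative base with digit set
    {0, 1, ..., |base| - 1}, so that a = sum(d[i] * base**i). This is the
    simplest instance of a canonical number system, corresponding to the
    residue class ring Z[x]/(x - base)."""
    if a == 0:
        return [0]
    digits = []
    while a != 0:
        r = a % abs(base)        # choose the digit in {0, ..., |base| - 1}
        digits.append(r)
        a = (a - r) // base      # exact division once the digit is removed
    return digits                # least significant digit first

# Every integer, positive or negative, has a finite expansion in base -2:
for n in (7, -7):
    d = negabase_digits(n)
    assert n == sum(di * (-2) ** i for i, di in enumerate(d))
    print(n, d)                  # 7 -> [1, 1, 0, 1, 1], -7 -> [1, 0, 0, 1]
```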
Abstract:
We present a succinct review of the canonical formalism of classical mechanics, followed by a brief review of the main representations of quantum mechanics. We emphasize the formal similarities between the corresponding equations and note that these similarities contributed to the formulation of quantum mechanics. Of course, the driving force behind the search for any new physics is experimental evidence.
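The formal similarity this abstract alludes to is the textbook correspondence between Poisson brackets and commutators, quoted here for reference rather than from the paper itself:

```latex
% Classical time evolution via the Poisson bracket ...
\[
  \frac{dA}{dt} = \{A, H\} + \frac{\partial A}{\partial t},
  \qquad
  \{q_i, p_j\} = \delta_{ij},
\]
% ... and its quantum counterpart, the Heisenberg equation of motion,
% obtained under the canonical quantization rule \{\cdot,\cdot\} \to [\cdot,\cdot]/(i\hbar):
\[
  \frac{d\hat{A}}{dt} = \frac{1}{i\hbar}\,[\hat{A}, \hat{H}] + \frac{\partial \hat{A}}{\partial t},
  \qquad
  [\hat{q}_i, \hat{p}_j] = i\hbar\,\delta_{ij}.
\]
```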
Abstract:
The development in recent years of different media formats has boosted the consumption of narratives, generating a ‘narrative hunger’. Audiences have become increasingly eager to absorb both new and old narratives, and ‘adaptation’ has become a key operational concept for describing the processes involved in the transformation of texts. Our discussion will thus be centered on a few theoretical propositions on adaptation and appropriation in various textual architectures. Although relevant to the debate, canonical literary texts will not be the primary focus. Non-canonical texts will be used to revisit concepts such as narrativization, intertextuality and transmediality, and also to elaborate some ideas on interactivity and multimedia crossover.
Abstract:
Studies investigating the relationship between literature and film have been largely oriented by a vector of analysis that always departs from literary texts towards films. Moreover, the overwhelming majority of criticism by renowned theorists such as Robert Stam and Brian McFarlane approaches almost exclusively texts considered canonical. This reveals an overemphasis on the notion that the “primordial” text in a study of adaptation should be the literary one. This essay discusses some of those concepts, challenging the “binary” models in adaptation studies and showing how the vectors of analysis can be usefully reversed, for example by starting from films and moving towards literature and other textual architectures. This approach, shared by theorists such as Linda Hutcheon (2006) and Thomas Leitch (2007), rejects the old notions that guided comparisons between literary and filmic texts, such as fidelity and equivalence, replacing them with intertextuality and transmedia storytelling.