419 results for VERTICES
Abstract:
We analyse the hVV (V = W, Z) vertex in a model-independent way using Vh production. To that end, we consider possible corrections to the Standard Model Higgs Lagrangian in the form of higher dimensional operators which parametrise the effects of new physics. In our analysis, we pay special attention to linear observables that can be used to probe CP violation in this vertex. By considering the associated production of a Higgs boson with a vector boson (W or Z), we use jet substructure methods to define angular observables which are sensitive to new physics effects, including an asymmetry which is linearly sensitive to the presence of CP-odd effects. We demonstrate how to use these observables to place bounds on the presence of higher dimensional operators, and quantify these statements using a log likelihood analysis. Our approach allows one to probe separately the hZZ and hWW vertices, involving arbitrary combinations of BSM operators, at the Large Hadron Collider.
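As a hypothetical illustration of an asymmetry that is linearly sensitive to a CP-odd effect, the sketch below builds a counting asymmetry over a signed angular observable and a binomial log-likelihood comparison against the symmetric hypothesis. The function names and the uniform toy sample are invented here; this is not the paper's analysis.

```python
import math
import random

def asymmetry(values):
    """Counting asymmetry A = (N+ - N-) / (N+ + N-) for a signed observable.

    A CP-odd contribution induces A != 0; for a CP-even (SM-like)
    distribution the observable is symmetric and A -> 0.
    """
    n_plus = sum(1 for v in values if v > 0)
    n_minus = sum(1 for v in values if v < 0)
    return (n_plus - n_minus) / (n_plus + n_minus)

def delta_log_likelihood(n_plus, n_minus, a_hypothesis):
    """Binomial log-likelihood ratio between a hypothesised asymmetry and A = 0.

    Hypothetical model: the probability of a '+' event is (1 + a) / 2.
    """
    n = n_plus + n_minus
    p = (1.0 + a_hypothesis) / 2.0
    ll_hyp = n_plus * math.log(p) + n_minus * math.log(1.0 - p)
    ll_null = n * math.log(0.5)
    return ll_hyp - ll_null

random.seed(1)
# Toy sample: a symmetric (SM-like) angular observable.
sample = [random.uniform(-1.0, 1.0) for _ in range(10000)]
a_hat = asymmetry(sample)
```

In a real analysis the sign would come from an angular observable built from the Vh decay products; here the likelihood comparison only shows the mechanics of bounding a nonzero asymmetry.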
Abstract:
We formulate a natural model of loops and isolated vertices for arbitrary planar graphs, which we call the monopole-dimer model. We show that the partition function of this model can be expressed as a determinant. We then extend the method of Kasteleyn and Temperley-Fisher to calculate the partition function exactly in the case of rectangular grids. This partition function turns out to be a square of a polynomial with positive integer coefficients when the grid lengths are even. Finally, we analyse this formula in the infinite volume limit and show that the local monopole density, free energy and entropy can be expressed in terms of well-known elliptic functions. The key technical ingredient is a novel determinantal formula for the partition function of a model of isolated vertices and loops on arbitrary graphs.
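The Kasteleyn-Temperley-Fisher method that this abstract extends can be illustrated in the classical pure-dimer case (no monopoles): the number of dimer coverings of an m x n grid has a closed-form product formula. The sketch below, checked against a brute-force broken-profile count, shows that classical case only, not the monopole-dimer partition function.

```python
import math
from functools import lru_cache

def kasteleyn_dimer_count(m, n):
    """Classical Kasteleyn-Temperley-Fisher count of dimer coverings
    (perfect matchings) of an m x n rectangular grid."""
    prod = 1.0
    for j in range(1, m + 1):
        for k in range(1, n + 1):
            prod *= (4 * math.cos(math.pi * j / (m + 1)) ** 2
                     + 4 * math.cos(math.pi * k / (n + 1)) ** 2)
    return round(prod ** 0.25)

def brute_force_dimer_count(m, n):
    """Broken-profile DP check: place dimers cell by cell in row-major order."""
    @lru_cache(maxsize=None)
    def fill(pos, mask):
        # mask: for the next n cells, bits marking coverage by a vertical
        # dimer placed from the row above (bit 0 = current cell)
        if pos == m * n:
            return 1 if mask == 0 else 0
        r, c = divmod(pos, n)
        if mask & 1:                       # current cell already covered
            return fill(pos + 1, mask >> 1)
        total = 0
        if c + 1 < n and not (mask & 2):   # horizontal dimer
            total += fill(pos + 2, mask >> 2)
        if r + 1 < m:                      # vertical dimer covers cell below
            total += fill(pos + 1, (mask >> 1) | (1 << (n - 1)))
        return total
    return fill(0, 0)
```

When m and n are both odd the product contains a vanishing factor, correctly giving zero coverings.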
Abstract:
A triangulation of a closed 2-manifold is tight with respect to a field of characteristic two if and only if it is neighbourly; and it is tight with respect to a field of odd characteristic if and only if it is neighbourly and orientable. No such characterization of tightness was previously known for higher dimensional manifolds. In this paper, we prove that a triangulation of a closed 3-manifold is tight with respect to a field of odd characteristic if and only if it is neighbourly, orientable and stacked. In consequence, the Kühnel-Lutz conjecture is valid in dimension three for fields of odd characteristic. Next let F be a field of characteristic two. It is known that, in this case, any neighbourly and stacked triangulation of a closed 3-manifold is F-tight. For closed, triangulated 3-manifolds with at most 71 vertices or with first Betti number at most 188, we show that the converse is true. But the possibility of the existence of an F-tight, non-stacked triangulation on a larger number of vertices remains open. We prove the following upper bound theorem on such triangulations. If an F-tight triangulation of a closed 3-manifold has n vertices and first Betti number β₁, then (n − 4)(617n − 3861) ≤ 15444 β₁. Equality holds here if and only if all the vertex links of the triangulation are connected sums of boundary complexes of icosahedra. (C) 2015 Elsevier Ltd. All rights reserved.
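The quoted inequality can be inverted with the quadratic formula to bound the number of vertices for a given first Betti number; a small sketch (the helper name is ours):

```python
import math

def max_vertices(beta1):
    """Largest n with (n - 4)(617 n - 3861) <= 15444 * beta1,
    i.e. 617 n^2 - 6329 n + 15444 (1 - beta1) <= 0."""
    disc = 6329 ** 2 - 4 * 617 * 15444 * (1 - beta1)
    n = int((6329 + math.sqrt(disc)) / (2 * 617))
    # guard against floating-point rounding at the boundary
    while (n + 1 - 4) * (617 * (n + 1) - 3861) <= 15444 * beta1:
        n += 1
    while (n - 4) * (617 * n - 3861) > 15444 * beta1:
        n -= 1
    return n
```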
Abstract:
This paper proposes a new method for local key and chord estimation from audio signals. The method relies primarily on principles from music theory and does not require any training on a corpus of labelled audio files. The harmonic content of the musical piece is first extracted by computing a set of chroma vectors. A set of chord/key pairs is selected for every frame by correlation with fixed chord and key templates. An acyclic harmonic graph is constructed with these pairs as vertices, using a musical distance to weight its edges. Finally, the sequences of chords and keys are obtained by finding the best path in the graph using dynamic programming. The proposed method allows mutual chord and key estimation. It is evaluated on a corpus composed of Beatles songs for both the local key estimation and chord recognition tasks, as well as on a larger corpus composed of songs taken from the Billboard dataset.
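The best-path step can be sketched as a standard Viterbi-style dynamic program over the frame-by-frame graph; the `scores` and `transition_cost` interface below are hypothetical stand-ins for the paper's template correlations and musical distance.

```python
def best_path(scores, transition_cost):
    """Best sequence of chord/key candidates across frames.

    scores[t][i]          : score of candidate i at frame t (higher is better)
    transition_cost(i, j) : distance between candidates i and j (lower is better)
    Returns the candidate index chosen for each frame.
    """
    n_frames = len(scores)
    n_cand = len(scores[0])
    best = [scores[0][i] for i in range(n_cand)]
    back = [[0] * n_cand for _ in range(n_frames)]
    for t in range(1, n_frames):
        new_best = []
        for j in range(n_cand):
            prev = max(range(n_cand),
                       key=lambda i: best[i] - transition_cost(i, j))
            back[t][j] = prev
            new_best.append(best[prev] - transition_cost(prev, j) + scores[t][j])
        best = new_best
    # backtrack from the best final candidate
    j = max(range(n_cand), key=lambda i: best[i])
    path = [j]
    for t in range(n_frames - 1, 0, -1):
        j = back[t][j]
        path.append(j)
    return path[::-1]
```

A large musical distance makes the path smooth, so a single noisy frame does not flip the estimated chord/key.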
Abstract:
We have successfully extended our implicit hybrid finite element/volume (FE/FV) solver to flows involving two immiscible fluids. The solver is based on the segregated pressure correction or projection method on staggered unstructured hybrid meshes. An intermediate velocity field is first obtained by solving the momentum equations with the matrix-free implicit cell-centered FV method. The pressure Poisson equation is solved by the node-based Galerkin FE method for an auxiliary variable. The auxiliary variable is used to update the velocity field and the pressure field. The pressure field is carefully updated by taking into account the velocity divergence field. This updating strategy can be rigorously proven to be able to eliminate the unphysical pressure boundary layer and is crucial for the correct temporal convergence rate. Our current staggered-mesh scheme is distinct from other conventional ones in that we store the velocity components at cell centers and the auxiliary variable at vertices. The fluid interface is captured by solving an advection equation for the volume fraction of one of the fluids. The same matrix-free FV method, as the one used for momentum equations, is used to solve the advection equation. We will focus on the interface sharpening strategy to minimize the smearing of the interface over time. We have developed and implemented a global mass conservation algorithm that enforces the conservation of the mass for each fluid.
Abstract:
The model dependence inherent in hadronic calculations is one of the dominant sources of uncertainty in the theoretical prediction of the anomalous magnetic moment of the muon. In this thesis, we focus on the charged pion contribution and turn a critical eye on the models employed in the few previous calculations of $a_\mu^{\pi^+\pi^-}$. Chiral perturbation theory provides a check on these models at low energies, and we therefore calculate the charged pion contribution to light-by-light (LBL) scattering to $\mathcal{O}(p^6)$. We show that the dominant corrections to the leading order (LO) result come from two low energy constants which show up in the form factors for the $\gamma\pi\pi$ and $\gamma\gamma\pi\pi$ vertices. Comparison with the existing models reveals a potentially significant omission: none include the pion polarizability corrections associated with the $\gamma\gamma\pi\pi$ vertex. We next consider alternative models where the pion polarizability is produced through exchange of the $a_1$ axial vector meson. These have poor UV behavior, however, making them unsuited for the $a_\mu^{\pi^+\pi^-}$ calculation. We turn to a simpler form factor modeling approach, generating two distinct models which reproduce the pion polarizability corrections at low energies, have the correct QCD scaling at high energies, and generate finite contributions to $a_\mu^{\pi^+\pi^-}$. With these two models, we calculate the charged pion contribution to the anomalous magnetic moment of the muon, finding values larger than those previously reported: $a_\mu^\mathrm{I} = -1.779(4)\times10^{-10}\,,\,a_\mu^\mathrm{II} = -4.892(3)\times10^{-10}$.
Abstract:
Computing the area of geodesic polygons is an intriguing mathematical challenge. How can one calculate the area of a polygon on the ellipsoid if its sides have no known parametrisation? Several works have addressed this problem, most of them employing equivalent (equal-area) projective systems or approximations on authalic spheres. Such methods replace the ellipsoidal reference surface with surfaces that are easier to treat mathematically, but their use is limited, since no single such surface could serve the whole planet without compromising the computations performed on it. The Brazilian Code of Civil Procedure (Código de Processo Civil), Book IV, Title I, Chapter VIII, Section III, article 971, states in its sole paragraph that, in the absence of objection, the judge shall order the geodesic division of the property. In addition, Law 10.267/2001 requires, as a condition of registration, that the vertices defining the boundaries of rural properties have their coordinates georeferenced to the Brazilian Geodetic System (SGB), with properties smaller than four fiscal modules guaranteed exemption from the financial costs. This work aims to provide a methodology for computing the areas of geodesic, or loxodromic, polygons directly on the ellipsoid, as well as a program that executes the routines developed in this dissertation. Since most geodetic surveys are carried out with GPS receivers, data input is based on (X, Y, Z) coordinates in the WGS-84 geodetic system, yielding the geodesic area without the need for a GIS-type product. To achieve this goal, a parametrisation different from the classical approach of geometric geodesy was developed to transform the (X, Y, Z) coordinates into geodetic coordinates.
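The transformation of Cartesian (X, Y, Z) coordinates into geodetic ones on WGS-84 can be sketched with the classical fixed-point iteration of geometric geodesy (not the dissertation's own parametrisation):

```python
import math

# WGS-84 ellipsoid parameters
A = 6378137.0                 # semi-major axis (m)
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_cartesian(lat, lon, h):
    """Geodetic (radians, metres) -> Cartesian (X, Y, Z)."""
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def cartesian_to_geodetic(x, y, z, iterations=10):
    """Cartesian (X, Y, Z) -> geodetic, by the classical fixed-point iteration."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - E2))                # first guess
    for _ in range(iterations):
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - E2 * n / (n + h)))
    return lat, lon, h
```

The iteration converges very quickly at mid-latitudes; near the poles specialised closed-form methods are preferred.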
Abstract:
In Part I, the common belief that fermions lying on linear trajectories must have opposite-parity partners is shown to be false. Reggeization of a sequence of positive-parity fermion resonances is carried out in the Van Hove model. As a consequence of the absence of negative-parity states, the partial-wave amplitudes must have a fixed cut in the J plane. This fixed cut, in conjunction with the moving Regge pole, provides a new parametrization for fermion-exchange reactions, which is in qualitative agreement with the data.
In Part II, the spin structure of three particle vertices is determined from the quark model. Using these SU(6)W vertices in the Van Hove model, we derive a Reggeized scattering amplitude. In addition to Regge poles there are necessarily fixed Regge cuts in both fermion and boson exchange amplitudes. These fixed cuts are similar to those found in Part I, and may be viewed as a consequence of the absence of parity doubled quarks. The magnitudes of the pole and cut terms in an entire class of SU(6) related reactions are determined by their magnitudes in a single reaction. As an example we explain the observed presence or absence of wrong-signature nonsense dips in a class of reactions involving vector meson exchange.
Abstract:
Many representations of physical objects or systems require dimensionality-reduction techniques that enable the data to be analysed in low dimensions, capturing the essential parameters of the problem. In machine learning, this reduction serves primarily clustering, recognition and signal reconstruction. This thesis offers a meticulous analysis of these topics and their connections, which are the subject of intense activity in the literature, with diffusion maps as its main focus. The method is built from a graph whose vertices are the signals (the data of the problem) and whose edge weights are given by the Gaussian kernel of the heat equation. In addition, a Markov process is established, which allows the problem to be viewed at different scales as a parameter t varies. Another scale parameter, ε, for the Gaussian kernel is examined carefully and related to the Markov dynamics, so as to learn the manifold that may support the data. The thesis proposes the recognition of digital images subject to rotations and changes of illumination. The signal-reconstruction problem is also attacked with a pre-image proposal based on optimising a cost function with a regularisation parameter γ, which also takes the initial data set into account.
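A minimal diffusion-map construction along these lines (Gaussian heat kernel, row-stochastic Markov matrix, spectral embedding scaled by λ^t) might look as follows; the parameter names mirror the ε and t of the abstract, while everything else is a generic sketch:

```python
import numpy as np

def diffusion_map(points, eps, t=1, n_components=2):
    """Basic diffusion map: Gaussian kernel -> Markov matrix -> embedding.

    points : (n, d) array of signals; eps : kernel scale; t : diffusion time.
    """
    # pairwise squared distances and the Gaussian kernel of the heat equation
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-d2 / eps)
    # row-normalise to obtain a Markov transition matrix
    p = kernel / kernel.sum(axis=1, keepdims=True)
    evals, evecs = np.linalg.eig(p)
    order = np.argsort(-evals.real)
    evals = evals.real[order]
    evecs = evecs.real[:, order]
    # skip the trivial eigenvalue 1; scale by lambda^t for the time parameter
    return evecs[:, 1:n_components + 1] * (evals[1:n_components + 1] ** t)
```

Increasing t damps the directions with smaller eigenvalues, which is the multiscale view mentioned in the abstract.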
Abstract:
In this work, the form factors and coupling constants of the mesonic vertices J/ψ DsDs, J/ψ Ds*Ds and J/ψ Ds*Ds* were calculated using the QCD sum rules (QCDSR) technique up to order 5 of the OPE. These three vertices enter some of the numerous hypotheses that attempt to explain the internal structure of certain exotic charmed mesons that began to be observed in 2003. Such mesons do not fit into the charmonium spectrum and/or exhibit exotic quantum numbers within the constituent quark model (CQM). One example is the Y(4140) meson, whose observed decay is into the pair J/ψφ, whereas, given its mass, it would be expected to decay predominantly into open-charm mesons. One proposal for understanding this meson is to study it as a Ds*D̄s* molecular state, so that its decay would proceed as Y(4140) → Ds*D̄s* → J/ψφ. The interaction vertices studied in this work appear in this process, so more precise knowledge of their form factors and coupling constants can improve the understanding of the fundamental constitution of the Y(4140), as well as of other new states such as the X(4350), Y(4274) and Y(4660). All possible off-shell cases were considered for each of the three vertices, yielding two distinct form factors for the J/ψ DsDs vertex, three for the J/ψ Ds*Ds vertex and two for the J/ψ Ds*Ds* vertex. In all three vertices, the form factors for the J/ψ off-shell case were well fitted by monopole curves, while the Ds and Ds* cases were fitted by exponential curves, in agreement with the behaviour found in previous works by the group.
The calculations of the coupling constants yielded: g_{J/ψ DsDs} = 5.98^{+0.67}_{-0.58}, g_{J/ψ Ds*Ds} = 4.30^{+0.41}_{-0.35} GeV^{-1} and g_{J/ψ Ds*Ds*} = 7.47^{+1.04}_{-0.71}, results compatible with previous works that used the QCDSR to calculate the coupling constants of the J/ψ D(*)D(*) vertices.
Abstract:
In this work we calculate the form factors and coupling constants for the B*sBK, B*BsK and BsBK* vertices using Quantum Chromodynamics (QCD) Sum Rules. Non-perturbative diagrams are also included. We use the technique of taking two different mesons off the mass shell to obtain two distinct form factors, in order to reduce the uncertainties. The uncertainty calculations are included in this work.
Abstract:
An infinite series of twofold, two-way weavings of the cube, corresponding to 'wrappings', or double covers of the cube, is described with the aid of the two-parameter Goldberg-Coxeter construction. The strands of all such wrappings correspond to the central circuits (CCs) of octahedrites (four-regular polyhedral graphs with square and triangular faces), which for the cube necessarily have octahedral symmetry. Removing the symmetry constraint leads to wrappings of other eight-vertex convex polyhedra. Moreover, wrappings of convex polyhedra with fewer vertices can be generated by generalizing from octahedrites to i-hedrites, which additionally include digonal faces. When the strands of a wrapping correspond to the CCs of a four-regular graph that includes faces of size greater than 4, non-convex 'crinkled' wrappings are generated. The various generalizations have implications for activities as diverse as the construction of woven-closed baskets and the manufacture of advanced composite components of complex geometry. © 2012 The Royal Society.
Abstract:
When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique. This is a subset of vertices that 1) is almost fully or at least densely connected, 2) occurs in all or almost all graph instances, and 3) has maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as: a) a max-min two-person game optimization problem, or b) a min-min soft margin optimization problem. Both formulations lead to the same solution when using a partial Lagrangian method to solve the optimization problems. By experiments on synthetic data and on real social network data we show that the proposed method is able to reliably find soft cliques in graph data, even if it is distorted by random noise or unreliable observations. Copyright 2012 by the author(s)/owner(s).
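The clique-ness measure described, counting the edges missing to make a vertex subset a clique, is easy to state in code. The aggregation over multiple graph instances below is a toy stand-in for intuition only, not the paper's max-min / min-min formulation.

```python
from itertools import combinations

def missing_edges(edges, subset):
    """Clique-ness in the spirit of the paper: the number of edges missing
    to make `subset` a clique (0 means it already is one)."""
    edge_set = {frozenset(e) for e in edges}
    return sum(1 for pair in combinations(subset, 2)
               if frozenset(pair) not in edge_set)

def soft_clique_score(graphs, subset, penalty=1.0):
    """Toy aggregate over several graph instances: higher is better;
    each missing edge in each instance costs `penalty`."""
    return -penalty * sum(missing_edges(g, subset) for g in graphs)
```

A subset that is a clique in every instance scores 0; each inconsistency between instances lowers the score, which is the persistence idea in miniature.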
Abstract:
Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the 'Chess-board Extraction by Subtraction and Summation' (ChESS) feature detector, designed to respond exclusively to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration and in structured light 3D reconstruction. Evidence is presented showing its robustness, accuracy and efficiency in comparison to other commonly used detectors, both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.
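A much-simplified ring-sampling response in the spirit of subtraction and summation (not the published ChESS operator) can be sketched as follows; the radius, sample count and exact combination are assumptions made here for illustration:

```python
import numpy as np

def chess_like_response(img, radius=4, n_samples=16):
    """Simplified chessboard-vertex response by ring sampling.

    At a chessboard vertex, samples a quarter turn apart lie on opposite
    colours (large difference), while diametrically opposite samples lie on
    the same colour (small difference). The response rewards the former and
    penalises the latter, so edges and flat regions score low.
    """
    h, w = img.shape
    angles = 2 * np.pi * np.arange(n_samples) / n_samples
    dy = np.round(radius * np.sin(angles)).astype(int)
    dx = np.round(radius * np.cos(angles)).astype(int)
    resp = np.zeros_like(img, dtype=float)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            ring = img[y + dy, x + dx].astype(float)
            quarter = np.abs(ring - np.roll(ring, n_samples // 4)).sum()
            opposite = np.abs(ring - np.roll(ring, n_samples // 2)).sum()
            resp[y, x] = quarter - opposite
    return resp
```

On a synthetic two-by-two checker pattern the response peaks in a small plateau around the single vertex and is negative along plain edges, which is the qualitative behaviour a chessboard-vertex detector needs.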