444 results for antipodal vertices
Abstract:
Several centrality measures have been introduced and studied for real-world networks. They capture different vertex characteristics that allow the vertices to be ranked in order of importance in the network. Betweenness centrality measures the influence of a vertex over the flow of information between every pair of vertices, under the assumption that information flows primarily along the shortest paths between them. In this paper we present the betweenness centrality of some important classes of graphs.
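As standard background (not taken from the paper), the betweenness centrality of a vertex v is C_B(v) = sum over s != v != t of sigma_st(v) / sigma_st, where sigma_st is the number of shortest s-t paths and sigma_st(v) the number of those passing through v. A minimal sketch in Python, using networkx, on two simple graph classes:

    # Standard betweenness centrality on two simple graph classes
    # (generic background illustration; not code from the paper).
    import networkx as nx

    # Cycle C6: vertex-transitive, so all vertices get the same score.
    C6 = nx.cycle_graph(6)
    print(nx.betweenness_centrality(C6, normalized=False))

    # Star K_{1,5}: the hub lies on the shortest path of every pair of
    # leaves (C(5,2) = 10 pairs), while the leaves mediate no pair.
    K15 = nx.star_graph(5)
    print(nx.betweenness_centrality(K15, normalized=False))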
Abstract:
In this article we explicitly determine all the 2-dimensional weak pseudomanifolds on 7 vertices. We prove that there are (up to isomorphism) exactly 13 such weak pseudomanifolds. Their geometric carriers are 6 topological spaces, three of which are not manifolds.
Abstract:
Small covers were introduced by Davis and Januszkiewicz in 1991. We introduce the notion of equilibrium triangulations for small covers. We study equilibrium and vertex-minimal Z_2^2-equivariant triangulations of 2-dimensional small covers. We discuss vertex-minimal equilibrium triangulations of RP^3 # RP^3, S^1 x RP^2 and a nontrivial S^1-bundle over RP^2. We construct some nice equilibrium triangulations of the real projective space RP^n with 2^n + n + 1 vertices. The main tool is the theory of small covers.
Abstract:
We generalize the Faddeev-Jackiw canonical path-integral quantization from the case of a unit Jacobian (J = 1) to the general case of a non-unit Jacobian, give the representation of the quantum transition amplitude in terms of symplectic variables, and obtain the generating functionals of the Green function and of the connected Green function. We deduce a unified expression for the symplectic field variable functions in terms of the Green function, or the connected Green function, with external sources. Furthermore, we obtain the generating functionals of the general proper vertices for any n-point case, both with and without Grassmann variables; they are regular and take their simplest forms relative to the usual field theory.
Abstract:
The problem of constructing consistent parity-violating interactions for spin-3 gauge fields is considered in Minkowski space. Under the assumptions of locality, Poincaré invariance, and parity noninvariance, we classify all the nontrivial perturbative deformations of the Abelian gauge algebra. In space-time dimensions n = 3 and n = 5, deformations of the free theory are obtained which make the gauge algebra non-Abelian and give rise to nontrivial cubic vertices in the Lagrangian, at first order in the deformation parameter g. At second order in g, consistency conditions are obtained which the five-dimensional vertex obeys but which rule out the n = 3 candidate. Moreover, in the five-dimensional first-order deformation, the gauge transformations are modified by a new term which involves the second de Wit-Freedman connection in a simple and suggestive way.
Abstract:
In evaluating an interconnection network, it is indispensable to estimate the size of the maximal connected components of the underlying graph when the network begins to lose processors. The hypercube is one of the most popular interconnection networks. This article addresses the maximal connected components of an n-dimensional cube with faulty processors. We first prove that an n-cube with a set F of at most 2n - 3 failing processors has a component of size at least 2^n - |F| - 1. We then prove that an n-cube with a set F of at most 3n - 6 missing processors has a component of size at least 2^n - |F| - 2.
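The first bound can be checked empirically on a small cube. The sketch below (my own illustration, not the paper's proof) faults the n neighbours of a single vertex of the n-cube, which isolates that vertex and leaves the remaining 2^n - |F| - 1 vertices in one component:

    # Empirical check of the first bound on a small cube (illustration
    # only). Faulting the n neighbours of one vertex isolates it; all
    # remaining vertices stay connected, matching 2^n - |F| - 1.
    import networkx as nx

    n = 4
    G = nx.hypercube_graph(n)        # vertices are n-bit tuples
    v = (0,) * n
    F = set(G.neighbors(v))          # |F| = n <= 2n - 3 for n >= 3
    H = G.subgraph(set(G) - F)
    sizes = sorted((len(c) for c in nx.connected_components(H)), reverse=True)
    print(sizes)                     # [11, 1] for n = 4
    assert sizes[0] >= 2**n - len(F) - 1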
Abstract:
In evaluating the fault tolerance of an interconnection network, it is essential to estimate the size of a maximal connected component of the network in the presence of faulty processors. The hypercube is one of the most popular interconnection networks. In this paper, we prove that for n >= 6, an n-dimensional cube with a set F of at most 4n - 10 failing processors has a component of size at least 2^n - |F| - 3. This result demonstrates the superiority of the hypercube in terms of fault tolerance.
Abstract:
The hypercube is one of the most popular topologies for connecting processors in multicomputer systems. In this paper we address the maximum order of a connected component in a faulty cube. The results established include several known conclusions as special cases. We conclude that the hypercube structure is resilient, as it retains a large connected component in the presence of a large number of faulty vertices.
Abstract:
In order to make a full evaluation of an interconnection network, it is essential to estimate the minimum size of a largest connected component of the network when faulty vertices may break its connectedness. Star graphs are recognized as promising candidates for interconnection networks. This article addresses the size of a largest connected component of a faulty star graph. We prove that, in an n-star graph (n >= 3) with up to 2n - 4 faulty vertices, all fault-free vertices but at most two form a connected component. Moreover, all fault-free vertices but exactly two form a connected component if and only if the set of all faulty vertices is equal to the neighbourhood of a pair of fault-free adjacent vertices. These results show that star graphs exhibit excellent fault tolerance, in the sense that a large functional network survives in a faulty star graph.
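The extremal configuration in the "if and only if" statement is easy to reproduce on a small star graph. A sketch (my own construction of the standard n-star graph, not code from the article): vertices are the permutations of 1..n, two permutations being adjacent when they differ by swapping the first symbol with the i-th one.

    # Reproduce the extremal fault set of the theorem on the 4-star
    # graph (illustration only). Faulting the joint neighbourhood of an
    # adjacent pair (2n - 4 vertices) cuts off exactly that pair.
    from itertools import permutations
    import networkx as nx

    def star_graph(n):
        G = nx.Graph()
        for p in permutations(range(1, n + 1)):
            for i in range(1, n):      # swap position 0 with position i
                q = list(p)
                q[0], q[i] = q[i], q[0]
                G.add_edge(p, tuple(q))
        return G

    G = star_graph(4)                  # 4! = 24 vertices, each of degree 3
    u = (1, 2, 3, 4)
    v = next(iter(G.neighbors(u)))     # u and v are fault-free and adjacent
    F = (set(G.neighbors(u)) | set(G.neighbors(v))) - {u, v}
    H = G.subgraph(set(G) - F)
    sizes = sorted((len(c) for c in nx.connected_components(H)), reverse=True)
    print(len(F), sizes)               # 4 faults (= 2n - 4), components [18, 2]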
Abstract:
Supersymmetric extensions of the standard model exhibiting bilinear R-parity violation can naturally generate the observed neutrino mass spectrum as well as the mixings. One interesting feature of these scenarios is that the lightest supersymmetric particle (LSP) is unstable, with several of its decay properties predicted in terms of neutrino mixing angles. A smoking gun of this model at colliders is the presence of displaced vertices due to LSP decays in large parts of the parameter space. In this work we focus on the simplest model of this type, which comes from minimal supergravity with universal R-parity-conserving soft breaking of supersymmetry, augmented with bilinear R-parity-breaking terms at the electroweak scale (RmSUGRA). We evaluate the potential of the Fermilab Tevatron to probe the RmSUGRA parameters through the analysis of events possessing two displaced vertices stemming from LSP decays. We show that requiring two displaced vertices in the events leads to a reach in m_{1/2} twice that of the usual multilepton signals over a large fraction of the parameter space.
Abstract:
Supersymmetric theories with bilinear R-parity violation can give rise to the observed neutrino masses and mixings. One important feature of such models is that the lightest supersymmetric particle might have a lifetime long enough to produce detached vertices. Working in the framework of supergravity models, we analyze the potential of the LHCb experiment to search for supersymmetric models exhibiting bilinear R-parity violation. We show that the LHCb experiment can probe a large fraction of the m_0 (x) m_{1/2} plane, being able to explore gluino masses up to 1.3 TeV. The LHCb discovery potential for these kinds of models is similar to that of ATLAS and CMS in the low-luminosity phase of operation of the LHC.
Abstract:
One often hears the strange short sentence: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, and all of them depend on the topic considered. In this thesis I discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages are obtained by shaping the second-order statistics of antipodal sequences used in an intermediate signal-processing stage.

The first topic is in the area of analog-to-digital conversion and is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression, and consequently allows one to acquire a signal directly in a compressed form. In this thesis, after an ample description of the CS methodology and its related architectures, I present a new approach that tries to achieve high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage.

The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option for producing and decoding UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer. To do so, I minimize the joint effect of both multiple-access interference (MAI) and narrowband interference (NBI) on a simple matched-filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences that minimize MAI only.
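As a toy illustration of the first topic (a generic CS sketch under my own assumptions, not the acquisition architecture designed in the thesis), the code below senses a sparse signal with random antipodal (+/-1) waveforms and recovers it with a standard greedy decoder:

    # Toy compressive sensing with random antipodal (+/-1) sensing
    # waveforms (generic illustration; the thesis shapes their
    # second-order statistics, which this i.i.d. sketch does not).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 128, 48, 5                     # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # antipodal rows
    y = A @ x                                # acquisition and compression at once

    def omp(A, y, k):
        """Orthogonal matching pursuit: a standard greedy CS decoder."""
        support, r = [], y.copy()
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            r = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    print(np.linalg.norm(x - omp(A, y, k)))  # near zero with high probability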