15 results for VERTICES
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
In evaluating an interconnection network, it is indispensable to estimate the size of the maximal connected components of the underlying graph when the network begins to lose processors. The hypercube is one of the most popular interconnection networks. This article addresses the maximal connected components of an n-dimensional cube with faulty processors. We first prove that an n-cube with a set F of at most 2n - 3 failing processors has a component of size at least 2^n - |F| - 1. We then prove that an n-cube with a set F of at most 3n - 6 missing processors has a component of size at least 2^n - |F| - 2.
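The two bounds above are easy to sanity-check on very small cubes. The sketch below is an illustration of ours, not the authors' proof technique: it labels the vertices of Q_n by n-bit integers, enumerates every fault set F with |F| <= 2n - 3, and confirms that the largest surviving component has at least 2^n - |F| - 1 vertices. Changing the two constants gives the same kind of check for the 3n - 6 case, and in principle for the 4n - 10 bound of the next abstract, although exhaustive enumeration is only feasible for very small n.

```python
# Brute-force sanity check (illustrative only): for small n, every fault set F
# with |F| <= 2n - 3 should leave a component of size >= 2^n - |F| - 1.
from itertools import combinations

def hypercube_neighbors(v, n):
    """Neighbors of vertex v (an n-bit label) in the n-cube Q_n."""
    return [v ^ (1 << i) for i in range(n)]

def largest_component(n, faulty):
    """Size of the largest connected component of Q_n minus the faulty vertices."""
    alive = set(range(2 ** n)) - faulty
    best = 0
    while alive:
        stack = [alive.pop()]
        size = 0
        while stack:
            v = stack.pop()
            size += 1
            for w in hypercube_neighbors(v, n):
                if w in alive:
                    alive.remove(w)
                    stack.append(w)
        best = max(best, size)
    return best

def check_bound(n):
    """Verify: |F| <= 2n - 3 implies a largest component of size >= 2^n - |F| - 1."""
    for k in range(2 * n - 2):            # |F| = 0 .. 2n - 3
        for F in combinations(range(2 ** n), k):
            assert largest_component(n, set(F)) >= 2 ** n - k - 1
    return True

if __name__ == "__main__":
    print(check_bound(3))    # exhaustive for Q_3; Q_4 is already slow
```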
Abstract:
In evaluating the fault tolerance of an interconnection network, it is essential to estimate the size of a maximal connected component of the network in the presence of faulty processors. The hypercube is one of the most popular interconnection networks. In this paper, we prove that for n >= 6, an n-dimensional cube with a set F of at most 4n - 10 failing processors has a component of size at least 2^n - |F| - 3. This result demonstrates the superiority of the hypercube in terms of fault tolerance.
Abstract:
The hypercube is one of the most popular topologies for connecting processors in multicomputer systems. In this paper we address the maximum order of a connected component in a faulty cube. The results established include several known conclusions as special cases. We conclude that the hypercube structure is resilient, as it retains a large connected component in the presence of a large number of faulty vertices.
Abstract:
In order to make a full evaluation of an interconnection network, it is essential to estimate the minimum size of a largest connected component of the network when faulty vertices may break its connectedness. Star graphs are recognized as promising candidates for interconnection networks. This article addresses the size of a largest connected component of a faulty star graph. We prove that, in an n-star graph (n >= 3) with up to 2n - 4 faulty vertices, all fault-free vertices but at most two form a connected component. Moreover, all fault-free vertices but exactly two form a connected component if and only if the set of all faulty vertices is equal to the neighbourhood of a pair of fault-free adjacent vertices. These results show that star graphs exhibit excellent fault-tolerant ability in the sense that there exists a large functional network in a faulty star graph.
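For small n the analogous exhaustive check is also feasible for the star graph. In the sketch below (our illustration; the names and the brute-force approach are not from the article) S_n is built on the permutations of {1, ..., n}, with an edge whenever two permutations differ by swapping the first symbol with the i-th, and every fault set of size at most 2n - 4 in S_4 is verified to leave at most two fault-free vertices outside the largest component.

```python
# Illustrative check of the star-graph result on S_4: with |F| <= 2n - 4 = 4,
# all fault-free vertices except at most two lie in one connected component.
from itertools import permutations, combinations

def star_graph(n):
    """Adjacency lists of the n-star S_n: vertices are permutations of 1..n,
    and u ~ v iff v is u with the first and i-th symbols swapped (2 <= i <= n)."""
    verts = list(permutations(range(1, n + 1)))
    adj = {v: [] for v in verts}
    for v in verts:
        for i in range(1, n):
            w = list(v)
            w[0], w[i] = w[i], w[0]
            adj[v].append(tuple(w))
    return adj

def largest_component_size(adj, faulty):
    """Size of the largest component after deleting the faulty vertices."""
    alive = set(adj) - faulty
    best = 0
    while alive:
        stack = [alive.pop()]
        size = 0
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w in alive:
                    alive.remove(w)
                    stack.append(w)
        best = max(best, size)
    return best

if __name__ == "__main__":
    n = 4
    adj = star_graph(n)
    verts = list(adj)
    for k in range(2 * n - 3):                      # |F| = 0 .. 2n - 4
        for F in combinations(verts, k):
            fault_free = len(verts) - k
            assert largest_component_size(adj, set(F)) >= fault_free - 2
    print("bound verified for S_4")
```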
Abstract:
The title compound, potassium nickel(II) digallium tris(phosphate) dihydrate, K[NiGa2(PO4)3(H2O)2], was synthesized hydrothermally. The structure is constructed from distorted trans-NiO4(H2O)2 octahedra linked through vertices and edges to GaO5 trigonal bipyramids and PO4 tetrahedra, forming a three-dimensional framework of formula [NiGa2(PO4)3(H2O)2]-. The K, Ni and one P atom lie on special positions (Wyckoff position 4e, site symmetry 2). There are two sets of channels within the framework, one running parallel to the [10-1] direction and the other parallel to [001]. These intersect, forming a three-dimensional pore network in which the water molecules coordinated to the Ni atoms and the K+ ions required to charge-balance the framework reside. The K+ ions lie in a highly distorted environment surrounded by ten O atoms, six of which are closer than 3.1 angstroms. The coordinated water molecules are within hydrogen-bonding distance of O atoms of bridging Ga-O-P groups.
Abstract:
The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or the unit sphere in order to perform the numerical integration. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and the sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and the sphere. First we apply a symmetry property to partition the hemisphere and the sphere: the solid angle subtended by the hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square and, like Arvo's algorithm for sampling an arbitrary spherical triangle, they accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first sampling algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle. The second algorithm directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the whole hemisphere and sphere is performed in parallel for all sub-domains simultaneously by using the symmetry of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and quasi-Monte Carlo solution of the rendering equation is discussed.
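The spherical-triangle mappings themselves are derived in the paper and are not reproduced here. As background, the sketch below shows the standard area-preserving map from the unit square to the unit hemisphere, the kind of unit-square transformation on which such sampling algorithms are built; the function name and the example driver are ours.

```python
# Generic illustration: uniform sampling of directions over the unit hemisphere
# from points (u, v) in the unit square (NOT the paper's spherical-triangle maps).
import math
import random

def sample_hemisphere(u, v):
    """Map (u, v) in [0,1)^2 to a unit vector uniformly distributed over the
    hemisphere z >= 0: taking z = u gives uniform area (Archimedes' theorem),
    and phi = 2*pi*v gives the azimuth."""
    z = u
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

if __name__ == "__main__":
    random.seed(0)
    for _ in range(5):
        print(sample_hemisphere(random.random(), random.random()))
```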
Abstract:
This paper is devoted to advanced Monte Carlo methods for realistic image creation. It offers a new stratified approach for solving the rendering equation. We consider the numerical solution of the rendering equation by separation of the integration domain. The hemispherical integration domain is symmetrically separated into 16 parts. The first nine sub-domains are orthogonal spherical triangles of equal size; they are symmetric to each other and grouped with a common vertex around the normal vector to the surface. The hemispherical integration domain is completed with eight further sub-domains, spherical quadrangles of equal size, also symmetric to each other. All sub-domains have fixed vertices and computable parameters. Bijections of the unit square onto an orthogonal spherical triangle and onto a spherical quadrangle are derived and used to generate sampling points. Then the symmetric sampling scheme is applied to generate sampling points distributed over the hemispherical integration domain. The necessary transformations are made and the stratified Monte Carlo estimator is presented. The rate of convergence is obtained, showing that the algorithm is of super-convergent type.
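The paper's 16-part separation and its super-convergent estimator are not reproduced below; the sketch only illustrates the general stratification idea on which it builds: jitter one sample per cell of a grid on the unit square, map each sample to the hemisphere, and average. The grid resolution, the test integrand and all names are placeholder choices of ours.

```python
# Illustrative stratified estimator over the hemisphere (not the paper's 16-part
# separation): the unit square is cut into an m x m grid and one jittered sample
# is drawn per cell, then mapped to a hemisphere direction.
import math
import random

def sample_hemisphere(u, v):
    """Area-preserving map from the unit square to the unit hemisphere z >= 0."""
    z = u
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def crude_estimate(f, n):
    """Plain Monte Carlo mean of f over uniform hemisphere directions."""
    return sum(f(sample_hemisphere(random.random(), random.random()))
               for _ in range(n)) / n

def stratified_estimate(f, m):
    """One jittered sample per cell of an m x m grid on the unit square."""
    total = 0.0
    for i in range(m):
        for j in range(m):
            u = (i + random.random()) / m
            v = (j + random.random()) / m
            total += f(sample_hemisphere(u, v))
    return total / (m * m)

if __name__ == "__main__":
    random.seed(1)
    f = lambda d: d[2]               # cosine of the polar angle; true mean is 0.5
    print("crude     :", crude_estimate(f, 256))
    print("stratified:", stratified_estimate(f, 16))
```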
Abstract:
This paper is directed to advanced parallel quasi-Monte Carlo (QMC) methods for realistic image synthesis. We propose and study a new QMC approach for solving the rendering equation with uniform separation. First, we apply the symmetry property to separate the hemispherical integration domain uniformly into 24 equal sub-domains of solid angle, subtended by orthogonal spherical triangles with fixed vertices and computable parameters. The uniform separation allows a parallel sampling scheme to be applied for the numerical integration. Finally, we apply the stratified QMC integration method for solving the rendering equation. The superiority of our QMC approach is proved.
Abstract:
In this paper, we introduce two kinds of graphs: the generalized matching networks (GMNs) and the recursive generalized matching networks (RGMNs). The former generalize the hypercube-like networks (HLNs), while the latter include the generalized cubes and the star graphs. We prove that a GMN on a family of k-connected building graphs is -connected. We then prove that a GMN on a family of Hamiltonian-connected building graphs having at least three vertices each is Hamiltonian-connected. Our conclusions generalize some previously known results.
Abstract:
Generalized cubes are a subclass of hypercube-like networks, which include some hypercube variants as special cases. Let theta_G(k) denote the minimum number of nodes adjacent to a set of k vertices of a graph G. In this paper, we prove that theta_G(k) >= -(1/2)k^2 + (2n - 3/2)k - (n^2 - 2) for each n-dimensional generalized cube and each integer k satisfying n + 2 <= k <= 2n. Our result is an extension of a result presented by Fan and Lin [J. Fan, X. Lin, The t/k-diagnosability of the BC graphs, IEEE Trans. Comput. 54 (2) (2005) 176-184].
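Since the ordinary hypercube Q_n is one member of the generalized-cube family, the bound can be checked there by brute force for small n. The sketch below is our illustration, not part of the paper: it computes the minimum neighbourhood size over every k-subset of Q_4 with n + 2 <= k <= 2n and compares it with the quadratic lower bound.

```python
# Illustrative brute-force check of the neighbourhood bound on the ordinary
# hypercube Q_4 (one member of the generalized-cube family):
#   theta_G(k) >= -(1/2)k^2 + (2n - 3/2)k - (n^2 - 2)  for n + 2 <= k <= 2n.
from itertools import combinations

def neighborhood_size(S, n):
    """|N(S) \\ S| in Q_n, with vertices labelled by n-bit integers."""
    S = set(S)
    nbrs = {v ^ (1 << i) for v in S for i in range(n)}
    return len(nbrs - S)

if __name__ == "__main__":
    n = 4
    for k in range(n + 2, 2 * n + 1):
        bound = -0.5 * k * k + (2 * n - 1.5) * k - (n * n - 2)
        theta = min(neighborhood_size(S, n)
                    for S in combinations(range(2 ** n), k))
        print(f"k={k}: theta={theta}, bound={bound}")
        assert theta >= bound
```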
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification, typically using fuzzy membership functions based on B-splines together with algebraic operators for inference. This paper introduces a neurofuzzy model construction algorithm that uses Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of B-spline-based neurofuzzy systems, such as non-negativity of the basis functions and unity of support, with the additional advantages of structural parsimony and Delaunay input-space partitioning, avoiding the inherent computational problems of lattice networks. The new modelling network is based on the idea that an input vector can be mapped into barycentric coordinates with respect to a set of predetermined knots forming the vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as a Bezier-Bernstein polynomial function of the barycentric coordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric coordinates, which form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples demonstrating the effectiveness of this new data-based modelling approach.
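To make the mapping concrete, the sketch below computes the barycentric coordinates of a two-dimensional input with respect to one triangle of knots and evaluates the corresponding Bernstein basis, which is non-negative and sums to one. The triangle, the polynomial degree and all names are placeholder choices of ours, not values from the paper.

```python
# Illustration: barycentric coordinates of a 2-D input w.r.t. a triangle of knots,
# then the degree-2 bivariate Bernstein basis of those coordinates.
from math import factorial

def barycentric(p, a, b, c):
    """Barycentric coordinates (l1, l2, l3) of point p in triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return l1, l2, 1.0 - l1 - l2

def bernstein_basis(lam, degree):
    """All Bernstein basis functions B_{ijk} = d!/(i! j! k!) l1^i l2^j l3^k with
    i + j + k = degree; non-negative inside the triangle and summing to 1."""
    l1, l2, l3 = lam
    basis = {}
    for i in range(degree + 1):
        for j in range(degree + 1 - i):
            k = degree - i - j
            coef = factorial(degree) // (factorial(i) * factorial(j) * factorial(k))
            basis[(i, j, k)] = coef * l1 ** i * l2 ** j * l3 ** k
    return basis

if __name__ == "__main__":
    lam = barycentric((0.3, 0.2), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
    b = bernstein_basis(lam, 2)
    print(lam, sum(b.values()))     # the basis values sum to 1
```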
Abstract:
We introduce a model for a pair of nonlinear evolving networks, defined over a common set of vertices, subject to edgewise competition. Each network may grow new edges spontaneously or through triad closure. Both networks inhibit the other's growth and encourage the other's demise. These nonlinear stochastic competition equations yield to a mean-field analysis resulting in a nonlinear deterministic system. There may be multiple equilibria, and bifurcations of different types are shown to occur within a reduced parameter space. This situation models competitive peer-to-peer communication networks such as BlackBerry Messenger displacing SMS, or instant messaging displacing email.
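A toy simulation makes the competition mechanism concrete. In the sketch below, two edge sets over a common vertex set each grow spontaneously or by triad closure, and each may remove edges it shares with its competitor; the rate constants and functional forms are placeholder choices of ours and are not the stochastic equations analysed in the paper.

```python
# Toy simulation of two competing edge sets A and B over a common vertex set.
# All rate constants and functional forms are illustrative placeholders only.
import random

def step(A, B, n, p_spont=0.001, p_triad=0.02, p_inhibit=0.05):
    """One synchronous update of network A in the presence of competitor B:
    absent edges appear spontaneously or by triad closure; present edges that
    also carry B-traffic may be removed (B inhibits A)."""
    newA = set(A)
    for i in range(n):
        for j in range(i + 1, n):
            e = (i, j)
            # number of common neighbours k closing a triangle over edge (i, j) in A
            closing = sum((min(i, k), max(i, k)) in A and
                          (min(j, k), max(j, k)) in A for k in range(n))
            if e not in A:
                if random.random() < p_spont + p_triad * closing:
                    newA.add(e)
            elif e in B and random.random() < p_inhibit:
                newA.discard(e)
    return newA

if __name__ == "__main__":
    random.seed(2)
    n = 30
    A, B = set(), {(i, i + 1) for i in range(n - 1)}   # B starts as a path
    for _ in range(50):
        A, B = step(A, B, n), step(B, A, n)            # synchronous update of both
    print(len(A), len(B))
```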
Abstract:
The o-palladated, chloro-bridged dimers [Pd{2-phenylpyridine(-H)}-μ-Cl]2 and [Pd{N,N-dimethylbenzylamine(-H)}-μ-Cl]2 react with cyanuric acid in the presence of base to afford closed, chiral cage-molecules in which twelve organo-Pd(II) centers, located in pairs at the vertices of an octahedron, are linked by four tetrahedrally-arranged cyanurato(3-) ligands. Incomplete (Pd10) cages, having structures derived from the corresponding Pd12 cages by replacing one pair of organopalladium centers with two protons, have also been isolated. Reaction of [Pd{2-phenylpyridine(-H)}-μ-Cl]2 with trithiocyanuric acid gives an entirely different and more open type of cage-complex, comprising only nine organopalladium centers and three thiocyanurato(3-) ligands: cage-closure in this latter system appears to be inhibited by steric crowding of the thiocarbonyl groups.
Abstract:
In this paper we consider the structure of dynamically evolving networks modelling information and activity moving across a large set of vertices. We adopt the communicability concept, which generalizes the notion of centrality defined for static networks. We define the primary network structure within the whole as comprising the most influential vertices (both as senders and as receivers of dynamically sequenced activity). We present a methodology based on successive vertex knockouts, up to a very small fraction of the whole primary network, that can characterize the nature of the primary network as being either relatively robust and lattice-like (with redundancies built in) or relatively fragile and tree-like (with sensitivities and few redundancies). We apply these ideas to the analysis of evolving networks derived from fMRI scans of resting human brains. We show that the estimation of performance parameters via the structure tests of the corresponding primary networks is subject to less variability than that observed across a very large population of such scans. Hence the differences within the population are significant.
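One standard way to compute communicability for a time-ordered sequence of networks (in the spirit of, though not necessarily identical to, the construction used here) is to accumulate a product of matrix resolvents and read off broadcast and receive scores from its row and column sums. The sketch below uses random placeholder data; the parameter choice and all names are ours.

```python
# Sketch of dynamic communicability for adjacency matrices A_1, ..., A_M:
# Q = (I - a*A_1)^-1 ... (I - a*A_M)^-1 with a below 1 / max spectral radius.
# Row sums of Q rank influential senders, column sums rank influential receivers.
import numpy as np

def dynamic_communicability(As, a):
    """Accumulate Q over the time-ordered adjacency matrices in As."""
    n = As[0].shape[0]
    Q = np.eye(n)
    for A in As:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, M = 20, 10
    As = [(rng.random((n, n)) < 0.1).astype(float) for _ in range(M)]
    As = [np.triu(A, 1) + np.triu(A, 1).T for A in As]        # undirected, no loops
    a = 0.9 / max(np.max(np.abs(np.linalg.eigvals(A))) for A in As)
    Q = dynamic_communicability(As, a)
    broadcast = Q.sum(axis=1)            # influential senders
    receive = Q.sum(axis=0)              # influential receivers
    print(np.argsort(-broadcast)[:5], np.argsort(-receive)[:5])
```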
Abstract:
We look into variants of a domination set problem in social networks. While randomised algorithms for solving the minimum weighted domination set problem and the minimum alpha and alpha-rate domination problems on simple graphs are already present in the literature, we propose here a randomised algorithm for the minimum weighted alpha-rate domination set problem which is, to the best of our knowledge, the first such algorithm. A theoretical approximation bound based on a simple randomised rounding technique is given. The algorithm is implemented in Python and applied to a UK Twitter mentions network, using a measure of individuals' influence (Klout) as weights. We argue that the weights of vertices can be interpreted as the costs of getting those individuals on board for a campaign or a behaviour-change intervention. The minimum weighted alpha-rate dominating set problem can therefore be seen as finding a set that minimises the total cost while ensuring that each individual in the network has at least an alpha fraction of its neighbours in the chosen set. We also test our algorithm on generated graphs with several thousand vertices and edges. Our results on the real-life Twitter network and the generated graphs show that the implementation is reasonably efficient and can therefore be used in real-life applications when creating social-network-based interventions, designing social media campaigns and potentially improving users' social media experience.
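The randomised-rounding algorithm itself is not reproduced here, but the sketch below pins down the constraint it targets: given vertex weights and a threshold alpha, a set D is feasible when every vertex has at least an alpha fraction of its neighbours in D. A simple greedy baseline (our own, not the paper's method) is included to show how a low-weight feasible set might be assembled.

```python
# Illustrative helpers for weighted alpha-rate domination (NOT the randomised
# rounding algorithm of the paper): a feasibility check plus a greedy baseline.
import math

def is_alpha_rate_dominating(adj, D, alpha):
    """True if every vertex has at least an alpha fraction of its neighbours in D."""
    return all(not nbrs or
               sum(u in D for u in nbrs) >= alpha * len(nbrs)
               for nbrs in adj.values())

def greedy_alpha_rate_dominating(adj, weights, alpha):
    """Greedy baseline: repeatedly add the vertex with the smallest weight per
    newly satisfied demand, until every vertex is alpha-dominated."""
    D = set()
    need = {v: math.ceil(alpha * len(nbrs)) for v, nbrs in adj.items() if nbrs}
    while any(need.get(v, 0) > sum(u in D for u in adj[v]) for v in adj):
        def gain(c):
            # how many still-unsatisfied vertices would candidate c help
            return sum(1 for v in adj[c]
                       if need.get(v, 0) > sum(u in D for u in adj[v]))
        best = min((c for c in adj if c not in D and gain(c) > 0),
                   key=lambda c: weights[c] / gain(c))
        D.add(best)
    return D

if __name__ == "__main__":
    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}     # tiny example graph
    weights = {0: 2.0, 1: 1.0, 2: 3.0, 3: 1.5}
    D = greedy_alpha_rate_dominating(adj, weights, alpha=0.5)
    print(D, is_alpha_rate_dominating(adj, D, 0.5))
```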