25 results for GRAPHS
in the Cambridge University Engineering Department Publications Database
Abstract:
A group of mobile robots can localize cooperatively, using relative position and absolute orientation measurements fused through an extended Kalman filter (EKF). The topology of the graph of relative measurements is known to affect the steady-state value of the position error covariance matrix. Classes of sensor graphs are identified for which tight bounds on the trace of the covariance matrix can be obtained from the algebraic properties of the underlying relative measurement graph. The string and star graph topologies are considered, and the explicit form of the eigenvalues of the error covariance matrix is given. More general sensor graph topologies are then treated as combinations of the string and star topologies with additional edges. It is demonstrated how the addition of edges increases the trace of the steady-state value of the position error covariance matrix, and the theoretical predictions are verified through simulation analysis.
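The abstract ties the covariance bounds to algebraic properties of the measurement graph. As a minimal illustration (not the paper's derivation), the sketch below builds the Laplacians of a string and a star measurement graph and compares their spectra, the kind of quantity such bounds are typically expressed in; the graph size and edge lists are made up for the example.

```python
import numpy as np

def measurement_graph_laplacian(edges, n):
    # Laplacian of an undirected relative-measurement graph on n robots.
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

n = 5
string_edges = [(i, i + 1) for i in range(n - 1)]   # chain 0-1-2-3-4
star_edges = [(0, i) for i in range(1, n)]          # robot 0 as hub

for name, edges in [("string", string_edges), ("star", star_edges)]:
    eig = np.linalg.eigvalsh(measurement_graph_laplacian(edges, n))
    print(name, np.round(eig, 3))
```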
Abstract:
When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique: a subset of vertices that (1) is almost fully, or at least densely, connected, (2) occurs in all or almost all graph instances, and (3) has maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as (a) a max-min two-person game optimization problem or (b) a min-min soft-margin optimization problem. Both formulations lead to the same solution when a partial Lagrangian method is used to solve them. Through experiments on synthetic data and on real social network data we show that the proposed method reliably finds soft cliques in graph data, even when the data are distorted by random noise or unreliable observations. Copyright 2012 by the author(s)/owner(s).
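A small sketch of the counting idea behind such a clique-ness measure: for a candidate vertex subset, count the edges missing within it in each graph instance and take the worst case over instances as a crude persistence score. This only illustrates the bookkeeping; the paper's actual measure, game formulation and soft-margin relaxation are not reproduced here, and the toy edge sets are made up.

```python
from itertools import combinations

def missing_edges(edge_set, S):
    # Edges absent within S; zero means S is a clique in this instance.
    return sum(1 for u, v in combinations(sorted(S), 2)
               if (u, v) not in edge_set and (v, u) not in edge_set)

def worst_case_cliqueness(instances, S):
    # Crude persistence proxy: the largest deficiency over all instances.
    return max(missing_edges(E, S) for E in instances)

instances = [{(0, 1), (1, 2), (0, 2), (2, 3)},
             {(0, 1), (0, 2), (2, 3)}]              # second instance lacks edge (1, 2)
print(worst_case_cliqueness(instances, {0, 1, 2}))  # -> 1
```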
Abstract:
A fundamental problem in the analysis of structured relational data like graphs, networks, databases, and matrices is to extract a summary of the common structure underlying relations between individual entities. Relational data are typically encoded in the form of arrays; invariance to the ordering of rows and columns corresponds to exchangeable arrays. Results in probability theory due to Aldous, Hoover and Kallenberg show that exchangeable arrays can be represented in terms of a random measurable function which constitutes the natural model parameter in a Bayesian model. We obtain a flexible yet simple Bayesian nonparametric model by placing a Gaussian process prior on the parameter function. Efficient inference utilises elliptical slice sampling combined with a random sparse approximation to the Gaussian process. We demonstrate applications of the model to network data and clarify its relation to models in the literature, several of which emerge as special cases.
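The inference scheme mentioned here combines elliptical slice sampling with a sparse Gaussian process approximation. The sparse approximation is specific to the paper, but elliptical slice sampling itself is a standard update (Murray, Adams and MacKay, 2010); a minimal version for a latent vector with a zero-mean Gaussian prior is sketched below, with the log-likelihood left abstract.

```python
import numpy as np

def elliptical_slice(f, prior_draw, log_lik, rng):
    # One elliptical slice sampling update for f with prior N(0, Sigma),
    # given an independent prior draw nu ~ N(0, Sigma) and a log-likelihood.
    nu = prior_draw
    log_y = log_lik(f) + np.log(rng.uniform())           # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta                  # initial bracket
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)   # point on the ellipse
        if log_lik(f_new) > log_y:
            return f_new
        # Shrink the bracket towards theta = 0 and try again.
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```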
Abstract:
We offer a solution to the problem of efficiently translating algorithms between different types of discrete statistical model. We investigate the expressive power of three classes of model (those with binary variables, with pairwise factors, and with planar topology) as well as their four intersections. We formalize a notion of "simple reduction" for the problem of inferring marginal probabilities and consider whether it is possible to "simply reduce" marginal inference from general discrete factor graphs to factor graphs in each of these seven subclasses. We characterize the reducibility of each class, showing in particular that the class of binary pairwise factor graphs is able to simply reduce only positive models. We also exhibit a continuous "spectral reduction" based on polynomial interpolation, which overcomes this limitation. Experiments assess the performance of standard approximate inference algorithms on the outputs of our reductions.
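For readers unfamiliar with the task being reduced, the sketch below computes exact single-variable marginals of a small discrete factor graph by brute-force enumeration. It only illustrates what "inferring marginal probabilities" means on a factor graph; the paper's simple and spectral reductions between model classes are not shown, and the example factor is made up.

```python
import itertools
import numpy as np

def marginals_bruteforce(n_vars, card, factors):
    # factors: list of (scope, table), with table indexed by the states of scope.
    joint = np.zeros([card] * n_vars)
    for states in itertools.product(range(card), repeat=n_vars):
        val = 1.0
        for scope, table in factors:
            val *= table[tuple(states[v] for v in scope)]
        joint[states] = val
    joint /= joint.sum()
    return [joint.sum(axis=tuple(a for a in range(n_vars) if a != v))
            for v in range(n_vars)]

# A tiny binary pairwise model: two variables coupled by a single factor.
coupling = np.array([[2.0, 1.0], [1.0, 2.0]])
print(marginals_bruteforce(2, 2, [((0, 1), coupling)]))
```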
Abstract:
The low-density parity check codes whose performance is closest to the Shannon limit are 'Gallager codes' based on irregular graphs. We compare alternative methods for constructing these graphs and present two results. First, we find a 'super-Poisson' construction which gives a small improvement in empirical performance over a random construction. Second, whereas Gallager codes normally take N² time to encode, we investigate constructions of regular and irregular Gallager codes that allow more rapid encoding and have smaller memory requirements in the encoder. We find that these 'fast encoding' Gallager codes have equally good performance.
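As background for the encoding-cost remark, the sketch below encodes with a parity-check matrix in the systematic form H = [A | I], where the parity bits follow from a single matrix-vector product mod 2. With a dense A this product is the source of the quadratic encoding cost; the paper's specific fast-encoding constructions are not reproduced, and the small random A is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 8, 4                                    # message bits, parity bits
A = rng.integers(0, 2, size=(m, k))
H = np.hstack([A, np.eye(m, dtype=int)])       # H = [A | I]

def encode(u):
    # Systematic encoding: c = [u, p] with H c^T = 0 (mod 2), hence p = A u (mod 2).
    p = A @ u % 2
    return np.concatenate([u, p])

u = rng.integers(0, 2, size=k)
c = encode(u)
print((H @ c) % 2)                             # all zeros: c satisfies every check
```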
Abstract:
We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor. © 2004 Elsevier B.V.
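The two failure modes differ only in the syndrome: a low-weight codeword satisfies every parity check, while a near-codeword has low weight and a small but nonzero number of unsatisfied checks. A minimal check of this distinction for a given binary vector is sketched below; the (w, v) notation follows common usage for near-codewords and is not taken from the paper's text.

```python
import numpy as np

def weight_profile(H, x):
    # Returns (w, v): the weight of x and the weight of its syndrome H x (mod 2).
    # v == 0 means x is a codeword; small w with small nonzero v marks a near-codeword.
    return int(x.sum()), int(((H @ x) % 2).sum())
```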
Abstract:
We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor. © 2003 Published by Elsevier Science B.V.
Abstract:
A report on the characterisation of technology roadmaps, their purpose and their formats is presented. A fast-start process is developed to support the initiation of technology roadmapping in firms and to address industrial needs. The purpose of each roadmap is related to a number of planning aims: product, capability, integration, strategic, long-range, programme and process planning. The second set of categories relates to the format of the roadmap, based on observed structure: multiple or single layers, bars, tables, graphs, pictorial forms, flow diagrams and text. It is concluded that the technology roadmapping process has great potential for supporting the development and implementation of business, product and technology strategy.
Abstract:
Targets to cut 2050 CO2 emissions in the steel and aluminium sectors by 50%, whilst demand is expected to double, cannot be met by energy efficiency measures alone, so options that reduce total demand for liquid metal production must also be considered. Such reductions could occur through reduced demand for final goods (for instance by life extension), reduced demand for material in each product (for instance by lightweight design) or reduced demand for material to make existing products. The last option, improving the yield of manufacturing processes from liquid metal to final product, is attractive in being invisible to the final customer, but has had little attention to date. Accordingly, this paper aims to provide an estimate of the potential to make existing products with less liquid metal production. Yield ratios have been measured for five case study products, through a series of detailed factory visits along each supply chain. The results of these studies, presented on graphs of cumulative energy against yield, demonstrate how the embodied energy in final products may be up to 15 times greater than the energy required to make liquid metal, due to yield losses. A top-down evaluation of the global flows of steel and aluminium showed that 26% of liquid steel and 41% of liquid aluminium produced does not make it into final products, but is diverted as process scrap and recycled. Reducing this scrap substitutes for production by recycling and could reduce total energy use by 17% and 6%, and total CO2 emissions by 16% and 7%, for the steel and aluminium industries respectively, using forming and fabrication energy values from the case studies. The abatement potential of process scrap elimination is similar in magnitude to worldwide implementation of best available standards of energy efficiency and demonstrates how decreasing the recycled content may sometimes result in emission reductions. Evidence from the case studies suggests that whilst most companies are aware of their own yield ratios, few, if any, are fully aware of cumulative losses along their whole supply chain. Addressing yield losses requires this awareness to motivate collaborative approaches to improvement. © 2011 Elsevier B.V. All rights reserved.
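A minimal arithmetic sketch of why yield losses compound along a supply chain: with illustrative (made-up) stage yields, the liquid metal required per kilogram of final product is the reciprocal of the product of the yields. This shows only the mass multiplier; the cumulative-energy graphs in the paper also account for the forming, fabrication and re-melting energy at each stage.

```python
import numpy as np

# Illustrative stage yields from liquid metal to finished part (not measured values).
yields = [0.9, 0.7, 0.6, 0.5]

liquid_metal_per_kg_product = 1.0 / np.prod(yields)
print(f"{liquid_metal_per_kg_product:.2f} kg of liquid metal per kg of final product")
```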
Abstract:
We consider the general problem of constructing nonparametric Bayesian models on infinite-dimensional random objects, such as functions, infinite graphs or infinite permutations. The problem has generated much interest in machine learning, where it is treated heuristically, but has not been studied in full generality in non-parametric Bayesian statistics, which tends to focus on models over probability distributions. Our approach applies a standard tool of stochastic process theory, the construction of stochastic processes from their finite-dimensional marginal distributions. The main contribution of the paper is a generalization of the classic Kolmogorov extension theorem to conditional probabilities. This extension allows a rigorous construction of nonparametric Bayesian models from systems of finite-dimensional, parametric Bayes equations. Using this approach, we show (i) how existence of a conjugate posterior for the nonparametric model can be guaranteed by choosing conjugate finite-dimensional models in the construction, (ii) how the mapping to the posterior parameters of the nonparametric model can be explicitly determined, and (iii) that the construction of conjugate models in essence requires the finite-dimensional models to be in the exponential family. As an application of our constructive framework, we derive a model on infinite permutations, the nonparametric Bayesian analogue of a model recently proposed for the analysis of rank data.
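Point (iii) refers to conjugacy in finite-dimensional exponential family models. The display below records the standard textbook form of that building block (likelihood, conjugate prior and posterior update); it is background for the abstract's claim, not the paper's infinite-dimensional construction.

```latex
\[
  p(x \mid \theta) = h(x)\,\exp\!\big(\theta^{\top} T(x) - A(\theta)\big), \qquad
  p(\theta \mid \lambda, \nu) \propto \exp\!\big(\theta^{\top}\lambda - \nu A(\theta)\big),
\]
\[
  p(\theta \mid x_{1:n}, \lambda, \nu) \propto
  \exp\!\Big(\theta^{\top}\big(\lambda + \textstyle\sum_{i=1}^{n} T(x_i)\big)
             - (\nu + n)\,A(\theta)\Big).
\]
```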
Abstract:
An infinite series of twofold, two-way weavings of the cube, corresponding to 'wrappings', or double covers of the cube, is described with the aid of the two-parameter Goldberg-Coxeter construction. The strands of all such wrappings correspond to the central circuits (CCs) of octahedrites (four-regular polyhedral graphs with square and triangular faces), which for the cube necessarily have octahedral symmetry. Removing the symmetry constraint leads to wrappings of other eight-vertex convex polyhedra. Moreover, wrappings of convex polyhedra with fewer vertices can be generated by generalizing from octahedrites to i-hedrites, which additionally include digonal faces. When the strands of a wrapping correspond to the CCs of a four-regular graph that includes faces of size greater than 4, non-convex 'crinkled' wrappings are generated. The various generalizations have implications for activities as diverse as the construction of woven-closed baskets and the manufacture of advanced composite components of complex geometry. © 2012 The Royal Society.
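A standard Euler-formula count (not taken from the paper, but consistent with the definitions it uses) shows why octahedrites and i-hedrites have tightly constrained face vectors. For a four-regular planar graph with faces of size 2, 3 and 4 only:

```latex
\[
  2E = 4V, \qquad V - E + F = 2, \qquad 2f_2 + 3f_3 + 4f_4 = 2E,
\]
\[
  \Rightarrow\; 4(f_2 + f_3 + f_4) - (2f_2 + 3f_3 + 4f_4) = 4F - 2E = 8
  \;\Rightarrow\; 2f_2 + f_3 = 8.
\]
```

So an octahedrite (no digons) has exactly eight triangular faces, and each digonal face of an i-hedrite replaces two triangular faces in this count.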