972 results for Hamiltonian Graph
Abstract:
The problem of vertex coloring in random graphs is studied using methods of statistical physics and probability. Our analytical results are compared to those obtained by exact enumeration and Monte Carlo simulations. We critically discuss the merits and shortcomings of the various methods and interpret the results obtained. We present an exact analytical expression for the two-coloring problem, as well as general replica-symmetric approximate solutions for the thermodynamics of the graph coloring problem with p colors and K-body edges. ©2002 The American Physical Society.
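To make the comparison baseline concrete, the following is a minimal sketch (not the paper's code) of the exact-enumeration approach mentioned above: it counts proper p-colorings of a small sparse random graph by brute force, the kind of reference result against which analytical replica calculations are typically checked. The graph model and parameter values are illustrative assumptions.

```python
# Illustrative sketch: exact enumeration of proper p-colorings of a sparse
# Erdos-Renyi random graph (brute force, feasible only for small n).
import itertools
import random

def random_graph(n, c, seed=0):
    """Erdos-Renyi graph with average degree c, returned as an edge list."""
    rng = random.Random(seed)
    p = c / (n - 1)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

def count_proper_colorings(n, edges, p_colors):
    """Count color assignments in which no edge joins two equal colors."""
    count = 0
    for coloring in itertools.product(range(p_colors), repeat=n):
        if all(coloring[i] != coloring[j] for i, j in edges):
            count += 1
    return count

if __name__ == "__main__":
    n, c, p = 10, 3.0, 3
    edges = random_graph(n, c)
    print(n, "vertices,", len(edges), "edges,",
          count_proper_colorings(n, edges, p), "proper 3-colorings")
```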
Abstract:
This thesis includes analyses of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch-and-bound algorithm. The algorithm can be formulated in the large-system limit as a branching process, for which critical properties can be analysed. Far from the critical point, a set of differential equations may be used to model the process; these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered, and the optimal detection properties are examined in the typical case by use of the replica method and compared to the detection performance achieved by iterative decoding methods. These codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models; it includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
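As an illustration of the kind of algorithm analysed in the Exact Cover part, here is a minimal, purely sequential branch-and-bound (backtracking) sketch; it is not the thesis code, and the branching heuristic is an arbitrary choice.

```python
# Minimal branch-and-bound sketch for Exact Cover: pick an uncovered element,
# branch on the subsets containing it, prune branches covering an element twice.
def exact_cover(universe, subsets, partial=None):
    partial = partial or []
    covered = set().union(*partial) if partial else set()
    if covered == universe:
        return partial
    e = min(universe - covered)                    # branch on an uncovered element
    for s in subsets:
        if e in s and not (s & covered):           # bound: no element covered twice
            result = exact_cover(universe, subsets, partial + [s])
            if result is not None:
                return result
    return None                                    # this branch is unsatisfiable

if __name__ == "__main__":
    U = {1, 2, 3, 4, 5}
    S = [frozenset({1, 2}), frozenset({3, 4}), frozenset({2, 3}),
         frozenset({5}), frozenset({4, 5})]
    # prints a cover such as [frozenset({1, 2}), frozenset({3, 4}), frozenset({5})]
    print(exact_cover(U, S))
```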
Abstract:
In this paper we propose an approach based on self-interested autonomous cameras, which exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to grow the vision graph during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, that the vision graph is generated online, which permits the addition and removal of cameras to and from the network during runtime, and, on the other hand, that only local information is used, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. © 2011 IEEE.
Abstract:
In this article we present an approach to object-tracking handover in a network of smart cameras, based on self-interested autonomous agents that exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to learn the vision graph, that is, the camera neighbourhood relations, during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, that the vision graph is generated online, enabling efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, that only local information is used, which increases the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multicamera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
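The following toy sketch illustrates the general mechanism described in the two abstracts above: a handover auction in which the highest-bidding camera wins the tracking task, with pheromone-style reinforcement of the seller-buyer link building up the vision graph. The class names, utility function and parameter values are invented placeholders, not the authors' implementation.

```python
# Toy market-based handover with ant-colony-style vision-graph learning.
import random

class Camera:
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.vision_graph = {}          # neighbour id -> pheromone level

    def utility(self, obj):
        # Placeholder utility: in the real system this would depend on
        # visibility, tracking confidence, etc.; random here for demonstration.
        return random.random()

    def reinforce(self, neighbour_id, deposit=1.0, evaporation=0.1):
        # Ant-colony-style update: all edges evaporate, the used edge is reinforced.
        for k in self.vision_graph:
            self.vision_graph[k] *= (1.0 - evaporation)
        self.vision_graph[neighbour_id] = self.vision_graph.get(neighbour_id, 0.0) + deposit

def handover(seller, bidders, obj):
    """Auction the object: the highest-utility bidder takes over tracking."""
    winner = max(bidders, key=lambda cam: cam.utility(obj))
    seller.reinforce(winner.cam_id)     # strengthen the learned vision-graph edge
    return winner

if __name__ == "__main__":
    cams = [Camera(i) for i in range(4)]
    winner = handover(cams[0], cams[1:], obj="person_1")
    print("object handed over to camera", winner.cam_id, cams[0].vision_graph)
```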
Abstract:
Graph embedding is a general framework for subspace learning. However, because of the well-known outlier sensitivity of the L2-norm, conventional graph embedding is not robust to the outliers that occur in many practical applications. In this paper, an improved graph embedding algorithm (termed LPP-L1) is proposed by replacing the L2-norm with the L1-norm. In addition to its robustness, LPP-L1 avoids the small-sample-size problem. Experimental results on both synthetic and real-world data demonstrate these advantages. © 2009 Elsevier B.V. All rights reserved.
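For context, here is a minimal sketch of conventional (L2-norm) LPP-style graph embedding, i.e. the objective that LPP-L1 modifies; the paper's L1 variant replaces the squared distances with absolute ones and is solved iteratively rather than via this closed-form eigenproblem. The neighbourhood size, heat-kernel width and regulariser below are illustrative choices.

```python
# Conventional LPP (L2-norm graph embedding): build a k-NN affinity graph,
# form the graph Laplacian, and solve the generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def lpp_l2(X, n_neighbors=5, dim=2):
    """X: (n_samples, n_features). Returns a projection matrix (n_features, dim)."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    t = np.mean(d2)                                              # heat-kernel width (heuristic)
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]           # k nearest neighbours (skip self)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / t)                # heat-kernel weights
    W = np.maximum(W, W.T)                                       # symmetrise the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                                    # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])                  # small regulariser
    _, vecs = eigh(A, B)                                         # generalized eigenproblem
    return vecs[:, :dim]                                         # smallest eigenvalues

if __name__ == "__main__":
    X = np.random.RandomState(0).randn(100, 10)
    P = lpp_l2(X)
    print((X @ P).shape)    # (100, 2)
```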
Abstract:
We have been investigating the cryptographic properties of infinite families of simple graphs of large girth with a special colouring of vertices for the last 10 years. Such families can be used for the development of cryptographic algorithms (in symmetric or public key modes) and turbocodes in error correction theory. Only a few families of simple graphs of large unbounded girth and arbitrarily large degree are known. The paper is devoted to the more general theory of directed graphs of large girth and their cryptographic applications. It contains new explicit algebraic constructions of infinite families of such graphs. We show that they can be used for the implementation of secure and very fast symmetric encryption algorithms. The symbolic computation technique allows us to create a public key mode for the encryption scheme based on algebraic graphs.
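The following is a deliberately simplified toy, not the algebraic graphs of large girth constructed in the paper: it only illustrates the general encryption pattern in which the plaintext is a vertex, the shared key is a sequence of "colours" selecting outgoing arcs, and the ciphertext is the end vertex of the key-driven walk. The modulus and the affine step are illustrative assumptions chosen so the walk is invertible.

```python
# Toy vertex-walk cipher: encrypt = walk forward along key colours,
# decrypt = walk the arcs backwards in reverse key order.
P = 2**61 - 1          # a large prime modulus (illustrative choice)

def step(v, colour):
    """Arc of the given colour leaving vertex v (invertible in v)."""
    return (3 * v + colour) % P

def step_back(v, colour):
    return ((v - colour) * pow(3, P - 2, P)) % P   # inverse of the affine step

def encrypt(plaintext_vertex, key_colours):
    v = plaintext_vertex
    for c in key_colours:
        v = step(v, c)
    return v

def decrypt(ciphertext_vertex, key_colours):
    v = ciphertext_vertex
    for c in reversed(key_colours):
        v = step_back(v, c)
    return v

if __name__ == "__main__":
    key = [12345, 67890, 13579]        # the shared secret: a colour sequence
    m = 424242
    c = encrypt(m, key)
    assert decrypt(c, key) == m
    print(m, "->", c)
```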
Abstract:
GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs on a single PC efficiently. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With the novel technique of parallel sliding windows (PSW) for loading subgraphs from disk to memory to update vertices and edges, it can achieve data processing performance close to, and even better than, that of mainstream distributed graph engines. The GraphChi authors noted that memory is not effectively utilized with large datasets, which leads to suboptimal computation performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode is successfully implemented with only about 40 additional lines of code added to the original GraphChi engine. Extensive experiments are performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach effectively reduces the GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, it is found that pinning a larger portion of data in memory does not always lead to better performance when the whole dataset cannot fit in memory; there exists an optimal portion of data to keep in memory to achieve the best computational performance.
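A highly simplified sketch of the Part-in-memory idea follows (GraphChi itself is a C++ engine; nothing below is its actual code): a fixed fraction of vertex values is pinned in RAM for the whole computation while the remainder is serialised to disk between iterations, demonstrated here on a toy PageRank loop.

```python
# Toy "part-in-memory" store: pinned values stay in RAM, the rest spill to disk.
import os, pickle, tempfile

class PartInMemoryStore:
    def __init__(self, num_vertices, pinned_fraction=0.5, default=1.0):
        pin_limit = int(num_vertices * pinned_fraction)
        self.pinned = {v: default for v in range(pin_limit)}          # stays in RAM
        self.spill_file = os.path.join(tempfile.mkdtemp(), "values.pkl")
        with open(self.spill_file, "wb") as f:                        # rest lives on disk
            pickle.dump({v: default for v in range(pin_limit, num_vertices)}, f)

    def load_unpinned(self):
        with open(self.spill_file, "rb") as f:
            return pickle.load(f)

    def store_unpinned(self, values):
        with open(self.spill_file, "wb") as f:
            pickle.dump(values, f)

def pagerank_iteration(edges, store, damping=0.85):
    unpinned = store.load_unpinned()               # one disk round-trip per iteration
    ranks = {**store.pinned, **unpinned}
    out_deg = {}
    for src, _ in edges:
        out_deg[src] = out_deg.get(src, 0) + 1
    new = {v: 1.0 - damping for v in ranks}
    for src, dst in edges:
        new[dst] += damping * ranks[src] / out_deg[src]
    store.pinned = {v: new[v] for v in store.pinned}
    store.store_unpinned({v: new[v] for v in unpinned})

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 0), (3, 2), (1, 3)]
    store = PartInMemoryStore(num_vertices=4, pinned_fraction=0.5)
    for _ in range(10):
        pagerank_iteration(edges, store)
    print(store.pinned, store.load_unpinned())
```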
Abstract:
We prove that in quadratic perturbations of generic Hamiltonian vector fields with two saddle points and one center there can appear at most two limit cycles. This bound is exact.
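For concreteness, the setting can be written as follows (generic notation, not quoted from the paper):

```latex
\[
\dot{x} \;=\; \frac{\partial H}{\partial y} + \varepsilon f(x,y), \qquad
\dot{y} \;=\; -\frac{\partial H}{\partial x} + \varepsilon g(x,y),
\]
% H is the Hamiltonian of the unperturbed system, whose phase portrait has one
% center and two saddle points; f and g are polynomials of degree at most two.
% The theorem bounds by two the number of limit cycles that can appear for
% small \varepsilon, and the bound is attained.
```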
Abstract:
An approach to automatic graphical user interface construction is described. It is based on building the user interface as a reflection of the logical definition of the data domain. The proposed approach to developing information system user interfaces enables dynamic adaptation of the system during its operation. This approach is used for the creation of information systems based on the CASE system METAS.
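A hypothetical miniature of the idea (METAS itself is not shown, and the field-to-widget mapping is invented): the form definition is derived from a declarative description of the data domain, so the interface can adapt when the metadata changes rather than being hand-coded.

```python
# Generate a UI form description from a logical entity definition.
FIELD_WIDGETS = {"string": "TextBox", "integer": "SpinBox",
                 "date": "DatePicker", "boolean": "CheckBox"}

def build_form(entity_definition):
    """Map a logical entity description to a list of UI widget descriptions."""
    widgets = []
    for field in entity_definition["fields"]:
        widgets.append({
            "label": field["name"],
            "widget": FIELD_WIDGETS.get(field["type"], "TextBox"),
            "required": field.get("required", False),
        })
    return {"title": entity_definition["entity"], "widgets": widgets}

if __name__ == "__main__":
    customer = {"entity": "Customer",
                "fields": [{"name": "Name", "type": "string", "required": True},
                           {"name": "BirthDate", "type": "date"},
                           {"name": "Active", "type": "boolean"}]}
    print(build_form(customer))   # the form definition is derived, not hand-coded
```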
Abstract:
Parallel resolution procedures based on the graph structures method are presented. OR-, AND- and DCDP-parallel inference on the connection graph representation is explored, and modifications to these algorithms using heuristic estimation are proposed. The principles for designing these heuristic functions are thoroughly discussed. The colored clause graph resolution principle is presented. A comparison of efficiency (on the Steamroller problem) is carried out and the results are presented. The parallel unification algorithm used in the parallel inference procedure is briefly outlined in the final part of the paper.
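Since the paper builds on unification within connection graph resolution, here is a plain sequential sketch of Robinson-style unification for reference (the occurs check is omitted for brevity); the paper's parallel unification algorithm is not reproduced here.

```python
# Sequential unification of first-order terms: tuples are compound terms,
# uppercase strings are variables, lowercase strings are constants.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(s, t, subst=None):
    """Return a most general unifier of s and t, or None if they don't unify."""
    subst = dict(subst or {})
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        subst[s] = t
        return subst
    if is_var(t):
        subst[t] = s
        return subst
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        for a, b in zip(s, t):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

if __name__ == "__main__":
    # p(X, f(a)) unified with p(b, f(Y))  ->  {'X': 'b', 'Y': 'a'}
    print(unify(("p", "X", ("f", "a")), ("p", "b", ("f", "Y"))))
```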
Abstract:
Short text messages, a.k.a. Microposts (e.g. Tweets), have proven to be an effective channel for revealing information about trends and events, ranging from those related to Disaster (e.g. Hurricane Sandy) to those related to Violence (e.g. the Egyptian revolution). Being informed about such events as they occur could be extremely important to authorities and emergency professionals, allowing such parties to respond immediately. In this work we study the problem of topic classification (TC) of Microposts, which aims to automatically classify short messages based on the subject(s) discussed in them. The accurate TC of Microposts, however, is a challenging task, since the limited number of tokens in a post often implies a lack of sufficient contextual information. In order to provide contextual information for Microposts, we present and evaluate several graph structures surrounding concepts present in linked knowledge sources (KSs). Traditional TC techniques enrich the content of Microposts with features extracted only from the Micropost content. In contrast, our approach relies on the generation of different weighted semantic meta-graphs extracted from linked KSs. We introduce a new semantic graph, called the category meta-graph. This novel meta-graph provides a more fine-grained categorisation of concepts, yielding a set of novel semantic features. Our findings show that such category meta-graph features effectively improve the performance of a topic classifier of Microposts. Furthermore, our goal is also to understand which semantic features contribute to the performance of a topic classifier. For this reason we propose an approach for automatically estimating the accuracy loss of a topic classifier on new, unseen Microposts. We introduce and evaluate novel topic similarity measures, which capture the similarity between the KS documents and Microposts at a conceptual level, considering the enriched representation of these documents. Extensive evaluation in the context of Emergency Response (ER) and Violence Detection (VD) revealed that our approach outperforms previous approaches that use a single KS without linked data, as well as approaches using Twitter data only, by up to 31.4% in terms of F1 measure. Our main findings indicate that the new category graph contains useful information for TC and achieves results comparable to previously used semantic graphs. Furthermore, our results also indicate that the accuracy of a topic classifier can be accurately predicted using the enhanced text representation, outperforming previous approaches based on content-based similarity measures. © 2014 Elsevier B.V. All rights reserved.
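The sketch below only illustrates the general enrichment idea: a Micropost is expanded with category features obtained from a linked knowledge source before being fed to a standard classifier. The `link_concepts` lookup and the toy category map are invented stand-ins, not the authors' meta-graph pipeline.

```python
# Enrich Microposts with knowledge-source category features, then classify.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

TOY_CATEGORY_GRAPH = {        # concept -> knowledge-source categories (hypothetical)
    "hurricane": ["Natural_disasters", "Weather"],
    "flood": ["Natural_disasters"],
    "protest": ["Civil_disobedience", "Violence"],
    "riot": ["Violence"],
}

def link_concepts(text):
    """Stand-in for an entity/concept linking step."""
    return [w for w in text.lower().split() if w in TOY_CATEGORY_GRAPH]

def enrich(text):
    """Append category features to the raw token features."""
    cats = [c for concept in link_concepts(text) for c in TOY_CATEGORY_GRAPH[concept]]
    return text + " " + " ".join(cats)

if __name__ == "__main__":
    posts = ["hurricane hits the coast", "riot breaks out downtown",
             "flood warning issued", "protest turns violent"]
    labels = ["emergency", "violence", "emergency", "violence"]
    clf = make_pipeline(CountVectorizer(), MultinomialNB())
    clf.fit([enrich(p) for p in posts], labels)
    print(clf.predict([enrich("another hurricane is coming")]))
```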
Abstract:
One of the ultimate aims of Natural Language Processing is to automate the analysis of the meaning of text. A fundamental step in that direction consists in enabling effective ways to automatically link textual references to their referents, that is, real-world objects. The work presented in this paper addresses the problem of attributing a sense to proper names in a given text, i.e., automatically associating words representing Named Entities with their referents. The method for Named Entity Disambiguation proposed here is based on the concept of semantic relatedness, which in this work is obtained via a graph-based model over Wikipedia. We show that, without building the traditional bag-of-words representation of the text, but instead considering only the named entities within the text, the proposed method achieves results competitive with the state of the art on two different datasets.
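As a toy illustration of graph-based relatedness for disambiguation (the link sets below are invented, not real Wikipedia data): relatedness is scored from shared incoming links in a Milne-Witten-style measure, and each ambiguous name is resolved to the candidate most related to the unambiguous entities in the same text.

```python
# Resolve an ambiguous surface form to the candidate article most related
# (by shared inlinks) to the other entities mentioned in the text.
import math

INLINKS = {   # hypothetical article -> set of articles linking to it
    "Java_(programming_language)": {"Python", "Compiler", "JVM", "Oracle"},
    "Java_(island)": {"Indonesia", "Volcano", "Jakarta"},
    "Python_(programming_language)": {"Compiler", "JVM", "Scripting"},
}
N = 1_000_000   # assumed total number of articles in the knowledge source

def relatedness(a, b):
    A, B = INLINKS[a], INLINKS[b]
    inter = len(A & B)
    if inter == 0:
        return 0.0
    return 1 - (math.log(max(len(A), len(B))) - math.log(inter)) / \
               (math.log(N) - math.log(min(len(A), len(B))))

def disambiguate(candidates, context_entities):
    """Pick the candidate sense most related to the unambiguous context entities."""
    return max(candidates, key=lambda c: sum(relatedness(c, e) for e in context_entities))

if __name__ == "__main__":
    print(disambiguate(["Java_(programming_language)", "Java_(island)"],
                       context_entities=["Python_(programming_language)"]))
```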