206 results for Extremal graphs
Abstract:
Electrical failure of insulation is known to be an extremal random process: nominally identical pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. To estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses, and to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress is assumed to be a temperature-dependent inverse power law model.
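For reference, a commonly used form of a temperature-dependent inverse power law life model combines an inverse power law in the electrical stress $E$ with an Arrhenius factor in the absolute temperature $T$: $L(E,T) = c\,E^{-n}\exp(B/T)$, where $L$ is the nominal life and $c$, $n$, $B$ are fitted constants. This is a standard form offered only as an illustration; the abstract does not state the exact functional form used in the work.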
Abstract:
Users can rarely reveal their information need in full detail to a search engine within 1--2 words, so search engines need to "hedge their bets" and present diverse results within the precious 10 response slots. Diversity in ranking is of much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes so that the effective conductance from the center to the rest of the graph has maximum entropy. We give a cogent semantic justification for turning PageRank thus on its head. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently-published diversity algorithms like subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
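To make the central quantity concrete, here is a minimal sketch (not the authors' implementation) of the effective conductance from a single candidate center to every other node, computed via the graph Laplacian pseudoinverse, together with the entropy of the normalized conductances. The uniform edge weights, single-center special case, and entropy objective details are assumptions; the paper learns edge resistances and selects a sparse set of centers.

```python
import numpy as np

def effective_conductances(W, center):
    """Effective conductance from `center` to every other node of a
    connected weighted undirected graph with conductance matrix W."""
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
    cond = []
    for v in range(W.shape[0]):
        if v == center:
            continue
        # effective resistance between center and v
        r = Lp[center, center] + Lp[v, v] - 2 * Lp[center, v]
        cond.append(1.0 / r)
    return np.array(cond)

def conductance_entropy(W, center):
    """Entropy of the normalized conductance distribution seen from `center`."""
    c = effective_conductances(W, center)
    p = c / c.sum()
    return -(p * np.log(p)).sum()

# Toy usage: a 4-cycle. A center whose conductances to the rest of the
# graph have high entropy "sees" the graph uniformly, which is the
# diversity criterion proposed here.
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(conductance_entropy(W, center=0))
```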
Abstract:
A $k$-box $B=(R_1,...,R_k)$, where each $R_i$ is a closed interval on the real line, is defined to be the Cartesian product $R_1\times R_2\times ...\times R_k$. If each $R_i$ is a unit length interval, we call $B$ a $k$-cube. The boxicity of a graph $G$, denoted $\text{box}(G)$, is the minimum integer $k$ such that $G$ is an intersection graph of $k$-boxes. Similarly, the cubicity of $G$, denoted $\text{cub}(G)$, is the minimum integer $k$ such that $G$ is an intersection graph of $k$-cubes. It was shown in [L. Sunil Chandran, Mathew C. Francis, and Naveen Sivadasan: Representing graphs as the intersection of axis-parallel cubes. MCDES-2008, IISc Centenary Conference, available at CoRR, abs/cs/0607092, 2006] that, for a graph $G$ with maximum degree $\Delta$, $\text{cub}(G)\leq \lceil 4(\Delta +1)\log n\rceil$. In this paper, we show that, for a $k$-degenerate graph $G$, $\text{cub}(G) \leq (k+2) \lceil 2e \log n \rceil$. Since $k$ is at most $\Delta$ and can be much lower, this is clearly a stronger result. This bound is tight. We also give an efficient deterministic algorithm that runs in $O(n^2 k)$ time to output an $8k(\lceil 2.42 \log n\rceil + 1)$-dimensional cube representation for $G$. An important consequence of the above result is that if the crossing number of a graph $G$ is $t$, then $\text{box}(G)$ is $O(t^{1/4}{\lceil\log t\rceil}^{3/4})$. This bound is tight up to a factor of $O((\log t)^{1/4})$. We also show that, if $G$ has $n$ vertices, then $\text{cub}(G)$ is $O(\log n + t^{1/4}\log t)$. Using our bound for the cubicity of $k$-degenerate graphs, we show that the cubicity of almost all graphs in the $\mathcal{G}(n,m)$ model is $O(d_{av}\log n)$, where $d_{av}$ denotes the average degree of the graph under consideration.
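For context, the degeneracy $k$ used above can be computed greedily by repeatedly deleting a minimum-degree vertex; $k$ is the largest degree seen at the moment of deletion. A minimal sketch of this standard procedure (an illustration of the parameter, not the paper's representation algorithm):

```python
def degeneracy(adj):
    """Degeneracy of an undirected graph given as {vertex: set(neighbors)}:
    the maximum, over the removal order, of the degree at removal time."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    k = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # minimum-degree vertex
        k = max(k, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return k

# Toy usage: a 4-cycle is 2-degenerate.
print(degeneracy({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```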
Abstract:
We provide new analytical results concerning the spread of information or influence under the linear threshold social network model introduced by Kempe et al., in the information dissemination context. The seeder starts by providing the message to a set of initial nodes and is interested in maximizing the number of nodes that ultimately receive the message. A node's decision to forward the message depends on the set of nodes from which it has received the message. Under the linear threshold model, the decision to forward the information depends on comparing the total influence of the nodes from which a node has received the packet with the node's own influence threshold. We derive analytical expressions for the expected number of nodes that ultimately receive the message, as a function of the initial set of nodes, for a generic network. We show that the problem can be recast in the framework of Markov chains. We then use the analytical expression to gain insights into information dissemination in some simple network topologies such as the star, ring, and mesh, and in acyclic graphs. We also derive the optimal initial set in the above networks, and hint at general heuristics for picking a good initial set.
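For illustration, a minimal Monte Carlo sketch of the linear threshold dynamics (thresholds drawn uniformly at random, as in Kempe et al.). The paper derives closed-form expectations rather than simulating, so this is only a reference point:

```python
import random

def linear_threshold_spread(influence, seeds, trials=10000):
    """Estimate the expected final number of informed nodes.
    influence[u][v] = influence of u on v (incoming weights sum to <= 1)."""
    nodes = list(influence)
    total = 0
    for _ in range(trials):
        theta = {v: random.random() for v in nodes}   # random thresholds
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in nodes:
                if v in active:
                    continue
                received = sum(influence[u].get(v, 0.0) for u in active)
                if received >= theta[v]:              # threshold exceeded
                    active.add(v)
                    changed = True
        total += len(active)
    return total / trials

# Toy usage: a 3-node path with symmetric influence 0.5, seeded at the middle.
w = {0: {1: 0.5}, 1: {0: 0.5, 2: 0.5}, 2: {1: 0.5}}
print(linear_threshold_spread(w, seeds={1}))
```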
Abstract:
While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets, and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high E_b/N_0 is caused by oscillations in bit-node a posteriori probabilities induced by short cycles and trapping sets in bipartite graphs. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high E_b/N_0. This algorithm makes use of the information generated by the belief propagation (BP) algorithm in previous iterations, before a decoding failure occurs. Using this information, a reliability-based estimation is performed on each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes of code rate 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimised for bipartite graph conditioning, whereas the coding gains are huge for unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
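The abstract does not specify the estimator. One simple post-processing in this spirit (purely illustrative, not the paper's algorithm) averages each bit node's posterior log-likelihood ratios over the last few BP iterations to damp the oscillations, then re-slices the hard decision:

```python
import numpy as np

def reliability_decision(llr_history, window=8):
    """Illustrative post-processing after a BP decoding failure: average
    each bit node's posterior LLRs over the last `window` iterations,
    smoothing oscillations caused by short cycles and trapping sets,
    then take a hard decision on the smoothed reliabilities.

    llr_history: array of shape (iterations, n_bits) of posterior LLRs
    recorded while BP ran, before the failure was declared.
    """
    avg = np.mean(llr_history[-window:], axis=0)   # smoothed reliabilities
    return (avg < 0).astype(int)                   # LLR < 0 -> bit = 1

# Toy usage: a bit whose LLR oscillates but is positive on average -> 0.
hist = np.array([[+2.0], [-0.5], [+1.8], [-0.2], [+1.5]])
print(reliability_decision(hist, window=5))        # -> [0]
```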
Abstract:
Periodic-finite-type shifts (PFTs) are sofic shifts which forbid the appearance of finitely many pre-specified words in a periodic manner. The class of PFTs strictly includes the class of shifts of finite type (SFTs). The zeta function of a PFT is a generating function for the number of periodic sequences in the shift. For a general sofic shift, there exists a formula, attributed to Manning and Bowen, which computes the zeta function of the shift from certain auxiliary graphs constructed from a presentation of the shift. In this paper, we derive an interesting alternative formula computable from certain ``word-based graphs'' constructed from the periodically-forbidden word description of the PFT. The advantages of our formula over the Manning-Bowen formula are discussed.
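For reference, the zeta function in question is the standard Artin-Mazur generating function: $\zeta_X(t) = \exp\left(\sum_{n\geq 1} \frac{p_n}{n}\, t^n\right)$, where $p_n$ is the number of sequences in the shift $X$ fixed by the $n$-th power of the shift map.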
Abstract:
Let P be a set of points in d-dimensional space with a given metric ρ. For a point p ∈ P, let r(p) be the distance of p, with respect to ρ, from its nearest neighbor in P. Let B(p, r(p)) be the open ball with respect to ρ centered at p and having radius r(p). We define the sphere-of-influence graph (SIG) of P as the intersection graph of the family of sets {B(p, r(p)) : p ∈ P}. Given a graph G, a set of points in d-dimensional space with the metric ρ is called a d-dimensional SIG-representation of G if G is isomorphic to the SIG of that point set. It is known that the absence of isolated vertices is a necessary and sufficient condition for a graph to have a SIG-representation under the L∞-metric in some space of finite dimension. The SIG-dimension under the L∞-metric of a graph G without isolated vertices is defined to be the minimum positive integer d such that G has a d-dimensional SIG-representation under the L∞-metric. It is denoted by SIG∞(G). We study the SIG-dimension of trees under the L∞-metric and almost completely answer an open problem posed by Michael and Quint (Discrete Appl Math 127:447-460, 2003). Let T be a tree with at least two vertices. For each vertex v, let leaf-degree(v) denote the number of neighbors of v that are leaves, and define the maximum leaf-degree α(T) as the maximum of leaf-degree(x) over all vertices x. Let S = {v : leaf-degree(v) = α}. If |S| = 1, we define β(T) = α(T) − 1; otherwise define β(T) = α(T). We determine SIG∞(T) exactly in terms of β = β(T), provided β is not of the form 2^k − 1 for some positive integer k ≥ 1. If β = 2^k − 1, two values are possible, and we show that both values are attained.
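A minimal sketch of building a sphere-of-influence graph under the L∞-metric, using the fact that two open balls B(p, r(p)) and B(q, r(q)) intersect exactly when d(p, q) < r(p) + r(q). This only illustrates the definition; it is not a construction from the paper:

```python
from itertools import combinations

def linf(p, q):
    """L-infinity distance between two points."""
    return max(abs(a - b) for a, b in zip(p, q))

def sphere_of_influence_graph(points):
    """Edges of the SIG: {p, q} is an edge iff the open balls with
    nearest-neighbor radii r(p), r(q) intersect."""
    r = {p: min(linf(p, q) for q in points if q != p) for p in points}
    return [(p, q) for p, q in combinations(points, 2)
            if linf(p, q) < r[p] + r[q]]

# Toy usage: three collinear points; the far point only reaches its
# nearest neighbor, so the graph is a path.
pts = [(0, 0), (1, 0), (5, 0)]
print(sphere_of_influence_graph(pts))
```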
Abstract:
We consider extremal limits of the recently constructed ``subtracted geometry''. We show that extremality makes the horizon attractive against scalar perturbations, but radial evolution of such perturbations changes the asymptotics: from a conical box to flat Minkowski. Thus these are black holes that retain their near-horizon geometry under perturbations that drastically change their asymptotics. We also show that this extremal subtracted solution (``subttractor'') can arise as a boundary of the basin of attraction for flat space attractors. We demonstrate this by using a fairly minimal action (which has connections with the STU model) whose equations of motion are integrable, allowing us to find analytic solutions that capture the flow from the horizon to the asymptotic region. The subttractor is a boundary between two qualitatively different flows. We expect that these results have generalizations for other theories with charged dilatonic black holes.
Abstract:
The von Neumann entropy of a generic quantum state is not unique unless the state can be uniquely decomposed as a sum of extremal or pure states. As pointed out to us by Sorkin, this happens if the GNS representation (of the algebra of observables in some quantum state) is reducible, and some representations in the decomposition occur with non-trivial degeneracy. This non-unique entropy can occur at zero temperature. We will argue elsewhere in detail that the degeneracies in the GNS representation can be interpreted as an emergent broken gauge symmetry, and play an important role in the analysis of emergent entropy due to non-Abelian anomalies. Finally, we establish the analogue of an H-theorem for this entropy by showing that its evolution is Markovian, determined by a stochastic matrix.
Abstract:
The boxicity (cubicity) of a graph G, denoted by box(G) (respectively cub(G)), is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (cubes) in ℝ^k. The problem of computing boxicity (cubicity) is known to be inapproximable in polynomial time, even for graph classes like bipartite, co-bipartite and split graphs, within an O(n^{0.5−ε}) factor for any ε > 0, unless NP = ZPP. We prove that if a graph G on n vertices has a clique on n − k vertices, then box(G) can be computed in time n^2 2^{O(k^2 log k)}. Using this fact, various FPT approximation algorithms for boxicity are derived. The parameter used is the vertex (or edge) edit distance of the input graph from certain graph families of bounded boxicity, like interval graphs and planar graphs. Using the same fact, we also derive an O(n √(log log n) / √(log n)) factor approximation algorithm for computing boxicity, which, to our knowledge, is the first o(n) factor approximation algorithm for the problem. We also present an FPT approximation algorithm for computing the cubicity of graphs, with vertex cover number as the parameter.
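To ground the definition, a minimal sketch (illustrative only) that checks whether a candidate k-box representation realizes a given graph, using the fact that axis-parallel boxes intersect iff their intervals overlap in every coordinate:

```python
def boxes_intersect(b1, b2):
    """Axis-parallel boxes, each a list of (lo, hi) closed intervals,
    intersect iff their intervals overlap in every dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def is_box_representation(boxes, edges, n):
    """True iff box i and box j intersect exactly when {i, j} is an edge."""
    E = {frozenset(e) for e in edges}
    return all((frozenset((i, j)) in E) == boxes_intersect(boxes[i], boxes[j])
               for i in range(n) for j in range(i + 1, n))

# Toy usage: the path 0-1-2 has boxicity 1 (it is an interval graph).
boxes = [[(0, 1)], [(1, 2)], [(2, 3)]]
print(is_box_representation(boxes, [(0, 1), (1, 2)], 3))  # True
```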
Abstract:
The ability to perform strong updates is the main contributor to the precision of flow-sensitive pointer analysis algorithms. Traditional flow-sensitive pointer analyses cannot strongly update pointers residing in the heap, which is a severe restriction for Java programs. In this paper, we propose a new flow-sensitive pointer analysis algorithm for Java that can perform strong updates on heap-based pointers effectively. Instead of points-to graphs, we represent our points-to information as maps from access paths to sets of abstract objects. We have implemented our analysis and run it on several large Java benchmarks. The results show considerable improvement in precision over the points-to-graph-based flow-insensitive and flow-sensitive analyses, with reasonable running time.
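As an illustration of the representation (a simplified sketch, not the authors' implementation): a store mapping access paths to abstract-object sets supports a strong update by overwriting, rather than unioning, the set at a fully resolved access path:

```python
# Abstract store: access path (e.g. "x", "x.f") -> set of abstract objects.
store = {"x": {"o1"}, "x.f": {"o2", "o3"}}

def strong_update(store, path, objs):
    """Strong update: the old points-to set at `path` is replaced outright,
    killing stale facts (sound only when `path` names one concrete cell)."""
    store[path] = set(objs)

def weak_update(store, path, objs):
    """Weak update: merge, as an analysis must do when the updated heap
    location is not uniquely known."""
    store.setdefault(path, set()).update(objs)

strong_update(store, "x.f", {"o4"})
print(store["x.f"])   # {'o4'} -- o2 and o3 were killed by the strong update
```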
Abstract:
The Lovasz θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovasz θ function is equivalent to a kernel learning problem related to the one-class SVM. This interesting connection opens up many opportunities for bridging graph-theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM−θ graphs, on which the Lovasz θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems in large graphs, e.g. identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach for this problem involves computing the θ function; however, it is not scalable due to the SDP computation. We show that a random graph with a planted clique is an example of an SVM−θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a ``common orthogonal labelling'', which extends the notion of an orthogonal labelling of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a Multiple Kernel Learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order of magnitude better scalability than the state of the art.
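A minimal numerical sketch of the flavor of this connection, under assumptions taken from related work on SVM−θ rather than from this abstract: the kernel is K = A/ρ + I with ρ = |λ_min(A)|, and the one-class SVM objective is evaluated as 1ᵀK⁺1 (pseudoinverse, since K can be singular; this closed form assumes the SVM's nonnegativity constraints are inactive):

```python
import numpy as np

def svm_theta_estimate(A):
    """Hypothetical SVM-theta style estimate for an adjacency matrix A:
    build the positive semidefinite kernel K = A/rho + I with
    rho = |lambda_min(A)|, then evaluate 1^T K^+ 1."""
    rho = abs(np.linalg.eigvalsh(A).min())
    K = A / rho + np.eye(len(A))            # PSD, possibly singular
    ones = np.ones(len(A))
    return float(ones @ np.linalg.pinv(K) @ ones)

# Toy usage: the 5-cycle, whose Lovasz theta is sqrt(5) ~ 2.236;
# the estimate recovers it exactly here.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
print(svm_theta_estimate(A))
```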
Abstract:
We analytically study the role played by the network topology in sustaining cooperation in a society of myopic agents in an evolutionary setting. In our model, each agent plays the Prisoner's Dilemma (PD) game with its neighbors, as specified by a network. Cooperation is the incumbent strategy, whereas defectors are the mutants. Starting with a population of cooperators, some agents are switched to defection. The agents then play the PD game with their neighbors and compute their fitness. After this, an evolutionary rule, or imitation dynamic, is used to update the agent strategy: a defector switches back to cooperation if it has a cooperator neighbor with higher fitness. The network is said to sustain cooperation if almost all defectors switch to cooperation. Earlier work on the sustenance of cooperation has largely consisted of simulation studies, and we seek to complement this body of work by providing analytical insight. We find that in order to sustain cooperation, a network should satisfy some properties such as small average diameter, densification, and irregularity. Real-world networks have been empirically shown to exhibit these properties, and are thus candidates for the sustenance of cooperation. We also analyze some specific graphs to determine whether or not they sustain cooperation. In particular, we find that scale-free graphs belonging to a certain family sustain cooperation, whereas Erdos-Renyi random graphs do not. To the best of our knowledge, ours is the first analytical attempt to determine which networks sustain cooperation in a population of myopic agents in an evolutionary setting.
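A minimal simulation sketch of one step of the described dynamic (the PD payoff values and synchronous updating are assumptions; the paper's analysis is analytical rather than simulation-based):

```python
# One round of the imitation dynamic on a graph {node: set(neighbors)}.
# Assumed PD payoffs (row player, column player): T > R > P > S.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def fitness(v, adj, strategy):
    """Total PD payoff of v against all of its neighbors."""
    return sum(PAYOFF[strategy[v], strategy[u]] for u in adj[v])

def imitation_step(adj, strategy):
    """Defectors switch to cooperation if some cooperating neighbor is
    strictly fitter; all updates are applied synchronously."""
    f = {v: fitness(v, adj, strategy) for v in adj}
    new = dict(strategy)
    for v in adj:
        if strategy[v] == 'D' and any(
                strategy[u] == 'C' and f[u] > f[v] for u in adj[v]):
            new[v] = 'C'
    return new

# Toy usage: a star with one defecting leaf; the fitter cooperating hub
# pulls the defector back to cooperation.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
s = {0: 'C', 1: 'D', 2: 'C', 3: 'C'}
print(imitation_step(adj, s))
```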
Abstract:
We study the basin of attraction of static extremal black holes, in the concrete setting of the STU model. By finding a connection to a decoupled Toda-like system and solving it exactly, we find a simple way to characterize the attraction basin via competing behaviors of certain parameters. The boundaries of attraction arise in the various limits where these parameters degenerate to zero. We find that these boundaries are generalizations of the recently introduced (extremal) subtracted geometry: the warp factors still exhibit asymptotic integer power law behaviors, but the powers can be different from one. As we cross over one of these boundaries ('generalized subttractors'), the solutions turn unstable and start blowing up at finite radius and lose their asymptotic region. Our results are fully analytic, but we also solve a simpler theory where the attraction basin is lower dimensional and easy to visualize, and present a simple picture that illustrates many of the basic ideas.
Abstract:
The temperature-dependent electrical properties of drop-cast Cu2SnS3 films have been measured in the temperature range 140 K to 317 K. The log I versus √V plot shows two regions: the region at lower bias is due to electrode-limited Schottky emission, and the higher-bias region is due to bulk-limited Poole-Frenkel emission. The ideality factor, calculated from the ln I versus V plot at different temperatures fitted with the thermionic emission model, is found to vary from 6.05 to 12.23. This large value is attributed to the presence of defects or an amorphous layer at the Ag/Cu2SnS3 interface. From the Richardson plot, the Richardson constant and the barrier height were calculated. Owing to the inhomogeneity in the barrier heights, the Richardson constant and the barrier height were also calculated from the modified Richardson plot. The I-V-T curves were also fitted using the thermionic field emission model; the barrier heights were found to be higher than those calculated using the thermionic emission model. From the fit of the I-V-T curves to the field emission model, field emission was seen to dominate in the low temperature range of 140 K to 177 K. The temperature-dependent current graphs show two regions with different mechanisms. The log I versus 1000/T plot gives activation energies E_a1 = 0.367095 to 0.257682 eV and E_a2 = 0.038416 to 0.042452 eV. The log(I/T^2) versus 1000/T graph gives trap depths Φ_o1 = 0.314159 to 0.204752 eV and Φ_o2 = 0.007425 to 0.011163 eV. With increasing voltage, the activation energy E_a1 and the trap depth Φ_o1 decrease. From the ln(I·T^{1/2}) versus 1/T^{1/4} graph, the low-temperature region is due to the variable range hopping mechanism and the high-temperature region is due to thermionic emission.
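For reference, in the thermionic emission model I = I_0 exp(qV/(nkT)) in the exponential region, so the ideality factor n follows from the slope of ln I versus V. A minimal sketch with synthetic data (illustrative only, not the paper's fitting code):

```python
import numpy as np

Q_OVER_K = 11604.5  # q/k in kelvin per volt

def ideality_factor(V, I, T):
    """Ideality factor n = (q/kT) / slope, where slope = d(ln I)/dV
    is obtained by a least-squares fit in the exponential region."""
    slope = np.polyfit(V, np.log(I), 1)[0]
    return Q_OVER_K / (T * slope)

# Toy usage: synthetic diode data generated with n = 6 at 300 K
# is recovered by the fit.
V = np.linspace(0.05, 0.3, 20)
I = 1e-9 * np.exp(Q_OVER_K * V / (6 * 300.0))
print(round(ideality_factor(V, I, 300.0), 2))   # ~6.0
```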