302 results for Efficient dominating set
at Indian Institute of Science - Bangalore - India
Abstract:
The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding if G has a dominating set of cardinality at most k, and the problem of determining if G has a Hamilton circuit, are NP-complete. Polynomial time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures. So the search for efficient algorithms for these problems in many classes still continues. A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial time solvable on permutation graphs. Algorithms that are already available are of time complexity O(n²) or more, and space complexity O(n²), on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n²) time and O(n) space algorithm for the Hamilton circuit problem.
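A minimal illustration of the objects involved (not the O(n) algorithm of the abstract): the sketch below builds the permutation graph of a permutation π — vertices i < j are adjacent exactly when they are inverted in π — and brute-force checks whether a candidate set dominates it. The permutation and candidate set are made up for the example.

```python
# Minimal sketch (not the O(n) algorithm described above): build a permutation
# graph from a permutation pi and check a candidate dominating set by brute force.
# The permutation `pi` and the candidate set D below are illustrative only.

def permutation_graph(pi):
    """Vertices i < j are adjacent iff they are inverted in pi."""
    n = len(pi)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if pi[i] > pi[j]:          # inversion => edge in the permutation graph
                adj[i].add(j)
                adj[j].add(i)
    return adj

def is_dominating_set(adj, D):
    """Every vertex is in D or has a neighbour in D."""
    return all(v in D or adj[v] & D for v in adj)

if __name__ == "__main__":
    pi = [3, 1, 4, 0, 2]               # hypothetical permutation
    adj = permutation_graph(pi)
    print(is_dominating_set(adj, {0, 3}))   # True for this example
```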
Abstract:
The rainbow connection number of a connected graph is the minimum number of colors needed to color its edges so that every pair of its vertices is connected by at least one path in which no two edges are colored the same. In this article we show that for every connected graph on n vertices with minimum degree delta, the rainbow connection number is upper bounded by 3n/(delta + 1) + 3. This solves an open problem from Schiermeyer (Combinatorial Algorithms, Springer, Berlin/Heidelberg, 2009, pp. 432–437), improving the previously best known bound of 20n/delta (J Graph Theory 63 (2010), 185–191). This bound is tight up to additive factors by a construction mentioned in Caro et al. (Electr J Combin 15(R57) (2008), 1). As an intermediate step we obtain an upper bound of 3n/(delta + 1) - 2 on the size of a connected two-step dominating set in a connected graph of order n and minimum degree delta. This bound is tight up to an additive constant of 2. This result may be of independent interest. We also show that for every connected graph G with minimum degree at least 2, the rainbow connection number, rc(G), is upper bounded by γc(G) + 2, where γc(G) is the connected domination number of G. Bounds of the form diameter(G) ≤ rc(G) ≤ diameter(G) + c, 1 ≤ c ≤ 4, for many special graph classes follow as easy corollaries from this result. This includes interval graphs, asteroidal triple-free graphs, circular arc graphs, threshold graphs, and chain graphs, all with minimum degree delta at least 2 and connected. We also show that every bridgeless chordal graph G has rc(G) ≤ 3·radius(G). In most of these cases, we also demonstrate the tightness of the bounds.
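For small graphs the defining property is easy to check by brute force. The sketch below tests whether a given edge coloring makes a graph rainbow connected, i.e. whether every vertex pair is joined by a path whose edges all carry distinct colors; the toy graph and coloring are assumptions, and nothing here reflects the bounds or constructions of the paper.

```python
# Brute-force rainbow-connectivity checker for small graphs.
from itertools import combinations

def rainbow_path_exists(adj, colour, s, t):
    def dfs(u, used_colours, visited):
        if u == t:
            return True
        for v in adj[u]:
            c = colour[frozenset((u, v))]
            if v not in visited and c not in used_colours:
                if dfs(v, used_colours | {c}, visited | {v}):
                    return True
        return False
    return dfs(s, set(), {s})

def is_rainbow_connected(adj, colour):
    return all(rainbow_path_exists(adj, colour, s, t)
               for s, t in combinations(adj, 2))

if __name__ == "__main__":
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # a path on 4 vertices
    colour = {frozenset((0, 1)): "a", frozenset((1, 2)): "b", frozenset((2, 3)): "c"}
    print(is_rainbow_connected(adj, colour))           # True: all edge colours distinct
```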
Abstract:
For a fixed positive integer k, a k-tuple total dominating set of a graph G = (V, E) is a subset TD_k of V such that every vertex in V is adjacent to at least k vertices of TD_k. In the minimum k-tuple total dominating set problem (MIN k-TUPLE TOTAL DOM SET), it is required to find a k-tuple total dominating set of minimum cardinality, and DECIDE MIN k-TUPLE TOTAL DOM SET is the decision version of the MIN k-TUPLE TOTAL DOM SET problem. In this paper, we show that DECIDE MIN k-TUPLE TOTAL DOM SET is NP-complete for split graphs, doubly chordal graphs and bipartite graphs. For chordal bipartite graphs, we show that MIN k-TUPLE TOTAL DOM SET can be solved in polynomial time. We also propose some hardness results and approximation algorithms for the MIN k-TUPLE TOTAL DOM SET problem. (C) 2012 Elsevier B.V. All rights reserved.
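The definition translates directly into a checker (this only verifies the property; it is not one of the paper's algorithms). The 4-cycle example below is illustrative.

```python
# T is a k-tuple total dominating set iff every vertex of G has at least k
# neighbours inside T.

def is_k_tuple_total_dominating(adj, T, k):
    return all(len(adj[v] & T) >= k for v in adj)

if __name__ == "__main__":
    # C4: the cycle on 4 vertices
    adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(is_k_tuple_total_dominating(adj, {0, 1, 2, 3}, 2))  # True
    print(is_k_tuple_total_dominating(adj, {0, 1, 2}, 2))     # False: vertex 0 has only one neighbour (1) in the set
```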
Abstract:
Suppose G = (V, E) is a simple graph and k is a fixed positive integer. A subset D of V is a distance k-dominating set of G if for every u ∈ V there exists a vertex v ∈ D such that d_G(u, v) ≤ k, where d_G(u, v) is the distance between u and v in G. A set D ⊆ V is a distance k-paired-dominating set of G if D is a distance k-dominating set and the induced subgraph G[D] contains a perfect matching. Given a graph G = (V, E) and a fixed integer k > 0, the MIN DISTANCE k-PAIRED-DOM SET problem is to find a minimum cardinality distance k-paired-dominating set of G. In this paper, we show that the decision version of MIN DISTANCE k-PAIRED-DOM SET is NP-complete for undirected path graphs. This strengthens the known complexity of the decision version of the MIN DISTANCE k-PAIRED-DOM SET problem in chordal graphs. We show that for a given graph G, unless NP ⊆ DTIME(n^O(log log n)), the MIN DISTANCE k-PAIRED-DOM SET problem cannot be approximated within a factor of (1 − ε) ln n for any ε > 0, where n is the number of vertices in G. We also show that the MIN DISTANCE k-PAIRED-DOM SET problem is APX-complete for graphs with degree bounded by 3. On the positive side, we present a linear time algorithm to compute the minimum cardinality of a distance k-paired-dominating set of a strongly chordal graph G if a strong elimination ordering of G is provided. We show that for a given graph G, the MIN DISTANCE k-PAIRED-DOM SET problem can be approximated with an approximation factor of 1 + ln 2 + k · ln(Δ(G)), where Δ(G) denotes the maximum degree of G. (C) 2012 Elsevier B.V. All rights reserved.
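A brute-force checker for the definition (BFS for the distance condition, naive search for a perfect matching in G[D]), suitable only for small examples and unrelated to the linear-time algorithm in the abstract; the path graph below is an assumption.

```python
from collections import deque

def bfs_distances(adj, source):
    dist, q = {source: 0}, deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def has_perfect_matching(vertices, adj):
    vertices = list(vertices)
    if not vertices:
        return True
    u = vertices[0]
    for v in vertices[1:]:
        if v in adj[u]:
            rest = [w for w in vertices if w not in (u, v)]
            if has_perfect_matching(rest, adj):
                return True
    return False

def is_distance_k_paired_dominating(adj, D, k):
    dominated = all(any(bfs_distances(adj, v).get(u, k + 1) <= k for v in D) for u in adj)
    return dominated and has_perfect_matching(D, adj)

if __name__ == "__main__":
    # Path 0-1-2-3-4
    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(is_distance_k_paired_dominating(adj, {1, 2}, 2))   # True
```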
Abstract:
Wireless ad hoc networks transmit information from a source to a destination via multiple hops in order to save energy and, thus, increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model the above optimization problem for randomly fading channels as a decentralized control problem - the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same for a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes), and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy. Hence, we need to jointly optimize over both of these; we cast this optimization problem as a large stochastic optimization problem. We then jointly optimize over a set of CSI functions of the local channel states and a corresponding factored class of control policies.
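To illustrate the slow-fading reduction mentioned above (policy computation as a shortest path on an induced graph), here is a plain Dijkstra sketch in which per-hop costs are derived from channel gains; the 1/|h|² cost model, the topology and the gains are stand-ins, not the paper's actual construction.

```python
import heapq

def shortest_path(costs, source, dest):
    """Plain Dijkstra over a dict {u: {v: cost}}."""
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in costs[u].items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, u = [dest], dest
    while u != source:
        u = prev[u]
        path.append(u)
    return list(reversed(path)), dist[dest]

if __name__ == "__main__":
    gains = {("s", "a"): 0.9, ("a", "d"): 0.8, ("s", "d"): 0.3}   # hypothetical channel gains
    costs = {"s": {}, "a": {}, "d": {}}
    for (u, v), h in gains.items():
        costs[u][v] = costs[v][u] = 1.0 / h**2                    # stand-in energy cost per hop
    print(shortest_path(costs, "s", "d"))                         # relays through "a"
```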
Abstract:
A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each such partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These are merged at higher levels to get the final classification. This algorithm leads to the same classification as the hierarchical agglomerative clustering algorithm when the clusters are well separated. The advantages of this algorithm are its short run time and small storage requirement. It is observed that the savings in storage space and computation time increase nonlinearly with the sample size.
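A rough sketch of the multilevel scheme described above, using SciPy's hierarchical clustering at both levels: the data are split into random partitions, each partition is clustered into sub-clusters, and the sub-cluster centroids are clustered again to form the final groups. Partition and cluster counts are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def multilevel_agglomerative(X, n_partitions=4, sub_clusters=5, final_clusters=3, seed=None):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    centroids = []
    for part in np.array_split(idx, n_partitions):        # random partitions of the data
        Xp = X[part]
        labels = fcluster(linkage(Xp, method="average"), sub_clusters, criterion="maxclust")
        centroids.extend(Xp[labels == c].mean(axis=0) for c in np.unique(labels))
    centroids = np.vstack(centroids)
    # Second level: agglomerative clustering on the sub-cluster centroids.
    top = fcluster(linkage(centroids, method="average"), final_clusters, criterion="maxclust")
    return centroids, top

if __name__ == "__main__":
    X = np.vstack([np.random.randn(100, 2) + c for c in ([0, 0], [6, 6], [12, 0])])
    centroids, labels = multilevel_agglomerative(X, seed=0)
    print(len(centroids), labels)
```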
Abstract:
The maximum independent set problem is NP-complete even when restricted to planar graphs, cubic planar graphs or triangle-free graphs. The problem of finding an absolute approximation still remains NP-complete. Various polynomial time approximation algorithms have been proposed for planar graphs that guarantee a fixed worst-case ratio between the independent set size obtained and the maximum independent set size. We present in this paper a simple and efficient O(|V|) algorithm that guarantees a ratio of 1/2 for planar triangle-free graphs. The algorithm differs completely from other approaches in that it collects groups of independent vertices at a time. Certain bounds we obtain in this paper relate to some interesting questions in the theory of extremal graphs.
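For contrast, here is a generic minimum-degree greedy that returns a maximal independent set; it is only a baseline sketch on a made-up triangle-free planar graph and is not the ratio-1/2 algorithm of the paper.

```python
def greedy_independent_set(adj):
    adj = {v: set(ns) for v, ns in adj.items()}
    independent = set()
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))      # pick a vertex of minimum degree
        independent.add(v)
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed
    return independent

if __name__ == "__main__":
    # 5-cycle: planar and triangle-free
    adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    print(greedy_independent_set(adj))               # an independent set of size 2
```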
Abstract:
Bluetooth is a short-range radio technology operating in the unlicensed industrial-scientific-medical (ISM) band at 2.45 GHz. A piconet is basically a collection of slaves controlled by a master. A scatternet, on the other hand, is established by linking several piconets together in an ad hoc fashion to yield a global wireless ad hoc network. This paper proposes a scheduling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in Bluetooth piconets and scatternets. We propose a novel algorithm for scheduling slots to slaves for both piconets and scatternets using multi-layered parameterized policies. Our scheduling scheme works with real data and obtains an optimal feedback policy within prescribed parameterized classes of these by using an efficient two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm. We show the convergence of our algorithm to an optimal multi-layered policy. We also propose novel polling schemes for intra- and inter-piconet scheduling that are seen to perform well. We present an extensive set of simulation results and performance comparisons with existing scheduling algorithms. Our results indicate that our proposed scheduling algorithm performs better overall than the existing algorithms on a wide range of experiments, for both piconets (Das et al. in INFOCOM, pp. 591–600, 2001; Lapeyrie and Turletti in INFOCOM conference proceedings, San Francisco, US, 2003; Shreedhar and Varghese in SIGCOMM, pp. 231–242, 1995) and scatternets (Har-Shai et al. in OPNETWORK, 2002; Saha and Matsumot in AICT/ICIW, 2006; Tan and Guttag in The 27th annual IEEE conference on local computer networks (LCN), Tampa, 2002). Our studies also confirm that our proposed scheme achieves a high throughput and low packet delays with reasonable fairness among all the connections.
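The optimization engine mentioned above is simultaneous perturbation stochastic approximation. The sketch below shows a basic one-timescale SPSA loop on a toy parameterized cost (a stand-in for a simulated packet-delay objective), not the paper's two-timescale, multi-layered scheduling scheme; all parameter values are illustrative.

```python
import numpy as np

def spsa_minimise(cost, theta0, iterations=500, a=0.5, c=0.1, seed=None):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iterations + 1):
        ak, ck = a / k, c / k**0.25
        delta = rng.choice([-1.0, 1.0], size=theta.shape)          # Bernoulli +/-1 perturbation
        # Two perturbed evaluations give a simultaneous-perturbation gradient estimate.
        grad = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2 * ck) * (1.0 / delta)
        theta -= ak * grad
    return theta

if __name__ == "__main__":
    # Synthetic noisy "delay" cost, minimised at scheduling weights (0.7, 0.3).
    target = np.array([0.7, 0.3])
    noisy_cost = lambda w: float(np.sum((w - target) ** 2) + 0.01 * np.random.randn())
    print(spsa_minimise(noisy_cost, [0.5, 0.5], seed=0))           # estimated weights
```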
Abstract:
Bluetooth is an emerging standard for short-range, low-cost and low-power wireless networks. Its MAC is a generic polling-based protocol, where a central Bluetooth unit (master) determines channel access for all other nodes (slaves) in the network (piconet). An important problem in Bluetooth is the design of efficient scheduling protocols. This paper proposes a polling policy that aims to achieve increased system throughput and reduced packet delays while providing reasonably good fairness among all traffic flows in a Bluetooth piconet. We present an extensive set of simulation results and performance comparisons with two important existing algorithms. Our results indicate that our proposed scheduling algorithm outperforms the Round Robin scheduling algorithm by more than 40% in all cases tried. Our study also confirms that our proposed policy achieves higher throughput and lower packet delays with reasonable fairness among all the connections.
Abstract:
In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime. Scheduling sensor activity using the optimum schedules obtained by the proposed algorithm is shown to achieve significantly longer sensing lifetimes compared to those achieved using physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., treating 95% area coverage as adequate instead of 100%) further enhances the lifetime.
Abstract:
In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point-targets in wireless sensor networks using information coverage. Defining a set of sensors which collectively can sense a target accurately as an information cover, we propose an algorithm to obtain Disjoint Set of Information Covers (DSIC), which achieves a longer network lifetime compared to the set of covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
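A toy sketch of the disjoint-cover idea, with information coverage simplified to plain set coverage: sensors are greedily peeled off into disjoint groups, each of which covers all targets and can be activated in its own round. The sensor-to-target coverage sets are made up, and this is not the DSIC algorithm itself.

```python
def disjoint_covers(coverage, targets):
    """coverage: {sensor: set(targets)}. Returns a list of disjoint covers."""
    remaining = dict(coverage)
    covers = []
    while True:
        uncovered, cover = set(targets), []
        while uncovered:
            best = max(remaining, key=lambda s: len(coverage[s] & uncovered), default=None)
            if best is None or not coverage[best] & uncovered:
                return covers                     # cannot complete another cover
            cover.append(best)
            uncovered -= coverage[best]
            del remaining[best]                   # disjointness: each sensor is used in one cover only
        covers.append(cover)

if __name__ == "__main__":
    coverage = {"s1": {1, 2}, "s2": {3}, "s3": {1, 3}, "s4": {2}}
    print(disjoint_covers(coverage, targets={1, 2, 3}))   # two disjoint covers in this toy case
```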
Abstract:
Combining the advanced techniques of optimal dynamic inversion and model-following neuro-adaptive control design, an efficient technique is presented for effective treatment of chronic myelogenous leukemia (CML). A recently developed nonlinear mathematical model for cell dynamics is used for the control (medication) synthesis. First, taking a set of nominal parameters, a nominal controller is designed based on the principle of optimal dynamic inversion. This controller can treat nominal patients (patients having the same nominal parameters as used for the control design) effectively. However, since the parameters of an actual patient can differ from those of the ideal patient, to make the treatment strategy more effective and efficient, a model-following neuro-adaptive controller is augmented to the nominal controller. In this approach, a neural network trained online (based on Lyapunov stability theory) facilitates a new adaptive controller, computed online. From the simulation studies, this adaptive control design approach (treatment strategy) is found to be very effective in treating CML in actual patients. Sufficient generality is retained in the theoretical developments in this paper, so that the techniques presented can be applied to other similar problems as well. Note that the technique presented is computationally non-intensive and all computations can be carried out online.
Abstract:
In this paper, we present a wavelet-based approach to solve the non-linear perturbation equation encountered in optical tomography. A particularly suitable data gathering geometry is used to gather a data set consisting of differential changes in intensity owing to the presence of the inhomogeneous regions. With this scheme, the unknown image, the data, as well as the weight matrix are all represented by wavelet expansions, thus yielding a representation of the original non-linear perturbation equation in the wavelet domain. The advantage of using the non-linear perturbation equation is that there is no need to recompute the derivatives during the entire reconstruction process. Once the derivatives are computed, they are transformed into the wavelet domain. The purpose of going to the wavelet domain is that it has an inherent localization and de-noising property. The use of approximation coefficients, without the detail coefficients, is ideally suited for diffuse optical tomographic reconstructions, as the diffusion equation removes most of the high-frequency information and the reconstruction appears low-pass filtered. We demonstrate through numerical simulations that by solving for merely the approximation coefficients one can reconstruct an image which has the same information content as the reconstruction from a non-waveletized procedure. In addition, we demonstrate better noise tolerance and a much reduced computation time for reconstructions with this approach.
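A minimal numerical sketch of solving a linearized perturbation system in the approximation subspace of a one-level Haar basis, in the spirit of the approach above; the weight matrix, data and sizes are synthetic stand-ins (the real problem uses the diffusion-model Jacobian and a full wavelet decomposition).

```python
import numpy as np

def haar_approx_matrix(n):
    """Rows of the one-level Haar approximation (averaging) operator, n even."""
    A = np.zeros((n // 2, n))
    for i in range(n // 2):
        A[i, 2 * i] = A[i, 2 * i + 1] = 1.0 / np.sqrt(2.0)
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_meas, n_vox = 64, 32
    W = rng.standard_normal((n_meas, n_vox))          # stand-in weight (sensitivity) matrix
    x_true = np.zeros(n_vox)
    x_true[10:14] = 1.0                               # synthetic inclusion
    y = W @ x_true                                    # differential intensity data

    Ay, Ax = haar_approx_matrix(n_meas), haar_approx_matrix(n_vox)
    W_a = Ay @ W @ Ax.T                               # weight matrix restricted to approximation space
    y_a = Ay @ y                                      # data in approximation space
    x_a, *_ = np.linalg.lstsq(W_a, y_a, rcond=None)   # solve the reduced system
    x_rec = Ax.T @ x_a                                # back-project to image space
    print(np.round(x_rec[8:16], 2))                   # recovered values around the inclusion
```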
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms which require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle Cartesian products of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; in each iteration one solves an MKL problem involving m kernels and m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decomposition. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
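The kind of update the mirror descent extension above builds on is the matrix exponentiated-gradient step (mirror descent with the von Neumann entropy mirror map) on unit-trace psd matrices. The sketch below runs that generic update on a synthetic objective; it is not the paper's EMKL or REKL algorithm, and the loss, step size and data are assumptions.

```python
import numpy as np

def sym_funm(K, f):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(K)
    return (V * f(w)) @ V.T

def matrix_md_step(K, grad, eta):
    """One exponentiated-gradient step, renormalised to unit trace."""
    K_next = sym_funm(sym_funm(K, np.log) - eta * grad, np.exp)
    return K_next / np.trace(K_next)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    S = rng.standard_normal((n, n))
    S = S @ S.T                                        # synthetic target similarity matrix
    S /= np.trace(S)
    K = np.eye(n) / n                                  # start at the scaled identity
    for _ in range(200):
        K = matrix_md_step(K, 2 * (K - S), eta=0.5)    # gradient of ||K - S||_F^2
    print(round(float(np.linalg.norm(K - S)), 4))      # should shrink towards 0
```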
Abstract:
In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search time requirements of the overall decision making process. Further, it is important that the abstraction is generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), that can be built by scanning the database only once. The PC-tree is a minimal-size complete representation of the data and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale hand-written digit data set. We use the experimental results to establish the efficacy of the proposed approach. (C) 2002 Elsevier Science B.V. All rights reserved.
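An illustrative sketch of a prefix tree with counts built in a single pass over the transactions, in the spirit of the PC-tree described above; the rough-set classification layer is not reproduced, and the tiny transaction database is made up.

```python
class PCNode:
    __slots__ = ("item", "count", "children")
    def __init__(self, item=None):
        self.item, self.count, self.children = item, 0, {}

def build_pc_tree(transactions):
    root = PCNode()
    for pattern in transactions:                  # single scan of the database
        node = root
        for item in sorted(pattern):              # canonical order so prefixes are shared
            node = node.children.setdefault(item, PCNode(item))
            node.count += 1
    return root

def dump(node, depth=0):
    for child in node.children.values():
        print("  " * depth + f"{child.item}:{child.count}")
        dump(child, depth + 1)

if __name__ == "__main__":
    transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c", "d"}]
    dump(build_pc_tree(transactions))             # shared prefix "a" carries count 3
```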