250 results for graph matching algorithms


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the H^T H matrix become increasingly weaker compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector (whose signal term becomes H^T Hx, where x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of the H^T H matrix. We also propose a simple estimation scheme which directly obtains an estimate of H^T H (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of minimum mean square error (MMSE) detection, because the MPD algorithm does not need a matrix inversion. It also achieves significantly better performance than MMSE and other message passing detection algorithms that use an MMSE estimate of H. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes.
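A minimal numerical sketch (not the paper's receiver) of the channel hardening effect described above, assuming an i.i.d. Gaussian channel; all names and parameters below are illustrative.

```python
# Illustrative sketch: empirically check channel hardening, i.e. that the
# off-diagonal entries of the normalized Gram matrix (H^T H)/N shrink relative
# to the diagonal as the number of receive antennas N grows.
import numpy as np

rng = np.random.default_rng(0)

def hardening_ratio(n_rx, n_tx):
    """Average |off-diagonal| / |diagonal| of the normalized Gram matrix."""
    H = rng.standard_normal((n_rx, n_tx)) / np.sqrt(n_rx)
    J = H.T @ H                      # Gram matrix seen by the detector
    off = np.abs(J - np.diag(np.diag(J))).mean()
    return off / np.abs(np.diag(J)).mean()

for n_rx in (16, 64, 128, 512):
    print(n_rx, round(hardening_ratio(n_rx, n_tx=16), 4))
# The ratio decays roughly like 1/sqrt(n_rx); this growing diagonal dominance
# is what allows the off-diagonal interference terms to be treated as Gaussian
# by a message passing detector.
```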

Relevance:

20.00%

Publisher:

Abstract:

We investigate the parameterized complexity of the following edge coloring problem motivated by the problem of channel assignment in wireless networks. For an integer q >= 2 and a graph G, the goal is to find a coloring of the edges of G with the maximum number of colors such that every vertex of the graph sees at most q colors. This problem is NP-hard for q >= 2, and has been well-studied from the point of view of approximation. Our main focus is the case when q = 2, which is already theoretically intricate and practically relevant. We show fixed-parameter tractable algorithms for both the standard and the dual parameter, and for the latter problem, the result is based on a linear vertex kernel.
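A small sketch of the constraint in this edge coloring problem, assuming a straightforward representation of colorings; it only verifies feasibility (every vertex sees at most q colors) and counts colors, and is not the fixed-parameter algorithm from the paper.

```python
# Verifier for the objective described above: maximize the number of colors
# used while every vertex is incident to edges of at most q distinct colors.
from collections import defaultdict

def is_feasible(edges, coloring, q):
    """edges: list of (u, v); coloring: dict edge -> color; q: max colors per vertex."""
    seen = defaultdict(set)
    for (u, v) in edges:
        c = coloring[(u, v)]
        seen[u].add(c)
        seen[v].add(c)
    return all(len(cols) <= q for cols in seen.values())

def num_colors(coloring):
    return len(set(coloring.values()))

edges = [(1, 2), (2, 3), (3, 1)]
coloring = {(1, 2): "red", (2, 3): "blue", (3, 1): "green"}
print(is_feasible(edges, coloring, q=2), num_colors(coloring))  # True 3
```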

Relevance:

20.00%

Publisher:

Abstract:

The correlation clustering problem is a fundamental problem in both theory and practice, and it involves identifying clusters of objects in a data set based on their similarity. A traditional modeling of this question as a graph theoretic problem involves associating vertices with data points and indicating similarity by adjacency. Clusters then correspond to cliques in the graph. The resulting optimization problem, Cluster Editing (and several variants), is very well-studied algorithmically. In many situations, however, translating clusters to cliques can be somewhat restrictive. A more flexible notion would be that of a structure where the vertices are mutually ``not too far apart'', without necessarily being adjacent. One such generalization is realized by structures called s-clubs, which are graphs of diameter at most s. In this work, we study the question of finding a set of at most k edges whose removal leaves us with a graph whose components are s-clubs. Recently, it has been shown that unless the Exponential Time Hypothesis (ETH) fails, Cluster Editing (whose components are 1-clubs) does not admit a sub-exponential time algorithm [STACS, 2013]. That is, there is no algorithm solving the problem in time 2^o(k) n^O(1). However, surprisingly, they show that when the number of cliques in the output graph is restricted to d, then the problem can be solved in time O(2^O(sqrt(dk)) + m + n). We show that this sub-exponential time algorithm for a fixed number of cliques is rather an exception than a rule. Our first result shows that, assuming the ETH, there is no algorithm solving the s-Club Cluster Edge Deletion problem in time 2^o(k) n^O(1). We show, further, that even the problem of deleting edges to obtain a graph with d s-clubs cannot be solved in time 2^o(k) n^O(1) for any fixed s, d >= 2. This is in radical contrast to the situation established for cliques, where sub-exponential algorithms are known.
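A minimal sketch of the target property in s-Club Cluster Edge Deletion, assuming an adjacency-list graph representation; it only checks that every connected component is an s-club (diameter at most s), which is the feasibility test, not the deletion algorithm whose hardness is discussed above.

```python
# Check whether every connected component of a graph is an s-club.
from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    dq = deque([src])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist

def components_are_s_clubs(adj, s):
    """adj: dict vertex -> iterable of neighbours. True iff every component has diameter <= s."""
    for u in adj:
        dist = bfs_dist(adj, u)
        if max(dist.values()) > s:   # eccentricity of u within its component
            return False
    return True

# Path a-b-c-d has diameter 3: it is a 3-club but not a 2-club.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(components_are_s_clubs(path, 2), components_are_s_clubs(path, 3))  # False True
```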

Relevance:

20.00%

Publisher:

Abstract:

We show that every graph of maximum degree 3 can be represented as the intersection graph of axis parallel boxes in three dimensions, that is, every vertex can be mapped to an axis parallel box such that two boxes intersect if and only if their corresponding vertices are adjacent. In fact, we construct a representation in which any two intersecting boxes touch just at their boundaries.
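A short sketch of the notion used above, under the convention of closed intervals on each axis: axis-parallel boxes in three dimensions intersect iff their intervals overlap on every axis, and a box representation is valid when box intersection matches adjacency exactly. The example data is illustrative, not the paper's construction.

```python
# Verify that a mapping from vertices to 3-D axis-parallel boxes represents a graph.
from itertools import combinations

def boxes_intersect(a, b):
    """a, b: ((x1, x2), (y1, y2), (z1, z2)) with closed intervals on each axis."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def is_box_representation(boxes, edges):
    """boxes: dict vertex -> box; edges: set of frozensets {u, v}."""
    return all((frozenset((u, v)) in edges) == boxes_intersect(boxes[u], boxes[v])
               for u, v in combinations(boxes, 2))

# Path u-v-w: u and w get disjoint boxes, v touches both only at the boundary.
boxes = {"u": ((0, 1), (0, 1), (0, 1)),
         "v": ((1, 2), (0, 1), (0, 1)),
         "w": ((2, 3), (0, 1), (0, 1))}
print(is_box_representation(boxes, {frozenset("uv"), frozenset("vw")}))  # True
```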

Relevance:

20.00%

Publisher:

Abstract:

The boxicity (resp. cubicity) of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of axis parallel boxes (resp. cubes) in R^k. Equivalently, it is the minimum number of interval graphs (resp. unit interval graphs) on the vertex set V such that the intersection of their edge sets is E. The problem of computing boxicity (resp. cubicity) is known to be inapproximable, even for restricted graph classes like bipartite, co-bipartite and split graphs, within an O(n^(1-epsilon))-factor for any epsilon > 0 in polynomial time, unless NP = ZPP. For any well known graph class of unbounded boxicity, there is no known polynomial time algorithm that achieves an n^(1-epsilon)-factor approximation of boxicity, for any epsilon > 0. In this paper, we consider the problem of approximating the boxicity (cubicity) of circular arc graphs - intersection graphs of arcs of a circle. Circular arc graphs are known to have unbounded boxicity, which could be as large as Omega(n). We give a (2 + 1/k)-factor (resp. (2 + ceil(log n)/k)-factor) polynomial time approximation algorithm for computing the boxicity (resp. cubicity) of any circular arc graph, where k >= 1 is the value of the optimum solution. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive two-approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity (resp. cubicity) is O(mn + n^2) in both these cases, and in O(mn + kn^2) = O(n^3) time we also get their corresponding box (resp. cube) representations, where n is the number of vertices of the graph and m is its number of edges. Our additive two-approximation algorithm directly works for any proper circular arc graph, since their NCA models can be computed in polynomial time. (C) 2014 Elsevier B.V. All rights reserved.
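A hedged sketch of the equivalent definition quoted above: a certificate that boxicity is at most k consists of k interval assignments whose interval graphs have edge sets intersecting exactly to E(G). This is verification only (the paper is about approximating the optimum k for circular arc graphs), and the example data is illustrative.

```python
# Check a boxicity-<=-k certificate given as k interval assignments.
from itertools import combinations

def interval_edges(intervals):
    """intervals: dict vertex -> (lo, hi). Edge iff the two closed intervals overlap."""
    return {frozenset((u, v))
            for u, v in combinations(intervals, 2)
            if intervals[u][0] <= intervals[v][1] and intervals[v][0] <= intervals[u][1]}

def certifies_boxicity(dimensions, edges):
    """dimensions: list of interval assignments (one per axis); edges: set of frozensets."""
    common = set.intersection(*(interval_edges(d) for d in dimensions))
    return common == edges

# The 4-cycle on {1,2,3,4} has boxicity 2: two interval graphs intersect to the cycle.
d1 = {1: (0, 1), 2: (1, 2), 3: (2, 3), 4: (0, 3)}
d2 = {1: (0, 3), 2: (0, 1), 3: (1, 2), 4: (2, 3)}
c4 = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(certifies_boxicity([d1, d2], c4))  # True
```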

Relevance:

20.00%

Publisher:

Abstract:

We present the first q-Gaussian smoothed functional (SF) estimator of the Hessian and the first Newton-based stochastic optimization algorithm that estimates both the Hessian and the gradient of the objective function using q-Gaussian perturbations. Our algorithm requires only two system simulations (regardless of the parameter dimension) and estimates both the gradient and the Hessian at each update epoch using these. We also present a proof of convergence of the proposed algorithm. In a related recent work (Ghoshdastidar, Dukkipati, & Bhatnagar, 2014), we presented gradient SF algorithms based on q-Gaussian perturbations. Our work extends prior work on SF algorithms by generalizing the class of perturbation distributions, as most distributions reported in the literature for which SF algorithms are known to work turn out to be special cases of the q-Gaussian distribution. Besides studying the convergence properties of our algorithm analytically, we also show the results of numerical simulations on a model of a queuing network that illustrate the significance of the proposed method. In particular, we observe that our algorithm performs better in most cases, over a wide range of q-values, in comparison to Newton SF algorithms with Gaussian and Cauchy perturbations, as well as the gradient q-Gaussian SF algorithms. (C) 2014 Elsevier Ltd. All rights reserved.
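An illustrative sketch only: a two-simulation smoothed functional gradient estimate with Gaussian perturbations, i.e. the familiar special case (q -> 1) of the q-Gaussian family discussed above. The paper's actual contribution, joint gradient and Hessian estimation from the same two simulations under general q-Gaussian perturbations, is not reproduced here; all parameter names and values are illustrative.

```python
# Two-simulation Gaussian smoothed functional gradient estimate and a toy
# stochastic optimization loop driven by it.
import numpy as np

rng = np.random.default_rng(1)

def sf_gradient(f, x, beta=0.1, samples=200):
    """Average of u * (f(x + beta*u) - f(x - beta*u)) / (2*beta), u ~ N(0, I)."""
    g = np.zeros_like(x)
    for _ in range(samples):
        u = rng.standard_normal(x.shape)
        g += u * (f(x + beta * u) - f(x - beta * u)) / (2 * beta)
    return g / samples

f = lambda x: float(np.sum((x - 1.0) ** 2))   # toy objective; minimum at x = 1
x = np.zeros(4)
for _ in range(100):                          # plain descent on the SF gradient estimate
    x -= 0.05 * sf_gradient(f, x)
print(np.round(x, 2))                         # close to [1, 1, 1, 1]
```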

Relevance:

20.00%

Publisher:

Abstract:

Developing countries constantly face the challenge of reliably matching electricity supply to increasing consumer demand. The traditional policy decisions of increasing supply and reducing demand centrally, by building new power plants and/or load shedding, have been insufficient. Locally installed microgrids along with consumer demand response can be suitable decentralized options to augment the centralized grid-based systems and plug the demand-supply gap. The objectives of this paper are to: (1) develop a framework to identify the appropriate decentralized energy options for demand-supply matching within a community, and (2) determine which of these options can suitably plug the existing demand-supply gap at varying levels of grid unavailability. A scenario analysis framework is developed to identify and assess the impact of different decentralized energy options at a community level and demonstrated for a typical urban residential community, Vijayanagar in Bangalore, India. A combination of an LPG-based CHP microgrid and proactive demand response by the community is the appropriate option that enables the Vijayanagar community to meet its energy needs 24/7 in a reliable, cost-effective manner. The paper concludes with an enumeration of the barriers and feasible strategies for the implementation of community microgrids in India based on stakeholder inputs. (C) 2014 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

It has been shown that iterative re-weighted strategies often improve the performance of many sparse reconstruction algorithms. However, these strategies are algorithm-dependent and cannot be easily extended to an arbitrary sparse reconstruction algorithm. In this paper, we propose a general iterative framework and a novel algorithm which iteratively enhance the performance of any given arbitrary sparse reconstruction algorithm. We theoretically analyze the proposed method using the restricted isometry property and derive sufficient conditions for convergence and performance improvement. We also evaluate the performance of the proposed method using numerical experiments with both synthetic and real-world data. (C) 2014 Elsevier B.V. All rights reserved.
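A hedged sketch of the generic iterative re-weighting idea mentioned above, not the paper's exact framework: a wrapper re-runs an arbitrary sparse solver with coordinate weights derived from the previous estimate. The inner solver here is plain ISTA for a weighted l1 problem, and all parameter values are illustrative.

```python
# Generic re-weighting wrapper around a black-box weighted sparse solver.
import numpy as np

def ista_weighted(A, y, w, lam=1.0, iters=500):
    """Minimize 0.5*||Ax - y||^2 + lam * sum_i w_i |x_i| by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def reweighted(solver, A, y, rounds=4, eps=1e-2):
    w = np.ones(A.shape[1])
    x = solver(A, y, w)
    for _ in range(rounds - 1):
        w = 1.0 / (np.abs(x) + eps)        # penalize coordinates believed to be zero
        x = solver(A, y, w)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = reweighted(ista_weighted, A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # expected to recover {3, 17, 42}
```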

Relevance:

20.00%

Publisher:

Abstract:

We present a new Hessian estimator based on the simultaneous perturbation procedure that requires three system simulations regardless of the parameter dimension. We then present two Newton-based simulation optimization algorithms that incorporate this Hessian estimator. The two algorithms differ primarily in the manner in which the Hessian estimate is used. Neither algorithm computes the inverse Hessian explicitly, thereby saving on computational effort. While our first algorithm directly obtains the product of the inverse Hessian with the gradient of the objective, our second algorithm makes use of the Sherman-Morrison matrix inversion lemma to recursively estimate the inverse Hessian. We provide proofs of convergence for both our algorithms. Next, we consider an interesting application of our algorithms to a problem of road traffic control. Our algorithms are seen to exhibit better performance than two Newton algorithms from a recent prior work.
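A small sketch of the matrix identity the second algorithm relies on: the Sherman-Morrison lemma lets a running inverse-Hessian estimate be updated after a rank-one correction without ever forming or inverting the Hessian. The example data is synthetic and illustrative.

```python
# Sherman-Morrison rank-one update of an inverse matrix.
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}."""
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au
    return A_inv - np.outer(Au, vA) / denom

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
u, v = rng.standard_normal(4), rng.standard_normal(4)
direct = np.linalg.inv(A + np.outer(u, v))
recursive = sherman_morrison_update(np.linalg.inv(A), u, v)
print(np.allclose(direct, recursive))  # True
```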

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To propose an image reconstruction technique, algebraic reconstruction technique-refraction correction (ART-rc). The proposed method takes care of refractive index mismatches present at the boundary of the gel dosimeter scanner and also corrects for interior ray refraction. Polymer gel dosimeters with high dose regions have a higher refractive index and optical density compared to the background medium; these changes in refractive index at high dose result in interior ray bending. Methods: The inclusion of the effects of refraction is an important step in the reconstruction of optical density in gel dosimeters. The proposed ray tracing algorithm models the interior multiple refraction at the inhomogeneities. Jacob's ray tracing algorithm has been modified to calculate the pathlengths of a ray that traverses through the higher dose regions. The algorithm computes the length of the ray in each pixel along its path, and these lengths are used as the weight matrix. Algebraic reconstruction technique and pixel based reconstruction algorithms are used for solving the reconstruction problem. The proposed method is tested with numerical phantoms for various noise levels. The experimental dosimetric results are also presented. Results: The results show that the proposed scheme ART-rc is able to reconstruct optical density inside the dosimeter better than the results obtained using filtered backprojection and conventional algebraic reconstruction approaches. The quantitative improvement using ART-rc is evaluated using the gamma-index. The refraction errors due to regions of different refractive indices are discussed. The effects of modeling interior refraction in the dose region are presented. Conclusions: The errors propagated due to multiple refraction effects have been modeled, and the improvements in reconstruction using the proposed model are presented. The refractive index of the dosimeter has a mismatch with the surrounding medium (for dry air or water scanning). The algorithm reconstructs the dose profiles by estimating the refractive indices of multiple inhomogeneities having different refractive indices and optical densities embedded in the dosimeter. This is achieved by tracking the path of the ray that traverses through the dosimeter. Extensive simulation studies have been carried out, and the results are found to match the experimental results. (C) 2015 American Association of Physicists in Medicine.
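A minimal sketch of the plain ART (Kaczmarz) update that a ray-length weight matrix feeds into, without any refraction modeling; the tiny 2x2 phantom, ray geometry, and relaxation value are illustrative, not the paper's setup.

```python
# Basic ART: W[i, j] is the length of ray i inside pixel j (toy unit weights),
# p[i] is the measured projection, x holds the reconstructed optical density.
import numpy as np

def art(W, p, sweeps=200, relax=0.5):
    x = np.zeros(W.shape[1])
    for _ in range(sweeps):
        for i in range(W.shape[0]):          # one relaxed update per ray, cyclically
            wi = W[i]
            nrm = wi @ wi
            if nrm > 0:
                x += relax * (p[i] - wi @ x) / nrm * wi
    return x

# 2x2 phantom [1 0; 0 2] probed by two row rays, one column ray, one diagonal ray.
x_true = np.array([1.0, 0.0, 0.0, 2.0])
W = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [1, 0, 0, 1]], float)
p = W @ x_true
print(np.round(art(W, p), 2))  # ~ [1, 0, 0, 2]
```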

Relevance:

20.00%

Publisher:

Abstract:

To perform super resolution of low resolution images, state-of-the-art methods are based on learning a pair of low-resolution and high-resolution dictionaries from multiple images. These trained dictionaries are used to replace patches in the low-resolution image with appropriate matching patches from the high-resolution dictionary. In this paper we propose using a single common image as the dictionary, in conjunction with approximate nearest neighbour fields (ANNF), to perform super resolution (SR). By using a common source image, we are able to bypass the learning phase and also to reduce the dictionary from a collection of hundreds of images to a single image. By adapting recent developments in ANNF computation to suit super-resolution, we are able to perform much faster and more accurate SR than existing techniques. To establish this claim, we compare the proposed algorithm against various state-of-the-art algorithms, and show that we are able to achieve better and faster reconstruction without any training.
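A hedged sketch of the core idea, using brute-force patch matching in place of the fast ANNF computation the paper relies on: patches of the low-resolution input are matched against downsampled patches of a single source image, and each match is replaced by the corresponding high-resolution source patch. Patch size, scale, and the crude downsampling are illustrative choices.

```python
# Single-image-dictionary super resolution, brute-force matching variant.
import numpy as np

def extract_patches(img, size):
    h, w = img.shape
    return {(i, j): img[i:i + size, j:j + size]
            for i in range(h - size + 1) for j in range(w - size + 1)}

def super_resolve(lr, src_hr, scale=2, size=3):
    src_lr = src_hr[::scale, ::scale]                    # crude downsampling of the source
    src_patches = extract_patches(src_lr, size)
    out = np.zeros((lr.shape[0] * scale, lr.shape[1] * scale))
    for (i, j), p in extract_patches(lr, size).items():
        # nearest source patch; this exhaustive search is what ANNF accelerates
        si, sj = min(src_patches, key=lambda k: np.sum((src_patches[k] - p) ** 2))
        hr_patch = src_hr[si * scale:(si + size) * scale, sj * scale:(sj + size) * scale]
        out[i * scale:(i + size) * scale, j * scale:(j + size) * scale] = hr_patch
    return out

rng = np.random.default_rng(0)
src_hr = rng.random((32, 32))          # the single "dictionary" image
lr = src_hr[::2, ::2].copy()           # toy input: downsampled version of the source
print(super_resolve(lr, src_hr).shape) # (32, 32)
```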

Relevance:

20.00%

Publisher:

Abstract:

A rainbow matching of an edge-colored graph G is a matching in which no two edges have the same color. There have been several studies regarding the maximum size of a rainbow matching in a properly edge-colored graph G in terms of its minimum degree delta(G). Wang (2011) asked whether there exists a function f such that a properly edge-colored graph G with at least f(delta(G)) vertices is guaranteed to contain a rainbow matching of size delta(G). This was answered in the affirmative later; the best currently known function, due to Lo and Tan (2014), is f(k) = 4k - 4 for k >= 4 and f(k) = 4k - 3 for k <= 3. Afterwards, the research was focused on finding lower bounds for the size of maximum rainbow matchings in properly edge-colored graphs with fewer than 4 delta(G) - 4 vertices. A strong edge-coloring of a graph G is a restriction of proper edge-coloring where every color class is required to be an induced matching, instead of just being a matching. In this paper, we give lower bounds for the size of a maximum rainbow matching in a strongly edge-colored graph G in terms of delta(G). We show that for a strongly edge-colored graph G, if |V(G)| >= 2*floor(3 delta(G)/4), then G has a rainbow matching of size floor(3 delta(G)/4), and if |V(G)| < 2*floor(3 delta(G)/4), then G has a rainbow matching of size floor(|V(G)|/2). In addition, we prove that if G is a strongly edge-colored graph that is triangle-free, then it contains a rainbow matching of size at least delta(G). (C) 2015 Elsevier B.V. All rights reserved.
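A small sketch of the object being bounded above: a rainbow matching is a set of vertex-disjoint edges with pairwise distinct colors. The greedy pass below returns some (not necessarily maximum) rainbow matching for any edge-colored graph; it is only a definition-level illustration, not the extremal argument of the paper.

```python
# Greedy construction of a rainbow matching in an edge-colored graph.
def greedy_rainbow_matching(colored_edges):
    """colored_edges: iterable of (u, v, color). Returns a list of chosen edges."""
    used_vertices, used_colors, matching = set(), set(), []
    for u, v, c in colored_edges:
        if u not in used_vertices and v not in used_vertices and c not in used_colors:
            matching.append((u, v, c))
            used_vertices.update((u, v))
            used_colors.add(c)
    return matching

edges = [(1, 2, "red"), (2, 3, "blue"), (3, 4, "red"), (4, 5, "green"), (5, 6, "blue")]
print(greedy_rainbow_matching(edges))  # [(1, 2, 'red'), (4, 5, 'green')]
```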

Relevance:

20.00%

Publisher:

Abstract:

We consider a continuum percolation model consisting of two types of nodes, namely legitimate and eavesdropper nodes, distributed according to independent Poisson point processes in R^2 of intensities lambda and lambda_E, respectively. A directed edge from one legitimate node A to another legitimate node B exists provided that the strength of the signal transmitted from node A that is received at node B is higher than that received at any eavesdropper node. The strength of the signal received at a node from a legitimate node depends not only on the distance between these nodes, but also on the location of the other legitimate nodes and an interference suppression parameter gamma. The graph is said to percolate when there exists an infinite connected component. We show that for any finite intensity lambda_E of eavesdropper nodes, there exists a critical intensity lambda_c < infinity such that for all lambda > lambda_c the graph percolates for sufficiently small values of the interference parameter. Furthermore, for the subcritical regime, we show that there exists a lambda_0 such that for all lambda < lambda_0 <= lambda_c a suitable graph defined over eavesdropper node connections percolates, which precludes percolation in the graphs formed by the legitimate nodes.
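An illustrative simulation sketch that ignores the interference term (roughly the gamma = 0 case): legitimate and eavesdropper nodes are drawn from independent Poisson point processes in a finite window, and a directed edge A -> B is added when B receives A's signal more strongly than every eavesdropper does, which under a purely distance-based path loss reduces to |A - B| < min over eavesdroppers E of |A - E|. Window size and intensities are illustrative.

```python
# Toy simulation of the secrecy-graph edge rule without interference.
import numpy as np

rng = np.random.default_rng(0)

def poisson_points(intensity, side):
    n = rng.poisson(intensity * side * side)
    return rng.uniform(0, side, size=(n, 2))

def secrecy_edges(legit, eaves):
    edges = []
    for a, pa in enumerate(legit):
        d_eave = np.min(np.linalg.norm(eaves - pa, axis=1)) if len(eaves) else np.inf
        for b, pb in enumerate(legit):
            if a != b and np.linalg.norm(pb - pa) < d_eave:
                edges.append((a, b))
    return edges

legit = poisson_points(intensity=1.0, side=10.0)    # intensity lambda
eaves = poisson_points(intensity=0.1, side=10.0)    # intensity lambda_E
print(len(legit), len(eaves), len(secrecy_edges(legit, eaves)))
```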

Relevance:

20.00%

Publisher:

Abstract:

We investigate the problem of timing recovery for 2-D magnetic recording (TDMR) channels. We develop a timing error model for the TDMR channel considering phase and frequency offsets with noise. We propose a 2-D data-aided phase-locked loop (PLL) architecture for tracking variations in the position and movement of the read head in the down-track and cross-track directions, and analyze the convergence of the algorithm under non-separable timing errors. We further develop a 2-D interpolation-based timing recovery scheme that works in conjunction with the 2-D PLL. We quantify the efficiency of our proposed algorithms by simulations over a 2-D magnetic recording channel with timing errors.
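An idealized one-dimensional sketch of the data-aided PLL building block (the paper's scheme is 2-D, tracking down-track and cross-track offsets jointly): a proportional-plus-integral loop filter driven by a noisy timing-error signal tracks a phase that drifts with a constant frequency offset. The error signal, gains, and offsets below are illustrative stand-ins, not the paper's detector or channel model.

```python
# Second-order (proportional + integral) PLL tracking a phase ramp.
import numpy as np

rng = np.random.default_rng(0)

def run_pll(n=2000, freq_offset=0.002, kp=0.05, ki=0.002, noise=0.05):
    est, integ, true_phase = 0.0, 0.0, 0.0
    errs = []
    for _ in range(n):
        true_phase += freq_offset                  # phase ramp from the frequency offset
        e = (true_phase - est) + noise * rng.standard_normal()   # idealized data-aided error
        integ += ki * e                            # integral branch absorbs the frequency offset
        est += kp * e + integ                      # proportional branch tracks residual phase
        errs.append(true_phase - est)
    return np.mean(np.abs(errs[-200:]))            # steady-state tracking error

print(round(run_pll(), 4))   # small residual error once the loop has locked
```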

Relevance:

20.00%

Publisher:

Abstract:

We study the problem of finding small s-t separators that induce graphs having certain properties. It is known that finding a minimum clique s-t separator is polynomial-time solvable (Tarjan in Discrete Math. 55:221-232, 1985), while for example the problems of finding a minimum s-t separator that induces a connected graph or forms an independent set are fixed-parameter tractable when parameterized by the size of the separator (Marx et al. in ACM Trans. Algorithms 9(4): 30, 2013). Motivated by these results, we study properties that generalize cliques, independent sets, and connected graphs, and determine the complexity of finding separators satisfying these properties. We investigate these problems also on bounded-degree graphs. Our results are as follows: Finding a minimum c-connected s-t separator is FPT for c = 2 and W[1]-hard for any c >= 3. Finding a minimum s-t separator with diameter at most d is W[1]-hard for any d >= 2. Finding a minimum r-regular s-t separator is W[1]-hard for any r >= 1. For any decidable graph property, finding a minimum s-t separator with this property is FPT parameterized jointly by the size of the separator and the maximum degree. Finding a connected s-t separator of minimum size does not have a polynomial kernel, even when restricted to graphs of maximum degree at most 3, unless NP is contained in coNP/poly.
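A baseline sketch only: computing an unconstrained minimum s-t vertex separator with networkx and then checking whether it happens to satisfy a property such as inducing a connected subgraph. The FPT and W[1]-hardness results above concern finding minimum separators that are required to satisfy the property, which this naive check does not do; the grid graph and terminal choice are illustrative.

```python
# Unconstrained minimum s-t vertex separator plus a property check.
import networkx as nx

G = nx.grid_2d_graph(4, 4)             # 4x4 grid; separate opposite corners
s, t = (0, 0), (3, 3)
cut = nx.minimum_node_cut(G, s, t)     # smallest vertex set whose removal disconnects s from t
induced = G.subgraph(cut)
print(len(cut), nx.is_connected(induced))  # e.g. 2 False: this minimum cut is not connected
```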