169 results for Graph Decomposition


Relevance: 20.00%

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology, a key aspect of which is self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we assess this approximation in two complementary ways. We assume that the mesh topology is a random geometric graph on the nodes, and that some nodes are designated as anchors with known locations. First, we obtain high-probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. Second, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes separated by a hop distance h. This approximation is shown, through simulation, to match the true density function very closely. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some well-known algorithms in the literature. Belief-propagation-based message passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first use of message passing for hop-count-based self-localization.
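
As a quick illustration of the approximation the analyses target, the following sketch (not the paper's algorithm; the node count and radius are arbitrary choices) builds a random geometric graph and tabulates the mean Euclidean distance at each hop count from an anchor:

```python
# Illustrative only: relate hop count to Euclidean distance from an anchor
# in a random geometric graph (dense uniform deployment in the unit square).
from collections import defaultdict
import networkx as nx

n, radius = 2000, 0.05
G = nx.random_geometric_graph(n, radius)
pos = nx.get_node_attributes(G, "pos")

anchor = 0                                # a node with known location
hops = dict(nx.single_source_shortest_path_length(G, anchor))

by_hop = defaultdict(list)
ax, ay = pos[anchor]
for node, h in hops.items():
    x, y = pos[node]
    by_hop[h].append(((x - ax) ** 2 + (y - ay) ** 2) ** 0.5)

# In a dense deployment the mean distance grows roughly linearly with h,
# which is the proportionality approximation analyzed in the paper.
for h in sorted(by_hop):
    print(h, round(sum(by_hop[h]) / len(by_hop[h]), 3))
```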

Relevance: 20.00%

Abstract:

In this paper, we employ message-passing algorithms over graphical models to jointly detect and decode symbols transmitted over large multiple-input multiple-output (MIMO) channels with low-density parity-check (LDPC) coded bits. We adopt a factor-graph-based technique to integrate the detection and decoding operations, using a Gaussian approximation of the spatial interference for detection. This yields a low-complexity joint detection/decoding approach for large-dimensional MIMO systems coded with LDPC codes of large block lengths. The joint processing achieves significantly better performance than separate detection and decoding.
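
A minimal sketch of the Gaussian-approximation detection step, under simplifying assumptions not made in the paper (BPSK symbols, a single detection pass, uninformative priors in place of LDPC decoder feedback):

```python
# Gaussian approximation of spatial interference for soft MIMO detection
# (BPSK, known channel, one pass; priors would come from the LDPC decoder).
import numpy as np

rng = np.random.default_rng(0)
nt, nr, sigma2 = 8, 8, 0.1
H = rng.standard_normal((nr, nt)) / np.sqrt(nt)   # known channel matrix
x = rng.choice([-1.0, 1.0], size=nt)              # BPSK-mapped coded bits
y = H @ x + np.sqrt(sigma2) * rng.standard_normal(nr)

mean = np.zeros(nt)        # prior soft symbol means (decoder feedback)
var = np.ones(nt)          # prior soft symbol variances
llr = np.empty(nt)
for k in range(nt):
    others = [j for j in range(nt) if j != k]
    # interference from the other antennas collapsed into one Gaussian
    mu_I = H[:, others] @ mean[others]
    var_I = (H[:, others] ** 2) @ var[others] + sigma2
    z = y - mu_I
    # per-antenna Gaussian likelihood ratios combine into a single LLR
    llr[k] = 2.0 * (H[:, k] * z / var_I).sum()

hits = int((np.sign(llr) == x).sum())
print(hits, "of", nt, "symbols correct after one pass")
```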

Relevance: 20.00%

Abstract:

Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., in identifying cyclic components or in propagating information in topological order. We perform a careful study of its structure and propose a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive, flow-insensitive points-to analysis algorithm that uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of the SPEC 2000 benchmarks and five large open-source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
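
For orientation, here is a minimal baseline of inclusion-based propagation over subset edges; the paper's contribution, dominant-pointer equivalence, reduces the set of pointers such a loop must track:

```python
# Baseline Andersen-style propagation to a fixed point over subset edges.
def propagate(points_to, subset_edges):
    """points_to: dict pointer -> set of abstract objects;
       subset_edges: (src, dst) pairs meaning pts(src) is a subset of pts(dst)."""
    changed = True
    while changed:
        changed = False
        for src, dst in subset_edges:
            missing = points_to[src] - points_to[dst]
            if missing:
                points_to[dst] |= missing
                changed = True
    return points_to

# p = &a; q = &b; q = p  gives the subset edge pts(p) <= pts(q)
pts = {"p": {"a"}, "q": {"b"}}
print(propagate(pts, [("p", "q")]))   # {'p': {'a'}, 'q': {'a', 'b'}}
```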

Relevance: 20.00%

Abstract:

Users can rarely reveal their information need in full detail to a search engine within one or two words, so search engines need to "hedge their bets" and present diverse results within the precious 10 response slots. Diversity in ranking has attracted much recent interest. Most existing solutions estimate the marginal utility of an item given a set of items already in the response, and then use variants of greedy set cover. Others design graphs with the items as nodes and choose diverse items based on visit rates (PageRank). Here we introduce a radically new and natural formulation of diversity as finding centers in resistive graphs. Unlike in PageRank, we do not specify the edge resistances (equivalently, conductances) and ask for node visit rates. Instead, we look for a sparse set of center nodes such that the effective conductance from the centers to the rest of the graph has maximum entropy. We give a cogent semantic justification for thus turning PageRank on its head. In marked deviation from prior work, our edge resistances are learnt from training data. Inference and learning are NP-hard, but we give practical solutions. In extensive experiments with subtopic retrieval, social network search, and document summarization, our approach convincingly surpasses recently published diversity algorithms such as subtopic cover, max-marginal relevance (MMR), Grasshopper, DivRank, and SVMdiv.
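
One plausible reading of the objective, sketched under strong simplifications (conductances given rather than learnt, a single center, exhaustive search): score a candidate center by the entropy of its effective conductances, obtained from the Laplacian pseudoinverse:

```python
# Score a candidate center by the entropy of its effective conductances
# to all other nodes, computed from the Laplacian pseudoinverse.
import numpy as np

def center_entropy(A, c):
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    Lp = np.linalg.pinv(L)
    others = [v for v in range(A.shape[0]) if v != c]
    # effective resistance R(c,v) = Lp[c,c] + Lp[v,v] - 2 Lp[c,v]
    R = np.array([Lp[c, c] + Lp[v, v] - 2 * Lp[c, v] for v in others])
    p = (1.0 / R) / (1.0 / R).sum()           # normalized conductances
    return float(-(p * np.log(p)).sum())      # higher entropy = more central

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(max(range(4), key=lambda c: center_entropy(A, c)))   # node 2
```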

Relevance: 20.00%

Abstract:

We show that every graph of maximum degree 3 can be represented as the intersection graph of axis-parallel boxes in three dimensions; that is, every vertex can be mapped to an axis-parallel box such that two boxes intersect if and only if their corresponding vertices are adjacent. In fact, we construct a representation in which any two intersecting boxes merely touch at their boundaries. Furthermore, this construction can be realized in linear time.
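
The claimed property is easy to check mechanically; the sketch below (an illustrative checker, not the linear-time construction) verifies that a set of boxes realizes a given graph, counting boundary contact as intersection:

```python
# Checks that a set of 3-D axis-parallel boxes realizes a given graph;
# closed intervals, so boxes touching at a boundary count as intersecting.
def boxes_intersect(b1, b2):
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def is_valid_representation(edges, boxes):
    E = {frozenset(e) for e in edges}
    n = len(boxes)
    return all((frozenset((u, v)) in E) == boxes_intersect(boxes[u], boxes[v])
               for u in range(n) for v in range(u + 1, n))

# A path on 3 vertices: consecutive boxes just touch at a shared plane.
boxes = [((0, 1), (0, 1), (0, 1)),
         ((1, 2), (0, 1), (0, 1)),
         ((2, 3), (0, 1), (0, 1))]
print(is_valid_representation([(0, 1), (1, 2)], boxes))   # True
```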

Relevance: 20.00%

Abstract:

A pairwise independent network (PIN) model consists of pairwise secret keys (SKs) distributed among m terminals. The goal is to generate, through public communication among the terminals, a group SK that is information-theoretically secure from an eavesdropper. In this paper, we study the Harary graph PIN model, which has useful fault-tolerant properties. We derive the exact SK capacity for a regular Harary graph PIN model. Lower and upper bounds on the fault-tolerant SK capacity of the Harary graph PIN model are also derived.
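
For even connectivity k, the Harary graph H_{k,n} is the circulant in which each terminal shares a pairwise key with its k/2 nearest neighbours on either side of a ring; a small sketch using networkx (parameter values are arbitrary):

```python
# Harary graph H_{k,n} for even k: a circulant ring of n terminals, each
# holding pairwise keys with its k/2 nearest neighbours on either side.
import networkx as nx

def harary_even(k, n):
    assert k % 2 == 0 and 0 < k < n
    return nx.circulant_graph(n, list(range(1, k // 2 + 1)))

G = harary_even(4, 8)            # 8 terminals, 4 pairwise SKs per terminal
print(nx.node_connectivity(G))   # 4: the overlay survives any 3 node failures
```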

Relevance: 20.00%

Abstract:

We propose a novel numerical method based on a generalized eigenvalue decomposition for solving the diffusion equation governing the correlation diffusion of photons in turbid media. Medical imaging modalities such as diffuse correlation tomography and ultrasound-modulated optical tomography have as their forward model the (elliptic) diffusion equation parameterized by a time variable. Hitherto, the computation of the correlation function has required solving the diffusion equation repeatedly over the time parameter. We show that the use of a certain time-independent generalized eigenfunction basis decouples the spatial and time dependence of the correlation function, allowing greater computational efficiency in arriving at the forward solution. Besides presenting a mathematical analysis of the generalized eigenvalue problem based on spectral theory, we present numerical results comparing the proposed method with the standard technique for solving the diffusion equation.
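
A sketch of the decoupling idea on stand-in matrices (the actual forward model discretizes the correlation diffusion equation; K, M, and tau below are only illustrative): one generalized eigendecomposition serves every value of the time parameter:

```python
# One generalized eigendecomposition K phi = lambda M phi serves every value
# of the parameter tau in (K + tau M) u = s; K, M, s are stand-ins here.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 50
Q = rng.standard_normal((n, n))
K = Q @ Q.T + n * np.eye(n)                    # symmetric positive definite
M = np.eye(n) + 0.1 * np.diag(rng.random(n))   # positive-definite "mass"
s = rng.standard_normal(n)

lam, Phi = eigh(K, M)       # time-independent generalized eigenpairs
c = Phi.T @ s               # project the source once

for tau in (0.0, 0.5, 2.0):   # sweep the time parameter with no new solves
    u = Phi @ (c / (lam + tau))
    print(np.allclose(u, np.linalg.solve(K + tau * M, s)))   # True
```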

Relevance: 20.00%

Abstract:

Peer-to-peer networks are used extensively nowadays for file sharing, video on demand, and live streaming. For IPTV, delay deadlines are more stringent than for file sharing. Coolstreaming was the first P2P IPTV system. In this paper, we model New Coolstreaming (the newer version of Coolstreaming) via a queueing network. We use a two-time-scale decomposition of Markov chains to compute the stationary distribution of the number of peers and the expected number of substreams in the overlay that are not received at the required rate due to parent overloading. We also characterize the end-to-end delay experienced by a video packet that originates at the server and is received by a user. Three factors contribute to the delay. The first is the mean shortest path length between any two overlay peers, in overlay hops of the partnership graph, which is shown to be O(log n), where n is the number of peers in the overlay. The second is the mean number of routers between any two overlay neighbours, which is seen to be at most O(log N_I), where N_I is the number of routers in the Internet. The third is the mean delay at a router in the Internet, for which we provide an approximation E[W]. Thus, the mean end-to-end delay in New Coolstreaming is shown to be upper bounded by O((log E[N])(log N_I) E[W]), where E[N] is the mean number of peers in a channel.
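
A quick empirical check of the first factor, with a random regular graph standing in for the partnership overlay (an assumption made here for illustration, not the paper's model): the mean shortest path grows roughly like log n:

```python
# Mean shortest path length of a random 5-regular overlay vs. log n.
import math
import networkx as nx

for n in (100, 400, 1600):
    G = nx.random_regular_graph(5, n, seed=0)   # each peer keeps 5 partners
    print(n, round(nx.average_shortest_path_length(G), 2),
          round(math.log(n), 2))
```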

Relevance: 20.00%

Abstract:

A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. The approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality-reduction technique for large systems of equations. The proposed framework is shown to be effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are demonstrated on a numerical blood-vessel phantom, where the initial pressure is known exactly for quantitative comparison.
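
The connection between Tikhonov minimization and damped LSQR can be sketched as follows (illustrative matrices and a naive lambda sweep, not the paper's optimal-parameter search):

```python
# Damped LSQR solves min ||Ax - b||^2 + lam^2 ||x||^2 without forming A^T A.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 100))        # stand-in for the system matrix
x_true = np.zeros(100)
x_true[10:20] = 1.0                        # a blocky "initial pressure"
b = A @ x_true + 0.05 * rng.standard_normal(200)

for lam in (1e-3, 1e-1, 1e1):              # naive sweep over the parameter
    x = lsqr(A, b, damp=lam)[0]
    print(lam, round(float(np.linalg.norm(x - x_true)), 3))
```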

Relevance: 20.00%

Abstract:

Entropy is a fundamental thermodynamic property that has attracted wide attention across domains, including chemistry. Inference of the entropy of chemical compounds using various approaches has been a widely studied topic. However, many aspects of entropy in chemical compounds remain unexplained. In the present work, we propose two new information-theoretic molecular descriptors for the prediction of the gas-phase thermal entropy of organic compounds. The descriptors reflect the bulk and size of the compounds as well as the gross topological symmetry in their structures, all of which are believed to determine entropy. A high correlation between the entropy values and our information-theoretic indices has been found, and the entropy values predicted by the corresponding statistically significant regression model lie within acceptable approximation. We provide an additional mathematical result, in the form of a theorem with proof, that might further help in assessing changes in gas-phase thermal entropy with changes in molecular structure. The proposed descriptors, regression model, and mathematical result are expected to augment predictions of gas-phase thermal entropy for a large number of chemical compounds.
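
The paper's two descriptors are not reproduced here, but a generic index of the same information-theoretic flavour is easy to state: the Shannon entropy of an atom partition of the molecular graph, which drops as topological symmetry rises:

```python
# Shannon entropy of an atom partition (here, by vertex degree) of a
# hydrogen-suppressed molecular graph; symmetry lowers the index.
import math
from collections import Counter

def partition_entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

# propane C-C-C: degree sequence (1, 2, 1)
print(round(partition_entropy([1, 2, 1]), 3))   # 0.918 bits
```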

Relevance: 20.00%

Abstract:

We present a computer simulation study of the two-dimensional infrared (2D-IR) spectroscopy of water confined in reverse micelles (RMs) of various sizes. The study is motivated by the need to understand the altered dynamics of confined water through a layerwise decomposition, with the aim of quantifying the relative contributions of water molecules in different layers to the calculated 2D-IR spectrum. The 0-1 transition spectra clearly show substantial elongation along the diagonal in the surface water layer of RMs of different sizes, due to inhomogeneous broadening and incomplete spectral diffusion. Fitting of the frequency fluctuation correlation functions reveals that the motion of the surface water molecules is sub-diffusive, indicating the constrained nature of their dynamics. This is further supported by the two-peak nature of the angular analogue of the van Hove correlation function. With increasing system size, the water molecules become more diffusive, and spectral diffusion is nearly complete in the central layer of the larger RMs. Comparisons between experiments and simulations establish the correspondence between the spectral decomposition available in experiments and the spatial decomposition available in simulations. Simulations also allow a quantitative exploration of the relative roles of water, sodium ions, and sulfonate head groups in vibrational dephasing. Interestingly, the negative cross-correlation between the forces on the oxygen and hydrogen of the O-H bond, present in bulk water, decreases significantly in the surface layer of each RM. This negative cross-correlation gradually increases in the central water pool with increasing RM size, and this is found to be partly responsible for the faster relaxation of water in the central pool.
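
The central fitted object is the frequency fluctuation correlation function C(t) = <dw(0) dw(t)>; a minimal sketch on a synthetic frequency trajectory (an Ornstein-Uhlenbeck stand-in, not actual simulation data):

```python
# FFCF C(t) = <dw(0) dw(t)> from a synthetic O-H frequency trajectory;
# an Ornstein-Uhlenbeck process stands in for the simulated frequencies.
import numpy as np

rng = np.random.default_rng(3)
n, tau = 20000, 50.0
w = np.zeros(n)
for i in range(1, n):    # slow decay mimics constrained surface water
    w[i] = w[i - 1] * (1 - 1 / tau) + rng.standard_normal() / np.sqrt(tau)

dw = w - w.mean()
C = np.array([np.mean(dw[: n - l] * dw[l:]) for l in range(200)])
print(np.round(C[:5] / C[0], 3))   # normalized FFCF, decays on the tau scale
```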

Relevance: 20.00%

Abstract:

A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several binarization algorithms can be found in the literature. Which gives the best result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents, and manually choosing the best from a set of binarized documents is time-consuming. To automate the selection of the best-segmented document, we either need the ground truth of the document or an evaluation metric. If ground truth is available, precision and recall can be used to choose the best binarized document. But what if ground truth is not available? Can we devise a metric that evaluates binarized documents without it? We propose such a metric, based on eigenvalue decomposition, and evaluate it on the DIBCO and H-DIBCO datasets. The proposed method chooses the binarized document that is closest to the ground truth of the document.
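
When ground truth is available, the baseline selection rule mentioned above is straightforward; a minimal sketch using the F-measure of precision and recall (array shapes and the candidate are illustrative):

```python
# Ground-truth-based selection: score each binarization by its F-measure.
import numpy as np

def f_measure(binarized, ground_truth):
    """Boolean arrays; True marks foreground (ink) pixels."""
    tp = np.logical_and(binarized, ground_truth).sum()
    precision = tp / max(binarized.sum(), 1)
    recall = tp / max(ground_truth.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)

gt = np.zeros((4, 4), bool); gt[1:3, 1:3] = True
cand = np.zeros((4, 4), bool); cand[1:3, 1:4] = True   # over-inked candidate
print(round(f_measure(cand, gt), 3))   # 0.8
```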

Relevance: 20.00%

Abstract:

Precise pointer analysis is a problem of interest to both the compiler and the program verification communities. Flow sensitivity is an important dimension of pointer analysis that affects the precision of the final result. Scaling flow-sensitive pointer analysis to millions of lines of code is a major challenge. Recently, staged flow-sensitive pointer analysis has been proposed, which exploits a sparse representation of program code created by a staged analysis. In this paper we formulate staged flow-sensitive pointer analysis as a graph-rewriting problem. Graph rewriting has already been used for flow-insensitive analysis; however, formulating flow-sensitive pointer analysis as graph rewriting poses additional challenges due to the nature of flow sensitivity. We implement our parallel algorithm using Intel Threading Building Blocks and demonstrate considerable scaling (up to 2.6x) for 8 threads on a set of 10 benchmarks. Compared to the sequential implementation of staged flow-sensitive analysis, a single-threaded execution of our implementation performs better on 8 of the benchmarks.
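
A Python stand-in for the parallel scheme (the actual implementation uses Intel Threading Building Blocks; the bulk-synchronous scan/apply split here is a simplification): rewrite candidates are evaluated in parallel and applied race-free:

```python
# Bulk-synchronous stand-in for the parallel worklist: a read-only scan of
# rewrite candidates runs in parallel; the apply phase is sequential.
from concurrent.futures import ThreadPoolExecutor

def new_facts(edge, pts):
    src, dst = edge
    return dst, pts[src] - pts[dst]     # what pts(src) would add to pts(dst)

def rewrite_to_fixpoint(edges, pts, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        changed = True
        while changed:
            changed = False
            results = list(pool.map(lambda e: new_facts(e, pts), edges))
            for dst, extra in results:  # race-free sequential apply
                if extra:
                    pts[dst] |= extra
                    changed = True
    return pts

pts = {"a": {"x"}, "b": set(), "c": set()}
print(rewrite_to_fixpoint([("a", "b"), ("b", "c")], pts))
```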

Relevance: 20.00%

Abstract:

The thermal decomposition of propargyl alcohol (C3H3OH), a molecule of interest in interstellar chemistry and combustion, was investigated in a single-pulse shock tube over the temperature range 953-1262 K. The products identified include acetylene, propyne, vinylacetylene, propynal, propenal, and benzene. The experimentally observed overall rate constant for the thermal decomposition of propargyl alcohol was found to be k = 10^(10.17 +/- 0.36) exp(-(39.70 +/- 1.83)/RT) s^(-1). Ab initio calculations were carried out to understand the potential energy surfaces involved in the primary and secondary steps of the decomposition. Transition state theory was used to predict the rate constants, which were then used and refined in a kinetic simulation of the product profiles. The first step in the decomposition is C-O bond dissociation, leading to the formation of two radicals important in combustion, OH and propargyl. This was used to study the reverse OH + propargyl radical reaction, about which there appears to be no prior work. Depending on the site of attack, this reaction leads to propargyl alcohol or to propenal, one of the major products at temperatures below 1200 K. A detailed mechanism is derived to explain all the observed products.
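
Evaluating the reported Arrhenius fit across the shock-tube range is a one-liner per temperature; note that the activation-energy unit is not stated in the abstract, so kcal/mol is an assumption here:

```python
# k(T) = 10^10.17 * exp(-Ea / (R T)); Ea assumed to be in kcal/mol, since
# the abstract does not state the unit.
import math

A = 10 ** 10.17          # pre-exponential factor, s^-1
Ea = 39.70               # activation energy, assumed kcal/mol
R = 1.987e-3             # gas constant, kcal/(mol K)

for T in (953, 1100, 1262):
    k = A * math.exp(-Ea / (R * T))
    print(T, "K ->", f"{k:.3g}", "s^-1")
```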

Relevance: 20.00%

Abstract:

A new representation of spatio-temporal random processes is proposed in this work. In practical applications, such processes are used to model velocity fields, temperature distributions, and the response of vibrating systems, to name a few. Finding an efficient representation for a random process encapsulates its information in a form that is more convenient for practical implementation, for instance in a computational mechanics problem. For a single-parameter process, such as a purely spatial or purely temporal process, the eigenvalue decomposition of the covariance matrix leads to the well-known Karhunen-Loeve (KL) decomposition. However, for multiparameter processes such as a spatio-temporal process, the covariance function itself can be defined in multiple ways. Here the process is assumed to be measured at a finite set of spatial locations and a finite number of time instants, and the spatial covariance matrices at the different time instants are taken to define the covariance of the process. This set of square, symmetric, positive semi-definite matrices is then represented as a third-order tensor. A suitable decomposition of this tensor identifies the dominant components of the process, and these components are then used to define a closed-form representation of the process. The procedure is analogous to the KL decomposition for a single-parameter process; however, the decompositions and their interpretations differ significantly. The tensor decompositions are successfully applied to (i) a heat conduction problem, (ii) a vibration problem, and (iii) a covariance function from the literature that was fitted to measured wind velocity data. The proposed representation is observed to provide an efficient approximation for some processes. Furthermore, a comparison with the KL decomposition shows that the proposed method is computationally cheaper, both in terms of memory and execution time.
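
An illustrative version of the construction (synthetic data; the specific tensor decomposition used in the paper may differ): stack the per-instant spatial covariance matrices into a third-order tensor and inspect its mode unfoldings:

```python
# Stack per-instant spatial covariance matrices into a third-order tensor
# and inspect the energy of the leading component along each mode unfolding.
import numpy as np

rng = np.random.default_rng(4)
ns, nt, samples = 30, 40, 200
X = rng.standard_normal((nt, ns, samples))         # synthetic measurements
covs = np.einsum("tsk,tuk->tsu", X, X) / samples   # covariance per instant
T = np.moveaxis(covs, 0, -1)                       # shape (ns, ns, nt)

def mode_unfold(tensor, mode):
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

for mode in range(3):
    s = np.linalg.svd(mode_unfold(T, mode), compute_uv=False)
    print(mode, round(float(s[0] ** 2 / (s ** 2).sum()), 3))
```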