49 results for Notion of code


Relevance:

90.00%

Abstract:

Motivated by applications to distributed storage, Gopalan et al. recently introduced the interesting notion of information-symbol locality in a linear code. By this it is meant that each message symbol appears in a parity-check equation of small Hamming weight, thereby enabling recovery of the message symbol by examining a small number of other code symbols. This notion is expanded to the case when all code symbols, not just the message symbols, are covered by such "local" parity. In this paper, we extend the results of Gopalan et al. so as to permit recovery of an erased code symbol even in the presence of errors in local parity symbols. We present tight bounds on the minimum distance of such codes and exhibit codes that are optimal with respect to the local error-correction property. As a corollary, we obtain an upper bound on the minimum distance of a concatenated code.
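For context, the locality bound of Gopalan et al. for an [n, k] linear code in which every information symbol has locality r is usually stated as follows (a standard form of the result, quoted from the general literature rather than from this abstract):

```latex
d_{\min} \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2
```

Setting r = k recovers the Singleton bound, so locality r < k is paid for in minimum distance.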

Relevance:

90.00%

Abstract:

Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., to identify cyclic components or to propagate information in topological order. We perform a careful study of its structure and propose a new inclusion-based flow-insensitive context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer-equivalence based on dominant pointers which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer-equivalence, we develop a new context-sensitive flow-insensitive points-to analysis algorithm which uses incremental dominator update to efficiently compute points-to information. Using a large suite of programs consisting of SPEC 2000 benchmarks and five large open source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer-equivalence is key to improving points-to analysis efficiency.
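Not the dominant-pointer algorithm itself, but the baseline it improves on can be sketched. Below is a minimal inclusion-based (Andersen-style) propagation over a constraint graph; the names, worklist layout, and toy constraints are illustrative, not the authors' implementation.

```python
# Minimal sketch of inclusion-based (Andersen-style) points-to
# propagation over a constraint graph. Nodes are pointer variables;
# a subset edge p -> q means pts(p) must be a subset of pts(q).
from collections import defaultdict, deque

def propagate(points_to, subset_edges):
    """Propagate points-to sets along subset edges to a fixed point."""
    pts = {v: set(s) for v, s in points_to.items()}
    succs = defaultdict(list)
    for p, q in subset_edges:
        succs[p].append(q)
        pts.setdefault(p, set())
        pts.setdefault(q, set())
    work = deque(pts)
    while work:
        p = work.popleft()
        for q in succs[p]:
            before = len(pts[q])
            pts[q] |= pts[p]        # enforce pts(p) subset-of pts(q)
            if len(pts[q]) != before:
                work.append(q)      # pts(q) grew, so revisit its successors
    return pts

# p = &a; q = &b; r = p; r = q; s = r
result = propagate({"p": {"a"}, "q": {"b"}},
                   [("p", "r"), ("q", "r"), ("r", "s")])
```

Cycle elimination and topological ordering, mentioned above, are optimizations layered on exactly this fixed-point loop.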

Relevance:

90.00%

Abstract:

Time series classification deals with the problem of classification of data that is multivariate in nature. This means that one or more of the attributes is in the form of a sequence. The notion of similarity or distance used for time series data is significant and affects the accuracy, time, and space complexity of the classification algorithm. There exist numerous similarity measures for time series data, but each of them has its own disadvantages. Instead of relying upon a single similarity measure, our aim is to find a near-optimal solution to the classification problem by combining different similarity measures. In this work, we use genetic algorithms to combine the similarity measures so as to get the best performance. The weights given to the different similarity measures evolve over a number of generations so as to get the best combination. We test our approach on a number of benchmark time series datasets and present promising results.
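An illustrative sketch of the idea, not the paper's algorithm or datasets: evolve a weight vector that combines two toy distance measures, scoring each candidate weight vector by leave-one-out 1-NN accuracy. All names and data here are made up for the example.

```python
# Evolve weights for a combined time-series distance (illustrative only).
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def mean_diff(a, b):
    # a second, deliberately crude similarity measure
    return abs(sum(a) / len(a) - sum(b) / len(b))

def combined(a, b, w):
    return w[0] * euclidean(a, b) + w[1] * mean_diff(a, b)

def accuracy(w, data):
    """Leave-one-out 1-NN accuracy under the combined distance."""
    hits = 0
    for i, (x, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        nearest = min(rest, key=lambda t: combined(x, t[0], w))
        hits += nearest[1] == label
    return hits / len(data)

def evolve(data, pop=20, gens=15, seed=0):
    rng = random.Random(seed)
    population = [[rng.random(), rng.random()] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: -accuracy(w, data))
        parents = population[: pop // 2]           # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            p, q = rng.sample(parents, 2)
            child = [(a + b) / 2 + rng.gauss(0, 0.1) for a, b in zip(p, q)]
            children.append([max(c, 0.0) for c in child])  # keep weights >= 0
        population = parents + children
    return max(population, key=lambda w: accuracy(w, data))

# toy data: class 0 = flat series, class 1 = rising series
data = [([0.0, 0.0, 0.0, 0.0], 0), ([0.1, 0.0, 0.1, 0.0], 0),
        ([0.0, 1.0, 2.0, 3.0], 1), ([0.2, 1.1, 2.0, 3.1], 1)]
best = evolve(data)
```

A real instantiation would swap in measures such as DTW and evaluate on benchmark datasets, as the abstract describes.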

Relevance:

90.00%

Abstract:

This work derives inner and outer bounds on the generalized degrees of freedom (GDOF) of the K-user symmetric MIMO Gaussian interference channel. For the inner bound, an achievable GDOF is derived by employing a combination of treating interference as noise, zero-forcing at the receivers, interference alignment (IA), and extending the Han-Kobayashi (HK) scheme to K users, depending on the number of antennas and the INR/SNR level. An outer bound on the GDOF is derived, using a combination of the notion of cooperation and providing side information to the receivers. Several interesting conclusions are drawn from the bounds. For example, in terms of the achievable GDOF in the weak interference regime, when the number of transmit antennas (M) is equal to the number of receive antennas (N), treating interference as noise performs the same as the HK scheme and is GDOF optimal. For K > N/M + 1, a combination of the HK and IA schemes performs the best among the schemes considered. However, for N/M < K ≤ N/M + 1, the HK scheme is found to be GDOF optimal.
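For reference, the GDOF metric is conventionally defined by tying the interference level to the signal level through a single exponent α (a standard definition from the interference-channel literature, not quoted from this paper):

```latex
\alpha \triangleq \frac{\log \mathrm{INR}}{\log \mathrm{SNR}}, \qquad
d(\alpha) \triangleq \lim_{\mathrm{SNR}\to\infty}
\frac{C_{\mathrm{sym}}\!\left(\mathrm{SNR},\, \mathrm{INR}=\mathrm{SNR}^{\alpha}\right)}{\log \mathrm{SNR}}
```

where C_sym denotes the symmetric per-user capacity; the weak interference regime corresponds to α < 1.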

Relevance:

90.00%

Abstract:

The notion of the 1-D analytic signal is well understood and has found many applications. At the heart of the analytic signal concept is the Hilbert transform. The problem in extending the concept of analytic signal to higher dimensions is that there is no unique multidimensional definition of the Hilbert transform. Also, the notion of analyticity is not so well understood in higher dimensions. Of the several 2-D extensions of the Hilbert transform, the spiral-phase quadrature transform, or the Riesz transform, seems to be the natural extension and has attracted a lot of attention, mainly due to its isotropic properties. From the Riesz transform, Larkin et al. constructed a vortex operator, which approximates the quadratures based on asymptotic stationary-phase analysis. In this paper, we show an alternative proof for the quadrature approximation property by invoking the quasi-eigenfunction property of linear, shift-invariant systems. We show that the vortex operator comes up as a natural consequence of applying this property. We also characterize the quadrature approximation error in terms of its energy as well as the peak spatial-domain error. Such results are available for 1-D signals, but their counterparts for 2-D signals have not been available. We also provide simulation results to supplement the analytical calculations.
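The spiral-phase operator mentioned above can be sketched in a few lines via the 2-D FFT, using the frequency-domain multiplier (ωx + iωy)/|ω| (a textbook realization, not code from the paper; the helper name is made up):

```python
# Spiral-phase (complex Riesz) operator realized in the frequency domain.
import numpy as np

def spiral_phase(image):
    """Apply the spiral-phase quadrature operator to a 2-D array."""
    F = np.fft.fft2(image)
    wy = np.fft.fftfreq(image.shape[0])[:, None]
    wx = np.fft.fftfreq(image.shape[1])[None, :]
    mag = np.hypot(wx, wy)
    mag[0, 0] = 1.0        # avoid 0/0 at the DC bin
    H = (wx + 1j * wy) / mag
    H[0, 0] = 0.0          # the operator annihilates the DC component
    return np.fft.ifft2(H * F)

# For a pure horizontal cosine fringe the operator returns i times the
# sine quadrature, mirroring the 1-D analytic-signal behaviour.
x = np.linspace(0, 4 * np.pi, 64, endpoint=False)
fringe = np.cos(x)[None, :] * np.ones((64, 1))
q = spiral_phase(fringe)
```

On locally narrowband fringes this multiplier produces the quadrature approximation that the stationary-phase and quasi-eigenfunction arguments both analyze.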

Relevance:

90.00%

Abstract:

Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF and in some cases better than the GVF.
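To illustrate the kind of operation involved, here is an ordinary Gaussian bilateral filter applied componentwise to a gradient field, with the vector difference in the range kernel. This is a sketch only: the abstract states that the BVF range kernel differs from the standard Gaussian one, so the function below is a stand-in, not the paper's formalism.

```python
# Edge-preserving smoothing of a gradient field (gx, gy) -- illustrative.
import numpy as np

def bilateral_vector(gx, gy, radius=2, sigma_s=1.5, sigma_r=0.3):
    h, w = gx.shape
    out_x, out_y = np.zeros_like(gx), np.zeros_like(gy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    px = np.pad(gx, radius, mode="edge")
    py = np.pad(gy, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            wx = px[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            wy = py[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight from the vector difference to the center pixel
            diff2 = (wx - gx[i, j]) ** 2 + (wy - gy[i, j]) ** 2
            wgt = spatial * np.exp(-diff2 / (2 * sigma_r ** 2))
            out_x[i, j] = (wgt * wx).sum() / wgt.sum()
            out_y[i, j] = (wgt * wy).sum() / wgt.sum()
    return out_x, out_y

# Gradient field of a vertical step edge: the range kernel keeps the
# strong edge column from being averaged away into the flat background.
gx = np.zeros((8, 8)); gx[:, 4] = 1.0
gy = np.zeros((8, 8))
sx, sy = bilateral_vector(gx, gy)
```

Note that, unlike the iterative GVF diffusion, this is a single noniterative pass, which is the property the abstract highlights for the BVF.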

Relevance:

90.00%

Abstract:

In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots, to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in the Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of its throughput performance.
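The classical saturated Bianchi fixed point that the slot-integration analysis above extends can be sketched as follows. W, m, and n are the usual model parameters (minimum contention window, number of backoff stages, number of stations); the function name and numerical method are illustrative, not taken from the paper.

```python
# Solve Bianchi's coupled equations for the per-slot transmission
# probability tau and conditional collision probability p by bisection.
def bianchi(n, W=32, m=5):
    def tau_of(p):
        # tau implied by the backoff Markov chain for a given p
        num = 2.0 * (1.0 - 2.0 * p)
        den = (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)
        if den == 0.0:               # removable singularity at p = 1/2
            den = 1e-12
        return num / den

    def gap(p):
        # chain tau minus the tau implied by p = 1 - (1 - tau)^(n-1)
        return tau_of(p) - (1.0 - (1.0 - p) ** (1.0 / (n - 1)))

    lo, hi = 1e-9, 0.999             # gap(lo) > 0, gap(hi) < 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)
    return tau_of(p), p

tau, p = bianchi(n=10)   # per-slot transmit and collision probabilities
```

The paper's contribution sits on top of such a fixed point: it couples the model's transmission slots to physical time slots and to the secondary network's scan duration.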

Relevance:

90.00%

Abstract:

The Birkhoff-James orthogonality is a generalization of Hilbert space orthogonality to Banach spaces. We investigate this notion of orthogonality when the Banach space has more structures. We start by doing so for the Banach space of square matrices moving gradually to all bounded operators on any Hilbert space, then to an arbitrary C*-algebra and finally a Hilbert C*-module.
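Recall the standard definition, included here only for context: in a normed linear space X, an element x is Birkhoff-James orthogonal to y when

```latex
x \perp_{BJ} y \iff \|x + \lambda y\| \ge \|x\| \quad \text{for all scalars } \lambda .
```

In a Hilbert space this coincides with the usual inner-product orthogonality; in a general Banach space the relation need not be symmetric, which is part of what makes the richer structures studied above interesting.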

Relevance:

90.00%

Abstract:

The presence of software bloat in large flexible software systems can hurt energy efficiency. However, identifying and mitigating bloat is fairly effort-intensive. To enable such efforts to be directed where there is a substantial potential for energy savings, we investigate the impact of bloat on power consumption under different situations. We conduct the first systematic experimental study of the joint power-performance implications of bloat across a range of hardware and software configurations on modern server platforms. The study employs controlled experiments to expose different effects of a common type of Java runtime bloat, excess temporary objects, in the context of the SPECPower_ssj2008 workload. We introduce the notion of equi-performance power reduction to characterize the impact, in addition to peak power comparisons. The results show a wide variation in energy savings from bloat reduction across these configurations. Energy efficiency benefits at peak performance tend to be most pronounced when bloat affects a performance bottleneck and non-bloated resources have low energy proportionality. Equi-performance power savings are highest when bloated resources have a high degree of energy proportionality. We develop an analytical model that establishes a general relation between resource pressure caused by bloat and its energy efficiency impact under different conditions of resource bottlenecks and energy proportionality. Applying the model to different "what-if" scenarios, we predict the impact of bloat reduction and corroborate these predictions with empirical observations. Our work shows that the prevalent software-only view of bloat is inadequate for assessing its power-performance impact and instead provides a full systems approach for reasoning about its implications.

Relevance:

90.00%

Abstract:

Let k be an integer with k ≥ 3. A graph G is k-chordal if G does not have an induced cycle of length greater than k. From the definition it is clear that 3-chordal graphs are precisely the class of chordal graphs. Duchet proved that, for every positive integer m, if G^m is chordal then so is G^(m+2). Brandstädt et al. [Andreas Brandstädt, Van Bang Le, and Thomas Szymczak. Duchet-type theorems for powers of HHD-free graphs. Discrete Mathematics, 177(1-3):9-16, 1997] showed that if G^m is k-chordal, then so is G^(m+2). Powering a bipartite graph does not preserve its bipartiteness. In order to preserve bipartiteness while powering, Chandran et al. introduced the notion of bipartite powering, to aid their study of the boxicity of chordal bipartite graphs. The m-th bipartite power G^[m] of a bipartite graph G is the bipartite graph obtained from G by adding edges (u, v) where d_G(u, v) is odd and at most m. Note that G^[m] = G^[m+1] for each odd m. In this paper we show that, given a bipartite graph G, if G is k-chordal then so is G^[m], where k and m are positive integers with k ≥ 4.
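The bipartite power defined above translates directly into code: add an edge (u, v) whenever d_G(u, v) is odd and at most m. The sketch below uses plain adjacency dicts and BFS distances; it is illustrative only.

```python
# Compute the m-th bipartite power G^[m] straight from the definition.
from collections import deque

def distances(adj, src):
    """BFS distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def bipartite_power(adj, m):
    power = {u: set() for u in adj}
    for u in adj:
        for v, d in distances(adj, u).items():
            if d % 2 == 1 and d <= m:   # odd distance, at most m
                power[u].add(v)
                power[v].add(u)
    return power

# Path a-b-c-d: d(a, d) = 3 is odd, so G^[3] gains the edge (a, d),
# while the even-distance pair (a, c) stays non-adjacent (bipartiteness).
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
p3 = bipartite_power(path, 3)
```

Since only odd distances create edges, the two sides of the bipartition never become adjacent, which is exactly the point of the construction.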

Relevance:

90.00%

Abstract:

The formation of radialene complex 6M proceeds through a three-membered metallacyclopropene complex 7M, contrary to the prevailing notion of simple dimerization of metallacyclocumulene 1M. The 1M-7M equilibrium, which is predominantly governed by the size-dependent ligand binding of the metal atoms, plays a decisive role in the chemistry of Cp2M-ligand complexes. This size dependency is further fine-tuned by the substituents on the substrates and helps in exploiting these classes of metallacycles to generate new chemistry.

Relevance:

90.00%

Abstract:

Regenerating codes and codes with locality are two coding schemes that have recently been proposed, which in addition to ensuring data collection and reliability, also enable efficient node repair. In a situation where one is attempting to repair a failed node, regenerating codes seek to minimize the amount of data downloaded for node repair, while codes with locality attempt to minimize the number of helper nodes accessed. This paper presents results in two directions. In one, this paper extends the notion of codes with locality so as to permit local recovery of an erased code symbol even in the presence of multiple erasures, by employing local codes having minimum distance greater than 2. An upper bound on the minimum distance of such codes is presented and codes that are optimal with respect to this bound are constructed. The second direction seeks to build codes that combine the advantages of both codes with locality as well as regenerating codes. These codes, termed here codes with local regeneration, are codes with locality over a vector alphabet, in which the local codes themselves are regenerating codes. We derive an upper bound on the minimum distance of vector-alphabet codes with locality for the case when their constituent local codes have a certain uniform rank accumulation property. This property is possessed by both minimum storage regeneration (MSR) and minimum bandwidth regeneration (MBR) codes. We provide several constructions of codes with local regeneration which achieve this bound, where the local codes are either MSR or MBR codes. Also included in this paper is an upper bound on the minimum distance of a general vector code with locality, as well as a performance comparison of various code constructions of fixed block length and minimum distance.
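One commonly cited form of the multiple-erasure locality bound, for scalar codes in which every symbol has (r, δ) locality (each local code has length at most r + δ − 1 and minimum distance at least δ), is the following; it is stated here from the general literature and assumed, rather than verified, to match this paper's parameterization:

```latex
d_{\min} \le n - k + 1 - \left( \left\lceil \frac{k}{r} \right\rceil - 1 \right)(\delta - 1).
```

For δ = 2 this reduces to the single-erasure locality bound d ≤ n − k − ⌈k/r⌉ + 2.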

Relevance:

90.00%

Abstract:

Small covers were introduced by Davis and Januszkiewicz in 1991. We introduce the notion of equilibrium triangulations for small covers. We study equilibrium and vertex minimal 4-equivariant triangulations of 2-dimensional small covers. We discuss vertex minimal equilibrium triangulations of RP^3 # RP^3, S^1 × RP^2, and a nontrivial S^1-bundle over RP^2. We construct some nice equilibrium triangulations of the real projective space RP^n with 2^n + n + 1 vertices. The main tool is the theory of small covers.

Relevance:

90.00%

Abstract:

This work deals with the homogenization of an initial- and boundary-value problem for the doubly-nonlinear system

D_t w − ∇·z = g(x, t, x/ε)   (0.1)
w ∈ α(u, x/ε)   (0.2)
z ∈ γ(∇u, x/ε)   (0.3)

Here ε is a positive parameter; α and the vector-valued γ are maximal monotone with respect to the first variable and periodic with respect to the second one. The inclusions (0.2) and (0.3) are formulated here as null-minimization principles, via the theory of Fitzpatrick [MR 1009594]. As ε → 0, a two-scale formulation is derived via Nguetseng's notion of two-scale convergence, and a (single-scale) homogenized problem is then retrieved.
