980 results for problem complexity
Abstract:
We consider optimal power allocation policies for a single-server, multiuser system. Power is consumed in the transmission of data only, and the transmission channel may experience multipath fading. We obtain very efficient, low-complexity algorithms that minimize power while ensuring stability of the data queues. We also obtain policies for the case when users have mean-delay constraints. If the power required is a linear function of the rate, we exploit this linearity to obtain low-complexity linear programs.
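As a rough illustration of the last point, the linear power-rate case can be written as a small linear program. The sketch below (with made-up per-user costs, arrival rates, and a rate cap, not the paper's exact formulation) minimizes total transmit power subject to each user's service rate covering its mean arrival rate:

```python
# Toy LP for the linear power-rate case: minimize sum_u c_u * r_u
# subject to queue stability r_u >= arrival_u and a per-user rate cap.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.5, 2.0, 1.2])         # power per unit rate, one entry per user (assumed)
arrivals = np.array([0.4, 0.3, 0.5])  # mean arrival rates (assumed)

# Stability r_u >= arrival_u is written as -r_u <= -arrival_u for linprog.
res = linprog(c, A_ub=-np.eye(3), b_ub=-arrivals, bounds=[(0.0, 1.0)] * 3)
print("rates:", res.x)          # each user served at its arrival rate
print("total power:", res.fun)  # 1.5*0.4 + 2.0*0.3 + 1.2*0.5 = 1.8
```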
Abstract:
A residual-based a posteriori error estimator is derived for a quadratic finite element method (FEM) for the elliptic obstacle problem. The error estimator involves various residuals consisting of the data of the problem, the discrete solution, and a Lagrange multiplier related to the obstacle constraint. The choice of the discrete Lagrange multiplier yields an error estimator comparable with the one in the case of linear FEM. Further, an a priori error estimate is derived to show that the discrete Lagrange multiplier converges at the same rate as the discrete solution of the obstacle problem. Numerical experiments with adaptive FEM show optimal-order convergence, demonstrating that the quadratic FEM for the obstacle problem exhibits optimal performance.
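For orientation, a residual-based estimator of this general type typically takes the following shape (a generic sketch in standard notation, not the paper's exact estimator; the multiplier-dependent terms and sign conventions vary):

```latex
% Generic residual estimator sketch: u_h = discrete solution,
% \lambda_h = discrete Lagrange multiplier, T = mesh elements,
% E = interior edges, [[.]] = jump across an edge.
\eta^2 = \sum_{T} h_T^2 \,\| f + \Delta u_h + \lambda_h \|_{L^2(T)}^2
       + \sum_{E} h_E \,\| [\![ \nabla u_h \cdot n ]\!] \|_{L^2(E)}^2
```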
Abstract:
We revisit the a posteriori error analysis of discontinuous Galerkin methods for the obstacle problem derived in [25]. Under a mild assumption on the trace of the obstacle, we derive a reliable a posteriori error estimator that does not involve min/max functions. A key ingredient in this approach is an auxiliary problem with a discrete obstacle. Applications to various discontinuous Galerkin finite element methods are presented. Numerical experiments show that the new estimator obtained in this article performs better than the earlier one.
Abstract:
The classical Erdos-Szekeres theorem states that every sufficiently large point set in general position contains a convex k-gon. This problem has been well studied, and finding tight asymptotic bounds is considered a challenging open problem. Several variants of the Erdos-Szekeres problem have been posed and studied in the last two decades, including the empty convex k-gon problem, the convex k-gon with a specified number of interior points, and the chromatic variant. In this paper, we introduce the following two-player game variant of the Erdos-Szekeres problem: the two players, taking alternate turns, place points in the plane, and the objective is to avoid the formation of a convex k-gon among the placed points. The game ends when a convex k-gon is formed, and the player who placed the last point loses. We show a winning strategy for the second player in both the convex 5-gon game and the empty convex 5-gon game by considering the convex-layer configurations at each step. We prove that the game always ends on the 9th step by showing that the game reaches a specific set of configurations.
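To make the game's termination condition concrete, the following sketch (a hypothetical referee helper, assuming points in general position) tests whether any five of the placed points are in convex position, i.e., whether a convex 5-gon has been formed:

```python
# Check whether some 5 of the placed points form a convex 5-gon:
# 5 points are in convex position iff all 5 lie on their convex hull.
from itertools import combinations

def cross(o, a, b):
    """Cross product of vectors o->a and o->b (orientation test)."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull_size(pts):
    """Number of convex hull vertices (Andrew's monotone chain)."""
    pts = sorted(pts)
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(list(reversed(pts)))
    return len(lower) + len(upper) - 2   # the two endpoints are shared

def convex_5gon_formed(points):
    return any(hull_size(list(c)) == 5 for c in combinations(points, 5))

print(convex_5gon_formed([(0,0), (4,0), (5,3), (2,5), (-1,3)]))  # True
```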
Abstract:
Let X be a convex curve in the plane (say, the unit circle), and let F be a family of planar convex bodies such that every two of them meet at a point of X. Then F has a transversal of bounded size. Suppose instead that F only satisfies the following "(p, 2)-condition": among every p elements of F, there are two that meet at a common point of X. Then F has a transversal whose size depends only on p. For comparison, we recall the best known bound for the Hadwiger-Debrunner (p, q)-problem in the plane. Our result generalizes appropriately to higher dimensions if X is, for example, the moment curve.
Abstract:
In big-data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary; the merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. We refer to this as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising and present a comparative analysis of our algorithm with standard techniques that use the entire database at a time, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting the denoising performance.
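A minimal sketch of the split-and-merge idea, using k-means for the split and off-the-shelf dictionary learning for both stages (the clustering method, atom counts, and merging details here are illustrative assumptions, not the paper's exact choices):

```python
# Split-and-merge sketch: cluster the data, learn a local dictionary per
# cluster, then learn a global dictionary on the pooled local atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((3000, 64))   # training signals (synthetic stand-in)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

local_atoms = []
for c in range(4):                    # split: one dictionary per cluster
    dl = MiniBatchDictionaryLearning(n_components=32, random_state=0)
    dl.fit(X[labels == c])
    local_atoms.append(dl.components_)

atoms = np.vstack(local_atoms)        # pool the locally trained atoms
merge = MiniBatchDictionaryLearning(n_components=64, random_state=0)
merge.fit(atoms)                      # merge: dictionary learning on atoms
global_dict = merge.components_       # final dictionary
print(global_dict.shape)              # (64, 64); a real over-complete
                                      # dictionary would use more atoms
```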
Abstract:
The impulse responses of the wireless channels between the N_t transmit and N_r receive antennas of a MIMO-OFDM system are group approximately sparse (ga-sparse), i.e., the N_t N_r channels have a small number of significant paths relative to the channel delay spread, and the time-lags of the significant paths between transmit and receive antenna pairs coincide. Often, wireless channels are also group approximately cluster-sparse (gac-sparse), i.e., every ga-sparse channel consists of clusters, where a few clusters have all strong components while most clusters have all weak components. In this paper, we cast the problem of estimating ga-sparse and gac-sparse block-fading and time-varying channels in the sparse Bayesian learning (SBL) framework and propose a bouquet of novel algorithms for pilot-based channel estimation, and joint channel estimation and data detection, in MIMO-OFDM systems. The proposed algorithms can estimate the sparse wireless channels even when the measurement matrix is only partially known. Further, we employ a first-order autoregressive model of the temporal variation of the ga-sparse and gac-sparse channels and propose a recursive Kalman filtering and smoothing (KFS) technique for joint channel estimation, tracking, and data detection. We also propose novel, low-complexity techniques based on parallel implementation for estimating gac-sparse channels. Monte Carlo simulations illustrate the benefit of exploiting the gac-sparse structure in the wireless channel in terms of mean square error (MSE) and coded bit error rate (BER) performance.
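The AR(1)-plus-Kalman ingredient can be illustrated on a single scalar channel tap (a toy sketch; the paper's KFS tracks group-sparse channel vectors inside the SBL framework, and all noise parameters below are assumptions):

```python
# Scalar Kalman tracking of an AR(1) channel tap: h_t = rho*h_{t-1} + w_t,
# observed through a noisy pilot y_t = h_t + v_t.
import numpy as np

rng = np.random.default_rng(1)
rho, q, r, T = 0.99, 1 - 0.99**2, 0.1, 200   # AR coefficient, process/measurement noise (assumed)

h = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    h[t] = rho * h[t-1] + np.sqrt(q) * rng.standard_normal()
    y[t] = h[t] + np.sqrt(r) * rng.standard_normal()

h_est, P, err = 0.0, 1.0, []
for t in range(1, T):
    h_pred, P_pred = rho * h_est, rho**2 * P + q   # predict
    K = P_pred / (P_pred + r)                      # Kalman gain
    h_est = h_pred + K * (y[t] - h_pred)           # update with pilot
    P = (1 - K) * P_pred
    err.append((h_est - h[t])**2)
print("tracking MSE:", np.mean(err))
```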
Abstract:
The 3-Hitting Set problem involves a family F of subsets of size at most three over a universe U. The goal is to find a subset of U of the smallest possible size that intersects every set in F. The version of the problem with parity constraints asks for a subset S of size at most k that, in addition to being a hitting set, also satisfies certain parity constraints on the sizes of the intersections of S with each set in the family F. In particular, an odd (even) set is a hitting set that hits every set in either one or three (exactly two) elements, and a perfect code is a hitting set that intersects every set in exactly one element. These questions are of fundamental interest in many contexts for general set systems. Just as for Hitting Set, we find these questions interesting for the case of families consisting of sets of size at most three. In this work, we initiate an algorithmic study of these problems in this special case, focusing on a parameterized analysis. For each problem, we give efficient fixed-parameter tractable algorithms using search trees tailor-made to the constraints in question, and also polynomial kernels using sunflower-like arguments in a manner that accounts for equivalence under the additional parity constraints.
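For context, the classical search-tree idea for plain 3-Hitting Set looks as follows (a baseline sketch; the paper's algorithms additionally handle the parity constraints): some unhit set must contribute an element to the solution, so branching over its at most three elements gives an O(3^k poly(n)) algorithm.

```python
def hitting_set(sets, k):
    """Find a hitting set of size <= k for a family of sets of size <= 3."""
    if not sets:
        return set()                            # every set already hit
    if k == 0:
        return None                             # budget exhausted, a set remains unhit
    for v in sets[0]:                           # branch on the <= 3 choices
        rest = [T for T in sets if v not in T]  # sets containing v are now hit
        sol = hitting_set(rest, k - 1)
        if sol is not None:
            return sol | {v}
    return None

family = [{1, 2, 3}, {2, 4, 5}, {3, 5, 6}]
print(hitting_set(family, 2))                   # a hitting set of size <= 2 (here {1, 5})
```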
Abstract:
A reliable and efficient a posteriori error estimator is derived for a class of discontinuous Galerkin (DG) methods for the Signorini problem. A common property shared by many DG methods leads to a unified error analysis with the help of a constraint-preserving enriching map. The error estimator of the DG methods is comparable with the error estimator of the conforming methods. Numerical experiments illustrate the performance of the error estimator.
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U, and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' of F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Pi. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2, we consider the problem when Pi requires all sets to be unit squares or lines. When d > 2, we consider the problem where Pi requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover that covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting, when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
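To make the two covering conditions concrete, the following toy sketch (with an assumed universe and subfamily) counts the coverage multiplicity of each element and tests the Exact Cover and Unique Cover criteria:

```python
# Count how many chosen sets cover each element, then test the
# Exact Cover and Unique Cover conditions on a toy instance.
from collections import Counter

U = {1, 2, 3, 4, 5}
chosen = [{1, 2}, {3, 4}, {4, 5}]       # a candidate subfamily F' (assumed)

cover = Counter(e for S in chosen for e in S)

is_exact = all(cover[e] == 1 for e in U)          # every element exactly once
uniquely_covered = sum(1 for e in U if cover[e] == 1)

print(is_exact)            # False: element 4 is covered twice
print(uniquely_covered)    # 4 elements (1, 2, 3, 5) are covered uniquely
```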
Abstract:
We consider near-optimal policies for a single user transmitting on a wireless channel that minimize the average queue length under an average power constraint. Power is consumed in the transmission of data only, and we consider the case when the power used in transmission is a linear function of the data transmitted. The transmission channel may experience multipath fading. Later, we extend these results to the multiuser case. We show that our policies can be used in a system with energy harvesting sources at the transmitter. Next, we consider data users that require minimum rate guarantees, and finally a system that has both data and real-time users. Our policies have low computational complexity, admit closed-form expressions for the mean delays, and require only the mean arrival rate, with no queue-length information.
Abstract:
In this paper, we study the sum secrecy rate in multicarrier decode-and-forward relay beamforming. We obtain the optimal source power and relay weights on each subcarrier which maximize the sum secrecy rate. For a given total power P_0(k) on a given subcarrier k, we reformulate the optimization problem by relaxing the rank-1 constraint on the complex positive semidefinite relay weight matrix, and solve it using semidefinite programming. We analytically prove that the solution to the relaxed optimization problem is indeed rank 1. We show that the subcarrier secrecy rate R_s(P_0(k)) is a concave function of the total power P_0(k) if R_s(P_0(k)) > 0 for any P_0(k) > 0. Numerical results show that the sum secrecy rate with optimal power allocation across subcarriers exceeds the sum secrecy rate with equal power allocation. We also propose a low-complexity suboptimal power allocation scheme which outperforms the equal power allocation scheme.
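The rank-relaxation step can be sketched as follows (an illustrative simplification with a real-valued channel and a generic linear objective, not the paper's secrecy-rate objective): drop the rank-1 constraint on the PSD weight matrix, solve the resulting SDP, and inspect the eigenvalues to confirm the solution is numerically rank 1.

```python
# Semidefinite relaxation sketch: optimize over a PSD matrix W with a
# power (trace) budget, then recover a beamforming vector if W is rank 1.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, P0 = 4, 1.0                          # relays, power budget (assumed)
h = rng.standard_normal(n)              # real-valued channel (assumed)
Rd = np.outer(h, h)                     # destination signal covariance

W = cp.Variable((n, n), PSD=True)       # relaxed weight matrix, rank constraint dropped
prob = cp.Problem(cp.Maximize(cp.trace(Rd @ W)), [cp.trace(W) <= P0])
prob.solve()

eigvals = np.linalg.eigvalsh(W.value)
print("eigenvalues:", np.round(eigvals, 6))   # one dominant eigenvalue -> rank 1
w = np.sqrt(eigvals[-1]) * np.linalg.eigh(W.value)[1][:, -1]  # recovered weights
```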
Abstract:
Human detection is a complex problem owing to the variable poses that humans can adopt. Here, we address this problem in a sparse representation framework with an over-complete, scale-embedded dictionary. Histogram of oriented gradients features extracted from candidate image patches are sparsely represented by the dictionary, which contains positive bases along with negative and trivial bases. The object is detected based on a proposed likelihood measure obtained from the distribution of these sparse coefficients: the likelihood is the ratio of the contribution of the positive bases to that of the negative and trivial bases. The positive bases of the dictionary represent the object (human) at various scales. This enables us to detect the object at any scale in one shot and avoids multiple scans at different scales, significantly reducing the computational complexity of the detection task. In addition to detecting the human, the method also finds the scale at which the human is detected, owing to the scale-embedded structure of the dictionary.
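A toy sketch of the scoring step (with a synthetic dictionary and feature; the atom counts and OMP sparsity level are assumptions, and OMP stands in for whichever sparse coder the paper uses): sparse-code the feature over labeled atoms and score by the ratio of positive to negative/trivial coefficient energy.

```python
# Sparse-code a feature over a dictionary whose atoms are labeled
# positive (object) or negative/trivial, then score by coefficient energy.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
d, n_pos, n_neg = 100, 40, 40                 # feature dim, atom counts (assumed)
D = rng.standard_normal((d, n_pos + n_neg))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
labels = np.array([1] * n_pos + [0] * n_neg)  # 1 = positive (object) atom

x = D[:, 3] + 0.05 * rng.standard_normal(d)   # synthetic test-patch feature

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(D, x)
a = omp.coef_                                 # sparse coefficients

pos = np.sum(a[labels == 1] ** 2)
neg = np.sum(a[labels == 0] ** 2) + 1e-12
print("likelihood score:", pos / neg)         # large ratio -> detect human
# the index of the strongest positive atom would also indicate the scale
```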