205 results for computational complexity


Relevance:

60.00%

Publisher:

Abstract:

The maximum entropy approach to classification is well studied in applied statistics and machine learning, and almost all methods in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for high-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large datasets, we employ the conditional independence (naive Bayes) assumption, and we perform feature selection simultaneously by enforcing 'maximum discrimination' between the estimated class-conditional densities. For two-class problems, the proposed method uses Jeffreys (J) divergence to discriminate the class-conditional densities. To extend the method to the multi-class case, we propose a completely new approach based on a multi-distribution divergence: we replace Jeffreys divergence by Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)) based on the AM-GM inequality, and we show that the resulting divergence is a natural generalization of Jeffreys divergence to the multiple-distribution case. As theoretical justification, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using J-divergence emerges naturally in binary classification. A performance and comparative study of the proposed algorithms on high-dimensional text and gene-expression datasets shows that our methods scale very well with such data.
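As a minimal sketch of the divergences involved (NumPy; the exact form and weighting of JS(GM) below is our reading of the abstract, namely measuring each class-conditional density against the unnormalized weighted geometric mean of all of them):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q) between two discrete pmfs."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

def jeffreys(p, q):
    """Jeffreys (J) divergence: symmetrized KL, used for two-class discrimination."""
    return kl(p, q) + kl(q, p)

def js_gm(dists, weights=None):
    """Geometric-mean variant of Jensen-Shannon divergence: each p_i is compared
    against the (unnormalized) weighted geometric mean of all the distributions,
    instead of the arithmetic mean used by ordinary JS."""
    dists = np.asarray(dists, float)
    k = len(dists)
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights, float)
    gm = np.prod(dists ** w[:, None], axis=0)   # weighted geometric mean
    return float(sum(wi * kl(pi, gm) for wi, pi in zip(w, dists)))

p = np.array([0.7, 0.2, 0.1]); q = np.array([0.1, 0.3, 0.6])
print(jeffreys(p, q) / 4, js_gm([p, q]))       # the two values agree
```

For two equally weighted distributions this JS(GM) reduces to J/4, which is the sense in which it generalizes Jeffreys divergence.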

Relevance:

60.00%

Publisher:

Abstract:

In this paper, the problem of assigning cooperating Uninhabited Aerial Vehicles (UAVs) to perform multiple tasks on multiple targets is posed as a combinatorial optimization problem. The tasks of target classification, attack, and verification are assigned using nature-inspired techniques, namely the Artificial Immune System (AIS), Particle Swarm Optimization (PSO), and the Virtual Bee Algorithm (VBA). These nature-inspired techniques avoid the prohibitive computational complexity that classical combinatorial optimization methods incur on this NP-hard problem. Using these algorithms, we find the best sequence in which to attack and destroy the targets while minimizing the total distance traveled, or the maximum distance traveled, by a UAV. The performance of the UAVs in classifying, attacking, and verifying the targets is evaluated for AIS, PSO, and VBA.
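A toy sketch of the underlying combinatorial objective, with hypothetical target coordinates, a single UAV, and one task per target; the exhaustive search below is exactly what AIS, PSO, and VBA replace at realistic problem sizes:

```python
import itertools, math

targets = [(2, 3), (5, 1), (4, 6)]       # hypothetical target coordinates
uav_start = (0, 0)

def tour_length(order):
    """Total distance for a UAV that starts at uav_start and visits the targets
    in the given order (the quantity the nature-inspired searches minimize)."""
    pts = [uav_start] + [targets[i] for i in order]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# Exhaustive search is feasible only for tiny instances; the NP-hardness of
# the assignment problem is what motivates AIS/PSO/VBA at scale.
best = min(itertools.permutations(range(len(targets))), key=tour_length)
print(best, tour_length(best))
```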

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a low-complexity algorithm, SAGE-USL, is presented for 3-dimensional (3-D) localization of multiple acoustic sources in a shallow ocean with non-Gaussian ambient noise, using a vertical and a horizontal linear array of sensors. In the proposed method, the noise is modeled as a Gaussian mixture. Initial estimates of the unknown parameters (source coordinates, signal waveforms, and noise parameters) are obtained by conventional methods, and a generalized expectation-maximization algorithm is used to update the initial estimates iteratively. Simulation results indicate that convergence is reached in a small number (<= 10) of iterations. Initialization requires one 2-D search and one 1-D search, and the iterative updates require a sequence of 1-D searches; therefore, the computational complexity of SAGE-USL is lower than that of conventional techniques such as 3-D MUSIC by several orders of magnitude. We also derive the Cramer-Rao Bound (CRB) for 3-D localization of multiple sources in a range-independent ocean. Simulation results are presented to show that the root-mean-square localization errors of SAGE-USL are close to the corresponding CRBs and significantly lower than those of 3-D MUSIC. (C) 2014 Elsevier Inc. All rights reserved.
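The Gaussian-mixture noise model can be illustrated with a minimal scalar EM loop (zero-mean, two components, synthetic data); SAGE-USL embeds such updates inside a generalized EM that also re-estimates source coordinates and waveforms, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic non-Gaussian ambient noise: heavy-tailed two-component mixture.
noise = np.concatenate([rng.normal(0, 1, 900), rng.normal(0, 4, 100)])

# EM for a zero-mean 2-component Gaussian mixture in its simplest scalar form.
w = np.array([0.5, 0.5]); var = np.array([0.5, 9.0])
for _ in range(50):
    # E-step: responsibility of each component for each sample
    pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-noise[:, None] ** 2 / (2 * var))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update mixture weights and variances
    w = resp.mean(axis=0)
    var = (resp * noise[:, None] ** 2).sum(axis=0) / resp.sum(axis=0)
print(w, var)
```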

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we propose an eigen framework for transmit beamforming in single-hop and dual-hop network models with single-antenna receivers. When the number of receivers is at most three, the proposed eigen approach is vastly superior, in ease of implementation and computational complexity, to the existing convex-relaxation-based approaches. The essential premise is that the precoding problems can be posed as equivalent optimization problems of searching for an optimal vector in the joint numerical range of Hermitian matrices. We show that the latter problem has two convex approximations: the first is a semi-definite program that yields a lower bound on the solution, and the second is a linear matrix inequality that yields an upper bound on the solution. We study the performance of the proposed and existing techniques using numerical simulations.
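To make the premise concrete, the sketch below maps a unit vector to one point of the joint numerical range of two Hermitian matrices; the random matrices and sizes are placeholders, not the paper's channel model:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

# Joint numerical range of (A1, A2): the set
# {(w^H A1 w, w^H A2 w) : ||w|| = 1}, over which the beamformer is searched.
A1, A2 = random_hermitian(4), random_hermitian(4)
w = rng.normal(size=4) + 1j * rng.normal(size=4)
w /= np.linalg.norm(w)
point = np.real([w.conj() @ A1 @ w, w.conj() @ A2 @ w])
print(point)   # one point of the joint numerical range (real, since Ai are Hermitian)
```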

Relevance:

60.00%

Publisher:

Abstract:

A design methodology based on the Minimum Bit Error Ratio (MBER) framework is proposed for determining the various linear parameters of a non-regenerative Multiple-Input Multiple-Output (MIMO) relay-aided system. We consider both Relay-Destination (RD) and Source-Relay-Destination (SRD) link designs based on this MBER framework, covering the precoder, the Amplify-and-Forward (AF) matrix, and the equalizer matrix of our system. Previous literature has shown that MBER-based communication systems can reduce the Bit Error Ratio (BER) compared with their Linear Minimum Mean Square Error (LMMSE) based counterparts. We design a novel relay-aided system for various signal constellations, ranging from QPSK to general M-QAM and M-PSK, and finally propose sub-optimal versions that reduce the computational complexity imposed. Our simulation results demonstrate that the proposed scheme indeed achieves a significant BER reduction over the existing LMMSE scheme.
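As a minimal illustration of the objective being minimized, the analytical BER of Gray-coded QPSK over AWGN is a Gaussian Q-function of the SNR; an MBER design tunes the precoder, AF matrix, and equalizer to drive such error expressions down, rather than minimizing mean square error (the SNR value below is arbitrary):

```python
from math import erfc, sqrt

def q_func(x):
    """Gaussian Q-function, the building block of analytical BER expressions."""
    return 0.5 * erfc(x / sqrt(2))

# BER of Gray-coded QPSK over AWGN at a given receive SNR per bit.
snr_per_bit = 10 ** (8 / 10)             # 8 dB
print(q_func(sqrt(2 * snr_per_bit)))     # about 1.9e-4
```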

Relevance:

60.00%

Publisher:

Abstract:

Given a Boolean function f: F_2^n -> {0, 1}, we say a triple (x, y, x + y) is a triangle in f if f(x) = f(y) = f(x + y) = 1. A triangle-free function contains no triangle. If f differs from every triangle-free function on at least epsilon * 2^n points, then f is said to be epsilon-far from triangle-free. In this work, we analyze the query complexity of testers that, with constant probability, distinguish triangle-free functions from those epsilon-far from triangle-free. The canonical tester for triangle-freeness is the algorithm that repeatedly picks x and y uniformly and independently at random from F_2^n, queries f(x), f(y), and f(x + y), and checks whether f(x) = f(y) = f(x + y) = 1. Green showed that the canonical tester rejects functions epsilon-far from triangle-free with constant probability if its query complexity is a tower of 2's whose height is polynomial in 1/epsilon. Fox later improved the height of the tower in Green's upper bound. A trivial lower bound of Omega(1/epsilon) on the query complexity is immediate. In this paper, we give the first non-trivial lower bound on the number of queries needed. We show that, for every small enough epsilon, there exists an integer n_0 such that for all n >= n_0 there exists a function f depending on all n variables which is epsilon-far from being triangle-free and requires (1/epsilon)^c queries for the canonical tester, for an explicit constant c > 1. We also show that the query complexity of any general (possibly adaptive) one-sided tester for triangle-freeness is at least the square root of the query complexity of the corresponding canonical tester. Consequently, any one-sided tester for triangle-freeness must make at least (1/epsilon)^(c/2) queries.
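The canonical tester is straightforward to state in code; a minimal sketch over F_2^n with bit-vectors encoded as integers (the all-ones example function is ours, chosen so a triangle is found immediately):

```python
import random

def canonical_tester(f, n, queries):
    """Canonical tester for triangle-freeness of f: F_2^n -> {0, 1}.
    Repeatedly samples x, y uniformly and rejects if (x, y, x + y) is a
    triangle, i.e. f(x) = f(y) = f(x XOR y) = 1."""
    for _ in range(queries // 3):         # each round costs three queries
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) and f(y) and f(x ^ y):
            return "reject"               # found a triangle: f is not triangle-free
    return "accept"

# Tiny example: the all-ones function on F_2^4 is far from triangle-free,
# so the tester rejects almost immediately.
print(canonical_tester(lambda v: 1, n=4, queries=30))
```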

Relevance:

60.00%

Publisher:

Abstract:

Opportunistic selection in multi-node wireless systems improves system performance by selecting the 'best' node and using it for data transmission. In these systems, each node has a real-valued local metric that measures its ability to improve system performance, and the goal is to identify the best node, i.e., the one with the largest metric. We propose, analyze, and optimize a new distributed, yet simple, node selection scheme that combines the timer scheme with power control: each node sets a timer and a transmit power level as a function of its metric, and the power control is designed such that the best node is captured even if other nodes transmit simultaneously with it. We develop several structural properties of the optimal metric-to-timer-and-power mapping, which maximizes the probability of selecting the best node. These significantly reduce the computational complexity of finding the optimal mapping and yield valuable insights about it. We show that the proposed scheme is scalable and significantly outperforms the conventional timer scheme. We also investigate the effect of the number of receive power levels, and find that the practical peak power constraint has a negligible impact on the performance of the scheme.
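A minimal sketch of one hypothetical metric-to-timer-and-power mapping and capture rule (the paper optimizes this mapping; the uniform quantization below is our placeholder):

```python
import random

NUM_SLOTS, NUM_POWERS = 8, 2   # quantized timer slots, receive power levels

def timer_and_power(metric):
    """Hypothetical mapping for metrics in [0, 1): larger metrics fire in
    earlier slots, and each slot is split further by power level so that, of
    two nodes colliding in a slot, the larger metric transmits at the higher
    power and can still be captured."""
    x = (1.0 - metric) * NUM_SLOTS
    slot = min(int(x), NUM_SLOTS - 1)
    power = 1 if (x - slot) < 0.5 else 0      # 1 = high power, 0 = low power
    return slot, power

metrics = [random.random() for _ in range(10)]
first = min(timer_and_power(m)[0] for m in metrics)            # earliest slot used
colliders = [m for m in metrics if timer_and_power(m)[0] == first]
high = [m for m in colliders if timer_and_power(m)[1] == 1]
# Capture rule: a transmission is decoded if it is alone at the high power,
# or alone overall at the low power; otherwise this selection round fails.
if len(high) == 1:
    selected = high[0]
elif not high and len(colliders) == 1:
    selected = colliders[0]
else:
    selected = None
print(selected, max(metrics), selected == max(metrics))
```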

Relevance:

60.00%

Publisher:

Abstract:

Compressed Sensing (CS) is an elegant technique for acquiring signals and reconstructing them efficiently by solving an under-determined system of linear equations. The excitement in this field stems from the fact that we can sample at a rate well below the Nyquist rate and still reconstruct the signal, provided some conditions are met. Popular greedy reconstruction algorithms include the Orthogonal Matching Pursuit (OMP), the Subspace Pursuit (SP), and the Look Ahead Orthogonal Matching Pursuit (LAOMP). The LAOMP performs better than the OMP, but its computational complexity is higher than that of the SP and the OMP. We introduce a modified version of the LAOMP, termed the Reduced Look Ahead Orthogonal Matching Pursuit (Reduced LAOMP), which uses prior information from the results of the OMP and the SP to speed up the look-ahead strategy of the LAOMP. Monte Carlo simulations of this algorithm deliver promising results.
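For reference, a compact NumPy implementation of the baseline OMP that the LAOMP and Reduced LAOMP build on (the look-ahead step itself is not shown):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit y on the chosen columns."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1]); x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 100)) / np.sqrt(30)    # under-determined sensing matrix
x_true = np.zeros(100); x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(x_hat), np.round(x_hat[np.flatnonzero(x_hat)], 3))
```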

Relevance:

60.00%

Publisher:

Abstract:

In big-data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. The local dictionaries are then merged to form a global dictionary, by solving another dictionary learning problem on the atoms of the locally trained dictionaries. We refer to this as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard strategy of learning on the entire dataset at once. As an application, we consider image denoising, and present a comparative analysis of our algorithm against standard techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm yields a remarkable reduction in training time without significantly affecting denoising performance.
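A minimal sketch of the split-and-merge idea, using scikit-learn's generic dictionary learner as a stand-in for the learner used in the paper; the data, cluster count, and dictionary sizes are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 16))                 # stand-in for vectorized patches

# Split: cluster the training set so each chunk fits in memory,
# then learn a local dictionary per cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
local_atoms = []
for c in range(3):
    local = DictionaryLearning(n_components=8, max_iter=10, random_state=0)
    local.fit(X[labels == c])
    local_atoms.append(local.components_)

# Merge: run dictionary learning again on the pooled local atoms to obtain
# a single global dictionary (the "split-and-merge" step of the abstract).
atoms = np.vstack(local_atoms)
global_dict = DictionaryLearning(n_components=8, max_iter=10, random_state=0).fit(atoms)
print(global_dict.components_.shape)
```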

Relevance:

60.00%

Publisher:

Abstract:

Human detection is a complex problem owing to the variable poses that humans can adopt. Here, we address this problem in a sparse representation framework with an overcomplete scale-embedded dictionary. Histogram-of-oriented-gradients features extracted from candidate image patches are sparsely represented by a dictionary that contains positive bases along with negative and trivial bases. The object is detected based on a proposed likelihood measure, computed from the distribution of the sparse coefficients as the ratio of the contribution of the positive bases to that of the negative and trivial bases. Because the positive bases represent the object (human) at various scales, the object can be detected at any scale in one shot, avoiding multiple scans at different scales; this significantly reduces the computational complexity of the detection task. In addition to detecting the human, the method also recovers the scale at which the human is detected, owing to the scale-embedded structure of the dictionary.
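A toy sketch of the likelihood measure, with random placeholder bases standing in for the trained positive (scale-embedded), negative, and trivial bases:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
d = 32                                    # HOG feature dimension (toy size)
D_pos = rng.normal(size=(d, 12))          # positive bases: humans at 12 scales
D_neg = rng.normal(size=(d, 8))           # negative bases: background
D_triv = np.eye(d)                        # trivial bases absorb residual clutter
D = np.hstack([D_pos, D_neg, D_triv])
D /= np.linalg.norm(D, axis=0)

feat = D_pos[:, 3] + 0.1 * rng.normal(size=d)   # a patch resembling scale no. 3
alpha = orthogonal_mp(D, feat, n_nonzero_coefs=5)

# Likelihood measure: contribution of positive bases relative to the rest.
pos = np.abs(alpha[:12]).sum()
rest = np.abs(alpha[12:]).sum()
print("likelihood:", pos / (rest + 1e-9))
print("detected scale:", int(np.argmax(np.abs(alpha[:12]))))
```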

Relevance:

60.00%

Publisher:

Abstract:

Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured by the average log-likelihood on test data, but because evaluating the partition function has high computational complexity, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for its approximate computation. In this paper, we present an empirical comparison of the main estimation methods: the AIS algorithm, which estimates the partition function; the CSL method, which directly estimates the log-likelihood; and the RAISE algorithm, which combines these two ideas.
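For a tiny RBM the partition function can be computed exactly by enumerating all visible states, which makes explicit the quantity that AIS, CSL, and RAISE must approximate at realistic sizes (the zero biases and small random weights below are arbitrary choices):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
nv, nh = 6, 4                      # tiny RBM: exact Z is tractable by enumeration
W = 0.1 * rng.normal(size=(nv, nh))
b, c = np.zeros(nv), np.zeros(nh)

def free_energy(v):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + v.W_j)); p(v) is exp(-F(v)) / Z."""
    return -v @ b - np.logaddexp(0, c + v @ W).sum()

# Exact log partition function by summing over all 2^nv visible states --
# the very quantity AIS must estimate when nv is realistically large.
states = np.array(list(product([0, 1], repeat=nv)), dtype=float)
logZ = np.logaddexp.reduce([-free_energy(v) for v in states])

v_test = rng.integers(0, 2, size=(20, nv)).astype(float)
log_lik = np.mean([-free_energy(v) - logZ for v in v_test])
print("exact average test log-likelihood:", log_lik)
```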

Relevance:

60.00%

Publisher:

Abstract:

The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A common form is the Gaussian bilateral filter, in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires O(sigma^2) operations per pixel, where sigma is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approximation algorithm that cuts the computational complexity down to O(1) per pixel for arbitrary sigma (a constant-time implementation). It is based on the observation that the range kernel operates via translations of a fixed Gaussian over the range space, and that these translated Gaussians can be accurately approximated using so-called Gauss-polynomials. The overall algorithm involves a series of spatial Gaussian filterings, which can be implemented efficiently (and in parallel) using separability and recursion. We present preliminary results demonstrating that the proposed algorithm compares favorably with existing fast algorithms in terms of speed and accuracy.
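For contrast with the proposed O(1) algorithm, here is a direct Gaussian bilateral filter on a 1-D signal, whose per-sample cost grows with the spatial sigma (the Gauss-polynomial approximation itself is not reproduced here):

```python
import numpy as np

def bilateral_1d(f, sigma_s, sigma_r):
    """Direct Gaussian bilateral filter on a 1-D signal: the window grows with
    sigma_s, i.e. O(sigma_s) work per sample here, and O(sigma_s^2) per pixel
    in 2-D -- the cost the Gauss-polynomial approximation reduces to O(1)."""
    w = int(3 * sigma_s)
    out = np.empty_like(f, dtype=float)
    for i in range(len(f)):
        idx = np.arange(max(0, i - w), min(len(f), i + w + 1))
        spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        range_k = np.exp(-((f[idx] - f[i]) ** 2) / (2 * sigma_r ** 2))
        weights = spatial * range_k
        out[i] = np.sum(weights * f[idx]) / np.sum(weights)
    return out

sig = np.concatenate([np.zeros(50), np.ones(50)])
sig += 0.1 * np.random.default_rng(0).normal(size=100)
print(bilateral_1d(sig, sigma_s=5, sigma_r=0.2)[:5])   # edge-preserving smoothing
```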

Relevance:

30.00%

Publisher:

Abstract:

We present two discriminative language modelling techniques for a Lempel-Ziv-Welch (LZW) based language identification (LID) system. The previous approach to LID using the LZW algorithm was to use the LZW pattern tables directly for language modelling. But since the patterns in one language's pattern table are shared by other languages' tables, this approach suffered from confusability in the LID task. To overcome this, we present two pruning techniques: (i) Language-Specific (LS-LZW), in which patterns common to more than one pattern table are removed; and (ii) Length-Frequency product based (LF-LZW), in which patterns whose length-frequency product falls below a threshold are removed. These approaches reduce the classification score (the compression ratio [LZW-CR] or the weighted discriminant score [LZW-WDS]) for non-native languages and increase LID performance considerably. The memory and computational requirements of these techniques are also much lower than those of the basic LZW technique.
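A minimal sketch of LZW pattern-table construction and LS-LZW-style pruning, on toy two-language symbol strings (real systems operate on phone or character sequences):

```python
def lzw_table(seq):
    """Build the LZW pattern table for a symbol sequence."""
    table, w = set(seq), ""
    for ch in seq:
        if w + ch in table:
            w += ch                # extend the current pattern
        else:
            table.add(w + ch)      # register the new pattern
            w = ch
    return table

tables = {"lang1": lzw_table("abcabcabd"), "lang2": lzw_table("abdabdabc")}
# LS-LZW pruning: drop patterns shared across language tables so each table
# keeps only patterns discriminative for its own language. (LF-LZW would
# instead prune by a length-frequency product threshold.)
shared = tables["lang1"] & tables["lang2"]
ls_tables = {lang: t - shared for lang, t in tables.items()}
print(ls_tables)
```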

Relevance:

30.00%

Publisher:

Abstract:

H.264/Advanced Video Coding surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth, and use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static-camera surveillance video encoders without affecting detection and recognition performance. First, a spatial sampler is proposed to sample pixels, which are segmented using a Gaussian mixture model; modified weight updates are derived for the parameters of the mixture model to reduce floating-point computations, and the storage pattern of the parameters in memory is modified to improve cache performance. Skip selection is then performed using the segmentation results of the sampled pixels. The second contribution is a low-cost algorithm for choosing the reference frames, which reduces the cost of coding uncovered background regions; we also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit-rate savings of up to 94.5% over methods proposed in the literature on video surveillance datasets. The proposed techniques also provide up to a 74.5% reduction in compression complexity without increasing the distortion over the foreground regions of the video sequence.
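A per-pixel Gaussian-mixture background update in the standard Stauffer-Grimson style, which is the segmentation that Skip selection relies on; the paper's modified weight updates and memory layout are not reproduced here:

```python
import numpy as np

K, alpha = 3, 0.05
w   = np.array([0.7, 0.2, 0.1])          # mode weights
mu  = np.array([90.0, 140.0, 200.0])     # mode means (pixel intensities)
var = np.full(K, 25.0)                   # mode variances

def update(x):
    """Update the mixture for one sampled pixel value x; returns True if x
    matches a high-weight (background) mode, in which case the enclosing
    macroblock becomes a candidate for the H.264 Skip mode."""
    global w, mu, var
    match = np.abs(x - mu) < 2.5 * np.sqrt(var)
    if match.any():
        k = int(np.argmax(np.where(match, w / np.sqrt(var), -np.inf)))
        w = (1 - alpha) * w; w[k] += alpha
        rho = alpha / w[k]
        mu[k] += rho * (x - mu[k])
        var[k] += rho * ((x - mu[k]) ** 2 - var[k])
    else:
        k = int(np.argmin(w)); mu[k], var[k], w[k] = x, 100.0, alpha
    w = w / w.sum()
    return bool(match.any() and w[match].sum() > 0.5)

print(update(92.0), update(250.0))       # background pixel, foreground pixel
```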

Relevance:

30.00%

Publisher:

Abstract:

In the POSSIBLE WINNER problem in computational social choice theory, we are given a set of partial preferences, and the question is whether a distinguished candidate can be made a winner by extending the partial preferences to linear preferences. Previous work has provided, for many common voting rules, fixed-parameter tractable algorithms for the POSSIBLE WINNER problem, with the number of candidates as the parameter. However, the corresponding kernelization question is still open and has, in fact, been mentioned as a key research challenge [10]. In this paper, we settle this open question for many common voting rules. We show that the POSSIBLE WINNER problem for maximin, Copeland, Bucklin, ranked pairs, and a class of scoring rules that includes the Borda voting rule does not admit a polynomial kernel with the number of candidates as the parameter. We show, however, that the COALITIONAL MANIPULATION problem, an important special case of the POSSIBLE WINNER problem, does admit a polynomial kernel for maximin, Copeland, ranked pairs, and a class of scoring rules that includes the Borda voting rule, when the number of manipulators is polynomial in the number of candidates. A significant conclusion of our work is that the POSSIBLE WINNER problem is harder than the COALITIONAL MANIPULATION problem, since the latter admits a polynomial kernel whereas the former does not. (C) 2015 Elsevier B.V. All rights reserved.
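For intuition, POSSIBLE WINNER can be decided by brute force on toy instances by enumerating all linear extensions of every partial vote; the enumeration is exponential, which is why kernels and fixed-parameter algorithms are of interest. The three-candidate Borda instance below is our own (co-winner model):

```python
from itertools import permutations, product

candidates = ["a", "b", "c"]
# Partial preferences: each voter is a set of (x, y) pairs meaning "x over y".
partial_votes = [{("a", "b")}, {("b", "c")}, set()]

def extensions(partial):
    """All linear orders consistent with one voter's partial preference."""
    return [p for p in permutations(candidates)
            if all(p.index(x) < p.index(y) for x, y in partial)]

def borda_winners(profile):
    m = len(candidates)
    score = {c: sum(m - 1 - vote.index(c) for vote in profile) for c in candidates}
    top = max(score.values())
    return {c for c in candidates if score[c] == top}

# POSSIBLE WINNER for Borda by brute force: does some extension of every
# partial vote make the distinguished candidate "a" a winner?
possible = any("a" in borda_winners(profile)
               for profile in product(*map(extensions, partial_votes)))
print(possible)
```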