884 results for kernel
Abstract:
We consider the speech production mechanism and the associated linear source-filter model. For voiced speech sounds in particular, the source/glottal excitation is modeled as a stream of impulses and the filter as a cascade of second-order resonators. We show that the process of sampling speech signals can be modeled as filtering a stream of Dirac impulses (a model for the excitation) with a kernel function (the vocal tract response), and then sampling uniformly. We show that the problem of estimating the excitation is equivalent to the problem of recovering a stream of Dirac impulses from samples of a filtered version. We present associated algorithms based on the annihilating filter and also make a comparison with the classical linear prediction technique, which is well known in speech analysis. Results on synthesized as well as natural speech data are presented.
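The annihilating-filter step this abstract refers to can be sketched under the standard FRI setup, in which 2K moments s[m] = Σ_k a_k u_k^m of the Dirac stream are assumed available; the function name and the two-Dirac example below are illustrative, not from the paper.

```python
import numpy as np

def annihilating_filter(s, K):
    """Recover Dirac locations u_k and amplitudes a_k from 2K moments s[m]."""
    s = np.asarray(s, dtype=complex)
    # Annihilation equations sum_i h[i] * s[m - i] = 0 for m = K, ..., 2K - 1.
    T = np.array([s[m - K:m + 1][::-1] for m in range(K, 2 * K)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                  # null vector = annihilating-filter taps
    u = np.roots(h)                    # zeros of H(z) encode the locations
    V = np.vander(u, N=2 * K, increasing=True).T   # s[m] = sum_k a_k u_k^m
    a, *_ = np.linalg.lstsq(V, s, rcond=None)
    return u, a

# Two Diracs at normalized locations 0.2 and 0.55 with amplitudes 1 and 0.5.
u_true = np.exp(-2j * np.pi * np.array([0.2, 0.55]))
a_true = np.array([1.0, 0.5])
s = np.array([(a_true * u_true**m).sum() for m in range(4)])
u_est, a_est = annihilating_filter(s, K=2)
```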
Abstract:
Signal acquisition under a compressed sensing (CS) scheme makes it possible to acquire and reconstruct signals that are sparse in some basis incoherent with the measurement kernel, using a sub-Nyquist number of measurements. In particular, when the sole objective of the acquisition is to detect the frequency of a signal rather than to reconstruct it exactly, an undersampling framework like CS can perform the task. In this paper we explore the acquisition and frequency detection of multiple analog signals heavily corrupted with additive white Gaussian noise. We improve upon the MOSAICS architecture proposed in our previous work to include a wider class of signals having non-integral frequency components. This makes it possible to perform multiplexed compressed sensing for general frequency-sparse signals.
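As a rough illustration of frequency detection from sub-Nyquist measurements (a generic CS matched-filter sketch, not the MOSAICS architecture itself; all sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 512, 64                    # Nyquist-rate length vs. number of measurements
f_true = 97                       # frequency bin to detect
t = np.arange(N)
x = np.cos(2 * np.pi * f_true * t / N) + 0.5 * rng.standard_normal(N)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random measurement kernel
y = Phi @ x                                       # sub-Nyquist measurements

# Matched-filter detection: correlate y against the projection of each DFT atom.
F = np.exp(2j * np.pi * np.outer(t, np.arange(N)) / N) / np.sqrt(N)
scores = np.abs((Phi @ F).conj().T @ y)
f_est = int(np.argmax(scores[:N // 2]))           # real signal: half-spectrum
print(f_true, f_est)
```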
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph-clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
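A toy sketch of the two compiler decisions described, fusing statements into kernels and cost-based CPU/GPU mapping; these greedy heuristics are illustrative stand-ins, not MEGHA's actual algorithms.

```python
# Greedily fuse adjacent data-parallel statements into one kernel when they
# share an array operand; scalar statements break the current kernel.
def fuse_into_kernels(stmts):
    """stmts: list of (id, is_parallel, operand_set). Returns lists of ids."""
    kernels, current, current_ops = [], [], set()
    for sid, parallel, ops in stmts:
        if parallel and (not current or current_ops & ops):
            current.append(sid); current_ops |= ops
        else:
            if current:
                kernels.append(current)
            current, current_ops = ([sid], set(ops)) if parallel else ([], set())
    if current:
        kernels.append(current)
    return kernels

# Map a kernel to the GPU only if compute + PCIe transfer beats the CPU time.
def map_kernel(work_items, bytes_to_move, gpu_rate=1e9, cpu_rate=1e8, pcie=8e9):
    gpu_time = work_items / gpu_rate + bytes_to_move / pcie
    return "GPU" if gpu_time < work_items / cpu_rate else "CPU"

stmts = [(0, True, {"A", "B"}), (1, True, {"B", "C"}), (2, False, {"s"}), (3, True, {"D"})]
print(fuse_into_kernels(stmts))   # [[0, 1], [3]]
```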
Abstract:
The paper addresses experiments and modeling studies on the use of producer gas, a bio-derived low-energy-content fuel, in a spark-ignited engine. Producer gas, generated in situ, has thermo-physical properties different from those of fossil fuels. Experiments on naturally aspirated and turbocharged engine operation and subsequent analysis of the cylinder pressure traces reveal significant differences in the heat release pattern within the cylinder compared with a typical fossil fuel. The heat release patterns for gasoline and producer gas compare well in the initial 50%, but beyond this, producer gas combustion tends to be sluggish, leading to an overall increase in the combustion duration. This is rather unexpected considering that producer gas, with nearly 20% hydrogen, has higher flame speeds than gasoline. The influence of hydrogen on the initial flame kernel development period and the combustion duration, and hence on the overall heat release pattern, is addressed. The significant deviations in the heat release profiles between conventional fuels and producer gas necessitate the estimation of producer-gas-specific Wiebe coefficients. The experimental heat release profiles are used for estimating the Wiebe coefficients. Experimental evidence of lower fuel conversion efficiency, based on the chemical and thermal analysis of the engine exhaust gas, is used to arrive at the Wiebe coefficients. The efficiency factor a is found to be 2.4, while the shape factor m is estimated at 0.7 for the 2% to 90% burn duration. The standard Wiebe coefficients for conventional fuels and the fuel-specific coefficients for producer gas are used in a zero-dimensional model to predict the performance of a 6-cylinder gas engine under naturally aspirated and turbocharged conditions. While simulation results with the standard Wiebe coefficients deviate excessively from the experimental results, an excellent match is observed when the producer-gas-specific coefficients are used. Predictions using the same coefficients on a 3-cylinder gas engine with different geometry and compression ratio indicate a close match with the experimental traces, highlighting the versatility of the coefficients.
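The reported coefficients plug into the standard Wiebe mass-fraction-burned curve; a minimal sketch with a = 2.4 and m = 0.7, where the start of combustion and the burn duration are illustrative placeholders, not values from the paper.

```python
import numpy as np

def wiebe(theta, theta0, dtheta, a=2.4, m=0.7):
    """Mass fraction burned: x = 1 - exp(-a * ((theta - theta0)/dtheta)**(m+1))."""
    frac = np.clip((theta - theta0) / dtheta, 0.0, None)  # zero before ignition
    return 1.0 - np.exp(-a * frac ** (m + 1))

theta = np.linspace(-30, 90, 241)              # crank angle, degrees
mfb = wiebe(theta, theta0=-10.0, dtheta=60.0)  # placeholder timing values
```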
Abstract:
Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF, and in some cases better.
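A minimal sketch of bilaterally smoothing a gradient field, which is the core BVF idea; since the abstract does not spell out its modified range kernel, a Gaussian range kernel on gradient magnitude is used below as a stand-in.

```python
import numpy as np

def bilateral_vector_flow(gx, gy, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Smooth the gradient field (gx, gy) of an edge map, bilaterally."""
    H, W = gx.shape
    fx, fy = np.zeros_like(gx), np.zeros_like(gy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # domain kernel
    mag = np.hypot(gx, gy)
    for i in range(radius, H - radius):
        for j in range(radius, W - radius):
            win = np.s_[i - radius:i + radius + 1, j - radius:j + radius + 1]
            # Stand-in Gaussian range kernel on gradient-magnitude differences.
            rng_k = np.exp(-(mag[win] - mag[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng_k
            w /= w.sum()
            fx[i, j] = (w * gx[win]).sum()
            fy[i, j] = (w * gy[win]).sum()
    return fx, fy
```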
Abstract:
Let X be a noncompact symmetric space of higher rank. We consider two types of averages of functions: one over level sets of the heat kernel on X, and the other over geodesic spheres. We prove injectivity results for functions on X which extend the results in Pati and Sitaram (Sankhya Ser. A 62:419-424, 2000).
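One plausible formalization of the two averages (the notation below is assumed for illustration, not taken from the paper): the spherical mean over geodesic spheres S(x, r), and the average over level sets of the heat kernel h_t on X.

```latex
\[
  (S_r f)(x) = \frac{1}{\mu\big(S(x,r)\big)} \int_{S(x,r)} f \, d\mu ,
  \qquad
  (A_{t,c} f)(x) = \frac{1}{\sigma(L_{t,c})} \int_{L_{t,c}} f \, d\sigma ,
  \quad
  L_{t,c} = \{\, y : h_t(x,y) = c \,\}.
\]
% Injectivity asks: if S_r f = 0 for all r (resp. A_{t,c} f = 0 for all
% levels c), must f = 0?
```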
Abstract:
We address the problem of sampling and reconstruction of two-dimensional (2-D) finite-rate-of-innovation (FRI) signals. We propose a three-channel sampling method for efficiently solving the problem. We consider the sampling of a stream of 2-D Dirac impulses and a sum of 2-D unit-step functions. We propose a 2-D causal exponential function as the sampling kernel. By causality in 2-D, we mean that the function has its support restricted to the first quadrant. The advantage of using a multichannel sampling method with a causal exponential sampling kernel is that standard annihilating-filter or root-finding algorithms are not required. Further, the proposed method admits an inexpensive hardware implementation and is numerically stable as the number of Dirac impulses increases.
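A sketch of what a 2-D causal exponential sampling kernel looks like under the definition above (support restricted to the first quadrant); the decay rates are illustrative parameters, not values from the paper.

```python
import numpy as np

def causal_exp_kernel(x, y, alpha=1.0, beta=1.0):
    """phi(x, y) = exp(-(alpha*x + beta*y)) for x >= 0 and y >= 0, else 0."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    support = (x >= 0) & (y >= 0)            # first-quadrant support = causality
    return np.where(support, np.exp(-(alpha * x + beta * y)), 0.0)

# Samples of a single 2-D Dirac at (x0, y0) filtered by phi land on the
# uniform grid as phi(n*T - x0, m*T - y0).
```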
Abstract:
Each new generation of GPUs vastly increases the resources available to GPGPU programs. GPU programming models (like CUDA) were designed to scale to use these resources. However, we find that CUDA programs actually do not scale to utilize all available resources, with over 30% of resources going unused on average for programs of the Parboil2 suite that we used in our work. Current GPUs therefore allow concurrent execution of kernels to improve utilization. In this work, we study concurrent execution of GPU kernels using multiprogram workloads on current NVIDIA Fermi GPUs. On two-program workloads from the Parboil2 benchmark suite, we find concurrent execution is often no better than serialized execution. We identify the lack of control over resource allocation to kernels as a major serialization bottleneck. We propose transformations that convert CUDA kernels into elastic kernels which permit fine-grained control over their resource usage. We then propose several elastic-kernel-aware concurrency policies that offer significantly better performance and concurrency than the current CUDA policy. We evaluate our proposals on real hardware using multiprogrammed workloads constructed from benchmarks in the Parboil2 suite. On average, our proposals increase system throughput (STP) by 1.21x and improve the average normalized turnaround time (ANTT) by 3.73x for two-program workloads when compared to the current CUDA concurrency implementation.
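The STP and ANTT figures quoted above are the standard multiprogram metrics; for reference, a minimal implementation.

```python
def stp(t_alone, t_shared):
    """System throughput: sum of per-program speedups under co-execution."""
    return sum(a / s for a, s in zip(t_alone, t_shared))

def antt(t_alone, t_shared):
    """Average normalized turnaround time (lower is better)."""
    slowdowns = [s / a for a, s in zip(t_alone, t_shared)]
    return sum(slowdowns) / len(slowdowns)

# Two kernels that each take 10 ms alone and 12 ms when co-scheduled:
print(stp([10, 10], [12, 12]))   # ~1.67 (vs. ideal 2.0)
print(antt([10, 10], [12, 12]))  # 1.2
```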
Abstract:
Let $M$ be the completion of the polynomial ring $\mathbb{C}[\underline{z}]$ with respect to some inner product, and for any ideal $I \subseteq \mathbb{C}[\underline{z}]$, let $[I]$ be the closure of $I$ in $M$. For a homogeneous ideal $I$, the joint kernel of the submodule $[I] \subseteq M$ is shown, after imposing some mild conditions on $M$, to be the linear span of the set of vectors $\{p_i(\partial/\partial\bar{w}_1, \ldots, \partial/\partial\bar{w}_m) K_{[I]}(\cdot, w)|_{w=0},\ 1 \le i \le t\}$, where $K_{[I]}$ is the reproducing kernel for the submodule $[I]$ and $p_1, \ldots, p_t$ is some minimal "canonical set of generators" for the ideal $I$. The proof includes an algorithm for constructing this canonical set of generators, which is determined uniquely modulo linear relations, for homogeneous ideals. A short proof of the "Rigidity Theorem," using the sheaf model for Hilbert modules over polynomial rings, is given. We describe, via the monoidal transformation, the construction of a Hermitian holomorphic line bundle for a large class of Hilbert modules of the form $[I]$. We show that the curvature of this line bundle, or even its restriction to the exceptional set, is an invariant for the unitary equivalence class of $[I]$. Several examples are given to illustrate the explicit computation of these invariants.
Abstract:
The problem of finding a satisfying assignment that minimizes the number of variables set to 1 is NP-complete even for a satisfiable 2-SAT formula. We call this problem MIN ONES 2-SAT. It generalizes the well-studied problem of finding the smallest vertex cover of a graph, which can be modeled using a 2-SAT formula with no negative literals. The natural parameterized version of the problem asks for a satisfying assignment of weight at most k. In this paper, we present a polynomial-time reduction from MIN ONES 2-SAT to VERTEX COVER without increasing the parameter and ensuring that the number of vertices in the reduced instance is equal to the number of variables of the input formula. Consequently, we conclude that this problem also has a simple 2-approximation algorithm and a $(2k - c \log k)$-variable kernel, subsuming (or, in the case of kernels, improving) the results known earlier. Further, the problem admits algorithms for the parameterized and optimization versions whose runtimes will always match the runtimes of the best-known algorithms for the corresponding versions of VERTEX COVER. Finally, we show that the optimum value of the LP relaxation of MIN ONES 2-SAT and that of the corresponding VERTEX COVER are the same. This implies that the recent results on VERTEX COVER parameterized above the optimum value of its LP relaxation carry over to MIN ONES 2-SAT parameterized above the optimum of its LP relaxation.
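The easy direction mentioned above, vertex cover as MIN ONES 2-SAT with no negative literals, can be made concrete as follows; the paper's reduction runs the other way and is more involved, so this brute-force check is only illustrative.

```python
from itertools import product

def vc_as_min_ones_2sat(edges, n):
    """Brute-force MIN ONES over the monotone 2-SAT formula AND_(u,v) (x_u or x_v).

    A satisfying assignment picks at least one endpoint per edge, so a
    minimum-weight satisfying assignment is a minimum vertex cover.
    """
    best = None
    for assign in product([0, 1], repeat=n):
        if all(assign[u] or assign[v] for u, v in edges):
            weight = sum(assign)
            if best is None or weight < best[0]:
                best = (weight, assign)
    return best

# Triangle: the minimum vertex cover has size 2.
print(vc_as_min_ones_2sat([(0, 1), (1, 2), (0, 2)], 3))
```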
Abstract:
The Lovasz θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovasz θ function is equivalent to a kernel learning problem related to the one-class SVM. This interesting connection opens up many opportunities for bridging graph-theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-θ graphs, on which the Lovasz θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems in large graphs, e.g., identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach to this problem involves computing the θ function; however, it is not scalable due to the SDP computation. We show that the random graph with a planted clique is an example of an SVM-θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a "common orthogonal labelling," which extends the notion of an "orthogonal labelling" of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a Multiple Kernel Learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order-of-magnitude improvement in scalability over the state of the art.
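A hedged sketch of the SVM-θ connection: the kernel construction below (K = A/ρ + I with ρ = |λ_min(A)|) is one choice consistent with the abstract, taken as an assumption, and the one-class-SVM objective ω(K) = max_{α ≥ 0} 2·Σα − αᵀKα, whose value approximates θ on SVM-θ graphs, is maximized by projected gradient ascent.

```python
import numpy as np

def svm_theta(A, iters=5000):
    """Approximate theta via the one-class-SVM value omega(K), K = A/rho + I."""
    n = A.shape[0]
    rho = abs(np.linalg.eigvalsh(A)[0])   # |lambda_min(A)|; assumes >= 1 edge
    K = A / rho + np.eye(n)               # positive semidefinite by construction
    eta = 1.0 / (2.0 * np.linalg.eigvalsh(K)[-1])   # safe step size
    alpha = np.ones(n) / n
    for _ in range(iters):                # projected gradient ascent on omega
        alpha = np.maximum(0.0, alpha + eta * (2.0 - 2.0 * K @ alpha))
    return 2.0 * alpha.sum() - alpha @ K @ alpha

# Sanity check: on the complete graph K_n, theta = 1.
A = np.ones((4, 4)) - np.eye(4)
print(svm_theta(A))   # approximately 1.0
```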
Abstract:
Rapid advancements in multi-core processor architectures, coupled with low-cost, low-latency, high-bandwidth interconnects, have made clusters of multi-core machines a common computing resource. Unfortunately, writing good parallel programs that efficiently utilize all the resources in such a cluster is still a major challenge. Various programming languages have been proposed as a solution to this problem, but are yet to be adopted widely to run performance-critical code, mainly due to relatively immature software frameworks and the effort involved in rewriting existing code in a new language. In this paper, we motivate and describe our initial study in exploring CUDA as a programming language for a cluster of multi-cores. We develop CUDA-For-Clusters (CFC), a framework that transparently orchestrates execution of CUDA kernels on a cluster of multi-core machines. The well-structured nature of a CUDA kernel and the growing popularity, support, and stability of the CUDA software stack collectively make CUDA a good candidate to be considered as a programming language for a cluster. CFC uses a mixture of source-to-source compiler transformations, a work-distribution runtime, and a lightweight software distributed shared memory to manage parallel executions. Initial results on running several standard CUDA benchmark programs achieve impressive speedups of up to 7.5X on a cluster with 8 nodes, thereby opening up an interesting direction for further research.
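A toy sketch of the work-distribution step: partitioning a kernel launch's thread blocks across cluster nodes so each node executes a contiguous slice of the logically unchanged launch. The function is illustrative and not CFC's API.

```python
def partition_grid(num_blocks, num_nodes):
    """Split block indices [0, num_blocks) into one contiguous range per node."""
    base, extra = divmod(num_blocks, num_nodes)
    ranges, start = [], 0
    for node in range(num_nodes):
        count = base + (1 if node < extra else 0)  # spread the remainder
        ranges.append(range(start, start + count))
        start += count
    return ranges

# 8 nodes sharing a 1000-block launch: each node gets 125 blocks.
print([len(r) for r in partition_grid(1000, 8)])
```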
Abstract:
We propose to employ bilateral filters to solve the problem of edge detection. The proposed methodology presents an efficient and noise-robust method for detecting edges. Classical bilateral filters smooth images without distorting edges. In this paper, we modify the bilateral filter to perform edge detection, which is the opposite of bilateral smoothing. The Gaussian domain kernel of the bilateral filter is replaced with an edge-detection mask, and the Gaussian range kernel is replaced with an inverted Gaussian kernel. The modified range kernel serves to emphasize dissimilar regions. The resulting approach effectively adapts the detection mask according to the pixel intensity differences. The results of the proposed algorithm are compared with those of standard edge-detection masks. Comparisons of the bilateral edge detector with the Canny edge detection algorithm, both after non-maximal suppression, are also provided. The results of our technique are observed to be better and more noise-robust than those offered by methods employing masks alone, and are also comparable to the results from the Canny edge detector, outperforming it in certain cases.
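A minimal sketch of the modified bilateral filter as we read this abstract: Sobel as the edge-detection mask (one possible choice, horizontal direction only for brevity) and an inverted Gaussian, 1 − exp(−Δ²/2σ²), as the range kernel; the exact inverted-Gaussian form is our assumption.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def bilateral_edges(img, sigma_r=20.0):
    """Edge response: detection mask weighted by an inverted-Gaussian range kernel."""
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2].astype(float)
            diff = patch - float(img[i, j])
            # Inverted Gaussian: up-weights dissimilar neighbours.
            rng_k = 1.0 - np.exp(-diff**2 / (2 * sigma_r**2))
            out[i, j] = (SOBEL_X * rng_k * patch).sum()
    return np.abs(out)
```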
Abstract:
In this paper we establish that the Lovasz theta function of a graph can be restated as a kernel learning problem. We introduce the notion of SVM-theta graphs, on which the Lovasz theta function can be approximated well by a support vector machine (SVM). We show that Erdos-Renyi random G(n, p) graphs are SVM-theta graphs for $\log^4 n / n \le p < 1$. Even if we embed a large clique of size $\Theta\big(\sqrt{np/(1-p)}\big)$ in a G(n, p) graph, the resultant graph still remains an SVM-theta graph. This immediately suggests an SVM-based algorithm for recovering a large planted clique in random graphs. Associated with the theta function is the notion of orthogonal labellings. We introduce common orthogonal labellings, which extend the idea of orthogonal labellings to multiple graphs. This allows us to propose a Multiple Kernel Learning (MKL) based solution which is capable of identifying a large common dense subgraph in multiple graphs. In both the planted clique case and the common subgraph detection problem, the proposed solutions beat the state of the art by an order of magnitude.
Abstract:
This paper deals with the Schrödinger equation $i\partial_s u(z,t;s) - L u(z,t;s) = 0$, where $L$ is the sub-Laplacian on the Heisenberg group. Assume that the initial data $f$ satisfies $|f(z,t)| \lesssim q_\alpha(z,t)$, where $q_s$ is the heat kernel associated to $L$. If, in addition, $|u(z,t;s_0)| \lesssim q_\beta(z,t)$ for some $s_0 \in \mathbb{R} \setminus \{0\}$, then we prove that $u(z,t;s) = 0$ for all $s \in \mathbb{R}$ whenever $\alpha\beta < s_0^2$. This result holds true in the more general context of H-type groups. We also prove an analogous result for the Grushin operator on $\mathbb{R}^{n+1}$.