964 results for flame kernel


Relevance: 10.00%

Abstract:

Recently it has been shown that the wave equations of bosonic higher-spin fields in the BTZ background can be solved exactly. In this work we extend this analysis to fermionic higher-spin fields. We solve the wave equations for arbitrary half-integer-spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. These quasinormal modes are shown to agree precisely with the poles of the corresponding two-point function in the dual conformal field theory, as predicted by the AdS/CFT correspondence. We also obtain an expression for the 1-loop determinant for the Euclidean non-rotating BTZ black hole in terms of the quasinormal modes, which agrees with that obtained by integrating the heat kernel found by group-theoretic methods.
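
For orientation, a hedged sketch of the generic form such results take in the BTZ literature (not the specific fermionic expressions derived here): for left- and right-moving temperatures $T_L$, $T_R$ and a bulk field dual to a CFT operator of conformal weights $(h_L, h_R)$, the quasinormal frequencies at momentum $k$ take the form

    \omega_L = k - 4\pi i\, T_L (n + h_L), \qquad
    \omega_R = -k - 4\pi i\, T_R (n + h_R), \qquad n = 0, 1, 2, \ldots

These sit precisely at the Matsubara-type poles of the retarded two-point function of the dual operator, and the 1-loop determinant is then assembled as a product over this tower of modes.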

Relevance: 10.00%

Abstract:

The nucleation and growth of vanadium oxide nanotubes (VOx-NT) have been followed by a combination of numerous ex situ techniques along the hydrothermal process. Intermediate solid phases extracted at different reaction times have been characterized by powder X-ray diffraction, scanning and transmission electron microscopy, electron spin resonance, and V K-edge X-ray absorption near-edge structure spectroscopy. The supernatant vanadate solutions extracted during the hydrothermal treatment have been studied by liquid-state 51V NMR and flame spectroscopy. For short durations of the hydrothermal synthesis, the initial V2O5-surfactant intercalate is progressively transformed into VOx-NT, whose crystallization starts to be detected after a hydrothermal treatment of 24 h. Upon prolonging the treatment from 24 h to 7 days, VOx-NT are obtained in larger amounts and with improved crystallinity. The detection of soluble amines and cyclic metavanadate [V4O12]4- in the supernatant solution along the hydrothermal process suggests that VOx-NT result from a dissolution-precipitation mechanism. Metavanadate species [V4O12]4- could behave as molecular precursors in the polymerization reactions leading to VOx-NT.

Relevance: 10.00%

Abstract:

This paper presents computational work on the early-phase combustion of biogas in spark-ignition (SI) engines using detailed chemical kinetics. Specifically, the early-phase combustion is studied to assess the effect of various ignition parameters such as spark plug location, spark energy, and number of spark plugs. An integrated version of the KIVA-3V and CHEMKIN codes was developed and used for the simulations, utilizing detailed kinetics involving 325 reactions and 53 species. The results show that the location of the spark plug and the local flow field play an important role. A central plug configuration, which is associated with higher local flow velocities in the vicinity of the spark plug, showed faster initial combustion. Although a dual-plug configuration shows the highest rate of fuel consumption, it is comparable to the rate exhibited by the central-plug case. The radical species important in the initiation of combustion are identified, and their concentrations are monitored during the early phase of combustion. The concentration of these radicals is also observed to correlate very well with the above-mentioned trend. Thus, the role of these radicals in promoting faster combustion has been clearly established. It is also observed that the minimum ignition energy required to initiate a self-sustained flame depends on the flow field conditions in the vicinity of the spark plug. Increasing the methane content in the biogas has shown improved combustion.
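
The quoted mechanism size (325 reactions, 53 species) matches GRI-Mech 3.0. As a minimal zero-dimensional sketch of the chemistry side (the paper couples KIVA-3V with CHEMKIN; Cantera is substituted here, and the biogas composition, initial state, and ignition criterion are assumptions), one can track the radical pool of a biogas-air mixture toward ignition:

    import cantera as ct

    # GRI-Mech 3.0: 53 species, 325 reactions (the mechanism size quoted above)
    gas = ct.Solution('gri30.yaml')

    # Assumed biogas: 60% CH4 / 40% CO2 by mole, stoichiometric in air
    gas.set_equivalence_ratio(1.0, fuel='CH4:0.6, CO2:0.4',
                              oxidizer='O2:1.0, N2:3.76')
    gas.TP = 1400.0, ct.one_atm  # assumed post-spark kernel temperature

    reactor = ct.IdealGasReactor(gas)
    sim = ct.ReactorNet([reactor])

    # March in time and watch a radical (here OH) grow toward ignition
    t, dt = 0.0, 1e-5
    for _ in range(5000):
        t += dt
        sim.advance(t)
        if reactor.thermo['OH'].X[0] > 1e-4:  # crude ignition criterion
            print(f'ignition near t = {t * 1e3:.2f} ms')
            break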

Relevance: 10.00%

Abstract:

We consider the speech production mechanism and the associated linear source-filter model. For voiced speech sounds in particular, the source/glottal excitation is modeled as a stream of impulses and the filter as a cascade of second-order resonators. We show that the process of sampling speech signals can be modeled as filtering a stream of Dirac impulses (a model for the excitation) with a kernel function (the vocal tract response), and then sampling uniformly. We show that the problem of estimating the excitation is equivalent to the problem of recovering a stream of Dirac impulses from samples of a filtered version. We present associated algorithms based on the annihilating filter and also make a comparison with the classical linear prediction technique, which is well known in speech analysis. Results on synthesized as well as natural speech data are presented.
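
A minimal numerical sketch of the annihilating-filter step named above, under the assumed setup that 2K+1 exponential moments of a K-Dirac stream are already available (the paper derives such quantities from uniform samples of the filtered signal):

    import numpy as np

    rng = np.random.default_rng(0)
    K = 3
    t_true = np.sort(rng.uniform(0, 1, K))   # Dirac locations in [0, 1)
    a_true = rng.uniform(1, 2, K)            # Dirac amplitudes
    u = np.exp(-2j * np.pi * t_true)
    m = np.arange(2 * K + 1)
    s = (a_true * u[None, :] ** m[:, None]).sum(axis=1)  # moments s_m

    # Annihilating filter h: the Toeplitz system T h = 0 has a filter whose
    # roots encode the locations; take h from the SVD null space.
    T = np.array([[s[K + i - j] for j in range(K + 1)] for i in range(K + 1)])
    h = np.linalg.svd(T)[2][-1].conj()
    t_est = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1))
    print(np.round(t_est - t_true, 10))      # ~0: locations recovered

Amplitudes then follow from a linear (Vandermonde) least-squares solve.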

Relevance: 10.00%

Abstract:

Signal acquisition under a compressed sensing (CS) scheme offers the possibility of acquiring and reconstructing, from a sub-Nyquist number of measurements, signals that are sparse in some basis incoherent with the measurement kernel. In particular, when the sole objective of the acquisition is the detection of the frequency of a signal rather than its exact reconstruction, an undersampling framework like CS is able to perform the task. In this paper we explore the possibility of acquisition and frequency detection of multiple analog signals heavily corrupted by additive white Gaussian noise. We improve upon the MOSAICS architecture proposed in our previous work to include a wider class of signals having non-integral frequency components. This makes it possible to perform multiplexed compressed sensing for general frequency-sparse signals.
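
A toy sketch of the detection-only regime described above: a random Gaussian measurement kernel, with the frequency read off by correlating measurements against compressively measured dictionary atoms. All sizes, the noise level, and the frequency grid are assumptions, and this is not the MOSAICS architecture itself:

    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 1024, 128                         # M << N: sub-Nyquist acquisition
    n = np.arange(N)
    f_true = 173.4                           # non-integral frequency (assumed)
    x = np.cos(2 * np.pi * f_true * n / N) + 0.5 * rng.standard_normal(N)

    Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # measurement kernel
    y = Phi @ x

    # Correlate measurements with compressively measured complex atoms on a
    # fine grid, so non-integral frequencies are representable.
    freqs = np.arange(1.0, N / 2, 0.2)
    D = np.exp(2j * np.pi * n[:, None] * freqs[None, :] / N)
    PD = Phi @ D
    PD /= np.linalg.norm(PD, axis=0)
    f_hat = freqs[np.argmax(np.abs(PD.conj().T @ y))]
    print(f_hat)     # close to f_true despite 8x undersampling and noise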

Relevance: 10.00%

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure required data movement for dependencies across basic blocks, we propose a data flow analysis and edge-splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
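
As a toy illustration of the mapping trade-off described above (placement that weighs estimated device runtimes against data-transfer penalties), with the caveat that MEGHA's actual heuristics are more sophisticated and every number below is made up:

    # Greedy device assignment: prefer the GPU when its estimated runtime plus
    # any transfer penalty for CPU-resident inputs still beats the CPU time.
    def assign(kernels, deps, xfer):
        placement = {}
        for k, (cpu_t, gpu_t) in kernels.items():
            penalty = sum(xfer for d in deps.get(k, [])
                          if placement.get(d) == 'cpu')
            placement[k] = 'gpu' if gpu_t + penalty < cpu_t else 'cpu'
        return placement

    kernels = {'k1': (9.0, 2.0), 'k2': (1.0, 1.2), 'k3': (6.0, 1.5)}
    deps = {'k2': ['k1'], 'k3': ['k2']}
    print(assign(kernels, deps, xfer=0.5))
    # {'k1': 'gpu', 'k2': 'cpu', 'k3': 'gpu'}: k3 pays the transfer but wins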

Relevance: 10.00%

Abstract:

Medical image segmentation finds application in computer-aided diagnosis, computer-guided surgery, measuring tissue volumes, and locating tumors and pathologies. One approach to segmentation is to use active contours or snakes. Active contours start from an initialization (often manually specified) and are guided by image-dependent forces to the object boundary. Snakes may also be guided by gradient vector fields associated with an image. The first main result in this direction is that of Xu and Prince, who proposed the notion of gradient vector flow (GVF), which is computed iteratively. We propose a new formalism to compute the vector flow based on the notion of bilateral filtering of the gradient field associated with the edge map; we refer to it as the bilateral vector flow (BVF). The range kernel definition that we employ is different from the one employed in the standard Gaussian bilateral filter. The advantage of the BVF formalism is that smooth gradient vector flow fields with enhanced edge information can be computed noniteratively. The quality of image segmentation turned out to be on par with that obtained using the GVF, and in some cases better.
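
A noniterative sketch in the spirit of the above: bilateral filtering applied componentwise to the edge-map gradient. Note the hedge: this sketch uses the standard Gaussian range kernel on gradient magnitude, whereas the BVF explicitly employs a different range kernel:

    import numpy as np

    def bilateral_vector_flow(edge_map, sigma_s=3.0, sigma_r=0.1):
        # Bilateral-filter each gradient component; range weights come from
        # gradient-magnitude differences so edges steer the smoothing.
        gx, gy = np.gradient(edge_map.astype(float))
        mag = np.hypot(gx, gy)
        r = int(3 * sigma_s)
        flows = []
        for g in (gx, gy):
            num = np.zeros_like(g)
            den = np.zeros_like(g)
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    gs = np.roll(np.roll(g, dx, 0), dy, 1)
                    ms = np.roll(np.roll(mag, dx, 0), dy, 1)
                    w = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2)
                               - (ms - mag)**2 / (2 * sigma_r**2))
                    num += w * gs
                    den += w
            flows.append(num / den)
        return flows  # single pass; no iterative diffusion as in GVF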

Relevance: 10.00%

Abstract:

Let X be a noncompact symmetric space of higher rank. We consider two types of averages of functions: one over level sets of the heat kernel on X, and the other over geodesic spheres. We prove injectivity results for functions on X which extend the results in Pati and Sitaram (Sankhya Ser. A 62:419-424, 2000).
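
For reference, the geodesic-sphere average in question is the standard spherical mean (a textbook definition, not notation taken from this paper):

    (S_r f)(x) = \frac{1}{|S(x,r)|} \int_{S(x,r)} f \, d\sigma_r,

and an injectivity result asserts that, for the relevant family of radii $r$ and function class, $S_r f \equiv 0$ forces $f = 0$.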

Relevance: 10.00%

Abstract:

Previous studies on a single-cavity, compact trapped-vortex combustor concept showed good flame stability for a wide range of flow conditions. However, achieving good mixing between cavity products and the mainstream flow was still a major challenge. In the present study, a passive mixing-enhancement strategy using inclined struts along with a flow guide vane is presented and experimentally tested at atmospheric pressure. Results show excellent mixing and consequently low values of the combustor exit pattern factor, around 0.1, and small flame lengths (5-7 times the main-duct depth). The pressure drop is small, around 0.35%, and NOx levels of the order of 12 ppm are achieved. The flame stability is excellent, and combustion efficiency is reasonable, around 96%. The effectiveness of the proposed strategy is explained on the basis of in-situ OH chemiluminescence images and prior numerical simulations of the resulting complex flow field. The flow guide vane is observed to lead to a counterclockwise cavity vortex, which is conducive to the rise of cavity combustion products along the inclined struts and their subsequent mixing with the mainstream flow.

Relevance: 10.00%

Abstract:

We address the problem of sampling and reconstruction of two-dimensional (2-D) finite-rate-of-innovation (FRI) signals. We propose a three-channel sampling method for efficiently solving the problem. We consider the sampling of a stream of 2-D Dirac impulses and a sum of 2-D unit-step functions. We propose a 2-D causal exponential function as the sampling kernel; by causality in 2-D, we mean that the function has its support restricted to the first quadrant. The advantage of using a multichannel sampling method with a causal exponential sampling kernel is that standard annihilating-filter or root-finding algorithms are not required. Further, the proposed method admits an inexpensive hardware implementation and is numerically stable as the number of Dirac impulses increases.
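
A small sketch of the sampling stage as described: a first-quadrant-supported exponential kernel filtering a stream of 2-D Diracs, followed by uniform sampling. The decay rates, Dirac parameters, and sampling step are assumptions:

    import numpy as np

    alpha, beta = 2.0, 3.0                       # assumed decay rates

    def causal_exp(x, y):
        # Support restricted to the first quadrant: causality in 2-D
        return np.exp(-(alpha * x + beta * y)) * (x >= 0) * (y >= 0)

    diracs = [(0.20, 0.35, 1.0), (0.60, 0.10, 0.7)]  # (x_k, y_k, amplitude)
    T = 0.05                                         # uniform sampling step
    g = np.arange(0, 1, T)
    X, Y = np.meshgrid(g, g, indexing='ij')
    samples = sum(a * causal_exp(X - xk, Y - yk) for xk, yk, a in diracs)
    # Each Dirac only influences samples "up and to the right" of it, which
    # is what lets recovery avoid annihilating-filter root finding.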

Relevance: 10.00%

Abstract:

Each new generation of GPUs vastly increases the resources available to GPGPU programs. GPU programming models (like CUDA) were designed to scale to use these resources. However, we find that CUDA programs actually do not scale to utilize all available resources, with over 30% of resources going unused on average for programs of the Parboil2 suite that we used in our work. Current GPUs therefore allow concurrent execution of kernels to improve utilization. In this work, we study concurrent execution of GPU kernels using multiprogram workloads on current NVIDIA Fermi GPUs. On two-program workloads from the Parboil2 benchmark suite we find that concurrent execution is often no better than serialized execution. We identify the lack of control over resource allocation to kernels as a major serialization bottleneck. We propose transformations that convert CUDA kernels into elastic kernels, which permit fine-grained control over their resource usage. We then propose several elastic-kernel-aware concurrency policies that offer significantly better performance and concurrency than the current CUDA policy. We evaluate our proposals on real hardware using multiprogrammed workloads constructed from benchmarks in the Parboil2 suite. On average, our proposals increase system throughput (STP) by 1.21x and improve the average normalized turnaround time (ANTT) by 3.73x for two-program workloads compared to the current CUDA concurrency implementation.
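
The essence of an elastic kernel is decoupling the logical thread space from the physical launch grid. A grid-stride loop is one standard way to express that decoupling; the sketch below uses Numba's CUDA dialect for brevity, whereas the paper transforms CUDA C kernels:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def scale_elastic(out, x, alpha):
        # Grid-stride loop: correct for ANY launch configuration, so a
        # scheduler is free to shrink the grid and co-schedule other kernels.
        start = cuda.grid(1)
        stride = cuda.gridsize(1)
        for i in range(start, x.size, stride):
            out[i] = alpha * x[i]

    x = np.arange(1 << 20, dtype=np.float32)
    out = np.empty_like(x)
    scale_elastic[64, 128](out, x, 2.0)   # far fewer threads than elements
    assert np.allclose(out, 2.0 * x)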

Relevance: 10.00%

Abstract:

Let $\mathcal{M}$ be the completion of the polynomial ring $\mathbb{C}[\underline{z}]$ with respect to some inner product, and for any ideal $I \subseteq \mathbb{C}[\underline{z}]$, let $[I]$ be the closure of $I$ in $\mathcal{M}$. For a homogeneous ideal $I$, the joint kernel of the submodule $[I] \subseteq \mathcal{M}$ is shown, after imposing some mild conditions on $\mathcal{M}$, to be the linear span of the set of vectors $\{p_i(\partial/\partial\bar{w}_1, \ldots, \partial/\partial\bar{w}_m)\, K_{[I]}(\cdot, w)|_{w=0} : 1 \le i \le t\}$, where $K_{[I]}$ is the reproducing kernel for the submodule $[I]$ and $p_1, \ldots, p_t$ is some minimal "canonical set of generators" for the ideal $I$. The proof includes an algorithm for constructing this canonical set of generators, which is determined uniquely modulo linear relations, for homogeneous ideals. A short proof of the "Rigidity Theorem" using the sheaf model for Hilbert modules over polynomial rings is given. We describe, via the monoidal transformation, the construction of a Hermitian holomorphic line bundle for a large class of Hilbert modules of the form $[I]$. We show that the curvature, or even its restriction to the exceptional set, of this line bundle is an invariant for the unitary equivalence class of $[I]$. Several examples are given to illustrate the explicit computation of these invariants.

Relevance: 10.00%

Abstract:

The problem of finding a satisfying assignment that minimizes the number of variables set to 1 is NP-complete even for a satisfiable 2-SAT formula. We call this problem MIN ONES 2-SAT. It generalizes the well-studied problem of finding the smallest vertex cover of a graph, which can be modeled using a 2-SAT formula with no negative literals. The natural parameterized version of the problem asks for a satisfying assignment of weight at most k. In this paper, we present a polynomial-time reduction from MIN ONES 2-SAT to VERTEX COVER that does not increase the parameter and ensures that the number of vertices in the reduced instance equals the number of variables of the input formula. Consequently, we conclude that this problem also has a simple 2-approximation algorithm and a (2k - c log k)-variable kernel, subsuming (or, in the case of kernels, improving) the results known earlier. Further, the problem admits algorithms for the parameterized and optimization versions whose runtimes will always match the runtimes of the best-known algorithms for the corresponding versions of VERTEX COVER. Finally, we show that the optimum value of the LP relaxation of MIN ONES 2-SAT and that of the corresponding VERTEX COVER instance are the same. This implies that the recent results on VERTEX COVER parameterized above the optimum value of its LP relaxation carry over to MIN ONES 2-SAT parameterized above the optimum of its LP relaxation.
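
The vertex-cover connection mentioned above is easy to make concrete: one positive clause (u OR v) per edge, so minimizing the number of variables set to 1 is exactly minimum vertex cover. A brute-force toy sketch for tiny, made-up instances:

    from itertools import combinations

    def min_ones_positive_2sat(n, clauses):
        # Smallest set of variables set to 1 satisfying all positive clauses
        for k in range(n + 1):
            for ones in combinations(range(n), k):
                s = set(ones)
                if all(u in s or v in s for u, v in clauses):
                    return s

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # clauses = edges
    print(min_ones_positive_2sat(4, edges))  # a minimum vertex cover, e.g. {0, 2}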

Relevance: 10.00%

Abstract:

The Lovász θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovász θ function is equivalent to a kernel learning problem related to the one-class SVM. This interesting connection opens up many opportunities for bridging graph-theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-θ graphs, on which the Lovász θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems in large graphs, e.g., identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach to this problem involves computing the θ function; however, it is not scalable due to the SDP computation. We show that a random graph with a planted clique is an example of an SVM-θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a "common orthogonal labelling", which extends the notion of an "orthogonal labelling" of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a multiple kernel learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order-of-magnitude improvement in scalability compared to the state of the art.
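
A hedged sketch of the pipeline the abstract describes, using an assumed labelling kernel K = I + A/|λ_min(A)| (chosen here only because it is guaranteed positive semidefinite; the paper's own construction and analysis are what justify reading the clique off the one-class SVM):

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(2)
    n, k = 200, 30
    A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
    A += A.T                                 # adjacency of G(n, 1/2)
    A[np.ix_(range(k), range(k))] = 1.0      # plant a k-clique on {0..k-1}
    np.fill_diagonal(A, 0.0)

    rho = abs(np.linalg.eigvalsh(A)[0])      # |lambda_min| makes K PSD below
    K = np.eye(n) + A / rho                  # kernel from a graph labelling

    svm = OneClassSVM(kernel='precomputed', nu=0.5).fit(K)
    scores = svm.decision_function(K)
    # Candidate clique vertices: per the abstract's analysis, the planted
    # clique should dominate the top decision values.
    print(np.sort(np.argsort(scores)[-k:]))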

Relevance: 10.00%

Abstract:

Rapid advancements in multi-core processor architectures, coupled with low-cost, low-latency, high-bandwidth interconnects, have made clusters of multi-core machines a common computing resource. Unfortunately, writing good parallel programs that efficiently utilize all the resources in such a cluster is still a major challenge. Various programming languages have been proposed as a solution to this problem, but they are yet to be adopted widely to run performance-critical code, mainly due to relatively immature software frameworks and the effort involved in rewriting existing code in the new language. In this paper, we motivate and describe our initial study in exploring CUDA as a programming language for a cluster of multi-cores. We develop CUDA-For-Clusters (CFC), a framework that transparently orchestrates execution of CUDA kernels on a cluster of multi-core machines. The well-structured nature of a CUDA kernel and the growing popularity, support, and stability of the CUDA software stack collectively make CUDA a good candidate to be considered as a programming language for a cluster. CFC uses a mixture of source-to-source compiler transformations, a work-distribution runtime, and a lightweight software distributed shared memory to manage parallel execution. Initial results on several standard CUDA benchmark programs show impressive speedups of up to 7.5X on a cluster with 8 nodes, thereby opening up an interesting direction of research for further investigation.
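
A toy illustration of the work-distribution idea: splitting a kernel's 1-D grid of thread blocks into contiguous per-node slices. CFC's actual compiler transformations, runtime, and software DSM are of course far more involved:

    def partition_grid(num_blocks, num_nodes):
        # Contiguous slice of the block range for each node: (first, count)
        base, rem = divmod(num_blocks, num_nodes)
        slices, start = [], 0
        for node in range(num_nodes):
            count = base + (1 if node < rem else 0)
            slices.append((start, count))
            start += count
        return slices

    # Each node launches its slice locally, offsetting blockIdx by `first`
    # and leaving cross-node data movement to the DSM layer.
    print(partition_grid(1000, 8))  # [(0, 125), (125, 125), ..., (875, 125)]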