102 results for Dunkl Kernel
Abstract:
Hydrogen, either in pure form or as a species in a gaseous fuel mixture, enhances the fuel conversion efficiency and reduces emissions in an internal combustion engine. This is due to the reduction in combustion duration attributed to higher laminar flame speeds. Hydrogen is also expected to increase the engine convective heat flux, attributed (directly or indirectly) to parameters like higher adiabatic flame temperature, laminar flame speed, thermal conductivity and diffusivity, and lower flame quenching distance. These factors adversely affect the thermo-kinematic response and offset some of the benefits. The current work addresses the influence of the mixture hydrogen fraction in syngas on the engine energy balance and the thermo-kinematic response for close-to-stoichiometric operating conditions. Four different bio-derived syngas compositions, with fuel calorific value varying from 3.14 MJ/kg to 7.55 MJ/kg and air-fuel mixture hydrogen fraction varying from 7.1% to 14.2% by volume, are used. The analysis comprises (a) use of the chemical kinetics simulation package CHEMKIN for quantifying the thermo-physical properties, (b) a 0-D model for engine in-cylinder analysis, and (c) in-cylinder investigations on a two-cylinder engine in open-loop cooling mode for quantifying the thermo-kinematic response and engine energy balance. With the lower adiabatic flame temperature of syngas, the in-cylinder heat transfer analysis suggests that temperature has little effect in terms of increasing the heat flux. For typical engine-like conditions (700 K and 25 bar at a CR of 10), the laminar flame speed of syngas exceeds that of methane (55.5 cm/s) beyond a mixture hydrogen fraction of 11%, which is attributed to the increase in H-based radicals. This leads to a reduction in the effective Lewis number and laminar flame thickness, potentially inducing flame instability and cellularity.
Use of a thermodynamic model to assess the isolated influence of thermal conductivity and diffusivity on heat flux suggests an increase in the peak heat flux of between 2% and 15% for the lowest (0.420 MW/m^2) and highest (0.480 MW/m^2) hydrogen-containing syngas over methane-fueled operation (0.415 MW/m^2). Experimental investigations indicate that the engine cooling load for syngas-fueled operation is higher by about 7% to 12% compared to methane-fueled operation; the losses are seen to increase with increasing mixture hydrogen fraction. An increase in the gas-to-electricity efficiency from 18% to 24% is observed as the mixture hydrogen fraction increases from 7.1% to 9.5%. A further increase in the mixture hydrogen fraction to 14.2% results in a reduction of efficiency to 23%, attributed to changes in the initial and terminal stages of combustion. On doubling the mixture hydrogen fraction, the flame kernel development and fast-burn phase durations decrease by about 7% and 10% respectively, and the terminal combustion duration, corresponding to 90%-98% mass burn, increases by about 23%. This increase in combustion duration arises from the cooling of the near-wall mixture in the boundary layer, attributed to the presence of hydrogen. The enhancement in engine cooling load and the consequent reduction in brake thermal efficiency with increasing hydrogen fraction are evident from the engine energy balance along with the cumulative heat release profiles. Copyright (C) 2015, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
Abstract:
We present estimates of the single spin asymmetry (SSA) in the electroproduction of charmonium, taking into account the transverse momentum dependent (TMD) evolution of the gluon Sivers function and using the Color Evaporation Model of charmonium production. We estimate the SSA for JLab, HERMES, COMPASS and eRHIC energies using recent parameters for the quark Sivers functions, which are fitted using an evolution kernel in which the perturbative part is resummed up to next-to-leading-logarithmic accuracy. We find that these SSAs are much smaller than our first estimates obtained using DGLAP evolution, but are comparable to our estimates obtained using TMD evolution, where we had used an approximate analytical solution of the TMD evolution equation for the purpose.
Abstract:
Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve this directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low-NA images of F-actin filaments. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
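The oriented-domain-kernel idea above can be sketched as follows. This is a minimal, hypothetical simplification: a single global orientation theta stands in for the per-pixel structure-tensor estimate, the range kernel is the usual Gaussian, and all parameter names are illustrative rather than the paper's.

```python
import numpy as np

def oriented_bilateral(img, theta, sigma_u=3.0, sigma_v=1.0, sigma_r=0.1, radius=4):
    """Bilateral filter with an anisotropic (oriented) spatial kernel.

    The domain kernel is a Gaussian elongated along the direction theta
    (radians); sigma_u / sigma_v control smoothing along / across that
    direction, and sigma_r is the usual range-kernel width. This sketch
    uses one global orientation instead of a per-pixel structure tensor."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Rotate window coordinates into the frame aligned with theta.
    u = xs * np.cos(theta) + ys * np.sin(theta)
    v = -xs * np.sin(theta) + ys * np.cos(theta)
    domain = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img, radius, mode='reflect')
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = domain * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```

Setting sigma_u > sigma_v makes the smoothing stronger along the filament direction than across it, which is what preserves the directionality of elongated structures.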
Abstract:
Turbulence-transport-chemistry interaction plays a crucial role in the flame surface geometry and the local and global reaction rates, and therefore in the propagation and extinction characteristics of intensely turbulent, premixed flames encountered in LPP gas-turbine combustors. The aim of the present work is to understand these interaction effects on the flame surface annihilation and extinction of lean premixed flames interacting with near-isotropic turbulence. As an example case, a lean premixed H2-air mixture is considered so as to enable inclusion of detailed chemistry effects in Direct Numerical Simulations (DNS). The work is carried out in two phases, namely statistically planar flames and an ignition kernel, both interacting with near-isotropic turbulence, using the recently proposed Flame Particle Tracking (FPT) technique. Flame particles are surface points residing on, and co-moving with, an iso-scalar surface within a premixed flame. Tracking flame particles allows us to study the evolution of propagating surface locations uniquely identified with time. In this work, using DNS and FPT, we study the flame speed, reaction rate and transport histories of such flame particles residing on iso-scalar surfaces. An analytical expression for the local displacement flame speed (S_d) is derived, and the contribution of transport and chemistry to the displacement flame speed is identified. An examination of the results of the planar case leads to the conclusion that the variation in S_d may be attributed to the effects of turbulent transport and heat release rate. In the second phase of this work, the sustenance of an ignition kernel is examined in light of the S-curve. A newly proposed Damköhler number accounting for local turbulent transport and reaction rates is found to explain the sustenance, or otherwise, of flame kernels propagating in near-isotropic turbulence.
Abstract:
In the present work, the effect of deformation mode (uniaxial compression, rolling and torsion) on the microstructural heterogeneities in commercial-purity Ni is reported. For a given equivalent von Mises strain, samples subjected to torsion show a higher fraction of high-angle boundaries, higher kernel average misorientation and more recrystallization nuclei when compared to uniaxially compressed and rolled samples. This is attributed to differences in the slip system activity under the different modes of deformation.
Abstract:
Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of `multipoint' kernels and study their applications. We study a class of kernels based on Jensen-type divergences and show that these can be extended to measure similarity among multiple points. We study tensor flattening methods and develop a multi-point (kernel) spectral clustering (MSC) method. We further focus on a special case of the proposed kernels, which is a multi-point extension of the linear (dot-product) kernel, and show the existence of a cubic-time tensor flattening algorithm in this case. Finally, we illustrate the usefulness of our contributions using standard data sets and image segmentation tasks.
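The abstract does not spell out the multi-point extension of the dot-product kernel; one plausible form, shown purely as an illustration (the function name and the specific formula are assumptions, not the paper's definition), sums over coordinates the product of the m points' values in that coordinate, which reduces to the ordinary inner product when m = 2.

```python
import numpy as np

def multipoint_linear_kernel(points):
    """Hypothetical multi-point analogue of the dot product: for m points
    x_1, ..., x_m in R^d, return sum_j x_1[j] * x_2[j] * ... * x_m[j].
    For m = 2 this is exactly the inner product <x_1, x_2>."""
    X = np.asarray(points, dtype=float)   # shape (m, d): one point per row
    return float(np.prod(X, axis=0).sum())
```

Evaluating such a kernel on all m-tuples drawn from a data set yields an order-m similarity tensor, which is where the tensor flattening methods in the abstract come into play.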
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' subset of F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both these problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Pi. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2, we consider the problem when Pi requires all sets to be unit squares or lines. When d > 2, we consider the problem where Pi requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterizing by k, the Unique Cover problem has a polynomial-size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover which covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting, when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
Abstract:
Desiccated coconut industries (DCI) create various intermediates from fresh coconut kernel for the cosmetic, pharmaceutical and food industries. Mechanized and non-mechanized DCI process between 10,000 and 100,000 nuts/day and discharge 6-150 m^3 of malodorous waste water, leading to a daily discharge of 264-6642 kg of chemical oxygen demand (COD). In these units, the three main types of waste water streams are coconut kernel water, kernel wash water and virgin oil waste water. The effluent streams contain lipids (1-55 g/l), suspended solids (6-80 g/l) and volatile fatty acids (VFA) at concentrations that are inhibitory to anaerobic bacteria. Coconut water contributes 20-50 % of the total volume and 50-60 % of the total organic load, and causes the strongest inhibition of anaerobic bacteria, with an initial lag phase of 30 days. The widely adopted lagooning method of treatment failed to appreciably treat the waste water and often led to the accumulation of volatile fatty acids (propionic acid) along with long-chain unsaturated free fatty acids. Biogas generation during the biological methane potential (BMP) assay required a 15-day adaptation time, and gas production occurred at low concentrations of coconut water, while the other two streams did not appear to be inhibitory. The anaerobic bacteria can mineralize coconut lipids at concentrations of 175 mg/l; however, they are severely inhibited at a lipid level of >= 350 mg/g bacterial inoculum. The modified Gompertz model showed a good fit to the BMP data with a simple sigmoid pattern. However, it failed to fit experimental BMP data possessing a longer lag phase and/or diauxic biogas production, suggesting inhibition of the anaerobic bacteria.
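The modified Gompertz model mentioned above has a standard closed form for cumulative gas production; the sketch below evaluates it, with illustrative parameter names (P = ultimate production potential, Rm = maximum production rate, lam = lag-phase duration). Fitting it to measured BMP data would typically be done with a nonlinear least-squares routine.

```python
import numpy as np

def modified_gompertz(t, P, Rm, lam):
    """Cumulative biogas production at time t (days) under the modified
    Gompertz model: M(t) = P * exp(-exp(Rm * e / P * (lam - t) + 1)).
    The single sigmoid shape is why the model fails on BMP curves with
    long lag phases or diauxic (two-stage) gas production."""
    t = np.asarray(t, dtype=float)
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))
```

The curve stays near zero until roughly t = lam, rises at a maximum slope of about Rm, and saturates at P, matching the "simple sigmoid pattern" the abstract refers to.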
Abstract:
We address the problem of phase retrieval from the Fourier transform magnitude spectrum for continuous-time signals that lie in a shift-invariant space spanned by integer shifts of a generator kernel. The phase retrieval problem for such signals is formulated as one of reconstructing the combining coefficients in the shift-invariant basis expansion. We develop sufficient conditions on the coefficients and the bases to guarantee exact phase retrieval, by which we mean reconstruction up to a global phase factor. We present a new class of discrete-domain signals that are not necessarily minimum-phase, but allow for exact phase retrieval from their Fourier magnitude spectra. We also establish Hilbert transform relations between the log-magnitude and phase spectra for this class of discrete signals. It turns out that the corresponding continuous-domain counterparts need not satisfy a Hilbert transform relation; nevertheless, the continuous-domain signals can be reconstructed from their Fourier magnitude spectra. We validate the reconstruction guarantees through simulations for some important classes of signals such as bandlimited signals and piecewise-smooth signals. We also present an application of the proposed phase retrieval technique for artifact-free signal reconstruction in frequency-domain optical-coherence tomography (FDOCT).
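For the minimum-phase subclass, the Hilbert-transform (cepstral) relation between log-magnitude and phase gives a concrete reconstruction recipe. The sketch below is the standard cepstral-folding construction, shown under the assumption of a finite-length, minimum-phase signal; the paper's signal class is broader, and its actual method is not reproduced here.

```python
import numpy as np

def min_phase_from_magnitude(mag):
    """Given the DFT magnitudes of a real minimum-phase signal, rebuild
    the full complex spectrum. The real cepstrum (inverse DFT of the
    log-magnitude) is 'folded' onto its causal part; for a minimum-phase
    signal this encodes the phase via the Hilbert-transform relation
    between log-magnitude and phase."""
    n = len(mag)
    cep = np.fft.ifft(np.log(np.maximum(mag, 1e-12))).real   # real cepstrum
    # Folding window: keep c[0], double the strictly causal part,
    # discard the anti-causal part (and keep c[n/2] once for even n).
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.exp(np.fft.fft(w * cep))
```

Because a minimum-phase sequence has a causal cepstrum, the folded cepstrum recovers both log-magnitude and phase at once; for the non-minimum-phase signals the paper treats, this simple recipe does not apply.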
Abstract:
In the POSSIBLE WINNER problem in computational social choice theory, we are given a set of partial preferences and the question is whether a distinguished candidate could be made winner by extending the partial preferences to linear preferences. Previous work has provided, for many common voting rules, fixed parameter tractable algorithms for the POSSIBLE WINNER problem, with the number of candidates as the parameter. However, the corresponding kernelization question is still open and in fact, has been mentioned as a key research challenge [10]. In this paper, we settle this open question for many common voting rules. We show that the POSSIBLE WINNER problem for maximin, Copeland, Bucklin, ranked pairs, and a class of scoring rules that includes the Borda voting rule does not admit a polynomial kernel with the number of candidates as the parameter. We show however that the COALITIONAL MANIPULATION problem, which is an important special case of the POSSIBLE WINNER problem, does admit a polynomial kernel for maximin, Copeland, ranked pairs, and a class of scoring rules that includes the Borda voting rule, when the number of manipulators is polynomial in the number of candidates. A significant conclusion of our work is that the POSSIBLE WINNER problem is harder than the COALITIONAL MANIPULATION problem, since the COALITIONAL MANIPULATION problem admits a polynomial kernel whereas the POSSIBLE WINNER problem does not. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
It was demonstrated in earlier work that, by approximating its range kernel using shiftable functions, the nonlinear bilateral filter can be computed using a series of fast convolutions. Previous approaches based on shiftable approximation have, however, been restricted to Gaussian range kernels. In this work, we propose a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series. More specifically, we propose to approximate the Gaussian range kernel of the bilateral filter using a Fourier basis, where the coefficients of the basis are obtained by solving a series of least-squares problems. The coefficients can be efficiently computed using a recursive form of the QR decomposition. By controlling the cardinality of the Fourier basis, we can obtain a good tradeoff between the run-time and the filtering accuracy. In particular, we are able to guarantee subpixel accuracy for the overall filtering, which is not provided by most existing methods for fast bilateral filtering. We present simulation results to demonstrate the speed and accuracy of the proposed algorithm.
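A minimal 1-D sketch of the idea: fit the Gaussian range kernel with a truncated cosine series on [-T, T] (here by a plain least-squares solve on a grid, standing in for the paper's recursive-QR computation), then use cos(a - b) = cos(a)cos(b) + sin(a)sin(b) to turn the nonlinear bilateral sum into 2K spatial Gaussian filterings. All parameter names and the grid-based fit are illustrative, not the paper's implementation.

```python
import numpy as np

def gaussian_blur_1d(x, sigma):
    # Truncated, normalized spatial Gaussian by direct convolution.
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1)**2 / (2.0 * sigma**2))
    k /= k.sum()
    return np.convolve(np.pad(x, r, mode='edge'), k, mode='valid')

def shiftable_bilateral_1d(f, sigma_s=2.0, sigma_r=0.2, K=10, T=1.0):
    """1-D bilateral filter with the Gaussian range kernel replaced by a
    K-term cosine series fitted on [-T, T]; T must bound the intensity
    differences |f(x) - f(y)|. K controls the run-time/accuracy tradeoff."""
    w0 = np.pi / T
    s = np.linspace(-T, T, 512)
    A = np.cos(np.outer(s, w0 * np.arange(K)))       # cosine design matrix
    c, *_ = np.linalg.lstsq(A, np.exp(-s**2 / (2 * sigma_r**2)), rcond=None)
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    for k in range(K):
        ck, sk = np.cos(k * w0 * f), np.sin(k * w0 * f)
        # cos(k w0 (f(x) - f(y))) = cos_k(x) cos_k(y) + sin_k(x) sin_k(y),
        # so each series term costs two spatial Gaussian filterings.
        num += c[k] * (ck * gaussian_blur_1d(ck * f, sigma_s)
                       + sk * gaussian_blur_1d(sk * f, sigma_s))
        den += c[k] * (ck * gaussian_blur_1d(ck, sigma_s)
                       + sk * gaussian_blur_1d(sk, sigma_s))
    return num / den
```

The run-time is K times that of a fixed number of spatial Gaussian filterings, independent of sigma_r, which is the essence of the shiftable-approximation speedup.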
Abstract:
The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A common form of the filter is the Gaussian bilateral filter in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires O(sigma^2) operations per pixel, where sigma is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approximation algorithm that can cut down the computational complexity to O(1) per pixel for any arbitrary sigma (constant-time implementation). This is based on the observation that the range kernel operates via the translations of a fixed Gaussian over the range space, and that these translated Gaussians can be accurately approximated using the so-called Gauss-polynomials. The overall algorithm emerging from this approximation involves a series of spatial Gaussian filtering, which can be efficiently implemented (in parallel) using separability and recursion. We present some preliminary results to demonstrate that the proposed algorithm compares favorably with some of the existing fast algorithms in terms of speed and accuracy.
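A 1-D sketch of the Gauss-polynomial idea, with illustrative names: factor the translated range Gaussian as exp(-t^2/2s^2) * exp(-tau^2/2s^2) * exp(t*tau/s^2) and replace the last factor by its degree-N Taylor polynomial, so the bilateral filter reduces to N + 1 spatial Gaussian filterings. The constant-time behavior in the paper then comes from recursive Gaussian filters, which the plain convolution below does not attempt to reproduce.

```python
import numpy as np

def gaussian_blur_1d(x, sigma):
    # Truncated, normalized spatial Gaussian by direct convolution (a sketch;
    # the constant-time algorithm would use a recursive Gaussian filter here).
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1)**2 / (2.0 * sigma**2))
    k /= k.sum()
    return np.convolve(np.pad(x, r, mode='edge'), k, mode='valid')

def gauss_poly_bilateral_1d(f, sigma_s=2.0, sigma_r=0.3, N=15):
    """1-D bilateral filter via Gauss-polynomial approximation of the
    range kernel: exp(-(t - tau)^2 / 2s^2) is written as
    exp(-t^2/2s^2) exp(-tau^2/2s^2) sum_n (t tau / s^2)^n / n!, truncated
    at degree N. Centering the intensities halves the dynamic range and
    keeps the Taylor argument small enough for fast convergence."""
    t = f - (f.min() + f.max()) / 2.0
    a = np.exp(-t**2 / (2.0 * sigma_r**2))
    num = np.zeros_like(f)
    den = np.zeros_like(f)
    p = np.ones_like(f)                  # p_n(t) = (t / sigma_r)^n / sqrt(n!)
    for n in range(N + 1):
        den += p * gaussian_blur_1d(a * p, sigma_s)
        num += p * gaussian_blur_1d(a * p * f, sigma_s)
        p = p * (t / sigma_r) / np.sqrt(n + 1.0)
    # The common factor exp(-t(x)^2 / 2 sigma_r^2) cancels in the ratio.
    return num / den
```

Note that the cost per pixel depends only on N and the spatial filter, not on sigma_r, mirroring the complexity argument in the abstract.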