884 results for Tridiagonal Kernel
Abstract:
We develop new techniques to efficiently evaluate heat kernel coefficients for the Laplacian in the short-time expansion on spheres and hyperboloids with conical singularities. We then apply these techniques to explicitly compute the logarithmic contribution to black hole entropy from an N = 4 vector multiplet about a Z_N orbifold of the near-horizon geometry of quarter-BPS black holes in N = 4 supergravity. We find that this contribution vanishes, in perfect agreement with the prediction from microstate counting. We also discuss possible generalisations of our heat kernel results to higher-spin fields over Z_N orbifolds of higher-dimensional spheres and hyperboloids.
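For orientation, the short-time expansion referred to above is the standard asymptotic series for the integrated heat kernel; the generic form below uses common conventions and is not this paper's specific computation:

\operatorname{Tr} e^{-t\Delta} \;\sim\; \sum_{n \ge 0} a_n \, t^{(n-D)/2}, \qquad t \to 0^{+},

where D is the dimension of the space; the coefficient a_D of the t^0 term is the one that governs the logarithmic contribution to the entropy.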
Abstract:
In this paper we present one of the first high-speed particle image velocimetry measurements to quantify flame-turbulence interaction in centrally-ignited, constant-pressure premixed flames expanding in near-isotropic turbulence. Measurements of the mean flow velocity and the rms of the fluctuating flow velocity are provided over a range of conditions, both in the presence and absence of the flame. The distributions of stretch rate contributions from different terms, such as tangential straining, normal straining and curvature, are also provided. It is found that the normal straining displays non-Gaussian PDF tails, whereas the tangential straining shows near-Gaussian behavior. We have further tracked the motion of the edge points that reside on and co-move with the edge of the flame kernel during its evolution in time, and found that, within the measurement conditions, the persistence time scales of stretch due to pure curvature exceed those due to tangential straining by at least a factor of two on average.
Abstract:
The goal of this work is to reduce the cost of computing the coefficients in the Karhunen-Loève (KL) expansion. The KL expansion serves as a useful and efficient tool for discretizing second-order stochastic processes with known covariance function. Its applications in engineering mechanics include discretizing random field models for elastic moduli, fluid properties, and structural response. The main computational cost of finding the coefficients of this expansion arises from numerically solving an integral eigenvalue problem with the covariance function as the integration kernel; mathematically, this is a homogeneous Fredholm integral equation of the second kind. One widely used method for solving this integral eigenvalue problem is to discretize the eigenfunctions in a finite element (FE) basis, followed by a Galerkin projection. This method is computationally expensive. In the current work it is first shown that the shape of the physical domain does not affect the realizations of the field estimated using the KL expansion, although the individual KL terms are affected. Based on this domain independence property, a numerical-integration-based scheme, accompanied by a modification of the domain, is proposed. In addition to presenting mathematical arguments to establish the domain independence, numerical studies are conducted to demonstrate and test the proposed method. Numerically it is demonstrated that, compared to the Galerkin method, the computational speed gain of the proposed method is three to four orders of magnitude for a two-dimensional example and one to two orders of magnitude for a three-dimensional example, while retaining the same level of accuracy. It is also shown that for separable covariance kernels a further cost reduction of three to four orders of magnitude can be achieved. Both normal and lognormal fields are considered in the numerical studies.
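To make the integral eigenvalue problem concrete, the sketch below solves the Fredholm equation of the second kind, integral of C(x, y) phi(y) dy = lambda phi(x), by straightforward quadrature (a Nystrom-type discretization, in the spirit of the numerical-integration scheme the abstract advocates, though without its domain-modification step). The exponential covariance kernel, the 1-D domain, and all parameter values are illustrative assumptions.

import numpy as np

def kl_eigenpairs(n_quad=200, length=1.0, corr_len=0.3, n_modes=5):
    # Midpoint quadrature nodes and weights on [0, length].
    x = (np.arange(n_quad) + 0.5) * (length / n_quad)
    w = np.full(n_quad, length / n_quad)
    # Covariance matrix C(x_i, x_j); exponential kernel as an example.
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Symmetrize the quadrature-weighted problem: B = W^(1/2) C W^(1/2).
    sw = np.sqrt(w)
    B = sw[:, None] * C * sw[None, :]
    lam, v = np.linalg.eigh(B)
    idx = np.argsort(lam)[::-1][:n_modes]    # largest eigenvalues first
    eigvals = lam[idx]
    eigfuns = v[:, idx] / sw[:, None]        # undo the quadrature weighting
    return x, eigvals, eigfuns

x, lam, phi = kl_eigenpairs()
print(lam)  # leading KL eigenvalues

The symmetrized matrix B keeps the discrete problem self-adjoint, so a standard dense symmetric eigensolver applies; this quadrature route is what avoids assembling and projecting FE matrices.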
Abstract:
Let C be a smooth irreducible projective curve of genus g and L a line bundle of degree d generated by a linear subspace V of H^0(L) of dimension n+1. We prove a conjecture of D. C. Butler on the semistability of the kernel of the evaluation map V ⊗ O_C → L and obtain new results on the stability of this kernel. The natural context for this problem is the theory of coherent systems on curves, and our techniques involve wall-crossing formulae in this theory.
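For readers less familiar with the setup, the kernel in question, often denoted M_{V,L}, sits in the standard evaluation exact sequence (standard facts about this construction, not results of the paper):

0 \longrightarrow M_{V,L} \longrightarrow V \otimes \mathcal{O}_C \xrightarrow{\ \mathrm{ev}\ } L \longrightarrow 0,

so that M_{V,L} has rank n and degree -d, and (semi)stability is asked with respect to the slope \mu(M_{V,L}) = -d/n.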
Abstract:
QR decomposition (QRD) is a widely used Numerical Linear Algebra (NLA) kernel with applications ranging from SONAR beamforming to wireless MIMO receivers. In this paper, we propose a novel Givens Rotation (GR) based QRD (GR QRD) in which we reduce the computational complexity of GR and exploit a higher degree of parallelism. This low-complexity Column-wise GR (CGR) can annihilate multiple elements of a column of a matrix simultaneously. The algorithm is first realized on a two-dimensional (2-D) systolic array and then implemented on REDEFINE, a Coarse Grained run-time Reconfigurable Architecture (CGRA). We benchmark the proposed implementation against state-of-the-art implementations and report better throughput, convergence and scalability.
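As a baseline for what CGR parallelizes, the sketch below is a textbook Givens-rotation QR factorization that annihilates one sub-diagonal entry per rotation; the column-wise variant and the REDEFINE mapping from the abstract are not reproduced here.

import numpy as np

def givens_qr(A):
    """Return Q, R with A = Q @ R, one Givens rotation per annihilated entry."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for j in range(n):                  # sweep columns left to right
        for i in range(m - 1, j, -1):   # zero out entries below the diagonal
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]     # rotate rows i-1, i
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T   # accumulate Q
    return Q, R

A = np.random.rand(5, 3)
Q, R = givens_qr(A)
print(np.allclose(A, Q @ R))  # True

Each rotation touches only two rows, which is what makes Givens-based QR attractive for systolic arrays: independent rotations can proceed concurrently, and the column-wise scheme pushes this further.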
Abstract:
The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing scheduler implementations in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks in all cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute in any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which, in turn, affects their runtime. So that hard real-time tasks can be executed in such systems with minimal interference from other Linux tasks, we propose in this paper an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which executes real-time and non-real-time tasks concurrently in all cores of Linux OS. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making it suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
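A toy illustration of the P-EDF policy the abstract implements: under partitioned EDF, each core keeps its own ready queue and always runs the task with the earliest absolute deadline. This sketches the policy only, not the SchedISA kernel-level scheduler; class and task names are hypothetical.

import heapq

class PEDFCore:
    def __init__(self):
        self._ready = []                 # min-heap keyed by absolute deadline

    def release(self, deadline, task_id):
        heapq.heappush(self._ready, (deadline, task_id))

    def pick_next(self):
        # Earliest absolute deadline first; None if this core is idle.
        return heapq.heappop(self._ready)[1] if self._ready else None

core0 = PEDFCore()
core0.release(deadline=100, task_id="tau1")
core0.release(deadline=40, task_id="tau2")
print(core0.pick_next())  # tau2 (earlier deadline wins)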
Abstract:
We study the problem of finding small s-t separators that induce graphs having certain properties. It is known that finding a minimum clique s-t separator is polynomial-time solvable (Tarjan in Discrete Math. 55:221-232, 1985), while, for example, the problems of finding a minimum s-t separator that induces a connected graph or forms an independent set are fixed-parameter tractable when parameterized by the size of the separator (Marx et al. in ACM Trans. Algorithms 9(4):30, 2013). Motivated by these results, we study properties that generalize cliques, independent sets, and connected graphs, and determine the complexity of finding separators satisfying these properties. We investigate these problems also on bounded-degree graphs. Our results are as follows: Finding a minimum c-connected s-t separator is FPT for c = 2 and W[1]-hard for any c ≥ 3. Finding a minimum s-t separator with diameter at most d is W[1]-hard for any d ≥ 2. Finding a minimum r-regular s-t separator is W[1]-hard for any r ≥ 1. For any decidable graph property, finding a minimum s-t separator with this property is FPT parameterized jointly by the size of the separator and the maximum degree. Finding a connected s-t separator of minimum size does not have a polynomial kernel, even when restricted to graphs of maximum degree at most 3, unless NP ⊆ coNP/poly.
Abstract:
Hydrogen, either in pure form or as a species in a gaseous fuel mixture, enhances the fuel conversion efficiency and reduces emissions in an internal combustion engine. This is due to the reduction in combustion duration attributed to higher laminar flame speeds. Hydrogen is also expected to increase the engine convective heat flux, attributed (directly or indirectly) to parameters like higher adiabatic flame temperature, laminar flame speed, thermal conductivity and diffusivity, and lower flame quenching distance. These factors (adversely) affect the thermo-kinematic response and offset some of the benefits. The current work addresses the influence of the mixture hydrogen fraction in syngas on the engine energy balance and the thermo-kinematic response for close-to-stoichiometric operating conditions. Four different bio-derived syngas compositions, with fuel calorific value varying from 3.14 MJ/kg to 7.55 MJ/kg and air-fuel mixture hydrogen fraction varying from 7.1% to 14.2% by volume, are used. The analysis comprises (a) use of the chemical kinetics simulation package CHEMKIN for quantifying the thermo-physical properties, (b) a 0-D model for engine in-cylinder analysis, and (c) in-cylinder investigations on a two-cylinder engine in open-loop cooling mode for quantifying the thermo-kinematic response and engine energy balance. Given the lower adiabatic flame temperature of syngas, the in-cylinder heat transfer analysis suggests that temperature has little effect in terms of increasing the heat flux. For typical engine-like conditions (700 K and 25 bar at a compression ratio of 10), the laminar flame speed of syngas exceeds that of methane (55.5 cm/s) beyond a mixture hydrogen fraction of 11%, attributed to the increase in H-based radicals. This leads to a reduction in the effective Lewis number and laminar flame thickness, potentially inducing flame instability and cellularity. Use of a thermodynamic model to assess the isolated influence of thermal conductivity and diffusivity on heat flux suggests an increase in the peak heat flux of between 2% and 15% for the lowest (0.420 MW/m^2) and highest (0.480 MW/m^2) hydrogen-containing syngas over methane-fueled operation (0.415 MW/m^2). Experimental investigations indicate that the engine cooling load for syngas-fueled operation is higher by about 7% and 12% compared to methane-fueled operation; the losses increase with increasing mixture hydrogen fraction. The gas-to-electricity efficiency increases from 18% to 24% as the mixture hydrogen fraction increases from 7.1% to 9.5%. A further increase in mixture hydrogen fraction to 14.2% reduces the efficiency to 23%, argued to be due to changes in the initial and terminal stages of combustion. On doubling the mixture hydrogen fraction, the flame kernel development and fast-burn phase durations decrease by about 7% and 10%, respectively, and the terminal combustion duration, corresponding to 90%-98% mass burn, increases by about 23%. This increase in combustion duration arises from the cooling of the near-wall mixture in the boundary layer, attributed to the presence of hydrogen. The enhancement in engine cooling load and the consequent reduction in brake thermal efficiency with increasing hydrogen fraction are evident from the engine energy balance and the cumulative heat release profiles.
Abstract:
We present estimates of the single spin asymmetry (SSA) in the electroproduction of charmonium, taking into account the transverse momentum dependent (TMD) evolution of the gluon Sivers function and using the Color Evaporation Model of charmonium production. We estimate the SSA for JLab, HERMES, COMPASS and eRHIC energies using recent parameters for the quark Sivers functions, which are fitted using an evolution kernel in which the perturbative part is resummed up to next-to-leading logarithmic accuracy. We find that these SSAs are much smaller than our first estimates obtained using DGLAP evolution, but are comparable to our earlier estimates obtained using TMD evolution, for which we had used an approximate analytical solution of the TMD evolution equation.
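For orientation, the asymmetry being estimated is conventionally defined as the normalized difference of cross sections for opposite transverse spin orientations (the standard textbook definition, not a formula quoted from this paper):

A_N \;=\; \frac{d\sigma^{\uparrow} - d\sigma^{\downarrow}}{d\sigma^{\uparrow} + d\sigma^{\downarrow}},

where the arrows denote the two transverse polarization states; in this setting the numerator is driven by the (gluon) Sivers function.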
Abstract:
Images obtained through fluorescence microscopy at low numerical aperture (NA) are noisy and have poor resolution. Images of specimens such as F-actin filaments obtained using confocal or widefield fluorescence microscopes contain directional information, and it is important that an image smoothing or filtering technique preserve this directionality. F-actin filaments are widely studied in pathology because abnormalities in actin dynamics play a key role in the diagnosis of cancer, cardiac diseases, vascular diseases, myofibrillar myopathies, neurological disorders, etc. We develop the directional bilateral filter as a means of filtering out the noise in the image without significantly altering the directionality of the F-actin filaments. The bilateral filter is anisotropic to start with, but we add an additional degree of anisotropy by employing an oriented domain kernel for smoothing. The orientation is locally adapted using a structure tensor, and the parameters of the bilateral filter are optimized within the framework of statistical risk minimization. We show that the directional bilateral filter has better denoising performance than the traditional Gaussian bilateral filter and other denoising techniques such as SURE-LET, non-local means, and guided image filtering at various noise levels in terms of peak signal-to-noise ratio (PSNR). We also show quantitative improvements in low-NA images of F-actin filaments.
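A simplified sketch of the core idea, a bilateral filter whose domain kernel is an oriented (elongated) Gaussian: smoothing is stronger along the assumed filament direction than across it, while the range kernel preserves edges. The paper adapts the orientation per pixel from a structure tensor and tunes parameters by risk minimization; here a single global orientation theta and all parameter values are simplifying assumptions.

import numpy as np

def oriented_bilateral(img, theta, sigma_u=3.0, sigma_v=1.0, sigma_r=0.1, radius=5):
    # img: 2-D float array with intensities in [0, 1].
    c, s = np.cos(theta), np.sin(theta)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Rotate window offsets into the (along-edge, across-edge) frame.
    u = c * xx + s * yy
    v = -s * xx + c * yy
    # Elongated Gaussian domain kernel: sigma_u > sigma_v smooths along theta.
    domain = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalize intensity differences (edge preservation).
            rng = np.exp(-0.5 * ((patch - img[i, j]) / sigma_r) ** 2)
            w = domain * rng
            out[i, j] = (w * patch).sum() / w.sum()
    return out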
Abstract:
Turbulence-transport-chemistry interaction plays a crucial role in the flame surface geometry and the local and global reaction rates, and therefore in the propagation and extinction characteristics of intensely turbulent premixed flames encountered in LPP gas-turbine combustors. The aim of the present work is to understand these interaction effects on the flame surface annihilation and extinction of lean premixed flames interacting with near-isotropic turbulence. As an example case, a lean premixed H2-air mixture is considered so as to enable the inclusion of detailed chemistry effects in Direct Numerical Simulations (DNS). The work is carried out in two phases, namely statistically planar flames and an ignition kernel, both interacting with near-isotropic turbulence, using the recently proposed Flame Particle Tracking (FPT) technique. Flame particles are surface points residing on and co-moving with an iso-scalar surface within a premixed flame. Tracking flame particles allows us to study the evolution of propagating surface locations uniquely identified in time. In this work, using DNS and FPT, we study the flame speed, reaction rate and transport histories of such flame particles residing on iso-scalar surfaces. An analytical expression for the local displacement flame speed (S_d) is derived, and the contributions of transport and chemistry to the displacement flame speed are identified. An examination of the results for the planar case leads to the conclusion that the variation in S_d may be attributed to the effects of turbulent transport and heat release rate. In the second phase of this work, the sustenance of an ignition kernel is examined in light of the S-curve. A newly proposed Damköhler number, accounting for local turbulent transport and reaction rates, is found to explain the sustenance (or otherwise) of propagating flame kernels in near-isotropic turbulence.
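The displacement speed of an iso-surface of a reactive scalar c admits a standard decomposition, common in the premixed-flame DNS literature; the generic form below is shown for orientation and is not necessarily the specific expression derived in the paper:

S_d \;=\; \frac{\dot{\omega}_c \;+\; \nabla \cdot \left( \rho D \, \nabla c \right)}{\rho \, \lvert \nabla c \rvert},

where \dot{\omega}_c is the reaction rate and D the scalar diffusivity, making explicit the chemistry and transport contributions the abstract refers to.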
Abstract:
In the present work, the effect of deformation mode (uniaxial compression, rolling and torsion) on the microstructural heterogeneities in a commercial-purity Ni is reported. For a given equivalent von Mises strain, samples subjected to torsion show a higher fraction of high-angle boundaries, higher kernel average misorientation and more recrystallization nuclei than uniaxially compressed and rolled samples. This is attributed to the differences in slip system activity under the different modes of deformation.
Abstract:
Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of `multipoint' kernels and study their applications. We study a class of kernels based on Jensen-type divergences and show that these can be extended to measure similarity among multiple points. We study tensor flattening methods and develop a multi-point (kernel) spectral clustering (MSC) method. We further focus on a special case of the proposed kernels, a multi-point extension of the linear (dot-product) kernel, and show the existence of a cubic-time tensor flattening algorithm in this case. Finally, we illustrate the usefulness of our contributions using standard data sets and image segmentation tasks.
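To convey the flavour of a Jensen-type multipoint similarity, the sketch below uses the multi-distribution Jensen-Shannon divergence, JS(p_1, ..., p_m) = H(mean of p_i) - mean of H(p_i), and turns it into a similarity via exp(-JS). This is an illustrative construction in the spirit of the abstract; the paper's exact kernels and the MSC tensor machinery are not reproduced here.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def js_divergence(dists):
    P = np.asarray(dists, dtype=float)   # shape (m, d), each row sums to 1
    return entropy(P.mean(axis=0)) - np.mean([entropy(p) for p in P])

def multipoint_similarity(dists):
    # In (0, 1]; equals 1 iff all m distributions coincide.
    return np.exp(-js_divergence(dists))

p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.6, 0.3, 0.1])
p3 = np.array([0.1, 0.2, 0.7])
print(multipoint_similarity([p1, p2, p2]))  # high: near-identical points
print(multipoint_similarity([p1, p2, p3]))  # lower: p3 disagrees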
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' ⊆ F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both these problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Π. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2 we consider the problem when Π requires all sets to be unit squares or lines. When d > 2, we consider the problem where Π requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover which covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting, when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
Abstract:
Desiccated coconut industries (DCI) create various intermediates from fresh coconut kernel for the cosmetic, pharmaceutical and food industries. Mechanized and non-mechanized DCI process between 10,000 and 100,000 nuts/day, discharging 6-150 m^3 of malodorous waste water and a daily chemical oxygen demand (COD) load of 264-6642 kg. In these units, the three main waste water streams are coconut kernel water, kernel wash water and virgin oil waste water. The effluent streams contain lipids (1-55 g/l), suspended solids (6-80 g/l) and volatile fatty acids (VFA) at concentrations that are inhibitory to anaerobic bacteria. Coconut water contributes 20-50% of the total volume and 50-60% of the total organic load, and causes the strongest inhibition of anaerobic bacteria, with an initial lag phase of 30 days. The widely adopted lagooning method of treatment failed to appreciably treat the waste water and often led to the accumulation of volatile fatty acids (propionic acid) along with long-chain unsaturated free fatty acids. Biogas generation during the biological methane potential (BMP) assay required a 15-day adaptation time, and gas production occurred at low concentrations of coconut water, while the other two streams did not appear to be inhibitory. The anaerobic bacteria can mineralize coconut lipids at concentrations of 175 mg/l; however, they are severely inhibited at a lipid level of ≥ 350 mg/g bacterial inoculum. The modified Gompertz model showed a good fit to BMP data with a simple sigmoid pattern; however, it failed to fit experimental BMP data exhibiting a longer lag phase and/or diauxic biogas production, suggesting inhibition of the anaerobic bacteria.
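For reference, the modified Gompertz model commonly used to fit cumulative biogas production in BMP assays is H(t) = P * exp(-exp(Rm * e / P * (lam - t) + 1)), with P the ultimate biogas potential, Rm the maximum production rate and lam the lag phase. The sketch below fits this standard form to synthetic placeholder data, not to the paper's measurements.

import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    # Modified Gompertz model for cumulative biogas production H(t).
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

np.random.seed(0)
t = np.linspace(0, 60, 13)                        # sampling days
H = gompertz(t, P=320.0, Rm=15.0, lam=15.0)       # synthetic 'observations'
H_noisy = H + np.random.normal(0, 3.0, t.size)    # add measurement noise

popt, _ = curve_fit(gompertz, t, H_noisy, p0=[300.0, 10.0, 10.0])
print("P = %.1f, Rm = %.1f, lag = %.1f days" % tuple(popt))

A single-sigmoid model like this cannot capture diauxic (two-stage) gas production, which is consistent with the fitting failure the abstract reports for inhibited samples.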