971 results for Homogeneous Kernels
Abstract:
We develop new techniques to efficiently evaluate heat kernel coefficients for the Laplacian in the short-time expansion on spheres and hyperboloids with conical singularities. We then apply these techniques to explicitly compute the logarithmic contribution to black hole entropy from an N = 4 vector multiplet about a Z_N orbifold of the near-horizon geometry of quarter-BPS black holes in N = 4 supergravity. We find that this contribution vanishes, in perfect agreement with the prediction from microstate counting. We also discuss possible generalisations of our heat kernel results to higher-spin fields over Z_N orbifolds of higher-dimensional spheres and hyperboloids.
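For orientation, the short-time expansion referred to here is the standard heat kernel expansion. On a smooth d-dimensional background without boundary it takes the schematic form below (on the orbifolds studied in the abstract, the coefficients pick up additional contributions from the conical singularities, which is the technical content of the paper):

```latex
K(t) \;\equiv\; \operatorname{Tr} e^{-t\Delta}
\;\sim\; \frac{1}{(4\pi t)^{d/2}} \sum_{n \ge 0} a_n\, t^{n},
\qquad t \to 0^{+}.
```

For even d, the t-independent term (the coefficient a_{d/2}) is the one that controls the logarithmic correction to the entropy.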
Abstract:
The goal of this work is to reduce the cost of computing the coefficients in the Karhunen-Loeve (KL) expansion. The KL expansion serves as a useful and efficient tool for discretizing second-order stochastic processes with known covariance function. Its applications in engineering mechanics include discretizing random field models for elastic moduli, fluid properties, and structural response. The main computational cost of finding the coefficients of this expansion arises from numerically solving an integral eigenvalue problem with the covariance function as the integration kernel; mathematically, this is a homogeneous Fredholm equation of the second kind. One widely used method for solving this integral eigenvalue problem is to discretize the eigenfunctions in finite element (FE) bases, followed by a Galerkin projection. This method is computationally expensive. In the current work it is first shown that the shape of the physical domain of a random field does not affect the realizations of the field estimated using the KL expansion, although the individual KL terms are affected. Based on this domain-independence property, a numerical-integration-based scheme, accompanied by a modification of the domain, is proposed. In addition to presenting mathematical arguments to establish the domain independence, numerical studies are conducted to demonstrate and test the proposed method. It is demonstrated numerically that, compared to the Galerkin method, the computational speed gain of the proposed method is three to four orders of magnitude for a two-dimensional example, and one to two orders of magnitude for a three-dimensional example, while retaining the same level of accuracy. It is also shown that for separable covariance kernels a further cost reduction of three to four orders of magnitude can be achieved. Both normal and lognormal fields are considered in the numerical studies. (c) 2014 Elsevier B.V. All rights reserved.
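To make the integral eigenvalue problem concrete, here is a minimal sketch of the numerical-integration idea on a 1-D domain, using a midpoint rule and an exponential covariance chosen purely for illustration (the paper's scheme additionally modifies the domain, and its covariance models differ; none of the names below come from the paper):

```python
import numpy as np

def kl_eigenpairs(cov, n=200, domain=(0.0, 1.0)):
    """Nystrom-type numerical-integration solve of the homogeneous Fredholm
    equation of the second kind:  int_D C(x, y) phi(y) dy = lambda phi(x).
    Midpoint-rule quadrature with equal weights h = |D| / n."""
    a, b = domain
    h = (b - a) / n
    x = a + (np.arange(n) + 0.5) * h           # quadrature nodes
    C = cov(x[:, None], x[None, :])            # covariance sampled on nodes
    lam, vec = np.linalg.eigh(h * C)           # symmetric since weights are equal
    order = np.argsort(lam)[::-1]              # sort eigenvalues descending
    lam, vec = lam[order], vec[:, order]
    phi = vec / np.sqrt(h)                     # unit-L2-norm eigenfunctions
    return x, lam, phi

# exponential covariance, a standard test kernel (correlation length 0.2)
exp_cov = lambda x, y: np.exp(-np.abs(x - y) / 0.2)
x, lam, phi = kl_eigenpairs(exp_cov)
```

Truncating the expansion after the leading eigenpairs then gives the usual KL representation of the field; the point of the abstract is that replacing the FE/Galerkin discretization with quadrature of this kind is what yields the reported speed gains.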
Abstract:
We carry out an extensive and high-resolution direct numerical simulation of homogeneous, isotropic turbulence in two-dimensional fluid films with air-drag-induced friction and with polymer additives. Our study reveals that the polymers (a) reduce the total fluid energy, enstrophy, and palinstrophy; (b) modify the fluid energy spectrum in both the inverse- and forward-cascade regimes; (c) reduce small-scale intermittency; (d) suppress regions of high vorticity and strain rate; and (e) stretch in strain-dominated regions. We compare our results with earlier experimental studies and propose new experiments.
Abstract:
Coarse Grained Reconfigurable Architectures (CGRAs) are emerging as embedded application processing units in computing platforms for Exascale computing. Such CGRAs are distributed-memory multi-core compute elements on a chip that communicate over a Network-on-Chip (NoC). Numerical Linear Algebra (NLA) kernels are key to several high-performance computing applications. In this paper we propose a systematic methodology to obtain the specification of Compute Elements (CEs) for such CGRAs. We analyze block Matrix Multiplication and block LU Decomposition algorithms in the context of a CGRA, and obtain theoretical bounds on the communication requirements and memory sizes for a CE. Support for the high-performance custom computations common to NLA kernels is provided through custom function units (CFUs) in the CEs. We present results to justify the merits of such CFUs.
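The communication analysis can be illustrated with a toy model of block matrix multiplication. The sketch below (not the paper's methodology; the cost model is a deliberately crude assumption counting one block of A and one of B fetched per inner product step) shows why the block size, and hence the CE memory size, governs NoC traffic:

```python
import numpy as np

def block_matmul(A, B, bs):
    """Blocked C = A @ B. `words` counts the operand words a compute element
    would fetch over the NoC (toy model: one bs x bs block of A and one of B
    per inner iteration)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    words = 0
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
                words += 2 * bs * bs          # one block of A, one of B
    return C, words
```

Under this model the fetch count is 2·bs²·(n/bs)³ = 2n³/bs, so doubling the block size (i.e., the CE's local memory) halves the communication volume, which is the qualitative trade-off such bounds capture.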
Abstract:
Turbulence-transport-chemistry interaction plays a crucial role in the flame surface geometry and the local and global reaction rates, and therefore in the propagation and extinction characteristics of the intensely turbulent, premixed flames encountered in LPP gas-turbine combustors. The aim of the present work is to understand these interaction effects on the flame surface annihilation and extinction of lean premixed flames interacting with near-isotropic turbulence. As an example case, a lean premixed H2-air mixture is considered so as to enable the inclusion of detailed chemistry effects in Direct Numerical Simulations (DNS). The work is carried out in two phases, namely statistically planar flames and an ignition kernel, both interacting with near-isotropic turbulence, using the recently proposed Flame Particle Tracking (FPT) technique. Flame particles are surface points residing on, and co-moving with, an iso-scalar surface within a premixed flame. Tracking flame particles allows us to study the evolution of propagating surface locations uniquely identified in time. In this work, using DNS and FPT, we study the flame speed, reaction rate and transport histories of such flame particles residing on iso-scalar surfaces. An analytical expression for the local displacement flame speed (S_d) is derived, and the contributions of transport and chemistry to the displacement flame speed are identified. An examination of the results of the planar case leads to the conclusion that the variation in S_d may be attributed to the effects of turbulent transport and heat release rate. In the second phase of this work, the sustenance of an ignition kernel is examined in light of the S-curve. A newly proposed Damkohler number, accounting for local turbulent transport and reaction rates, is found to explain whether flame kernels are sustained and propagate in near-isotropic turbulence.
Abstract:
We present the first direct-numerical-simulation study of the statistical properties of two-dimensional superfluid turbulence in the simplified Hall-Vinen-Bekarevich-Khalatnikov two-fluid model. We show that both the normal-fluid and superfluid energy spectra can exhibit two power-law regimes, the first associated with an inverse cascade of energy and the second with the forward cascade of enstrophy. We quantify the mutual-friction-induced alignment of the normal and superfluid velocities by obtaining probability distribution functions of the angle between them and of the ratio of their moduli.
Abstract:
The 3-Hitting Set problem involves a family F of subsets of size at most three over a universe U. The goal is to find a subset of U of the smallest possible size that intersects every set in F. The version of the problem with parity constraints asks for a subset S of size at most k that, in addition to being a hitting set, also satisfies certain parity constraints on the sizes of the intersections of S with each set in the family F. In particular, an odd (even) set is a hitting set that hits every set in either one or three (exactly two) elements, and a perfect code is a hitting set that intersects every set in exactly one element. These questions are of fundamental interest in many contexts for general set systems. Just as for Hitting Set, we find these questions to be interesting for the case of families consisting of sets of size at most three. In this work, we initiate an algorithmic study of these problems in this special case, focusing on a parameterized analysis. For each problem, we give efficient fixed-parameter tractable algorithms using search trees tailor-made to the constraints in question, as well as polynomial kernels using sunflower-like arguments in a manner that accounts for equivalence under the additional parity constraints.
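The three constraint variants can be stated in one line of code. The brute-force checker below is purely illustrative (it is exponential in k, unlike the paper's FPT algorithms) but makes the odd set / even set / perfect code definitions precise:

```python
from itertools import combinations

def parity_hitting_set(universe, family, k, allowed):
    """Brute-force search for a set S with |S| <= k whose intersection size
    with every member of `family` lies in `allowed`.
    allowed={1, 3}: odd set; allowed={2}: even set; allowed={1}: perfect code.
    Exponential in k; for illustration only."""
    for size in range(1, k + 1):
        for S in combinations(universe, size):
            s = set(S)
            if all(len(s & f) in allowed for f in family):
                return s
    return None

family = [{1, 2, 3}, {3, 4, 5}, {1, 4, 6}]
# a perfect code must hit each triple in exactly one element
code = parity_hitting_set(range(1, 7), family, k=3, allowed={1})
```

Note that an odd set, an even set, and a perfect code may all exist for the same family while being different sets, which is what makes each variant a separate algorithmic question.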
Abstract:
Motivated by multi-distribution divergences, which originate in information theory, we propose a notion of 'multipoint' kernels and study their applications. We study a class of kernels based on Jensen-type divergences and show that these can be extended to measure similarity among multiple points. We study tensor flattening methods and develop a multi-point (kernel) spectral clustering (MSC) method. We further focus on a special case of the proposed kernels, a multi-point extension of the linear (dot-product) kernel, and show the existence of a cubic-time tensor flattening algorithm in this case. Finally, we illustrate the usefulness of our contributions using standard data sets and image segmentation tasks.
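As a rough illustration of the Jensen-type idea (not the paper's actual kernel construction; the exponential form below is an assumption made for the sketch), a divergence among m distributions at once yields a similarity of m points at once:

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def js_divergence(dists):
    """Generalized Jensen-Shannon divergence of m distributions:
    H(mean) - mean(H); it is 0 iff all the distributions coincide."""
    P = np.asarray(dists, dtype=float)
    return entropy(P.mean(axis=0)) - float(np.mean([entropy(p) for p in P]))

def multipoint_kernel(dists):
    """Illustrative multipoint similarity: 1 when the m probability vectors
    are identical, decaying as they spread apart."""
    return float(np.exp(-js_divergence(dists)))
```

Evaluating such a kernel on all m-tuples of data points produces an order-m tensor rather than a Gram matrix, which is why tensor flattening enters the proposed spectral clustering method.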
Abstract:
We show here a 2^{Omega(sqrt(d log N))} size lower bound for homogeneous depth-four arithmetic formulas. That is, we give an explicit family of polynomials of degree d in N variables (with N = d^3 in our case) with 0/1 coefficients such that for any representation of a polynomial f in this family of the form f = Sigma_i Prod_j Q_{ij}, where the Q_{ij}'s are homogeneous polynomials (recall that a polynomial is said to be homogeneous if all its monomials have the same degree), it must hold that Sigma_{i,j} (number of monomials of Q_{ij}) >= 2^{Omega(sqrt(d log N))}. The above-mentioned family, which we refer to as the Nisan-Wigderson design-based family of polynomials, is in the complexity class VNP. Our work builds on the recent lower bound results [1], [2], [3], [4], [5] and yields an improved quantitative bound compared to the quasi-polynomial lower bound of [6] and the N^{Omega(log log N)} lower bound in the independent work of [7].
Abstract:
Homogeneous temperature regions are necessary for use in hydrometeorological studies. The regions are often delineated by analysing statistics derived from time series of maximum, minimum or mean temperature, rather than the attributes influencing temperature. This practice cannot yield meaningful regions in data-sparse areas. Further, independent validation of the delineated regions for homogeneity in temperature is not possible, as the temperature records themselves form the basis for arriving at the regions. To address these issues, a two-stage clustering approach is proposed in this study to delineate homogeneous temperature regions. The first stage of the approach involves (1) determining the correlation structure between observed temperature over the study area and possible predictors (large-scale atmospheric variables) influencing the temperature and (2) using the correlation structure as the basis to partition sites in the study area into clusters. The second stage involves analysis of each of the clusters to (1) identify potential predictors (large-scale atmospheric variables) influencing temperature at sites in the cluster and (2) partition the cluster into homogeneous fuzzy temperature regions using the identified potential predictors. Application of the proposed approach to India yielded 28 homogeneous regions that were demonstrated to be effective when compared to an alternative set of 6 regions previously delineated over the study area. Inter-site cross-correlations of monthly maximum and minimum temperatures in the existing regions were found to be weak and negative for several months, which is undesirable. This problem was not found in the regions delineated using the proposed approach. The utility of the proposed regions in arriving at estimates of potential evapotranspiration for ungauged locations in the study area is also demonstrated.
Abstract:
Identification of homogeneous hydrometeorological regions (HMRs) is necessary for various applications. Such regions are delineated by various approaches considering rainfall and temperature as two key variables. In conventional approaches, the formation of regions is based on principal components (PCs), statistics or indices determined from time series of the key variables at monthly and seasonal scales. An issue with the use of PCs for regionalization is that they have to be extracted from contemporaneous records of hydrometeorological variables; the delineated regions may therefore not be effective when the available records span only a limited contemporaneous period. A drawback associated with the use of statistics/indices is that they do not provide an effective representation of the key variables when the records exhibit non-stationarity. Consequently, the resulting regions may not be effective for the desired purpose. To address these issues, a new approach is proposed in this article. The approach considers information extracted from wavelet transformations of the observed multivariate hydrometeorological time series as the basis for regionalization by a global fuzzy c-means clustering procedure. The approach can account for dynamic variability in the time series and its non-stationarity (if any). The effectiveness of the proposed approach in forming HMRs is demonstrated by application to India, as there are no prior attempts to form such regions over the country. Drought severity-area-frequency (SAF) curves are constructed for each of the newly formed regions for use in regional drought analysis, considering the standardized precipitation evapotranspiration index (SPEI) as the drought indicator.
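The clustering step named here, fuzzy c-means, assigns each site a graded membership in every region rather than a hard label. A plain sketch of the algorithm follows (just the standard textbook iteration on generic feature vectors; the paper applies it to wavelet-derived features, which are not reproduced here):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate between updating cluster centers
    (membership-weighted means) and memberships (inverse-distance ratios).
    Returns the n x c membership matrix U and the c x d centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers
```

The fuzziness exponent m > 1 controls how sharply memberships concentrate; sites near a region boundary retain appreciable membership in several regions, which is the property that makes the delineated regions "fuzzy".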
Abstract:
There has been much interest in understanding collective dynamics in networks of brain regions due to their role in behavior and cognitive function. Here we show that a simple, homogeneous system of densely connected oscillators, representing the aggregate activity of local brain regions, can exhibit a rich variety of dynamical patterns emerging via spontaneous breaking of permutation or translational symmetries. Upon removing just a few connections, we observe a striking departure from the mean-field limit in terms of the collective dynamics, which implies that the sparsity of these networks may have very important consequences. Our results suggest that the origins of some of the complicated activity patterns seen in the brain may be understood even with simple connection topologies.
Abstract:
It was demonstrated in earlier work that, by approximating its range kernel using shiftable functions, the nonlinear bilateral filter can be computed using a series of fast convolutions. Previous approaches based on shiftable approximation have, however, been restricted to Gaussian range kernels. In this work, we propose a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series. More specifically, we propose to approximate the Gaussian range kernel of the bilateral filter using a Fourier basis, where the coefficients of the basis are obtained by solving a series of least-squares problems. The coefficients can be efficiently computed using a recursive form of the QR decomposition. By controlling the cardinality of the Fourier basis, we can obtain a good tradeoff between the run-time and the filtering accuracy. In particular, we are able to guarantee subpixel accuracy for the overall filtering, which is not provided by most existing methods for fast bilateral filtering. We present simulation results to demonstrate the speed and accuracy of the proposed algorithm.
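The shiftability trick can be shown in a few lines in 1-D. The sketch below uses plain quadrature to get the cosine coefficients, not the least-squares/QR scheme of the paper, and all names are illustrative; the point is only how cos(w(f_i - f_j)) = cos(w f_i)cos(w f_j) + sin(w f_i)sin(w f_j) turns the nonlinear filter into a fixed number of linear convolutions:

```python
import numpy as np

def fourier_coeffs(sigma_r, T, K, m=4096):
    """Cosine-series coefficients of the Gaussian range kernel on [-T, T],
    computed by simple quadrature (the paper uses least squares instead)."""
    t = np.linspace(-T, T, m)
    g = np.exp(-t**2 / (2 * sigma_r**2))
    w0 = np.pi / T
    dt = t[1] - t[0]
    c = np.array([(0.5 if k == 0 else 1.0) * np.sum(g * np.cos(k * w0 * t)) * dt / T
                  for k in range(K + 1)])
    return c, w0

def bilateral_1d(f, spatial, sigma_r, K=15):
    """Shiftable bilateral filter: expanding the range kernel in a cosine
    basis reduces the whole filter to 2(K+1) linear convolutions."""
    T = np.ptp(f) + 3 * sigma_r            # pad so periodization error is tiny
    c, w0 = fourier_coeffs(sigma_r, T, K)
    num, den = np.zeros_like(f), np.zeros_like(f)
    for k in range(K + 1):
        cs, sn = np.cos(k * w0 * f), np.sin(k * w0 * f)
        num += c[k] * (cs * np.convolve(cs * f, spatial, 'same')
                       + sn * np.convolve(sn * f, spatial, 'same'))
        den += c[k] * (cs * np.convolve(cs, spatial, 'same')
                       + sn * np.convolve(sn, spatial, 'same'))
    return num / den

def bilateral_1d_direct(f, spatial, sigma_r):
    """Brute-force reference implementation, quadratic in the window size."""
    half = len(spatial) // 2
    out = np.empty_like(f)
    for i in range(len(f)):
        num = den = 0.0
        for j in range(max(0, i - half), min(len(f), i + half + 1)):
            w = spatial[half + i - j] * np.exp(-(f[i] - f[j])**2 / (2 * sigma_r**2))
            num += w * f[j]
            den += w
        out[i] = num / den
    return out
```

Increasing K trades run-time for accuracy, mirroring the cardinality-versus-accuracy tradeoff the abstract describes.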
Abstract:
It is known that all the vector bundles of the title can be obtained by holomorphic induction from representations of a certain parabolic group on finite-dimensional inner product spaces. The representations, and the induced bundles, have composition series with irreducible factors. We write down an equivariant constant coefficient differential operator that intertwines the bundle with the direct sum of its irreducible factors. As an application, we show that in the case of the closed unit ball in C^n all homogeneous n-tuples of Cowen-Douglas operators are similar to direct sums of certain basic n-tuples. (c) 2015 Academie des sciences. Published by Elsevier Masson SAS. All rights reserved.
Abstract:
The effects of curvature and wrinkling on the growth of turbulent premixed flame kernels were studied using both two-dimensional OH Planar Laser-Induced Fluorescence (PLIF) and three-dimensional Direct Numerical Simulation (DNS). Comparisons of results between the two approaches showed a high level of agreement, providing confidence in the simplified chemistry treatment employed in the DNS, and indicating that chemistry might have only a limited influence on the evolution of the freely propagating flame. The usefulness of PLIF in providing data over a wide parameter range was illustrated using statistics obtained from both CH4/air and H2/air mixtures, which show markedly different behavior due to their different thermo-diffusive properties. The results demonstrated the combined power of PLIF and DNS for flame investigation: each technique compensates for the weaknesses of the other and reinforces the strengths of both.