884 results for T-Kernel
Abstract:
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including the Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
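The abstract does not spell out the estimator itself. As a rough illustration, the sketch below shows a generic two-sided smoothed-functional gradient estimate with Gaussian perturbations (the q -> 1 limit of the q-Gaussian family discussed above); the objective f, smoothing parameter beta, and sample count are placeholders, not values from the paper.

```python
import numpy as np

def sf_gradient_estimate(f, x, beta=0.1, num_samples=1000, rng=None):
    """Smoothed-functional gradient estimate of a noisy objective f at x.

    Uses standard Gaussian perturbations, i.e. the q -> 1 special case of the
    q-Gaussian smoothing kernels described in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(x)
    grad = np.zeros(d)
    for _ in range(num_samples):
        eta = rng.standard_normal(d)          # smoothing perturbation
        # two-sided form: lower bias than the one-sided estimate
        grad += eta * (f(x + beta * eta) - f(x - beta * eta)) / (2.0 * beta)
    return grad / num_samples

# toy usage: a quadratic objective observed with additive noise
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_f = lambda x: np.sum(x ** 2) + 0.01 * rng.standard_normal()
    print(sf_gradient_estimate(noisy_f, np.array([1.0, -2.0]), rng=rng))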
Abstract:
In Incompressible Smoothed Particle Hydrodynamics (ISPH), a pressure Poisson equation (PPE) is solved to obtain a divergence-free velocity field. When free surfaces are simulated using this method, a Dirichlet boundary condition for pressure has to be applied at the free surface. In existing ISPH methods this is achieved by identifying free-surface particles using a heuristically chosen threshold on a parameter such as the kernel sum, the density, or the divergence of position, and explicitly setting their pressure values. This often leads to clumping of particles near the free surface and to surface particles spraying off during splashes. Moreover, surface pressure gradients in flows where surface tension is important are not captured well by this approach. We propose a more accurate semi-analytical approach to impose Dirichlet boundary conditions on the free surface. We show the efficacy of the proposed algorithm using test cases of the elongation of a droplet and a dam break. We perform two-dimensional simulations of water entry and validate the proposed algorithm against experimental results. Further, a three-dimensional simulation of a droplet splash is shown to compare well with Volume-of-Fluid simulations.
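The PPE referred to above is not written out in the abstract; in a standard projection-based ISPH step it takes, schematically, the form below, where u* is the intermediate (pre-projection) velocity, rho the density, and Delta t the time step. The exact discrete form used in the paper may differ.

```latex
\nabla \cdot \left( \frac{1}{\rho} \nabla p \right) = \frac{\nabla \cdot \mathbf{u}^{*}}{\Delta t},
\qquad p = 0 \ \text{on the free surface (Dirichlet condition)}.
```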
Abstract:
This work is a follow-up to [2, FUN 2010], which initiated a detailed analysis of the popular game of UNO(R). We consider the solitaire version of the game, which was shown to be NP-complete. In [2], the authors also demonstrate an O(n^(c^2)) algorithm, where c is the number of colors across all the cards, which implies, in particular, that the problem is polynomial time when the number of colors is a constant. In this work, we propose a kernelization algorithm, a consequence of which is that the problem is fixed-parameter tractable when the number of colors is treated as a parameter. This removes the exponential dependence on c and answers the question stated in [2] in the affirmative. We also introduce a natural and possibly more challenging version of UNO that we call "All Or None UNO". For this variant, we prove that even the single-player version is NP-complete, and we show a single-exponential FPT algorithm, along with a cubic kernel.
Abstract:
We investigate the parameterized complexity of the following edge coloring problem, motivated by the problem of channel assignment in wireless networks. For an integer q >= 2 and a graph G, the goal is to find a coloring of the edges of G with the maximum number of colors such that every vertex of the graph sees at most q colors. This problem is NP-hard for q >= 2 and has been well studied from the point of view of approximation. Our main focus is the case q = 2, which is already theoretically intricate and practically relevant. We give fixed-parameter tractable algorithms for both the standard and the dual parameterization; for the latter, the result is based on a linear vertex kernel.
Abstract:
We address the parameterized complexity of Max Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^(O(q)) algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #alpha(G))^(O(1)), where #alpha(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^(l+o(l)) n^(O(1)) T_alpha, where T_alpha is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets, for example on split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_alpha is a polynomial in n), since q <= l for all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q >= 2.
Abstract:
We compute the logarithmic correction to black hole entropy about exponentially suppressed saddle points of the Quantum Entropy Function corresponding to Z_N orbifolds of the near-horizon geometry of the extremal black hole under study. By carefully accounting for zero-mode contributions we show that the logarithmic contributions for quarter-BPS black holes in N = 4 supergravity and one-eighth-BPS black holes in N = 8 supergravity perfectly match the prediction from the microstate counting. We also find that the logarithmic contribution for half-BPS black holes in N = 2 supergravity depends non-trivially on the Z_N orbifold. Our analysis draws heavily on the results we previously obtained for heat kernel coefficients on Z_N orbifolds of spheres and hyperboloids in arXiv:1311.6286, and we also propose a generalization of the Plancherel formula on Z_N orbifolds of hyperboloids to an expression involving the Harish-Chandra character of sl(2, R), a result of possible mathematical interest.
Abstract:
The paper presents a multiscale method for crack propagation. The coarse region is modelled by the differential reproducing kernel particle method. Fracture in the coarse-scale region is modelled with the phantom node method. A molecular statics approach is employed in the fine scale, where crack propagation is modelled naturally by the breaking of bonds. The triangular lattice in the fine-scale region corresponds to the lattice structure of the (111) plane of an FCC crystal. The Lennard-Jones potential is used to model the atom-atom interactions. The coupling between the coarse scale and the fine scale is realized through ghost atoms. The ghost atom positions are interpolated from the coarse-scale solution and enforced as boundary conditions on the fine scale. The fine-scale region is adaptively refined and coarsened as the crack propagates. The centrosymmetry parameter is used to detect the crack-tip location. The method is implemented in two dimensions. The results are compared to pure atomistic simulations and show excellent agreement.
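The Lennard-Jones interaction mentioned above has the standard 12-6 pairwise form; the sketch below evaluates the potential and the corresponding radial force, with epsilon and sigma as placeholder well-depth and length-scale parameters (the paper's actual parameter values are not stated in the abstract).

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair potential and radial force.

    V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]
    F(r) = -dV/dr = 24*eps/r * [2*(sigma/r)**12 - (sigma/r)**6]
    """
    sr6 = (sigma / r) ** 6
    sr12 = sr6 ** 2
    potential = 4.0 * epsilon * (sr12 - sr6)
    force = 24.0 * epsilon * (2.0 * sr12 - sr6) / r
    return potential, force

# usage: the potential minimum sits at r = 2**(1/6) * sigma, where the force vanishes
if __name__ == "__main__":
    r_min = 2.0 ** (1.0 / 6.0)
    print(lennard_jones(np.array([1.0, r_min, 2.0])))
```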
Abstract:
In this article we deal with a variation of a theorem of Mauceri concerning the L^p boundedness of operators M which are known to be bounded on L^2. We obtain sufficient conditions on the kernel of the operator M so that it satisfies weighted L^p estimates. As an application we prove the L^p boundedness of Hermite pseudo-multipliers.
Abstract:
We present estimates of the single spin asymmetry in the electroproduction of J/psi, taking into account the transverse-momentum-dependent (TMD) evolution of the gluon Sivers function. We estimate the single spin asymmetry for JLab, HERMES, COMPASS and eRHIC energies using the color evaporation model of J/psi production. We calculate the asymmetry using recent parameters extracted by Echevarria et al. using the Collins-Soper-Sterman approach to TMD evolution. These recent TMD evolution fits are based on an evolution kernel whose perturbative part is resummed up to next-to-leading-logarithmic accuracy. We also estimate the asymmetry using parameters obtained in a fit by Anselmino et al., using both an exact numerical and an approximate analytical solution of the TMD evolution equations. We find that the variation among the different estimates obtained using TMD evolution is much smaller than the difference between these and the estimates obtained using DGLAP evolution. Even though the use of TMD evolution causes an overall reduction in the asymmetries compared to those obtained without it, they remain sizable. Overall, upon use of TMD evolution, predictions for the asymmetries stabilize.
Abstract:
We develop new techniques to efficiently evaluate heat kernel coefficients for the Laplacian in the short-time expansion on spheres and hyperboloids with conical singularities. We then apply these techniques to explicitly compute the logarithmic contribution to black hole entropy from an N = 4 vector multiplet about a Z_N orbifold of the near-horizon geometry of quarter-BPS black holes in N = 4 supergravity. We find that this vanishes, matching perfectly with the prediction from the microstate counting. We also discuss possible generalisations of our heat kernel results to higher-spin fields over Z_N orbifolds of higher-dimensional spheres and hyperboloids.
Abstract:
In this paper we present one of the first high-speed particle image velocimetry measurements to quantify flame-turbulence interaction in centrally-ignited, constant-pressure premixed flames expanding in near-isotropic turbulence. Measurements of the mean flow velocity and the rms of the fluctuating flow velocity are provided over a range of conditions, both in the presence and in the absence of the flame. The distributions of the stretch rate contributions from different terms, such as tangential straining, normal straining and curvature, are also provided. It is found that the normal straining displays non-Gaussian pdf tails, whereas the tangential straining shows near-Gaussian behavior. We have further tracked the motion of the edge points that reside on and co-move with the edge of the flame kernel during its evolution in time, and found that, within the measurement conditions, on average the persistence time scales of stretch due to pure curvature exceed those due to tangential straining by at least a factor of two.
Abstract:
The goal of this work is to reduce the cost of computing the coefficients in the Karhunen-Loeve (KL) expansion. The KL expansion serves as a useful and efficient tool for discretizing second-order stochastic processes with known covariance function. Its applications in engineering mechanics include discretizing random field models for elastic moduli, fluid properties, and structural response. The main computational cost of finding the coefficients of this expansion arises from numerically solving an integral eigenvalue problem with the covariance function as the integration kernel. Mathematically this is a homogeneous Fredholm equation of the second kind. One widely used method for solving this integral eigenvalue problem is to discretize the eigenfunctions in finite element (FE) bases, followed by a Galerkin projection. This method is computationally expensive. In the current work it is first shown that the shape of the physical domain in a random field does not affect the realizations of the field estimated using the KL expansion, although the individual KL terms are affected. Based on this domain independence property, a numerical-integration-based scheme, accompanied by a modification of the domain, is proposed. In addition to presenting mathematical arguments to establish the domain independence, numerical studies are conducted to demonstrate and test the proposed method. Numerically it is demonstrated that, compared to the Galerkin method, the computational speed gain of the proposed method is three to four orders of magnitude for a two-dimensional example, and one to two orders of magnitude for a three-dimensional example, while retaining the same level of accuracy. It is also shown that for separable covariance kernels a further cost reduction of three to four orders of magnitude can be achieved. Both normal and lognormal fields are considered in the numerical studies.
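The abstract contrasts a Galerkin/FE discretization with a numerical-integration-based scheme. The sketch below shows a plain Nystrom-style quadrature discretization of the Fredholm eigenvalue problem for an assumed exponential covariance kernel on [0, 1]; the kernel, correlation length, grid size, and quadrature rule are illustrative, not the paper's.

```python
import numpy as np

def kl_modes_nystrom(cov, n_quad=200, n_modes=10, domain=(0.0, 1.0)):
    """Approximate KL eigenpairs of a covariance kernel by Nystrom quadrature.

    Discretizes the Fredholm integral eigenvalue problem
        int_D cov(x, y) phi(y) dy = lam * phi(x)
    on a uniform grid with equal quadrature weights.
    """
    a, b = domain
    x = np.linspace(a, b, n_quad)
    w = np.full(n_quad, (b - a) / n_quad)        # simple quadrature weights
    K = cov(x[:, None], x[None, :])              # covariance matrix on the grid
    # symmetrize the weighted kernel so a symmetric eigensolver applies
    A = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]
    lam, u = np.linalg.eigh(A)
    idx = np.argsort(lam)[::-1][:n_modes]        # largest eigenvalues first
    eigvals = lam[idx]
    eigfuns = u[:, idx] / np.sqrt(w)[:, None]    # undo the weight scaling
    return x, eigvals, eigfuns

# usage with an illustrative exponential covariance, correlation length 0.2
if __name__ == "__main__":
    cov = lambda x, y: np.exp(-np.abs(x - y) / 0.2)
    x, lam, phi = kl_modes_nystrom(cov)
    print(lam[:5])
```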
Abstract:
Let C be a smooth irreducible projective curve of genus g and L a line bundle of degree d generated by a linear subspace V of H^0(L) of dimension n+1. We prove a conjecture of D. C. Butler on the semistability of the kernel of the evaluation map V ⊗ O_C -> L and obtain new results on the stability of this kernel. The natural context for this problem is the theory of coherent systems on curves, and our techniques involve wall-crossing formulae in this theory.
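As background (standard notation in this area, not quoted from the paper), the kernel in question is the bundle often denoted M_{V,L}, defined by the short exact sequence

```latex
0 \longrightarrow M_{V,L} \longrightarrow V \otimes \mathcal{O}_C \xrightarrow{\ \mathrm{ev}\ } L \longrightarrow 0,
```

and Butler's conjecture concerns the (semi)stability of this kernel bundle M_{V,L}.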
Abstract:
QR decomposition (QRD) is a widely used Numerical Linear Algebra (NLA) kernel with applications ranging from SONAR beamforming to wireless MIMO receivers. In this paper, we propose a novel Givens Rotation (GR) based QRD (GR-QRD) in which we reduce the computational complexity of GR and exploit a higher degree of parallelism. This low-complexity Column-wise GR (CGR) can annihilate multiple elements of a column of a matrix simultaneously. The algorithm is first realized on a two-dimensional (2D) systolic array and then implemented on REDEFINE, a Coarse-Grained run-time Reconfigurable Architecture (CGRA). We benchmark the proposed implementation against state-of-the-art implementations and report better throughput, convergence and scalability.
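The column-wise scheme itself is not spelled out in the abstract; as a baseline, the sketch below implements the standard Givens-rotation QR factorization that such accelerators parallelize (this is the classical element-by-element algorithm, not the paper's CGR variant, and the test matrix is illustrative).

```python
import numpy as np

def givens_qr(A):
    """QR decomposition via classical Givens rotations (baseline, not the CGR variant)."""
    R = A.astype(float)
    m, n = R.shape
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):          # zero column j from the bottom up
            a, b = R[i - 1, j], R[i, j]
            if b == 0.0:
                continue
            r = np.hypot(a, b)
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])    # rotates (a, b) onto (r, 0)
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

if __name__ == "__main__":
    A = np.random.default_rng(0).standard_normal((5, 3))
    Q, R = givens_qr(A)
    print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))
```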
Abstract:
The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing scheduler implementations in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks on all the cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute on any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which in turn affects their runtime. So that hard real-time tasks can be executed in such systems with minimal interference from other Linux tasks, we propose in this paper an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks in these systems. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which concurrently executes real-time and non-real-time tasks in Linux OS on all the cores. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making SchedISA suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
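As context for the partitioned-EDF (P-EDF) policy named above, the sketch below shows a toy first-fit partitioning of implicit-deadline periodic tasks onto cores using the EDF utilization bound U <= 1 per core; the task set and core count are invented for illustration and are unrelated to the paper's SchedISA implementation or its experiments.

```python
def partition_edf_first_fit(tasks, num_cores):
    """First-fit partitioning for P-EDF.

    tasks: list of (wcet, period) pairs for implicit-deadline periodic tasks.
    A task fits on a core if the core's total utilization stays <= 1.0, the
    exact EDF schedulability bound for implicit deadlines on a single core.
    """
    core_util = [0.0] * num_cores
    assignment = [[] for _ in range(num_cores)]
    # decreasing-utilization order tends to pack cores more tightly
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = wcet / period
        for core in range(num_cores):
            if core_util[core] + u <= 1.0 + 1e-12:
                core_util[core] += u
                assignment[core].append((wcet, period))
                break
        else:
            raise ValueError("task set is not schedulable by this heuristic")
    return assignment, core_util

if __name__ == "__main__":
    tasks = [(2, 10), (3, 15), (5, 20), (1, 4), (4, 8)]
    print(partition_edf_first_fit(tasks, num_cores=2))
```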