866 results for Reproducing kernel

Relevance: 10.00%

Publisher:

Abstract:

We investigate the parameterized complexity of the following edge coloring problem motivated by the problem of channel assignment in wireless networks. For an integer q >= 2 and a graph G, the goal is to find a coloring of the edges of G with the maximum number of colors such that every vertex of the graph sees at most q colors. This problem is NP-hard for q >= 2, and has been well-studied from the point of view of approximation. Our main focus is the case when q = 2, which is already theoretically intricate and practically relevant. We show fixed-parameter tractable algorithms for both the standard and the dual parameter, and for the latter problem, the result is based on a linear vertex kernel.
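The q = 2 objective above can be made concrete with a small brute-force sketch (function names are mine, not from the paper; the search is exhaustive, so it is only usable on toy graphs): it tries every edge coloring and returns the maximum number of colors such that every vertex sees at most q of them.

```python
from itertools import product

def sees_at_most_q(edges, coloring, q):
    """Check that every vertex is incident to at most q distinct edge colors."""
    seen = {}
    for (u, v), c in zip(edges, coloring):
        seen.setdefault(u, set()).add(c)
        seen.setdefault(v, set()).add(c)
    return all(len(cols) <= q for cols in seen.values())

def max_colors(edges, q):
    """Brute force: largest number of colors over all valid edge colorings."""
    m = len(edges)
    best = 0
    for coloring in product(range(m), repeat=m):
        if sees_at_most_q(edges, coloring, q):
            best = max(best, len(set(coloring)))
    return best
```

On a path with three edges every coloring with all-distinct colors is valid for q = 2, while on a star the center vertex caps the count at q.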

We address the parameterized complexity of Max Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^{O(q)} algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #alpha(G))^{O(1)}, where #alpha(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^{l+o(l)} n^{O(1)} T_alpha, where T_alpha is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets; for example, split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_alpha is polynomial in n), since q <= l in all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q >= 2.
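To make the quantity #alpha(G) in the first running-time bound tangible, here is a toy brute-force enumeration of maximal independent sets (illustrative only — exponential in the number of vertices, and not the paper's algorithm):

```python
from itertools import combinations

def maximal_independent_sets(adj):
    """Enumerate maximal independent sets of a small graph.

    adj maps each vertex to the set of its neighbors. Scanning sizes from
    large to small, every non-maximal independent set is a proper subset
    of some maximal one already found, so it is skipped.
    """
    nodes = list(adj)
    def independent(S):
        return all(v not in adj[u] for u, v in combinations(S, 2))
    mis = []
    for k in range(len(nodes), -1, -1):
        for S in combinations(nodes, k):
            if independent(S):
                Sset = set(S)
                if not any(Sset < M for M in mis):
                    mis.append(Sset)
    return mis
```

For a triangle there are three maximal independent sets (the singletons); for a three-vertex path there are two.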

We compute the logarithmic correction to black hole entropy about exponentially suppressed saddle points of the Quantum Entropy Function corresponding to Z_N orbifolds of the near-horizon geometry of the extremal black hole under study. By carefully accounting for zero-mode contributions, we show that the logarithmic contributions for quarter-BPS black holes in N = 4 supergravity and one-eighth-BPS black holes in N = 8 supergravity perfectly match the prediction from the microstate counting. We also find that the logarithmic contribution for half-BPS black holes in N = 2 supergravity depends non-trivially on the Z_N orbifold. Our analysis draws heavily on the results we had previously obtained for heat kernel coefficients on Z_N orbifolds of spheres and hyperboloids in arXiv:1311.6286, and we also propose a generalization of the Plancherel formula to Z_N orbifolds of hyperboloids, expressed in terms of the Harish-Chandra character of sl(2, R), a result which is of possible mathematical interest.

Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and from simulations of the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3). Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling models to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performance of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate the predictor variables at the GCM and NCEP grid scales; this yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) compared to the spatial interpolation technique used in earlier studies.
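The PCA-then-regression stage of such a downscaling chain can be sketched with synthetic data (everything below — the scales, coefficients, and noise level — is made up for illustration; this is not the paper's dataset, nor its SVM or FCM variants):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for six large-scale predictors (pressure, temperature,
# geopotential height, humidity, U wind, V wind); scales are illustrative.
n_months = 240
scales = np.array([3.0, 2.5, 2.0, 1.5, 0.1, 0.05])
X = rng.normal(size=(n_months, 6)) * scales
beta_true = np.array([1.5, -0.8, 0.3, 0.2, 0.5, -0.4])  # made-up coefficients
y = X @ beta_true + 0.1 * rng.normal(size=n_months)      # rainfall proxy

# Step 1: PCA via SVD to reduce the predictor dimensionality
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
scores = Xc @ Vt[:k].T          # leading principal-component scores

# Step 2: multiple linear regression (MLR) on the retained components
A = np.column_stack([np.ones(n_months), scores])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta                # downscaled estimate of the station series
```

With most of the predictor variance concentrated in the first four components, the four-component MLR recovers the target series almost exactly.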

In this article we deal with a variation of a theorem of Mauceri concerning the L^p boundedness of operators M which are known to be bounded on L^2. We obtain sufficient conditions on the kernel of the operator M so that it satisfies weighted L^p estimates. As an application we prove the L^p boundedness of Hermite pseudo-multipliers. (C) 2014 Elsevier Inc. All rights reserved.

We present estimates of single spin asymmetry in the electroproduction of J/psi, taking into account the transverse momentum-dependent (TMD) evolution of the gluon Sivers function. We estimate the single spin asymmetry for JLab, HERMES, COMPASS and eRHIC energies using the color evaporation model of J/psi production. We have calculated the asymmetry using recent parameters extracted by Echevarria et al. using the Collins-Soper-Sterman approach to TMD evolution. These recent TMD evolution fits are based on an evolution kernel whose perturbative part is resummed up to next-to-leading logarithmic accuracy. We have also estimated the asymmetry using parameters obtained in a fit by Anselmino et al., using both an exact numerical and an approximate analytical solution of the TMD evolution equations. We find that the variation among the different estimates obtained using TMD evolution is much smaller than the difference between these and the estimates obtained using DGLAP evolution. Even though the use of TMD evolution causes an overall reduction in the asymmetries compared to the ones obtained without it, they remain sizable. Overall, predictions for the asymmetries stabilize upon use of TMD evolution.

We develop new techniques to efficiently evaluate heat kernel coefficients for the Laplacian in the short-time expansion on spheres and hyperboloids with conical singularities. We then apply these techniques to explicitly compute the logarithmic contribution to black hole entropy from an N = 4 vector multiplet about a Z_N orbifold of the near-horizon geometry of quarter-BPS black holes in N = 4 supergravity. We find that this vanishes, matching perfectly with the prediction from the microstate counting. We also discuss possible generalisations of our heat kernel results to higher-spin fields over Z_N orbifolds of higher-dimensional spheres and hyperboloids.
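For orientation, on a smooth compact d-dimensional manifold the short-time expansion in question takes the standard Seeley-DeWitt form (conical singularities of the kind treated above contribute additional localized terms, which is what makes the orbifold computation non-trivial):

```latex
\operatorname{Tr}\, e^{-t\Delta} \;\sim\; \frac{1}{(4\pi t)^{d/2}} \sum_{n \ge 0} a_n\, t^{n}, \qquad t \to 0^{+},
```

with the logarithmic correction to the entropy governed by the t^0 coefficient of this expansion.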

In this paper we present one of the first high-speed particle image velocimetry measurements to quantify flame-turbulence interaction in centrally ignited, constant-pressure premixed flames expanding in near-isotropic turbulence. Measurements of the mean flow velocity and the rms of the fluctuating flow velocity are provided over a range of conditions, both in the presence and in the absence of the flame. The distributions of the stretch-rate contributions from different terms, such as tangential straining, normal straining and curvature, are also provided. It is found that the normal straining displays non-Gaussian pdf tails, whereas the tangential straining shows near-Gaussian behavior. We have further tracked the motion of the edge points that reside on and co-move with the edge of the flame kernel during its evolution in time, and found that, within the measurement conditions, on average the persistence time scales of stretch due to pure curvature exceed those due to tangential straining by at least a factor of two. (C) 2014 The Combustion Institute. Published by Elsevier Inc. All rights reserved.

The goal of this work is to reduce the cost of computing the coefficients in the Karhunen-Loeve (KL) expansion. The KL expansion serves as a useful and efficient tool for discretizing second-order stochastic processes with known covariance function. Its applications in engineering mechanics include discretizing random field models for elastic moduli, fluid properties, and structural response. The main computational cost of finding the coefficients of this expansion arises from numerically solving an integral eigenvalue problem with the covariance function as the integration kernel; mathematically, this is a homogeneous Fredholm integral equation of the second kind. One widely used method for solving this integral eigenvalue problem is to discretize the eigenfunctions in a finite element (FE) basis, followed by a Galerkin projection. This method is computationally expensive. In the current work it is first shown that the shape of the physical domain of a random field does not affect the realizations of the field estimated using the KL expansion, although the individual KL terms are affected. Based on this domain independence property, a numerical-integration-based scheme, accompanied by a modification of the domain, is proposed. In addition to presenting mathematical arguments to establish the domain independence, numerical studies are conducted to demonstrate and test the proposed method. Numerically it is demonstrated that, compared to the Galerkin method, the computational speed gain of the proposed method is three to four orders of magnitude for a two-dimensional example, and one to two orders of magnitude for a three-dimensional example, while retaining the same level of accuracy. It is also shown that for separable covariance kernels a further cost reduction of three to four orders of magnitude can be achieved. Both normal and lognormal fields are considered in the numerical studies. (c) 2014 Elsevier B.V. All rights reserved.
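A minimal sketch of the integral-eigenvalue step using numerical integration (a Nystrom-style discretization with trapezoidal weights, for an exponential covariance on [0,1] — an illustrative kernel and grid, not the paper's FE/Galerkin scheme or its proposed method):

```python
import numpy as np

# Discretize the Fredholm eigenproblem  ∫ C(x, y) φ(y) dy = λ φ(x)
# for the exponential covariance C(x, y) = exp(-|x - y| / ell) on [0, 1].
n = 200
x = np.linspace(0.0, 1.0, n)
ell = 0.5                                  # correlation length (illustrative)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Trapezoidal quadrature weights on [0, 1]
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)

# Symmetrize: eigenvalues of W^{1/2} C W^{1/2} approximate the KL eigenvalues
sw = np.sqrt(w)
B = sw[:, None] * C * sw[None, :]
lam, vec = np.linalg.eigh(B)
lam = lam[::-1]                            # KL eigenvalues, descending
phi = (vec / sw[:, None])[:, ::-1]         # eigenfunctions on the grid,
                                           # normalized so sum(w * phi_i**2) = 1
```

A quick sanity check: the eigenvalues sum to the quadrature of C(x, x) over the domain, which is 1 here.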

Let C be a smooth irreducible projective curve of genus g and L a line bundle of degree d generated by a linear subspace V of H^0(L) of dimension n+1. We prove a conjecture of D. C. Butler on the semistability of the kernel of the evaluation map V ⊗ O_C -> L and obtain new results on the stability of this kernel. The natural context for this problem is the theory of coherent systems on curves, and our techniques involve wall-crossing formulae in this theory.
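For context, the kernel in question — often written M_{V,L} — is the bundle defined by the short exact sequence of the evaluation map:

```latex
0 \longrightarrow M_{V,L} \longrightarrow V \otimes \mathcal{O}_C \xrightarrow{\ \mathrm{ev}\ } L \longrightarrow 0,
```

so that, with dim V = n+1 and deg L = d, the kernel has rank n, degree -d, and slope -d/n; Butler's conjecture concerns the (semi)stability of this bundle.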

QR decomposition (QRD) is a widely used Numerical Linear Algebra (NLA) kernel with applications ranging from SONAR beamforming to wireless MIMO receivers. In this paper, we propose a novel Givens Rotation (GR) based QRD (GR-QRD) in which we reduce the computational complexity of GR and exploit a higher degree of parallelism. This low-complexity Column-wise GR (CGR) can annihilate multiple elements of a column of a matrix simultaneously. The algorithm is first realized on a two-dimensional (2-D) systolic array and then implemented on REDEFINE, a Coarse-Grained run-time Reconfigurable Architecture (CGRA). We benchmark the proposed implementation against state-of-the-art implementations and report better throughput, convergence and scalability.
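A plain sequential Givens-rotation QRD — without the column-wise multi-annihilation or the systolic mapping that are the paper's contribution — might look like this sketch:

```python
import numpy as np

def givens_qr(A):
    """QR decomposition by zeroing subdiagonal entries with Givens rotations.

    Each rotation acts on two adjacent rows to annihilate one entry below
    the diagonal; accumulating the transposed rotations yields Q.
    """
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for j in range(n):
        for i in range(m - 1, j, -1):     # zero column j from the bottom up
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R
```

The result satisfies A = QR with Q orthogonal and R upper triangular, which is the invariant any GR-based variant (including a column-wise one) must preserve.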

The correctness of a hard real-time system depends on its ability to meet all its deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a real-time scheduling class in the scheduler of an operating system (OS). Existing schedulers in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks in all the cores with priorities lower than those of real-time tasks, but the interrupts and softirqs associated with these non-real-time tasks can execute in any core with priorities higher than those of real-time tasks. As a result, the execution overhead of real-time tasks is quite large in these systems, which in turn affects their runtime. So that hard real-time tasks can execute in such systems with minimal interference from other Linux tasks, we propose in this paper an integrated scheduler architecture, called SchedISA, which aims to considerably reduce the execution overhead of real-time tasks. To test the efficacy of the proposed scheduler, we implemented the partitioned earliest deadline first (P-EDF) scheduling algorithm in SchedISA on Linux kernel version 3.8 and conducted experiments on an Intel Core i7 processor with eight logical cores. We compared the execution overhead of real-time tasks in this implementation of SchedISA with that in SCHED_DEADLINE's P-EDF implementation, which concurrently executes real-time and non-real-time tasks in Linux OS on all the cores. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. We believe that, with further refinement of SchedISA, the execution overhead of real-time tasks can be reduced to a predictable maximum, making SchedISA suitable for scheduling hard real-time tasks without affecting the CPU share of Linux tasks.
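To illustrate the EDF policy itself (not SchedISA's kernel integration), here is a toy unit-time simulation of preemptive EDF for implicit-deadline periodic tasks on one core; the function and its structure are mine, not from the paper:

```python
import heapq

def edf_schedulable(tasks, horizon):
    """Simulate preemptive EDF for periodic tasks (deadline == period).

    tasks: list of (wcet, period) pairs. Returns True if no deadline is
    missed up to `horizon` time units. Unit-time steps for clarity.
    """
    ready = []                       # heap of (absolute_deadline, task_id, remaining)
    releases = [0] * len(tasks)
    for t in range(horizon):
        for i, (wcet, period) in enumerate(tasks):
            if t == releases[i]:     # release a new job of task i
                heapq.heappush(ready, (t + period, i, wcet))
                releases[i] += period
        if ready:
            deadline, i, rem = heapq.heappop(ready)
            if t >= deadline:        # running at/after its deadline: miss
                return False
            rem -= 1
            if rem > 0:
                heapq.heappush(ready, (deadline, i, rem))
    # leftover jobs are fine only if their deadlines lie beyond the horizon
    return all(d >= horizon for d, _, _ in ready)
```

A task set with utilization at most 1 is schedulable under EDF on one core, while an overloaded set misses a deadline, which the simulation reproduces.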

We study the problem of finding small s-t separators that induce graphs having certain properties. It is known that finding a minimum clique s-t separator is polynomial-time solvable (Tarjan in Discrete Math. 55:221-232, 1985), while, for example, the problems of finding a minimum s-t separator that induces a connected graph or forms an independent set are fixed-parameter tractable when parameterized by the size of the separator (Marx et al. in ACM Trans. Algorithms 9(4):30, 2013). Motivated by these results, we study properties that generalize cliques, independent sets, and connected graphs, and determine the complexity of finding separators satisfying these properties. We also investigate these problems on bounded-degree graphs. Our results are as follows: finding a minimum c-connected s-t separator is FPT for c = 2 and W[1]-hard for any c >= 3; finding a minimum s-t separator with diameter at most d is W[1]-hard for any d >= 2; finding a minimum r-regular s-t separator is W[1]-hard for any r >= 1; for any decidable graph property, finding a minimum s-t separator with this property is FPT parameterized jointly by the size of the separator and the maximum degree; and, under standard complexity-theoretic assumptions, finding a connected s-t separator of minimum size does not admit a polynomial kernel, even when restricted to graphs of maximum degree at most 3.
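One of the variants above — the smallest s-t separator inducing a connected subgraph — admits a brute-force reference implementation (helper names are mine; the enumeration is exponential, so this is only for tiny graphs, not the paper's FPT algorithms):

```python
from itertools import combinations

def disconnects(adj, removed, s, t):
    """DFS from s avoiding `removed`; True if t becomes unreachable."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in removed and v not in seen:
                seen.add(v)
                stack.append(v)
    return t not in seen

def induces_connected(adj, sep):
    """True if the subgraph induced by `sep` is connected."""
    sep = list(sep)
    if not sep:
        return True
    allowed = set(sep)
    seen, stack = {sep[0]}, [sep[0]]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v in allowed and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(sep)

def min_connected_separator(adj, s, t):
    """Smallest s-t vertex separator that induces a connected subgraph."""
    candidates = [v for v in adj if v not in (s, t)]
    for k in range(len(candidates) + 1):
        for sep in combinations(candidates, k):
            if disconnects(adj, set(sep), s, t) and induces_connected(adj, sep):
                return set(sep)
    return None
```

On a 4-cycle s-a-t-b-s the only minimum separator {a, b} is not connected, so no connected separator exists; adding the edge a-b makes {a, b} a valid answer.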

Hydrogen, either in pure form or as a species in a gaseous fuel mixture, enhances the fuel conversion efficiency and reduces emissions in an internal combustion engine. This is due to the reduction in combustion duration attributed to higher laminar flame speeds. Hydrogen is also expected to increase the engine convective heat flux, attributed (directly or indirectly) to parameters like higher adiabatic flame temperature, laminar flame speed, thermal conductivity and diffusivity, and lower flame quenching distance. These factors (adversely) affect the thermo-kinematic response and offset some of the benefits. The current work addresses the influence of the mixture hydrogen fraction in syngas on the engine energy balance and the thermo-kinematic response for close-to-stoichiometric operating conditions. Four different bio-derived syngas compositions are used, with fuel calorific value varying from 3.14 MJ/kg to 7.55 MJ/kg and air-fuel mixture hydrogen fraction varying from 7.1% to 14.2% by volume. The analysis comprises (a) use of the chemical kinetics simulation package CHEMKIN for quantifying the thermo-physical properties, (b) a 0-D model for engine in-cylinder analysis, and (c) in-cylinder investigations on a two-cylinder engine in open-loop cooling mode for quantifying the thermo-kinematic response and engine energy balance. With the lower adiabatic flame temperature of syngas, the in-cylinder heat transfer analysis suggests that temperature has little effect in terms of increasing the heat flux. For typical engine-like conditions (700 K and 25 bar at a compression ratio of 10), the laminar flame speed of syngas exceeds that of methane (55.5 cm/s) beyond a mixture hydrogen fraction of 11%, attributed to the increase in H-based radicals. This leads to a reduction in the effective Lewis number and laminar flame thickness, potentially inducing flame instability and cellularity.
Use of a thermodynamic model to assess the isolated influence of thermal conductivity and diffusivity on heat flux suggests an increase in the peak heat flux of between 2% and 15% for the lowest (0.420 MW/m^2) and highest (0.480 MW/m^2) hydrogen-containing syngas over methane-fueled operation (0.415 MW/m^2). Experimental investigations indicate that the engine cooling load for syngas-fueled operation is higher by about 7% to 12% compared to methane-fueled operation; the losses are seen to increase with increasing mixture hydrogen fraction. An increase in the gas-to-electricity efficiency from 18% to 24% is observed as the mixture hydrogen fraction increases from 7.1% to 9.5%. A further increase in the mixture hydrogen fraction to 14.2% results in a reduction of the efficiency to 23%, argued to be due to changes in the initial and terminal stages of combustion. On doubling the mixture hydrogen fraction, the flame kernel development and fast burn phase durations decrease by about 7% and 10% respectively, while the terminal combustion duration, corresponding to 90%-98% mass burn, increases by about 23%. This increase in combustion duration arises from the cooling of the near-wall mixture in the boundary layer, attributed to the presence of hydrogen. The enhancement in engine cooling load and subsequent reduction in the brake thermal efficiency with increasing hydrogen fraction is evident from the engine energy balance along with the cumulative heat release profiles. Copyright (C) 2015, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

We present estimates of single spin asymmetry (SSA) in the electroproduction of charmonium, taking into account the transverse momentum dependent (TMD) evolution of the gluon Sivers function and using the Color Evaporation Model of charmonium production. We estimate the SSA for JLab, HERMES, COMPASS and eRHIC energies using recent parameters for the quark Sivers functions, fitted using an evolution kernel whose perturbative part is resummed up to next-to-leading logarithmic accuracy. We find that these SSAs are much smaller than our first estimates obtained using DGLAP evolution, but are comparable to our earlier estimates obtained using TMD evolution with an approximate analytical solution of the TMD evolution equation.