102 results for Dunkl Kernel
Abstract:
A natural class of weighted Bergman spaces on the symmetrized polydisc is isometrically embedded as a subspace of the corresponding weighted Bergman space on the polydisc. We find an orthonormal basis for this subspace, which enables us to compute the kernel function for the weighted Bergman spaces on the symmetrized polydisc using the explicit nature of our embedding. This family of kernel functions includes the Szegő and Bergman kernels on the symmetrized polydisc.
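For orientation (standard facts, not this paper's normalizations): the symmetrized polydisc $\mathbb{G}_n$ is the image of the polydisc $\mathbb{D}^n$ under the symmetrization map built from elementary symmetric polynomials, and the embedding above is taken relative to the product Bergman kernel on $\mathbb{D}^n$:

$$s(z_1,\dots,z_n) = \big(e_1(z),\, e_2(z),\, \dots,\, e_n(z)\big), \qquad K_{\mathbb{D}^n}(z,w) = \prod_{j=1}^{n} \frac{1}{\pi\,(1 - z_j \bar{w}_j)^2},$$

where $e_k$ denotes the $k$-th elementary symmetric polynomial and $K_{\mathbb{D}^n}$ the (unweighted) Bergman kernel of the polydisc.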
Abstract:
The aim of this paper is to obtain certain characterizations of the image of a Sobolev space on the Heisenberg group under the heat kernel transform. We give three types of characterizations for the image of a Sobolev space of positive order $H^m(\mathbb{H}^n)$, $m \in \mathbb{N}^n$, under the heat kernel transform on $\mathbb{H}^n$, using direct sums and direct integrals of Bergman spaces and certain unitary representations of $\mathbb{H}^n$ which can be realized on the Hilbert space of Hilbert–Schmidt operators on $L^2(\mathbb{R}^n)$. We also show that the image of a Sobolev space of negative order $H^{-s}(\mathbb{H}^n)$, $s > 0$, is a direct sum of two weighted Bergman spaces. Finally, we obtain some pointwise estimates for the functions in the image of the Schwartz class on $\mathbb{H}^n$ under the heat kernel transform.
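Schematically (with conventions that may differ from the paper's), the heat kernel transform analytically continues heat-semigroup smoothing:

$$(T_t f)(z) = (f * p_t)(z), \qquad t > 0,$$

where $p_t$ is the heat kernel on $\mathbb{H}^n$, $z$ ranges over the complexification of $\mathbb{H}^n$, and $f * p_t$ admits a holomorphic extension; the Bergman-type spaces referred to above are the natural image spaces of this extension.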
Abstract:
We consider four-dimensional CFTs which admit a large-N expansion and whose spectrum contains states whose conformal dimensions do not scale with N. We explicitly reorganise the partition function, obtained by exponentiating the one-particle partition function of these states, into a heat kernel form for the dual string spectrum on $\mathrm{AdS}_5$. On very general grounds, the heat kernel answer can be expressed in terms of a convolution of the one-particle partition function of the light states in the four-dimensional CFT.
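For bosonic states, "exponentiating the one-particle partition function" is the standard multi-particle formula; a schematic version, ignoring fermion signs and chemical potentials:

$$Z(q) = \exp\!\left( \sum_{n=1}^{\infty} \frac{1}{n}\, Z_1(q^n) \right),$$

with $Z_1$ the one-particle partition function of the light states; the reorganization described above rewrites $\log Z$ in heat kernel form over the dual string spectrum.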
Abstract:
The recent focus of flood frequency analysis (FFA) studies has been on developing methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, since comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data-driven and flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and reliance on the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared with the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel–Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and in extrapolating beyond historical maxima. Selection of the optimal number of bins is found to be critical in modeling with the D-kernel.
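A minimal one-dimensional sketch of the idea behind diffusion-based density estimation: evolve the empirical histogram under the heat equation, with the diffusion time playing the role of the squared bandwidth. The function name, grid, and explicit finite-difference scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def diffusion_kde(samples, grid, t, n_steps=500):
    """Evolve the empirical histogram under the heat equation up to 'time' t.

    Reflecting boundaries keep all probability mass inside the data range,
    which is how diffusion estimators avoid the boundary leakage of a plain
    Gaussian KDE. Illustrative sketch only.
    """
    dx = grid[1] - grid[0]
    # Empirical density: histogram of the samples on the grid.
    f, _ = np.histogram(samples, bins=len(grid),
                        range=(grid[0] - dx / 2, grid[-1] + dx / 2),
                        density=True)
    dt = t / n_steps
    # The explicit scheme is stable only for dt <= dx^2 / 2.
    assert dt <= dx**2 / 2, "refine the grid or increase n_steps"
    for _ in range(n_steps):
        fp = np.pad(f, 1, mode="edge")            # reflecting (Neumann) boundary
        f = f + dt / dx**2 * (fp[2:] - 2 * f + fp[:-2])
    return f

rng = np.random.default_rng(0)
x = rng.gamma(2.0, size=200)                      # skewed, bounded-below sample
grid = np.linspace(0, x.max(), 400)
density = diffusion_kde(x, grid, t=0.01)          # t ~ squared bandwidth
```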
Abstract:
An important question in kernel regression is that of estimating the order and bandwidth parameters from available noisy data. We propose to solve this problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observation model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine-structure preservation are achieved simultaneously. On the application side, we consider the problem of image restoration from uniform/non-uniform data and show that the SURE approach to spatially adaptive kernel regression yields better-quality estimates than its spatially non-adaptive counterparts. The denoising results obtained are comparable to those of other state-of-the-art techniques, and in some scenarios superior.
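A minimal sketch of SURE-based bandwidth selection for a kernel (Nadaraya–Watson) smoother. The paper adapts both order and bandwidth pointwise; this illustration selects a single global bandwidth, and all names are hypothetical:

```python
import numpy as np

def nw_smoother_matrix(x, h):
    # Gaussian kernel weights, row-normalized: the smoother is linear, y_hat = S @ y.
    d2 = (x[:, None] - x[None, :]) ** 2
    W = np.exp(-d2 / (2 * h**2))
    return W / W.sum(axis=1, keepdims=True)

def sure(y, S, sigma2):
    # Stein's unbiased estimate of the per-sample MSE risk of the linear smoother S.
    resid = y - S @ y
    n = len(y)
    return resid @ resid / n - sigma2 + 2 * sigma2 * np.trace(S) / n

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(4 * np.pi * x) + 0.2 * rng.standard_normal(200)
sigma2 = 0.04                                   # assumed known noise variance
hs = np.logspace(-2.5, -0.5, 30)
best_h = min(hs, key=lambda h: sure(y, nw_smoother_matrix(x, h), sigma2))
```

The key property exploited is that, for a linear smoother with fixed weights, SURE is computable from the data alone, so minimizing it over the bandwidth grid needs no ground truth.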
Abstract:
Regionalization approaches are widely used in water resources engineering to identify hydrologically homogeneous groups of watersheds, referred to as regions. Pooled information from sites (depicting watersheds) in a region forms the basis for estimating quantiles associated with hydrological extreme events at ungauged or sparsely gauged sites in the region. Conventional regionalization approaches are effective when watersheds (data points) corresponding to different regions can be separated by straight lines or linear planes in the space of watershed-related attributes. In this paper, a kernel-based fuzzy c-means (KFCM) clustering approach is presented for situations where such linear separation of regions cannot be accomplished. The approach uses kernel functions to map the data points from the attribute space to a higher-dimensional space where they can be separated into regions by linear planes. A procedure to determine the optimal number of regions with the KFCM approach is suggested, and formulations to estimate flood quantiles at ungauged sites are developed. The effectiveness of the approach is demonstrated through Monte Carlo simulation experiments and a case study on watersheds in the United States. Comparison of results with those based on conventional fuzzy c-means clustering, the region-of-influence approach, and a prior study indicates that the KFCM approach outperforms the other approaches in forming regions that are closer to being statistically homogeneous and in estimating flood quantiles at ungauged sites.
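A minimal sketch of kernel fuzzy c-means with a Gaussian kernel and prototypes kept in the input space, one common KFCM variant; the paper's exact formulation, attributes, and validity measures may differ:

```python
import numpy as np

def kfcm(X, c, m=2.0, sigma=1.0, n_iter=100, seed=0):
    """Kernel fuzzy c-means (Gaussian kernel, input-space prototypes).

    Kernel-induced squared distance: d2(x, v) = K(x,x) - 2K(x,v) + K(v,v)
    = 2(1 - K(x,v)) for a Gaussian kernel. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)]      # initial prototypes
    for _ in range(n_iter):
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
                   / (2 * sigma**2))                      # (n, c) kernel values
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)           # kernel distances
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # fuzzy memberships
        W = (U ** m) * K                                  # prototype update weights
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
U, V = kfcm(X, c=2)
labels = U.argmax(axis=1)     # hard assignment of watersheds to regions
```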
Coconut kernel-derived activated carbon as electrode material for electrical double-layer capacitors
Abstract:
Carbonization of milk-free coconut kernel pulp is carried out at low temperatures. The carbon samples are activated using KOH, and their electrical double-layer capacitor (EDLC) properties are studied. Among the several samples prepared, the activated carbon prepared at 600 °C has a large surface area (1,200 m² g⁻¹); surface area decreases with increasing preparation temperature. Cyclic voltammetry and galvanostatic charge-discharge studies suggest that activated carbons derived from coconut kernel pulp are appropriate materials for EDLCs in acidic, alkaline, and non-aqueous electrolytes. A specific capacitance of 173 F g⁻¹ is obtained in 1 M H₂SO₄ electrolyte for the activated carbon prepared at 600 °C, whose supercapacitor properties are superior to those of the samples prepared at higher temperatures.
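For orientation, the specific capacitance quoted above is conventionally extracted from a galvanostatic discharge as $C_{sp} = I\,\Delta t / (m\,\Delta V)$. A toy check with hypothetical numbers, not data from this paper:

```python
# Hypothetical galvanostatic-discharge numbers, chosen only to illustrate
# the conventional formula C_sp = I * dt / (m * dV); not the paper's data.
current_A = 1.0      # discharge current (A)
dt_s = 173.0         # discharge time (s)
mass_g = 1.0         # active-material mass (g)
dV_V = 1.0           # voltage window (V)
C_sp = current_A * dt_s / (mass_g * dV_V)
print(C_sp, "F/g")   # 173.0 F/g -- the order of magnitude reported above
```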
Abstract:
Support vector machines (SVMs) are a popular class of supervised models in machine learning, but the associated compute-intensive learning algorithm limits their use in real-time applications. This paper presents a fully scalable coprocessor architecture that can compute multiple rows of the kernel matrix in parallel. Further, we propose an extended variant of the popular decomposition technique sequential minimal optimization (SMO), which we call the hybrid working set (HWS) algorithm, to effectively exploit both cached kernel columns and the parallel computational power of the coprocessor. The coprocessor is implemented on the Xilinx Virtex-7 FPGA-based VC707 board and achieves a speedup of up to 25× for kernel computation over single-threaded computation on an Intel Core i5. An application speedup of up to 15× over a software implementation of LIBSVM, and of up to 23× over SVMLight, is achieved using the HWS algorithm in conjunction with the coprocessor. The reduction in the number of iterations and the sensitivity of the optimization time to variation in cache size under the HWS algorithm are also shown.
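As a software analogue of the operation the coprocessor parallelizes, here is a sketch of batched RBF kernel-row computation; a working-set solver requests the rows for its current working set each iteration and caches them. The names are illustrative, not the paper's API:

```python
import numpy as np

def kernel_rows(X, idx, gamma=0.5):
    """Compute |idx| rows of the RBF kernel matrix in one batch.

    Batching rows is what the coprocessor evaluates in parallel; reusing
    previously computed rows/columns from a cache is what a working-set
    scheme like HWS is designed to exploit. Illustrative sketch only.
    """
    d2 = ((X[idx][:, None, :] - X[None, :, :]) ** 2).sum(-1)   # (|idx|, n)
    return np.exp(-gamma * d2)

X = np.random.default_rng(0).standard_normal((1000, 16))
rows = kernel_rows(X, [3, 7, 42])    # kernel rows for the current working set
```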
Abstract:
Helmke et al. have recently given a formula for the number of reachable pairs of matrices over a finite field. We give a new and elementary proof of the same formula by solving the equivalent problem of determining the number of so-called zero kernel pairs over a finite field. We show that the problem is equivalent to certain other enumeration problems and outline a connection with some recent results of Guo and Yang on the natural density of rectangular unimodular matrices over $\mathbb{F}_q[x]$. We also propose a new conjecture on the density of unimodular matrix polynomials.
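For context, a standard definition (not this paper's contribution): a pair $(A,B) \in \mathbb{F}_q^{n\times n} \times \mathbb{F}_q^{n\times m}$ is reachable precisely when its Krylov (controllability) matrix has full rank,

$$\operatorname{rank}\begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix} = n,$$

and the zero kernel pairs mentioned above furnish an equivalent count of the same objects.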
Abstract:
The recently discovered twist phase is studied in the context of the full ten-parameter family of partially coherent general anisotropic Gaussian Schell-model beams. It is shown that the nonnegativity requirement on the cross-spectral density of the beam demands that the strength of the twist phase be bounded from above by the inverse of the transverse coherence area of the beam. The twist phase as a two-point function is shown to have the structure of the generalized Huygens kernel or Green's function of a first-order system, and the ray-transfer matrix of this system is exhibited. A Wolf-type coherent-mode decomposition of the twist phase is carried out. Imposition of the twist phase on an otherwise untwisted beam is shown to result in a linear transformation in the ray phase space of the Wigner distribution. Though this transformation preserves the four-dimensional phase-space volume, it is not symplectic; hence, when impressed on a Wigner distribution, it can push the distribution out of the convex set of all bona fide Wigner distributions unless the original distribution lay sufficiently deep in the interior of the set.
Abstract:
This study investigates the potential of a Relevance Vector Machine (RVM)-based approach to predict the ultimate capacity of laterally loaded piles in clay. The RVM is a sparse, approximate Bayesian kernel method that can be seen as a probabilistic version of the support vector machine. It provides much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The RVM model outperforms the two other models considered on root-mean-square error (RMSE) and mean absolute error (MAE) criteria, and it also estimates the prediction variance. The results presented in this paper clearly highlight that the RVM is a robust tool for predicting the ultimate capacity of laterally loaded piles in clay.
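A minimal sketch of sparse Bayesian (relevance vector) regression with Tipping-style evidence updates; the function names, basis choice, and stopping rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rvm_regression(Phi, y, n_iter=200, tol=1e-6):
    """Sparse Bayesian regression over a kernel basis (illustrative sketch).

    Phi: (N, M) design matrix of kernel basis functions. Returns posterior
    mean weights mu, per-weight precisions alpha (large alpha -> pruned
    basis function), and noise precision beta.
    """
    N, M = Phi.shape
    alpha = np.ones(M)
    beta = 1.0 / np.var(y)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ y
        gamma = 1.0 - alpha * np.diag(Sigma)       # "well-determined" measure
        alpha_new = gamma / np.maximum(mu**2, 1e-12)
        beta = (N - gamma.sum()) / np.sum((y - Phi @ mu) ** 2)
        if np.max(np.abs(alpha_new - alpha)) < tol:
            alpha = alpha_new
            break
        alpha = alpha_new
    return mu, alpha, beta

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 80)
y = np.sinc(x) + 0.1 * rng.standard_normal(80)
Phi = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2.0)   # RBF kernel basis
mu, alpha, beta = rvm_regression(Phi, y)
relevance = np.flatnonzero(alpha < 1e3)               # surviving "relevance vectors"
```

The prediction variance the abstract mentions corresponds, in this formulation, to the posterior predictive variance $\sigma_*^2 = 1/\beta + \phi_*^\top \Sigma\, \phi_*$.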
Abstract:
Some properties of the eigenvalues of the integral operator $K_\tau$ defined by $K_\tau f(x) = \int_0^\tau K(x - y)\, f(y)\, dy$, under certain assumptions on the kernel $K(x)$, were studied by Vittal Rao [1]. In this paper the eigenfunctions of the operator $K_\tau$ are shown to be continuous functions of $\tau$ under certain circumstances. The results of Vittal Rao and the continuity of the eigenfunctions are also shown to hold for a larger class of kernels.
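A quick numerical way to see the dependence on $\tau$ is to discretize $K_\tau$ by the Nyström (quadrature) method and track its spectrum as $\tau$ varies; the example kernel and grid below are illustrative assumptions:

```python
import numpy as np

def eigs_of_Ktau(K, tau, n=400):
    """Nystrom discretization of K_tau f(x) = int_0^tau K(x - y) f(y) dy."""
    x = (np.arange(n) + 0.5) * tau / n              # midpoint-rule nodes
    A = K(x[:, None] - x[None, :]) * (tau / n)      # quadrature matrix
    vals, vecs = np.linalg.eigh(A)                  # K is even here, so A is symmetric
    return vals[::-1], vecs[:, ::-1], x             # sorted descending

K = lambda u: np.exp(-np.abs(u))                    # example difference kernel
for tau in (1.0, 1.01, 1.05):
    vals, _, _ = eigs_of_Ktau(K, tau)
    print(tau, vals[:3])      # leading eigenvalues vary smoothly with tau
```

For a kernel that is not even, the discretized matrix is not symmetric and `np.linalg.eig` should be used instead.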
Abstract:
The field bean (Dolichos lablab; Tamil name, Mochai; Kanarese, Avarai) is a legume that is widely cultivated in South India, often as a mixed crop with cereals. The kernel of the seed enters into the diet of many South Indian households, and in Mysore State the green seeds are eaten as a delicacy for over four months of the year. The haulm, husk, and pods are commonly used as fodder. As the kernel, which is widely used as an article of food and considered very nutritious, contains about 24% protein that has hitherto been uninvestigated, and as the quality of protein plays an important role in nutrition, the present work was undertaken.
Abstract:
Emerging embedded applications are based on evolving standards (e.g., MPEG2/4, H.264/265, IEEE 802.11a/b/g/n). Since most of these applications run on handheld devices, there is an increasing need for a single-chip solution that can dynamically interoperate between different standards and their derivatives. In order to achieve high resource utilization and low power dissipation, we propose REDEFINE, a polymorphic ASIC in which specialized hardware units are replaced with basic hardware units that can create the same functionality by runtime re-composition. It is a "future-proof" custom hardware solution for multiple applications and their derivatives in a domain. In this article, we describe a compiler framework and supporting hardware comprising compute, storage, and communication resources. Applications described in a high-level language (e.g., C) are compiled into application substructures. For each application substructure, a set of compute elements (CEs) on the hardware is interconnected at runtime to form a pattern that closely matches the communication pattern of that particular application. The advantage is that the bound CEs are neither processor cores nor logic elements as in FPGAs. Hence, REDEFINE offers the power and performance advantage of an ASIC together with the hardware reconfigurability and programmability of an FPGA or instruction-set processor. In addition, the hardware supports custom instruction pipelining. Existing instruction-set-extensible processors determine a sequence of instructions that repeatedly occurs within the application and create custom instructions at design time to speed up the execution of this sequence. We extend this scheme further: a kernel is compiled into custom instructions whose constituents bear a strong producer-consumer relationship (and are not limited to frequently occurring sequences of instructions). Custom instructions, realized as hardware compositions effected at runtime, allow several instances of the same custom instruction to be active in parallel. A key distinguishing factor in the majority of emerging embedded applications is stream processing; to reduce the overhead of data transfer between custom instructions, direct communication paths are employed among them. In this article, we present an overview of the hardware-aware compiler framework, which determines the NoC-aware schedule of transports of the data exchanged between the custom instructions on the interconnect. The results for the FFT kernel indicate a 25% reduction in the number of loads/stores, and throughput improves by log(n) for an n-point FFT compared with a sequential implementation. Overall, REDEFINE offers flexibility and runtime reconfigurability at the expense of 1.16× the power and 8× the area of an ASIC, while consuming 0.1× the power of an FPGA implementation, whose configuration overhead is 1,000× that of REDEFINE.
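To make the producer-consumer structure concrete, here is a plain software radix-2 FFT; each butterfly below (a complex multiply feeding an add/subtract pair) is the kind of short dataflow chain that could be composed into a custom instruction at runtime. This is an illustrative reference implementation, not REDEFINE's compiler output or IR:

```python
import numpy as np

def bit_reverse(a):
    n = len(a)
    bits = n.bit_length() - 1
    idx = [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]
    return a[idx]

def fft(a):
    """Iterative radix-2 Cooley-Tukey FFT (decimation in time)."""
    a = bit_reverse(np.asarray(a, dtype=complex))
    n = len(a)
    size = 2
    while size <= n:
        w = np.exp(-2j * np.pi / size)
        for start in range(0, n, size):
            wk = 1.0
            for k in range(size // 2):
                # One butterfly: a multiply feeding an add and a subtract --
                # the producer-consumer chain a runtime composition would fuse.
                u = a[start + k]
                t = wk * a[start + k + size // 2]
                a[start + k] = u + t
                a[start + k + size // 2] = u - t
                wk *= w
        size *= 2
    return a

x = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])
assert np.allclose(fft(x), np.fft.fft(x))
```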
Abstract:
We derive a very general expression for the survival probability and the first-passage time distribution of a particle executing Brownian motion in full phase space, with an absorbing boundary condition at a point in position space, valid irrespective of the statistical nature of the dynamics. The expression, together with Jensen's inequality, naturally leads to a lower bound on the actual survival probability and an approximate first-passage time distribution. These are expressed in terms of the position-position, velocity-velocity, and position-velocity variances, knowledge of which enables one to compute a lower bound on the survival probability and consequently the first-passage distribution. As examples, we compute these for a Gaussian Markovian process and, in the non-Markovian case, for an exponentially decaying friction kernel and for a power-law friction kernel. Our analysis shows that the survival probability decays exponentially at long times, irrespective of the nature of the dynamics, with an exponent equal to the transition-state rate constant.
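Schematically (exact normalizations and the averaged quantity follow the paper), the objects involved are

$$F(t) = -\,\frac{\mathrm{d}S(t)}{\mathrm{d}t}, \qquad \big\langle e^{X} \big\rangle \ge e^{\langle X \rangle},$$

where $S(t)$ is the survival probability, $F(t)$ the first-passage time density, and Jensen's inequality, applied to the averaged expression for $S(t)$, yields the variance-based lower bound; at long times $S(t) \sim e^{-k_{\mathrm{TST}}\, t}$ with $k_{\mathrm{TST}}$ the transition-state rate constant.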