884 results for Dunkl Kernel
Abstract:
The ultrasonic degradation of poly(vinyl acetate) was carried out in six different solvents and two solvent mixtures. The evolution of the molecular weight distribution (MWD) with time was determined by gel permeation chromatography. The observed MWDs were analyzed by continuous distribution kinetics, using a stoichiometric kernel that accounts for preferential mid-point breakage of the polymer chains. The degradation rate coefficient of the polymer in each solvent was determined from the model. The variation of the rate coefficient was correlated with the vapor pressure of the solvent, the Flory–Huggins polymer–solvent interaction parameter, and the kinematic viscosity of the solution. A lower saturation vapor pressure resulted in higher degradation rates of the polymer, and the degradation rate increased with increasing kinematic viscosity.
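For context, the continuous-distribution-kinetics framework typically takes the following form (a hedged sketch of the standard equations, not necessarily this paper's exact model): the molar distribution $p(x,t)$ of chains of molecular weight $x$ evolves by the breakage population balance

$$\frac{\partial p(x,t)}{\partial t} = -k(x)\, p(x,t) + 2 \int_{x}^{\infty} k(x')\, \Omega(x, x')\, p(x', t)\, dx',$$

where $k(x)$ is the degradation rate coefficient and $\Omega(x,x')$ is the stoichiometric kernel. Preferential mid-point scission is commonly modeled by a Gaussian kernel centred at half the parent chain length,

$$\Omega(x, x') = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left[-\frac{(x - x'/2)^2}{2\sigma^2}\right].$$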
Abstract:
MATLAB is an array language, initially popular for rapid prototyping but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism; they also have control-flow-dominated scalar regions that affect the program's execution time. Today's computer systems combine tremendous computing power in traditional CPU cores with throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the two devices happens synergistically and the amount of data transfer needed is minimized; a sketch of such cost-based assignment appears below. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation on a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X over native MATLAB execution for data-parallel benchmarks.
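The abstract describes the mapping heuristic only at a high level; the sketch below is a hypothetical illustration (not MEGHA's actual algorithm) of the kind of greedy, cost-based device assignment described: each kernel is placed where its estimated execution time plus the data-transfer cost induced by its dependencies is smallest. All names (Kernel, assign_devices, transfer_cost) are ours.

```python
# Illustrative greedy kernel-to-device mapping (hypothetical, not MEGHA's heuristic).
from dataclasses import dataclass, field

@dataclass
class Kernel:
    name: str
    cpu_time: float              # estimated execution time on CPU (ms)
    gpu_time: float              # estimated execution time on GPU (ms)
    deps: list = field(default_factory=list)  # kernels whose outputs this one reads

def assign_devices(kernels, transfer_cost=0.5):
    """Greedily place each kernel where (exec time + induced transfer cost) is smallest."""
    placement = {}
    for k in kernels:  # assume kernels arrive in topological order
        # A transfer is needed for every dependency placed on the other device.
        cpu_transfers = sum(1 for d in k.deps if placement.get(d.name) == "gpu")
        gpu_transfers = sum(1 for d in k.deps if placement.get(d.name) == "cpu")
        cpu_cost = k.cpu_time + transfer_cost * cpu_transfers
        gpu_cost = k.gpu_time + transfer_cost * gpu_transfers
        placement[k.name] = "cpu" if cpu_cost <= gpu_cost else "gpu"
    return placement

a = Kernel("a", cpu_time=5.0, gpu_time=1.0)
b = Kernel("b", cpu_time=1.0, gpu_time=4.0, deps=[a])
print(assign_devices([a, b]))   # {'a': 'gpu', 'b': 'cpu'}
```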
Abstract:
This paper addresses the problem of maximum-margin classification given the moments of the class conditional densities and the allowed false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second-order cone programming (SOCP) problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. The resulting classifiers are applied to the classification of unbalanced datasets with asymmetric misclassification costs. Experimental results on benchmark datasets show the efficacy of the proposed method.
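A worked form of the key step, in the style of minimax-probability-machine formulations (the paper's notation and constants may differ): if a class has mean $\mu$ and covariance $\Sigma$, and its misclassification probability must not exceed $\eta$, the multivariate Chebyshev inequality turns the probabilistic constraint into the second-order cone constraint

$$w^{\top}\mu - b \;\ge\; \kappa(\eta)\,\sqrt{w^{\top}\Sigma\, w}, \qquad \kappa(\eta) = \sqrt{\frac{1-\eta}{\eta}},$$

with the analogous reversed constraint for the other class using its moments and its allowed error rate. Maximizing the margin subject to these cone constraints yields the SOCP, whose dual is the ellipsoid-distance problem mentioned above.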
Optimised form of acceleration correction algorithm within SPH-based simulations of impact mechanics
Abstract:
In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the original acceleration correction method is first set up. This is followed by an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail.
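Stated in symbols (notation ours, as a hedged paraphrase of the procedure described above): if $a_h(\mathbf{x})$ denotes the corrected acceleration computed with correction-kernel support size $h$, the optimum support at each point is

$$h^*(\mathbf{x}) = \arg\min_{h}\; \mathrm{TV}_{\mathcal{N}(\mathbf{x})}\!\big[\,a_h - \langle a_h \rangle_{\mathcal{N}(\mathbf{x})}\big],$$

where $\mathrm{TV}_{\mathcal{N}(\mathbf{x})}$ is the total variation over a local neighbourhood $\mathcal{N}(\mathbf{x})$ and $\langle a_h \rangle_{\mathcal{N}(\mathbf{x})}$ the corresponding local mean, so the support size adapts spatially to the computed response.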
Abstract:
Long-distance dispersal (LDD) events, although rare for most plant species, can strongly influence population and community dynamics. Animals function as a key biotic vector of seeds; thus, a mechanistic and quantitative understanding of how individual animal behaviors scale up to dispersal patterns at different spatial scales is a question of critical importance from both basic and applied perspectives. Using a diffusion-theory based analytical approach for a wide range of animal movement and seed transportation patterns, we show that the scale (a measure of local dispersal) of the seed dispersal kernel increases with the organism's rate of movement and mean seed retention time. We reveal that variation in seed retention time is a key determinant of various measures of LDD, such as the kurtosis (or shape) of the kernel, the thickness of its tails, and the absolute number of seeds falling beyond a threshold distance. Using empirical data sets of frugivores, we illustrate the importance of variability in retention times for predicting the key disperser species that influence LDD. Our study makes testable predictions linking animal movement behaviors and gut retention times to dispersal patterns and, more generally, highlights the potential importance of animal behavioral variability for the LDD of seeds.
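To make the diffusion-theory construction concrete (a standard form consistent with the abstract, not the paper's exact expressions): if an animal disperses diffusively in two dimensions with movement rate (diffusion coefficient) $D$ and drops a seed after a retention time $t$ drawn from a density $f(t)$, the resulting seed dispersal kernel is the retention-time mixture of Gaussians

$$K(r) = \int_0^{\infty} \frac{1}{4\pi D t}\, \exp\!\left(-\frac{r^2}{4 D t}\right) f(t)\, dt,$$

whose scale grows with both $D$ and the mean retention time, while greater variability in $f(t)$ fattens the tail of $K(r)$ and raises its kurtosis.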
Abstract:
The characteristic function of a contraction is a classical complete unitary invariant devised by Sz.-Nagy and Foias. Just as a contraction is related to the Szegő kernel $k_S(z,w) = (1 - z\bar{w})^{-1}$ for $|z|, |w| < 1$ by means of $(1/k_S)(T, T^*) \ge 0$, we consider an arbitrary open connected domain $\Omega$ in $\mathbb{C}^n$, a kernel $k$ on $\Omega$ such that $1/k$ is a polynomial, and a tuple $T = (T_1, T_2, \ldots, T_n)$ of commuting bounded operators on a complex separable Hilbert space $\mathcal{H}$ such that $(1/k)(T, T^*) \ge 0$. Under some standard assumptions on $k$, it turns out that whether a characteristic function can be associated with $T$ or not depends not only on $T$ but also on the kernel $k$. We give a necessary and sufficient condition. When this condition is satisfied, a functional model can be constructed. Moreover, the characteristic function is then a complete unitary invariant for a suitable class of tuples $T$.
Abstract:
In this note, we show that a quasi-free Hilbert module $R$ defined over the polydisk algebra with kernel function $k(z,w)$ admits a unique minimal dilation (actually an isometric co-extension) to the Hardy module over the polydisk if and only if $S^{-1}(z,w)\,k(z,w)$ is a positive kernel function, where $S(z,w)$ is the Szegő kernel for the polydisk. Moreover, we establish the equivalence of such a factorization of the kernel function and a positivity condition, defined using the hereditary functional calculus, which was introduced earlier by Athavale [8] and Ambrozie, Englis and Muller [2]. An explicit realization of the dilation space is given, along with the isometric embedding of the module $R$ in it. The proof works for a wider class of Hilbert modules in which the Hardy module is replaced by more general quasi-free Hilbert modules such as the classical spaces on the polydisk or the unit ball in $\mathbb{C}^m$. Some consequences of this more general result are then explored in the case of several natural function algebras.
Abstract:
For a contraction $P$ and a bounded commutant $S$ of $P$, we seek a solution $X$ of the operator equation $S - S^*P = (1 - P^*P)^{1/2}\, X\, (1 - P^*P)^{1/2}$, where $X$ is a bounded operator on $\overline{\operatorname{Ran}}\,(1 - P^*P)^{1/2}$ with numerical radius not greater than 1. A pair of commuting bounded operators $(S, P)$ that has the symmetrized bidisc $\Gamma = \{(z_1 + z_2,\, z_1 z_2) : |z_1| \le 1,\, |z_2| \le 1\} \subseteq \mathbb{C}^2$ as a spectral set is called a $\Gamma$-contraction in the literature. We show the existence and uniqueness of a solution to the operator equation above for a $\Gamma$-contraction $(S, P)$. This allows us to construct an explicit $\Gamma$-isometric dilation of a $\Gamma$-contraction $(S, P)$. We also prove the converse: for a commuting pair $(S, P)$ with $\|P\| \le 1$ and the spectral radius of $S$ not greater than 2, the existence of a solution to the above equation implies that $(S, P)$ is a $\Gamma$-contraction. We show that for a pure $\Gamma$-contraction $(S, P)$ there is a bounded operator $C$ with numerical radius not greater than 1 such that $S = C + C^*P$. Any $\Gamma$-isometry can be written in this form, where $P$ now is an isometry commuting with $C$ and $C^*$. Any $\Gamma$-unitary is of this form as well, with $P$ and $C$ being commuting unitaries. Examples of $\Gamma$-contractions on reproducing kernel Hilbert spaces and their $\Gamma$-isometric dilations are discussed.
Abstract:
We solve the wave equations of arbitrary integer spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. We show that these quasinormal modes precisely agree with the locations of the poles of the corresponding two-point function in the dual conformal field theory, as predicted by the AdS/CFT correspondence. We then use these quasinormal modes to construct the one-loop determinant of the higher spin field in the thermal BTZ background. This is shown to agree with the result obtained from the corresponding heat kernel constructed recently by group theoretic methods.
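For orientation, a schematic of the known BTZ quasinormal spectrum (a hedged sketch; signs and conventions vary across the literature, and this is not necessarily the paper's exact result): for a field dual to a CFT operator with conformal weights $(h_L, h_R)$, and left/right temperatures $T_L, T_R$ of the rotating BTZ black hole, the two families of modes take the form

$$\omega = k - 4\pi i\, T_L\,(n + h_L), \qquad \omega = -k - 4\pi i\, T_R\,(n + h_R), \qquad n = 0, 1, 2, \ldots,$$

matching the poles of the retarded two-point function of that operator in the dual thermal CFT.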
Abstract:
Hilbert C*-module valued coherent states were introduced earlier by Ali, Bhattacharyya and Shyam Roy. We consider the case when the underlying C*-algebra is a W*-algebra. The construction is similar but comes with a substantial gain: the associated reproducing kernel is now algebra valued, rather than taking values in the space of bounded linear operators between two C*-algebras.
Abstract:
Recently it has been shown that the wave equations of bosonic higher spin fields in the BTZ background can be solved exactly. In this work we extend this analysis to fermionic higher spin fields. We solve the wave equations for arbitrary half-integer spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. These quasinormal modes are shown to agree precisely with the poles of the corresponding two-point function in the dual conformal field theory, as predicted by the AdS/CFT correspondence. We also obtain an expression for the one-loop determinant for the Euclidean non-rotating BTZ black hole in terms of the quasinormal modes, which agrees with the result obtained by integrating the heat kernel found by group theoretic methods.
Abstract:
We consider the speech production mechanism and the associated linear source-filter model. For voiced speech sounds in particular, the source/glottal excitation is modeled as a stream of impulses and the filter as a cascade of second-order resonators. We show that the process of sampling speech signals can be modeled as filtering a stream of Dirac impulses (a model for the excitation) with a kernel function (the vocal tract response), and then sampling uniformly. We show that the problem of estimating the excitation is equivalent to the problem of recovering a stream of Dirac impulses from samples of its filtered version. We present associated algorithms based on the annihilating filter and compare them with the classical linear prediction technique, which is well known in speech analysis. Results on synthesized as well as natural speech data are presented.
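A minimal numerical sketch of the annihilating-filter step (illustrative only; the paper's full pipeline, including the vocal-tract kernel, is not reproduced): given uniform samples of a sum of exponentials $x[n] = \sum_k a_k u_k^n$, the filter that annihilates the signal is found from the null space of a Toeplitz matrix, and its roots recover the modes $u_k$, hence the Dirac locations.

```python
import numpy as np

def annihilating_filter_roots(x, K):
    """Recover the K modes u_k from uniform samples x[n] = sum_k a_k * u_k**n."""
    N = len(x)
    # Rows encode sum_j h[j] * x[n - j] = 0 for n = K, ..., N-1 (Toeplitz system).
    T = np.array([[x[i + K - j] for j in range(K + 1)] for i in range(N - K)])
    # The annihilating filter spans the null space: take the right-singular
    # vector belonging to the smallest singular value.
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()
    return np.roots(h)  # the roots of h are the modes u_k

# Two Diracs at (normalized) locations t_k, observed as complex exponentials.
t_true = np.array([0.2, 0.55])
amps = np.array([1.0, 0.8])
n = np.arange(16)
x = (amps[None, :] * np.exp(2j * np.pi * np.outer(n, t_true))).sum(axis=1)
u = annihilating_filter_roots(x, K=2)
print(np.sort(np.mod(np.angle(u) / (2 * np.pi), 1)))  # ~ [0.2, 0.55]
```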
Abstract:
Signal acquisition under a compressed sensing (CS) scheme offers the possibility of acquiring and reconstructing signals that are sparse on some basis incoherent with the measurement kernel, using a sub-Nyquist number of measurements. In particular, when the sole objective of the acquisition is detecting the frequency of a signal rather than exactly reconstructing it, an undersampling framework like CS can perform the task. In this paper we explore the possibility of acquiring and detecting the frequencies of multiple analog signals heavily corrupted by additive white Gaussian noise. We improve upon the MOSAICS architecture proposed in our previous work to include a wider class of signals having non-integral frequency components. This makes it possible to perform multiplexed compressed sensing for general frequency-sparse signals.
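A toy illustration of the detection-only use of compressed sensing described above (dimensions and names are ours; this is not the MOSAICS architecture): a frequency-sparse signal is measured with a random kernel, and the active frequency is identified by correlating the measurements against the compressed dictionary columns, as in a single matching-pursuit step.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 512, 128              # Nyquist-rate length vs. sub-Nyquist measurement count

# Frequency-sparse signal: a single tone buried in additive white Gaussian noise.
true_bin = 137
n = np.arange(N)
x = np.cos(2 * np.pi * true_bin * n / N) + rng.standard_normal(N)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement kernel
y = Phi @ x                                      # compressed measurements

# Compressed dictionary: the measurement kernel applied to each DFT atom.
F = np.exp(2j * np.pi * np.outer(n, np.arange(N)) / N) / np.sqrt(N)
D = Phi @ F

# Detect the tone as the dictionary column most correlated with y
# (restricted to non-negative frequencies, since x is real).
scores = np.abs(D.conj().T @ y)
est_bin = int(np.argmax(scores[: N // 2 + 1]))
print(true_bin, est_bin)     # detection typically succeeds despite the noise
```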
Abstract:
The paper addresses experiments and modeling studies on the use of producer gas, a bio-derived low energy content fuel, in a spark-ignited engine. Producer gas, generated in situ, has thermo-physical properties different from those of fossil fuels. Experiments on naturally aspirated and turbo-charged engine operation, and subsequent analysis of the cylinder pressure traces, reveal significant differences in the in-cylinder heat release pattern compared with a typical fossil fuel. The heat release patterns for gasoline and producer gas compare well over the initial 50%, but beyond this, producer gas combustion tends to be sluggish, leading to an overall increase in the combustion duration. This is rather unexpected considering that producer gas, with nearly 20% hydrogen, has higher flame speeds than gasoline. The influence of hydrogen on the initial flame kernel development period and the combustion duration, and hence on the overall heat release pattern, is addressed. The significant deviations in the heat release profiles between conventional fuels and producer gas necessitate the estimation of producer gas-specific Wiebe coefficients. The experimental heat release profiles are used to estimate the Wiebe coefficients, together with experimental evidence of lower fuel conversion efficiency based on chemical and thermal analysis of the engine exhaust gas. The efficiency factor a is found to be 2.4, while the shape factor m is estimated at 0.7 for the 2% to 90% burn duration. The standard Wiebe coefficients for conventional fuels and the fuel-specific coefficients for producer gas are used in a zero-dimensional model to predict the performance of a 6-cylinder gas engine under naturally aspirated and turbo-charged conditions. While simulation results with the standard Wiebe coefficients deviate excessively from the experimental results, an excellent match is observed when the producer gas-specific coefficients are used. Predictions using the same coefficients on a 3-cylinder gas engine with different geometry and compression ratio closely match the experimental traces, highlighting the versatility of the coefficients.
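For reference, the coefficients quoted above enter the standard single-zone Wiebe mass-fraction-burned correlation (the paper's exact parameterisation may differ in detail):

$$x_b(\theta) = 1 - \exp\!\left[-a\left(\frac{\theta - \theta_0}{\Delta\theta}\right)^{m+1}\right],$$

where $\theta_0$ is the start of combustion and $\Delta\theta$ the burn duration; the producer-gas-specific values reported above are $a = 2.4$ and $m = 0.7$ over the 2%–90% burn interval.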