102 results for T-Kernel
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms requiring sophisticated techniques such as ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle Cartesian products of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n/ε²) iterations; each iteration solves an MKL problem involving m kernels and performs m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decomposition. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
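As a rough illustration of the mirror-descent primitive this abstract builds on (not the EMKL algorithm itself), the sketch below runs an entropic (von Neumann) mirror-descent update on a single unit-trace psd matrix under a toy squared loss; the target, step size and loss are illustrative assumptions, and each step costs two eigen-decompositions, mirroring the per-iteration cost the abstract mentions.

```python
import numpy as np

def matrix_eg_step(K, grad, eta):
    """One entropic mirror-descent (matrix exponentiated-gradient) step:
    a gradient step in the log (dual) domain, then back to unit trace."""
    w, V = np.linalg.eigh(K)
    logK = (V * np.log(w)) @ V.T            # matrix logarithm of K
    w2, V2 = np.linalg.eigh(logK - eta * grad)
    K_new = (V2 * np.exp(w2)) @ V2.T        # matrix exponential
    return K_new / np.trace(K_new)          # project onto unit-trace psd matrices

# Toy run: drive K towards a fixed unit-trace psd target under squared loss.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
target = A @ A.T
target /= np.trace(target)

K = np.eye(5) / 5                           # maximum-entropy starting point
for _ in range(300):
    K = matrix_eg_step(K, K - target, eta=0.5)  # grad of 0.5*||K - target||_F^2
print(np.linalg.norm(K - target))           # distance should be small
```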
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory so that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk data handling time spent during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show an impressive gain of 12% to 21% in throughput for static file serving and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
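NABC's own system calls are not publicly available, but the copy-avoidance idea in its send path can be illustrated with the standard sendfile primitive, which stock UNIX kernels provide and Python exposes as os.sendfile. The sketch below is a minimal analogue in that spirit (Linux/BSD only; the function and variable names are ours), not the NABC API:

```python
import os
import socket

def serve_file_zero_copy(conn: socket.socket, path: str) -> None:
    """Send a static file over a connected socket with no user-space copy:
    the kernel moves pages straight from the page cache to the socket."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(conn.fileno(), f.fileno(), offset, size - offset)
            if sent == 0:       # peer closed the connection early
                break
            offset += sent
```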
Abstract:
With the emergence of the Internet, the global connectivity of computers has become a reality. The Internet has progressed to provide many user-friendly tools such as Gopher, WAIS and the WWW for information publishing and access. The WWW, which integrates all other access tools, also provides a very convenient means for publishing and accessing multimedia and hypertext-linked documents stored in computers spread across the world. With the emergence of WWW technology, most information activities are becoming Web-centric. Once the information is published on the Web, a user can access it from any part of the world. A Web browser such as Netscape or Internet Explorer serves as a common user interface for accessing information and databases, which greatly relieves users from learning the search syntax of individual information systems. Libraries are taking advantage of these developments to provide access to their resources on the Web. CDS/ISIS is a very popular bibliographic information management software package used in India. In this tutorial we present details of integrating CDS/ISIS with the WWW. A number of tools are now available for making CDS/ISIS databases accessible on the Internet/Web, among them (1) the WAIS_ISIS server, (2) the WWWISIS server and (3) the IQUERY server. In this tutorial, we explain in detail the steps involved in providing Web access to an existing CDS/ISIS database using the freely available software WWWISIS, which is developed, maintained and distributed by BIREME, the Latin American & Caribbean Centre on Health Sciences Information. WWWISIS acts as a server for CDS/ISIS databases in a WWW client/server environment. It supports functions for searching, formatting and data entry operations over CDS/ISIS databases, and is available for various operating systems. We have tested this software on Windows 95, Windows NT and Red Hat Linux release 5.2 (Apollo), kernel 2.0.36, on an i686. The testing was carried out using IISc's main library's OPAC, containing more than 80,000 records, and Current Contents issues (bibliographic data) containing more than 25,000 records. WWWISIS is fully compatible with the CDS/ISIS 3.07 file structure. However, on a system running Unix or one of its variants, this compatibility is not guaranteed; it is therefore safer to recreate the master and inverted files under Unix, using utilities provided by BIREME.
Abstract:
The ultrasonic degradation of poly(vinyl acetate) was carried out in six different solvents and two mixtures of solvents. The evolution of the molecular weight distribution (MWD) with time was determined by gel permeation chromatography. The observed MWDs were analyzed by continuous distribution kinetics, using a stoichiometric kernel that accounts for preferential mid-point breakage of the polymer chains. The degradation rate coefficient of the polymer in each solvent was determined from the model. The variations of the rate coefficients were correlated with the vapor pressure of the solvent, the Flory–Huggins polymer–solvent interaction parameter and the kinematic viscosity of the solution. A lower saturation vapor pressure resulted in higher degradation rates of the polymer, and the degradation rate increased with increasing kinematic viscosity.
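For readers unfamiliar with continuous-distribution kinetics, a hedged sketch of the governing equations follows; the abstract does not state the exact expressions, so the population balance and the Gaussian mid-point form of the stoichiometric kernel below are the standard choices in this literature, with all symbols (p, k, Ω, σ) ours:

```latex
% Population balance for chain scission; p(x,t) is the molar MWD,
% k(x) the scission rate coefficient, Omega the stoichiometric kernel.
\[
\frac{\partial p(x,t)}{\partial t}
  = -k(x)\,p(x,t)
  + 2\int_{x}^{\infty} k(x')\,\Omega(x, x')\,p(x',t)\,dx'
\]
% A midpoint-preferential kernel is commonly a Gaussian centred at
% half the parent chain length x'/2, normalised over (0, x'):
\[
\Omega(x, x') = \frac{\exp\!\bigl[-(x - x'/2)^{2} / 2\sigma^{2}\bigr]}
                     {\int_{0}^{x'} \exp\!\bigl[-(y - x'/2)^{2} / 2\sigma^{2}\bigr]\,dy}
\]
```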
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data-flow analysis and an edge-splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
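As a toy illustration of kernel formation (MEGHA's actual formulation is a constrained graph clustering problem with a cost model, which the sketch below does not implement), the following fragment groups maximal program-order runs of data-parallel statements into candidate GPU kernels, cutting at scalar statements that stay on the CPU; the Stmt representation is an assumption of ours:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Stmt:
    name: str
    data_parallel: bool   # True if the statement is an array/element-wise op

def form_kernels(stmts: List[Stmt]) -> List[List[str]]:
    """Group maximal runs of data-parallel statements (in program order)
    into GPU kernels; scalar statements stay on the CPU and cut a kernel."""
    kernels, current = [], []
    for s in stmts:
        if s.data_parallel:
            current.append(s.name)
        elif current:
            kernels.append(current)
            current = []
    if current:
        kernels.append(current)
    return kernels

prog = [Stmt("A = B + C", True), Stmt("D = A .* A", True),
        Stmt("if (x > 0)", False), Stmt("E = D - B", True)]
print(form_kernels(prog))   # [['A = B + C', 'D = A .* A'], ['E = D - B']]
```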
Abstract:
This paper addresses the problem of maximum margin classification given the moments of class conditional densities and the false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second-order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. The resultant classifiers are applied to the classification of unbalanced datasets with asymmetric costs for misclassification. Experimental results on benchmark datasets show the efficacy of the proposed method.
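To make the Chebyshev step concrete, the block below states the standard multivariate Chebyshev (Marshall–Olkin) bound that turns a worst-case error-rate requirement into a second-order cone constraint; the notation (μ, Σ, η, κ) is ours and the abstract does not spell out this exact form:

```latex
% For a class with mean \mu and covariance \Sigma, requiring that every
% distribution with these moments places mass at least \eta in the
% halfspace w^T x >= b is equivalent to a second-order cone constraint:
\[
\inf_{x \sim (\mu,\Sigma)} \Pr\left( w^{T} x \ge b \right) \;\ge\; \eta
\quad \Longleftrightarrow \quad
w^{T}\mu - b \;\ge\; \kappa(\eta)\,\sqrt{w^{T}\Sigma\,w},
\qquad \kappa(\eta) = \sqrt{\frac{\eta}{1-\eta}}
\]
% Imposing this for each class, with \eta set by the admissible false
% positive / false negative rates, yields the SOCP the abstract mentions.
```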
Optimised form of acceleration correction algorithm within SPH-based simulations of impact mechanics
Abstract:
In the context of SPH-based simulations of impact dynamics, an optimised and automated form of the acceleration correction algorithm (Shaw and Reid, 2009a) is developed so as to remove spurious high-frequency oscillations in computed responses whilst retaining the stabilizing characteristics of the artificial viscosity in the presence of shocks and layers with sharp gradients. A rational framework for an insightful characterisation of the erstwhile acceleration correction method is first set up. This is followed by the proposal of an optimised version of the method, wherein the strength of the correction term in the momentum balance and energy equations is optimised. For the first time, this leads to an automated procedure for arriving at the artificial viscosity term. In particular, this is achieved by taking a spatially varying, response-dependent support size for the kernel function through which the correction term is computed. The optimum value of the support size is deduced by minimising the (spatially localised) total variation of the high-frequency oscillation in the acceleration term with respect to its (local) mean. The derivation of the method, its advantages over the heuristic method and issues related to its numerical implementation are discussed in detail. (C) 2011 Elsevier Ltd. All rights reserved.
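A schematic of the selection rule, as we read the abstract (the paper's exact objective may differ, and all symbols below are our assumptions): the spatially varying support size is chosen to minimise the local total variation of the correction acceleration about its neighbourhood mean:

```latex
% a_c(x_j; h): correction acceleration at particle j computed with kernel
% support size h; \bar{a}_c: its mean over the local neighbourhood N(i).
\[
h^{*}(x_i) \;=\; \arg\min_{h} \;\sum_{j \in N(i)}
\bigl|\, a_c(x_j; h) - \bar{a}_c(x_i; h) \,\bigr|
\]
```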
Abstract:
Long-distance dispersal (LDD) events, although rare for most plant species, can strongly influence population and community dynamics. Animals function as a key biotic vector of seeds, and thus a mechanistic and quantitative understanding of how individual animal behaviors scale to dispersal patterns at different spatial scales is a question of critical importance from both basic and applied perspectives. Using a diffusion-theory-based analytical approach for a wide range of animal movement and seed transportation patterns, we show that the scale (a measure of local dispersal) of the seed dispersal kernel increases with the organism's rate of movement and mean seed retention time. We reveal that variation in seed retention time is a key determinant of various measures of LDD such as the kurtosis (or shape) of the kernel, the thickness of its tails and the absolute number of seeds falling beyond a threshold distance. Using empirical data sets of frugivores, we illustrate the importance of variability in retention times for predicting the key disperser species that influence LDD. Our study makes testable predictions linking animal movement behaviors and gut retention times to dispersal patterns and, more generally, highlights the potential importance of animal behavioral variability for the LDD of seeds.
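One standard way to make this scaling concrete (a textbook diffusion construction, not a formula quoted from the paper; D, f and T are our notation) is to write the dispersal kernel as a mixture of diffusive displacement distributions over the retention time:

```latex
% If the animal moves diffusively with coefficient D and retains a seed
% for a random time T with density f(t), the 2-D dispersal kernel is
\[
K(r) \;=\; \int_{0}^{\infty} \frac{1}{4\pi D t}\,
           \exp\!\left( -\frac{r^{2}}{4 D t} \right) f(t)\, dt ,
\]
% so the kernel's scale grows with both the movement rate (through D) and
% the mean retention time E[T], while spread in T fattens the tails.
```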
Abstract:
The characteristic function for a contraction is a classical complete unitary invariant devised by Sz.-Nagy and Foias. Just as a contraction is related to the Szegő kernel k_S(z, w) = (1 - zw̄)^{-1} for |z|, |w| < 1 by means of (1/k_S)(T, T*) ≥ 0, we consider an arbitrary open connected domain Ω in C^n, a kernel k on Ω such that 1/k is a polynomial, and a tuple T = (T_1, T_2, ..., T_n) of commuting bounded operators on a complex separable Hilbert space H such that (1/k)(T, T*) ≥ 0. Under some standard assumptions on k, it turns out that whether a characteristic function can be associated with T or not depends not only on T but also on the kernel k. We give a necessary and sufficient condition. When this condition is satisfied, a functional model can be constructed. Moreover, the characteristic function then is a complete unitary invariant for a suitable class of tuples T.
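For orientation, the model case behind the first sentence can be written out explicitly; this is the standard Sz.-Nagy–Foias setup rather than anything specific to this paper:

```latex
% For the Szego kernel on the unit disc,
\[
k_{S}(z,w) = (1 - z\bar{w})^{-1}, \qquad
\frac{1}{k_{S}}(z,w) = 1 - z\bar{w},
\]
% so the hereditary functional calculus (z acting as T on the left,
% \bar{w} as T^* on the right) gives
\[
\Bigl(\frac{1}{k_{S}}\Bigr)(T, T^{*}) = I - T T^{*} \;\ge\; 0,
\]
% which is exactly the statement that T is a contraction.
```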
Abstract:
In this note, we show that a quasi-free Hilbert module R defined over the polydisk algebra with kernel function k(z, w) admits a unique minimal dilation (actually an isometric co-extension) to the Hardy module over the polydisk if and only if S^{-1}(z, w) k(z, w) is a positive kernel function, where S(z, w) is the Szegő kernel for the polydisk. Moreover, we establish the equivalence of such a factorization of the kernel function and a positivity condition, defined using the hereditary functional calculus, which was introduced earlier by Athavale [8] and Ambrozie, Engliš and Müller [2]. An explicit realization of the dilation space is given, along with the isometric embedding of the module R in it. The proof works for a wider class of Hilbert modules in which the Hardy module is replaced by more general quasi-free Hilbert modules such as the classical spaces on the polydisk or the unit ball in C^m. Some consequences of this more general result are then explored in the case of several natural function algebras.
Abstract:
For a contraction P and a bounded commutant S of P, we seek a solution X of the operator equation S - S*P = (1 - P*P)^{1/2} X (1 - P*P)^{1/2}, where X is a bounded operator on the closure of Ran(1 - P*P)^{1/2} with numerical radius of X not greater than 1. A pair of bounded operators (S, P) which has the domain Γ = {(z_1 + z_2, z_1 z_2) : |z_1| ≤ 1, |z_2| ≤ 1} ⊂ C^2 as a spectral set is called a Γ-contraction in the literature. We show the existence and uniqueness of a solution to the operator equation above for a Γ-contraction (S, P). This allows us to construct an explicit Γ-isometric dilation of a Γ-contraction (S, P). We prove the converse too, i.e., for a commuting pair (S, P) with ||P|| ≤ 1 and the spectral radius of S not greater than 2, the existence of a solution to the above equation implies that (S, P) is a Γ-contraction. We show that for a pure Γ-contraction (S, P), there is a bounded operator C with numerical radius not greater than 1 such that S = C + C*P. Any Γ-isometry can be written in this form, where P now is an isometry commuting with C and C*. Any Γ-unitary is of this form as well, with P and C being commuting unitaries. Examples of Γ-contractions on reproducing kernel Hilbert spaces and their Γ-isometric dilations are discussed. (C) 2012 Elsevier Inc. All rights reserved.
Abstract:
We solve the wave equations of arbitrary integer spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. We show that these quasinormal modes precisely agree with the locations of the poles of the corresponding two-point function in the dual conformal field theory, as predicted by the AdS/CFT correspondence. We then use these quasinormal modes to construct the one-loop determinant of the higher spin field in the thermal BTZ background. This is shown to agree with that obtained from the corresponding heat kernel constructed recently by group-theoretic methods.
Abstract:
Hilbert C*-module valued coherent states were introduced earlier by Ali, Bhattacharyya and Shyam Roy. We consider the case when the underlying C*-algebra is a W*-algebra. The construction is similar, but with a substantial gain: the associated reproducing kernel is now algebra-valued, rather than taking values in the space of bounded linear operators between two C*-algebras.
Abstract:
Recently it has been shown that the wave equations of bosonic higher spin fields in the BTZ background can be solved exactly. In this work we extend this analysis to fermionic higher spin fields. We solve the wave equations for arbitrary half-integer spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. These quasinormal modes are shown to agree precisely with the poles of the corresponding two-point function in the dual conformal field theory, as predicted by the AdS/CFT correspondence. We also obtain an expression for the one-loop determinant of the Euclidean non-rotating BTZ black hole in terms of the quasinormal modes, which agrees with that obtained by integrating the heat kernel found by group-theoretic methods.
Abstract:
We consider the speech production mechanism and the associated linear source-filter model. For voiced speech sounds in particular, the source/glottal excitation is modeled as a stream of impulses and the filter as a cascade of second-order resonators. We show that the process of sampling speech signals can be modeled as filtering a stream of Dirac impulses (a model for the excitation) with a kernel function (the vocal tract response), and then sampling uniformly. We show that the problem of estimating the excitation is equivalent to the problem of recovering a stream of Dirac impulses from samples of a filtered version. We present associated algorithms based on the annihilating filter and also make a comparison with the classical linear prediction technique, which is well known in speech analysis. Results on synthesized as well as natural speech data are presented.
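A minimal numerical sketch of the annihilating-filter step (Prony-style recovery, which the abstract invokes) is given below; it assumes the samples have already been reduced to the Fourier-series moments s[n] of a K-sparse impulse stream, and all names and the toy configuration are ours:

```python
import numpy as np

# Moments of K impulses on [0, 1): s[n] = sum_k c_k * u_k**n, u_k = e^{j2*pi*t_k}
K = 2
t = np.array([0.2, 0.55])          # true impulse locations
c = np.array([1.0, 0.7])           # true impulse amplitudes
N = 2 * K + 1                      # 2K+1 moments suffice
n = np.arange(N)
u = np.exp(2j * np.pi * t)
s = (c * u ** n[:, None]).sum(axis=1)

# Annihilating filter h (length K+1): sum_k h[k] * s[m-k] = 0 for all valid m.
# Build the Toeplitz system and take its null-space vector via the SVD.
S = np.array([[s[m - k] for k in range(K + 1)] for m in range(K, N)])
_, _, Vh = np.linalg.svd(S)
h = Vh[-1].conj()                  # filter whose zeros sit at the u_k

# Impulse locations are the angles of the filter's roots on the unit circle.
roots = np.roots(h)
t_hat = np.sort(np.mod(np.angle(roots) / (2 * np.pi), 1.0))
print(t_hat)                       # ~ [0.2, 0.55]
```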