123 results for kernel estimators


Relevance:

10.00%

Publisher:

Abstract:

Fujikawa's method of evaluating anomalies is extended to on-shell supersymmetric (SUSY) theories. The supercurrent and superconformal current anomalies are evaluated for the Wess-Zumino model using the background-field formulation and heat-kernel regularization. We find that the regularized Jacobians for SUSY and superconformal transformations are finite. The results can be expressed in a form in which there is no supercurrent anomaly but a finite, nonzero superconformal anomaly, in agreement with similar results obtained by other methods.

Relevance:

10.00%

Publisher:

Abstract:

The characteristic function for a contraction is a classical complete unitary invariant devised by Sz.-Nagy and Foias. Just as a contraction is related to the Szegő kernel k_S(z, w) = (1 - z w̄)^{-1} for |z|, |w| < 1 by means of (1/k_S)(T, T*) ≥ 0, we consider an arbitrary open connected domain Ω in C^n, a complete Pick kernel k on Ω, and a tuple T = (T_1, ..., T_n) of commuting bounded operators on a complex separable Hilbert space H such that (1/k)(T, T*) ≥ 0. For a complete Pick kernel the 1/k functional calculus makes sense in a beautiful way. It turns out that the model theory works very well and a characteristic function can be associated with T. Moreover, the characteristic function is then a complete unitary invariant for a suitable class of tuples T.
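For orientation, the single-operator case of the positivity condition above is standard material and reduces to the familiar contraction inequality; a minimal sketch (illustrative, not taken from this paper's arguments):

```latex
% Szegő kernel on the unit disc and the hereditary positivity condition
% (standard single-operator case; not specific to this paper).
\[
  k_S(z, w) = \frac{1}{1 - z\bar{w}}, \qquad |z|, |w| < 1 ,
\]
\[
  \Bigl(\tfrac{1}{k_S}\Bigr)(T, T^{*}) = I - T T^{*} \geq 0
  \quad\Longleftrightarrow\quad \|T\| \leq 1 .
\]
```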

Relevance:

10.00%

Publisher:

Abstract:

Using a modified Green's function technique, the two well-known basic problems of scattering of surface water waves by vertical barriers are reduced to the problem of solving a pair of uncoupled integral equations involving the “jump” and the “sum” of the limiting values of the velocity potential on the two sides of the barriers in each case. These integral equations are then solved, in closed form, with the aid of an integral transform technique involving a general trigonometric kernel, as applicable to problems associated with a radiation condition.

Relevance:

10.00%

Publisher:

Abstract:

Several recent theoretical and computer simulation studies have considered solvation dynamics in a Brownian dipolar lattice, which provides a simple model solvent for which detailed calculations can be carried out. In this article a fully microscopic calculation of the solvation dynamics of an ion in a Brownian dipolar lattice is presented. The calculation is based on the non-Markovian molecular hydrodynamic theory developed recently. The main assumption of the present calculation is that the two-particle orientational correlation functions of the solid can be replaced by those of the liquid state. It is shown that such a calculation provides excellent agreement with the computer simulation results. More importantly, the present calculations clearly demonstrate that the frequency-dependent dielectric friction plays an important role in the long-time decay of the solvation time correlation function. We also find that the present calculation provides somewhat better agreement than either the dynamic mean spherical approximation (DMSA) or the Fried–Mukamel theory, both of which use the simulated frequency-dependent dielectric function. It is found that the dissipative kernels used in the molecular hydrodynamic approach and in the Fried–Mukamel theory are vastly different, especially at short times. However, in spite of this disagreement, the two theories still lead to comparable results in good agreement with computer simulation, which suggests that even a semiquantitatively accurate dissipative kernel may be sufficient to obtain a reliable solvation time correlation function. A new wave-vector- and frequency-dependent dissipative kernel (or memory function) is proposed which correctly goes over to the appropriate expressions in both the single-particle and the collective limits. This form is expected to lead to better results than all the existing descriptions.

Relevance:

10.00%

Publisher:

Abstract:

We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework by using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both the gradient and the Hessian of it. In the process, we also develop various new estimators for the gradient and Hessian. All our algorithms use two simulations each. Two of these algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and show numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm is seen to show the best overall performance.
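To make the two-simulation idea concrete, here is a minimal sketch of an SPSA-style gradient step on a Lagrangian; the simulator interface `simulate`, the step sizes, and the single inequality constraint are illustrative assumptions, not the paper's exact algorithms:

```python
import numpy as np

def spsa_lagrangian_step(theta, lam, simulate, delta=0.1, a=0.01, b=0.005, rng=None):
    """One two-simulation SPSA update for min_theta max_lam L(theta, lam).

    `simulate(theta)` is an assumed black box returning (avg_cost, avg_constraint),
    i.e. long-run averages estimated from a simulation run at parameter `theta`.
    L(theta, lam) = cost(theta) + lam * constraint(theta), with constraint(theta) <= 0 desired.
    """
    rng = rng or np.random.default_rng()
    # Simultaneous +/-1 Bernoulli perturbation of all coordinates.
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)

    cost_p, cons_p = simulate(theta + delta * perturb)   # simulation 1
    cost_m, cons_m = simulate(theta - delta * perturb)   # simulation 2

    lagr_p = cost_p + lam * cons_p
    lagr_m = cost_m + lam * cons_m

    # SPSA gradient estimate: the same two evaluations serve every coordinate.
    grad = (lagr_p - lagr_m) / (2.0 * delta * perturb)

    theta_new = theta - a * grad                            # descent in theta
    lam_new = max(0.0, lam + b * 0.5 * (cons_p + cons_m))   # projected ascent in the multiplier
    return theta_new, lam_new
```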

Relevance:

10.00%

Publisher:

Abstract:

A theoretical analysis of the three currently popular microscopic theories of solvation dynamics, namely, the dynamic mean spherical approximation (DMSA), the molecular hydrodynamic theory (MHT), and the memory function theory (MFT) is carried out. It is shown that in the underdamped limit of momentum relaxation, all three theories lead to nearly identical results when the translational motions of both the solute ion and the solvent molecules are neglected. In this limit, the theoretical prediction is in almost perfect agreement with the computer simulation results of solvation dynamics in the model Stockmayer liquid. However, the situation changes significantly in the presence of the translational motion of the solvent molecules. In this case, DMSA breaks down but the other two theories correctly predict the acceleration of solvation in agreement with the simulation results. We find that the translational motion of a light solute ion can play an important role in its own solvation. None of the existing theories describe this aspect. A generalization of the extended hydrodynamic theory is presented which, for the first time, includes the contribution of solute motion towards its own solvation dynamics. The extended theory gives excellent agreement with the simulations where solute motion is allowed. It is further shown that in the absence of translation, the memory function theory of Fried and Mukamel can be recovered from the hydrodynamic equations if the wave vector dependent dissipative kernel in the hydrodynamic description is replaced by its long wavelength value. We suggest a convenient memory kernel which is superior to the limiting forms used in earlier descriptions. We also present an alternate, quite general, statistical mechanical expression for the time dependent solvation energy of an ion. This expression has remarkable similarity with that for the translational dielectric friction on a moving ion.

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we present robust semi-blind (SB) algorithms for the estimation of beamforming vectors for multiple-input multiple-output (MIMO) wireless communication. The transmitted symbol block is assumed to comprise a known sequence of training (pilot) symbols followed by information-bearing blind (unknown) data symbols. Analytical expressions are derived for the robust SB estimators of the MIMO receive and transmit beamforming vectors. These robust SB estimators employ a preliminary estimate obtained from the pilot symbol sequence and leverage second-order statistical information from the blind data symbols. We employ the theory of Lagrangian duality to derive the robust estimate of the receive beamforming vector by maximizing an inner product, while constraining the channel estimate to lie in a confidence sphere centered at the initial pilot estimate. Two different schemes are then proposed for computing the robust estimate of the MIMO transmit beamforming vector. Simulation results illustrate the superior performance of the robust SB estimators.
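A hedged sketch of the generic semi-blind ingredients mentioned above (not the paper's robust estimator): a least-squares channel estimate from the pilots combined with second-order statistics of the blind data. The array shapes, the rank-one channel assumption, and the blending rule are illustrative assumptions:

```python
import numpy as np

def semi_blind_receive_beamformer(Y_pilot, S_pilot, Y_data, alpha=0.5):
    """Illustrative semi-blind receive beamformer for a rank-one MIMO channel.

    Y_pilot : (n_rx, n_p) received pilot block, S_pilot : (1, n_p) known pilots.
    Y_data  : (n_rx, n_d) received blind data block.
    alpha   : assumed blending weight between pilot and blind information.
    """
    # Least-squares channel estimate from the pilot block.
    h_pilot = Y_pilot @ S_pilot.conj().T @ np.linalg.inv(S_pilot @ S_pilot.conj().T)
    h_pilot = h_pilot[:, 0]

    # Second-order statistics of the blind data: sample covariance and its
    # dominant eigenvector, which aligns with the effective channel at high SNR.
    R = Y_data @ Y_data.conj().T / Y_data.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)
    u = eigvecs[:, -1]
    # Resolve the eigenvector's phase ambiguity against the pilot estimate.
    u = u * np.exp(-1j * np.angle(u.conj() @ h_pilot))

    # Blend the two estimates and use the normalized result as the rx beamformer.
    w = alpha * h_pilot / np.linalg.norm(h_pilot) + (1 - alpha) * u
    return w / np.linalg.norm(w)
```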

Relevance:

10.00%

Publisher:

Abstract:

The problem of estimating the time-dependent statistical characteristics of a random dynamical system is studied under two different settings. In the first, the system dynamics is governed by a differential equation parameterized by a random parameter, while in the second, this is governed by a differential equation with an underlying parameter sequence characterized by a continuous time Markov chain. We propose, for the first time in the literature, stochastic approximation algorithms for estimating various time-dependent process characteristics of the system. In particular, we provide efficient estimators for quantities such as the mean, variance and distribution of the process at any given time as well as the joint distribution and the autocorrelation coefficient at different times. A novel aspect of our approach is that we assume that information on the parameter model (i.e., its distribution in the first case and transition probabilities of the Markov chain in the second) is not available in either case. This is unlike most other work in the literature that assumes availability of such information. Also, most of the prior work in the literature is geared towards analyzing the steady-state system behavior of the random dynamical system while our focus is on analyzing the time-dependent statistical characteristics which are in general difficult to obtain. We prove the almost sure convergence of our stochastic approximation scheme in each case to the true value of the quantity being estimated. We provide a general class of strongly consistent estimators for the aforementioned statistical quantities with regular sample average estimators being a specific instance of these. We also present an application of the proposed scheme on a widely used model in population biology. Numerical experiments in this framework show that the time-dependent process characteristics as obtained using our algorithm in each case exhibit excellent agreement with exact results.
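The simplest instance of such a scheme, a stochastic-approximation analogue of a running sample average for the time-t mean and variance, can be sketched as follows; the step-size schedule and the trajectory simulator `sample_state_at` are assumed for illustration:

```python
import numpy as np

def sa_mean_variance_at_time(sample_state_at, t, n_iters=10_000):
    """Stochastic-approximation estimates of E[X(t)] and Var[X(t)].

    `sample_state_at(t)` is an assumed black box that simulates one independent
    trajectory of the random dynamical system and returns its state at time t.
    With step sizes a_n = 1/n these recursions reduce to plain sample averages,
    the 'specific instance' mentioned in the abstract.
    """
    mean_est, second_moment_est = 0.0, 0.0
    for n in range(1, n_iters + 1):
        x = sample_state_at(t)
        a_n = 1.0 / n                       # Robbins-Monro step size
        mean_est += a_n * (x - mean_est)
        second_moment_est += a_n * (x * x - second_moment_est)
    var_est = second_moment_est - mean_est ** 2
    return mean_est, var_est
```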

Relevance:

10.00%

Publisher:

Abstract:

We generalize the concept of coherent states, traditionally defined as special families of vectors on Hilbert spaces, to Hilbert modules. We show that Hilbert modules over C*-algebras are the natural settings for a generalization of coherent states defined on Hilbert spaces. We consider those Hilbert C*-modules which have a natural left action from another C*-algebra, say A. The coherent states are well defined in this case and they behave well with respect to the left action by A. Certain classical objects like the Cuntz algebra are related to specific examples of coherent states. Finally we show that coherent states on modules give rise to a completely positive definite kernel between two C*-algebras, in complete analogy to the Hilbert space situation. Related to this, there is a dilation result for positive operator-valued measures, in the sense of Naimark. A number of examples are worked out to illustrate the theory. Some possible physical applications are also mentioned.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms which require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle the Cartesian product of PSD matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m^2 log n^2) iterations; each iteration solves an MKL problem involving m kernels and performs m eigen-decompositions of n × n matrices. By suitably restricting the objective function, a faster version of EMKL, called REKL, is proposed, which avoids the eigen-decompositions. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
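A hedged illustration of the mirror-descent ingredient (not EMKL itself): entropic mirror descent over the simplex of kernel-combination weights, with the loss and its gradient left as assumed callables:

```python
import numpy as np

def entropic_mirror_descent(grad_fn, m, n_iters=100, eta=0.1):
    """Mirror descent with the entropy mirror map over the m-dimensional simplex.

    `grad_fn(mu)` is an assumed callable returning the gradient of a convex loss
    of the combined kernel K(mu) = sum_k mu_k * K_k with respect to the weights mu.
    The multiplicative (exponentiated-gradient) update is the simplex instance of
    mirror descent; handling products of PSD cones is the extension the abstract
    refers to, and is not shown here.
    """
    mu = np.full(m, 1.0 / m)                # start at the simplex centre
    for _ in range(n_iters):
        g = grad_fn(mu)
        mu = mu * np.exp(-eta * g)          # mirror (entropic) step
        mu /= mu.sum()                      # project back onto the simplex
    return mu
```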

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a new method of data handling for web servers. We call this method Network Aware Buffering and Caching (NABC for short). NABC facilitates the reduction of data copies in a web server's data sending path by doing three things: (1) laying out the data in main memory in a way that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by the various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk data handling time spent during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server, SWEET, to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a very impressive gain of 12% to 21% in throughput for static file serving and a 1.6x to 4x gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
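NABC itself is kernel-level work, but the copy-avoidance goal in the static-file path can be illustrated with the standard zero-copy primitive available from user space; a minimal sketch (plain sendfile, not NABC's unified cache or its APIs):

```python
import socket

def serve_file_zero_copy(conn: socket.socket, path: str) -> None:
    """Send a static file without copying its bytes through user space.

    socket.sendfile() delegates to the OS sendfile path where available, so the
    file data moves kernel-to-kernel; only the headers are built in user space.
    This illustrates the copy-reduction idea, not NABC's unified kernel cache.
    """
    with open(path, "rb") as f:
        size = f.seek(0, 2)   # file length, for Content-Length
        f.seek(0)
        header = (
            "HTTP/1.0 200 OK\r\n"
            f"Content-Length: {size}\r\n"
            "Connection: close\r\n\r\n"
        ).encode("ascii")
        conn.sendall(header)      # small metadata copy is unavoidable
        conn.sendfile(f)          # bulk data: zero-copy send of the file body
```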

Relevance:

10.00%

Publisher:

Abstract:

With the emergence of the Internet, global connectivity of computers has become a reality. The Internet has progressed to provide many user-friendly tools, such as Gopher, WAIS and the WWW, for information publishing and access. The WWW, which integrates all other access tools, also provides a very convenient means for publishing and accessing multimedia and hypertext-linked documents stored in computers spread across the world. With the emergence of WWW technology, most information activities are becoming Web-centric. Once the information is published on the Web, a user can access it from any part of the world. A Web browser like Netscape or Internet Explorer is used as a common user interface for accessing information/databases, which greatly relieves the user from learning the search syntax of individual information systems. Libraries are taking advantage of these developments to provide access to their resources on the Web. CDS/ISIS is a very popular bibliographic information management package used in India. In this tutorial we present details of integrating CDS/ISIS with the WWW. A number of tools are now available for making CDS/ISIS databases accessible on the Internet/Web; some of these are (1) the WAIS_ISIS server, (2) the WWWISIS server and (3) the IQUERY server. In this tutorial, we explain in detail the steps involved in providing Web access to an existing CDS/ISIS database using the freely available software WWWISIS. This software is developed, maintained and distributed by BIREME, the Latin American & Caribbean Centre on Health Sciences Information. WWWISIS acts as a server for CDS/ISIS databases in a WWW client/server environment. It supports functions for searching, formatting and data entry operations over CDS/ISIS databases. WWWISIS is available for various operating systems. We have tested this software on Windows '95, Windows NT and Red Hat Linux release 5.2 (Apollo), kernel 2.0.36, on an i686. The testing was carried out using IISc's main library's OPAC, containing more than 80,000 records, and Current Contents issues (bibliographic data) containing more than 25,000 records. WWWISIS is fully compatible with the CDS/ISIS 3.07 file structure. However, on a system running Unix or one of its variants, this compatibility is not guaranteed; it is therefore safer to recreate the master and inverted files under the Unix environment, using the utilities provided by BIREME.

Relevance:

10.00%

Publisher:

Abstract:

The ultrasonic degradation of poly(vinyl acetate) was carried out in six different solvents and two mixtures of solvents. The evolution of molecular weight distribution (MWD) with time was determined with gel permeation chromatography. The observed MWDs were analyzed by continuous distribution kinetics. A stoichiometric kernel that accounts for preferential mid-point breakage of the polymer chains was used. The degradation rate coefficient of the polymer in each solvent was determined from the model. The variations of rate coefficients were correlated with vapor pressure of the solvent, the Flory–Huggins polymer–solvent interaction parameter and the kinematic viscosity of the solution. A lower saturation vapor pressure resulted in higher degradation rates of the polymer. The degradation rate increased with increasing kinematic viscosity.
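For context, continuous-distribution kinetics of chain scission is usually written as a population balance; a standard textbook form with a Gaussian midpoint-favouring stoichiometric kernel is sketched below (a generic form, not necessarily the exact kernel fitted in this work):

```latex
% Population balance for binary chain scission: p(x,t) is the molar MWD,
% k(x) the scission rate coefficient, Omega(x,x') the stoichiometric kernel.
\[
  \frac{\partial p(x,t)}{\partial t}
    = -\,k(x)\,p(x,t)
      + 2\int_{x}^{\infty} k(x')\,\Omega(x,x')\,p(x',t)\,\mathrm{d}x' ,
\]
% Midpoint-favouring (Gaussian) stoichiometric kernel, normalized over fragment sizes:
\[
  \Omega(x,x') \propto \exp\!\left[-\frac{(x - x'/2)^{2}}{2\sigma^{2}}\right],
  \qquad \int_{0}^{x'} \Omega(x,x')\,\mathrm{d}x = 1 .
\]
```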

Relevance:

10.00%

Publisher:

Abstract:

MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of the required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data parallel benchmarks over native execution of MATLAB.

Relevance:

10.00%

Publisher:

Abstract:

A "plan diagram" is a pictorial enumeration of the execution plan choices of a database query optimizer over the relational selectivity space. We have shown recently that, for industrial-strength database engines, these diagrams are often remarkably complex and dense, with a large number of plans covering the space. However, they can often be reduced to much simpler pictures, featuring significantly fewer plans, without materially affecting the query processing quality. Plan reduction has useful implications for the design and usage of query optimizers, including quantifying redundancy in the plan search space, enhancing usability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overheads of multi-plan approaches. We investigate here the plan reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan reduction, w.r.t. minimizing the number of plans, is an NP-hard problem in general, and remains so even for a storage-constrained variant. We then present a greedy reduction algorithm with tight and optimal performance guarantees, whose complexity scales linearly with the number of plans in the diagram for a given resolution. Next, we devise fast estimators for locating the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Finally, extensive experimentation with a suite of multi-dimensional TPC-H-based query templates on industrial-strength optimizers demonstrates that complex plan diagrams easily reduce to "anorexic" (small absolute number of plans) levels, incurring only marginal increases in the estimated query processing costs.
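A hedged sketch of the greedy reduction idea on a discretized plan diagram (a grid of plan labels with estimated costs); the data layout, the cost-increase threshold `lam`, and the swallowing rule are illustrative assumptions, not the paper's exact algorithm or its guarantees:

```python
import numpy as np

def greedy_plan_reduction(plans, costs, lam=0.1):
    """Greedily 'swallow' plans whose grid points can be re-covered cheaply.

    plans : 2-D int array, plans[i, j] = optimizer's plan id at selectivity point (i, j).
    costs : dict mapping plan id -> 2-D float array of that plan's estimated cost
            over the whole grid (assumed available for illustration).
    lam   : maximum allowed relative cost increase at any swallowed point.

    Returns a reduced copy of `plans` that uses fewer distinct plan ids.
    """
    reduced = plans.copy()
    # Visit plans from smallest area to largest: small plans are easiest to remove.
    ids, areas = np.unique(reduced, return_counts=True)
    for pid in ids[np.argsort(areas)]:
        mask = reduced == pid
        if not mask.any():
            continue
        base = costs[pid][mask]
        # Candidate replacements: every other plan still present in the diagram.
        for other in np.unique(reduced):
            if other == pid:
                continue
            if np.all(costs[other][mask] <= (1.0 + lam) * base):
                reduced[mask] = other      # swallow pid's region with `other`
                break
    return reduced
```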