926 results for Rough Kernels
Abstract:
We have developed a theory for an electrochemical method of measuring the statistical properties of a nonfractally rough electrode. We obtain an expression for the current transient on a rough electrode that exhibits three time regimes: the short- and long-time limits and the transition region between them. The expressions for these regimes are exploited to extract morphological information about the surface roughness: the roughness factor, average roughness, curvature, correlation length, dimensionality of roughness, and a polynomial approximation for the correlation function. Formulas for the surface structure factors (the measure of surface roughness) of rough surfaces in terms of the measured reversible and diffusion-limited current transients are also obtained. Finally, we explore the feasibility of making such measurements.
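By way of illustration (standard electrochemistry, not quoted from the abstract): at short times the diffusion-limited transient on a rough electrode is often the planar Cottrell current scaled by the roughness factor R* (true area over projected area), relaxing to the planar response at long times:

```latex
% Illustrative limiting forms only; the paper's exact expressions are not reproduced here.
i_{\mathrm{planar}}(t)=\frac{nFAC^{*}\sqrt{D}}{\sqrt{\pi t}},\qquad
i(t)\simeq R^{*}\,i_{\mathrm{planar}}(t)\ \ (t\to 0),\qquad
i(t)\simeq i_{\mathrm{planar}}(t)\ \ (t\to\infty).
```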
Abstract:
This paper deals with some results (known as Kac-Akhiezer formulae) on generalized Fredholm determinants for Hilbert-Schmidt operators on L²-spaces, available in the literature for convolution kernels on intervals. The Kac-Akhiezer formulae are obtained here for kernels which are not necessarily of convolution type and for domains in Rⁿ.
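For orientation, the generalized (Carleman, or 2-regularized) determinant that is well defined for a Hilbert-Schmidt operator K with eigenvalues λⱼ is the standard object behind such formulae:

```latex
{\det}_{2}(I+K)\;=\;\prod_{j}(1+\lambda_{j})\,e^{-\lambda_{j}}
\;=\;\det\!\bigl((I+K)\,e^{-K}\bigr),\qquad K\in\mathcal{S}_{2}\!\left(L^{2}\right).
```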
Abstract:
We derive boundary conditions at a rigid wall for a granular material comprising rough, inelastic particles. Our analysis is confined to the rapid flow, or granular gas, regime in which grains interact by impulsive collisions. We use the Chapman-Enskog expansion in the kinetic theory of dense gases, extended for inelastic and rough particles, to determine the relevant fluxes to the wall. As in previous studies, we assume that the particles are spheres, and that the wall is corrugated by hemispheres rigidly attached to it. Collisions between the particles and the wall hemispheres are characterized by coefficients of restitution and roughness. We derive boundary conditions for the two limiting cases of nearly smooth and nearly perfectly rough spheres, as a hydrodynamic description of granular gases comprising rough spheres is appropriate only in these limits. The results are illustrated by applying the equations of motion and boundary conditions to the problem of plane Couette flow.
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m ≥ 1 similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and derived several algorithms requiring sophisticated techniques such as ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle Cartesian products of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; each iteration solves an MKL problem involving m kernels and m eigen-decompositions of n × n matrices. By suitably restricting the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decompositions. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
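As a sketch of the kind of update such an extension builds on, here is one entropic mirror-descent step on the trace-normalized psd cone; in the Cartesian-product setting the same step would be applied blockwise to each of the m matrices. The function name, the trace-one normalization, and the use of dense expm/logm are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import expm, logm

def matrix_md_step(K, grad, eta):
    """One entropic mirror-descent step on the trace-one psd cone.

    K    : current positive-definite kernel matrix with trace one
    grad : gradient of the convex loss at K
    eta  : step size
    """
    # von Neumann-entropy (Bregman) update: K+ is proportional to exp(log K - eta * grad)
    K_new = expm(logm(K) - eta * grad)
    return K_new / np.trace(K_new)   # renormalize back to trace one
```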
Abstract:
By using the method of characteristics, the bearing capacity factor N_γ was computed for a rough strip footing. The analysis considers a curved non-plastic wedge under the foundation base, bounded by curved slip lines that are tangential to the base of the footing at either edge and inclined at an angle π/4 − φ/2 to the vertical axis of symmetry. The existing theories in the literature for rough footings, which usually employ a triangular wedge below the footing base, were generally found to give greater values of N_γ than the results obtained in this contribution.
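For reference, N_γ is the self-weight term in the classical bearing-capacity expression for a strip footing (standard soil-mechanics form, not derived in the abstract):

```latex
q_{u} \;=\; c\,N_{c} \;+\; q\,N_{q} \;+\; \tfrac{1}{2}\,\gamma\,B\,N_{\gamma},
```

where c is the cohesion, q the surcharge, γ the soil unit weight, and B the footing width.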
Abstract:
In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search time requirements of the overall decision-making process. Further, it is important that the abstraction is generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), which can be built by scanning the database only once. The PC-tree is a minimal-size, complete representation of the data, and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from conventional rough set theory. Experiments with the proposed classification scheme on a large-scale hand-written digit data set establish the efficacy of the proposed approach.
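A minimal sketch of a single-scan pattern count tree follows; class and function names are illustrative, and the paper's segmented-pattern and rough-approximation refinements are not shown:

```python
class PCNode:
    """Node of a pattern count tree (illustrative structure)."""
    __slots__ = ("count", "children")
    def __init__(self):
        self.count = 0
        self.children = {}

def build_pc_tree(transactions):
    """Build the tree in a single scan over the database."""
    root = PCNode()
    for pattern in transactions:
        node = root
        for item in sorted(pattern):   # canonical item order lets common prefixes share paths
            node = node.children.setdefault(item, PCNode())
            node.count += 1            # branch counts record pattern frequencies
    return root
```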
Abstract:
Support Vector Clustering has gained reasonable attention from researchers in exploratory data analysis due to its firm theoretical foundation in statistical learning theory. The hard partitioning of the data set achieved by Support Vector Clustering may not be acceptable in real-world scenarios. Rough Support Vector Clustering is an extension of Support Vector Clustering that attains a soft partitioning of the data set, but the quadratic programming problem it involves makes it computationally expensive on large datasets. In this paper, we propose the Rough Core Vector Clustering algorithm, a computationally efficient realization of Rough Support Vector Clustering. Here the Rough Support Vector Clustering problem is formulated as an approximate Minimum Enclosing Ball problem and solved using an approximate Minimum Enclosing Ball algorithm. Experiments with several large multi-class datasets, such as Forest Covertype and other multi-class datasets taken from the LIBSVM page, show that the proposed strategy is efficient and finds meaningful soft cluster abstractions that provide better generalization performance than the SVM classifier.
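A minimal sketch of the Badoiu-Clarkson style (1+ε)-approximate Minimum Enclosing Ball iteration that such core-set methods rely on (plain Euclidean version; the clustering formulation would work in kernel feature space, and all names here are illustrative):

```python
import numpy as np

def approx_meb(points, eps=0.1):
    """Badoiu-Clarkson (1+eps)-approximate minimum enclosing ball."""
    X = np.asarray(points, dtype=float)
    c = X[0].copy()                                   # start from an arbitrary point
    for i in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        far = X[np.argmax(np.linalg.norm(X - c, axis=1))]
        c += (far - c) / (i + 1)                      # step toward the furthest point
    r = np.linalg.norm(X - c, axis=1).max()
    return c, r
```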
Abstract:
Structural alignments are the most widely used tools for comparing proteins with low sequence similarity. The main contribution of this paper is to derive various kernels on proteins from structural alignments without using sequence information. Central to the kernels is a novel alignment algorithm that matches substructures of fixed size using spectral graph matching techniques. We derive positive semi-definite kernels which capture the notion of similarity between substructures; using these as a base, more sophisticated kernels on protein structures are proposed. To empirically evaluate the kernels we used a 40% sequence-non-redundant set of structures from 15 different SCOP superfamilies. The kernels, when used with SVMs, show competitive performance with CE, a state-of-the-art structure comparison program.
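One classical spectral-matching heuristic for fixed-size substructures, in the spirit of the alignment step described (Umeyama-style; an illustrative sketch, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def spectral_match(A, B):
    """Match nodes of two equal-size substructure graphs.

    A, B: symmetric adjacency (or inter-residue distance) matrices.
    Returns row/column index arrays giving a node correspondence.
    """
    _, UA = np.linalg.eigh(A)            # eigenvector rows act as node embeddings
    _, UB = np.linalg.eigh(B)
    cost = -np.abs(UA) @ np.abs(UB).T    # sign-invariant embedding similarity
    return linear_sum_assignment(cost)   # optimal assignment on the cost matrix
```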
Abstract:
Numerical Linear Algebra (NLA) kernels are at the heart of many computational problems. These kernels require hardware acceleration for increased throughput. NLA solvers for dense and sparse matrices differ in the way the matrices are stored and operated upon, although they exhibit similar computational properties. While ASIC solutions for NLA solvers can deliver high performance, they are not scalable and hence not commercially viable. In this paper, we show how NLA kernels can be accelerated on REDEFINE, a scalable runtime-reconfigurable hardware platform. Compared to a software implementation, the direct solver (modified Faddeev's algorithm) on REDEFINE shows a 29X improvement on average, and the iterative solver (Conjugate Gradient algorithm) shows a 15-20% improvement. We further show that the solution on REDEFINE scales to larger problem sizes without any notable degradation in performance.
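For reference, a plain Conjugate Gradient iteration of the kind mapped onto such platforms (textbook version for a symmetric positive-definite matrix A; not the accelerated implementation):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                        # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)            # optimal step along search direction p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p        # conjugate update of the search direction
        rs = rs_new
    return x
```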
Abstract:
Nano-indentation is a technique used to measure mechanical properties such as hardness, Young's modulus, and the adherence of thin films and surface layers. It can be used as a quality-control tool for surface modification techniques such as ion implantation and film deposition. For the technique to be applied effectively, it is important to characterize the increasing scatter observed in nano-indentation data at lower penetration depths. Surface roughness is one of the parameters contributing to this scatter. This paper aims to quantify the nature and amount of scatter introduced in the measurement by the roughness of the indented surface. To this end, the surface is simulated using the Weierstrass-Mandelbrot function, which gives a self-affine fractal. The contact area of this surface with a conical indenter having a spherical cap at the tip is computed numerically. The indentation process is simulated using the spherical cavity model, which eliminates the indentation size effect observed at the micron and sub-micron scales. It is observed that there exists a definite penetration depth, relative to the surface roughness, beyond which the scatter is reduced enough that reliable data can be obtained.
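A minimal sketch of a Weierstrass-Mandelbrot profile of the sort used to simulate such a surface (1-D version for brevity; parameter values are illustrative):

```python
import numpy as np

def wm_profile(x, D=1.5, gamma=1.5, n_terms=20):
    """1-D Weierstrass-Mandelbrot profile with fractal dimension 1 < D < 2."""
    z = np.zeros_like(x, dtype=float)
    for n in range(n_terms):
        # amplitude decays as gamma**((D-2)n) while frequency grows as gamma**n
        z += gamma**((D - 2.0) * n) * np.cos(2.0 * np.pi * gamma**n * x)
    return z

# Example: a rough profile sampled on [0, 1)
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
z = wm_profile(x)
```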
Abstract:
In this paper, we study the asymptotic behavior of an optimal control problem for the time-dependent Kirchhoff-Love plate whose middle surface has a very rough boundary. We identify the limit problem, which is an optimal control problem for the limit equation with a different cost functional.
Abstract:
Each new generation of GPUs vastly increases the resources available to GPGPU programs. GPU programming models (like CUDA) were designed to scale to use these resources. However, we find that CUDA programs often do not scale to utilize all available resources: over 30% of resources go unused on average for the Parboil2 programs used in our work. Current GPUs therefore allow concurrent execution of kernels to improve utilization. In this work, we study concurrent execution of GPU kernels using multiprogram workloads on current NVIDIA Fermi GPUs. On two-program workloads from the Parboil2 benchmark suite we find that concurrent execution is often no better than serialized execution. We identify the lack of control over resource allocation to kernels as a major serialization bottleneck. We propose transformations that convert CUDA kernels into elastic kernels, which permit fine-grained control over their resource usage. We then propose several elastic-kernel-aware concurrency policies that offer significantly better performance and concurrency than the current CUDA policy. We evaluate our proposals on real hardware using multiprogrammed workloads constructed from Parboil2 benchmarks. On average, our proposals increase system throughput (STP) by 1.21x and improve average normalized turnaround time (ANTT) by 3.73x for two-program workloads compared to the current CUDA concurrency implementation.
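The two reported metrics have standard multiprogramming definitions; a small helper that computes them from per-program runtimes (names and interface are illustrative):

```python
def stp(t_alone, t_shared):
    """System throughput: total normalized progress of the co-running programs."""
    return sum(a / s for a, s in zip(t_alone, t_shared))

def antt(t_alone, t_shared):
    """Average normalized turnaround time: mean per-program slowdown (lower is better)."""
    return sum(s / a for a, s in zip(t_alone, t_shared)) / len(t_alone)
```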
Abstract:
We present in this paper approximate analytical expressions for the intensity of light scattered by a rough surface whose elevation ξ(x, y) in the z-direction is a zero-mean stationary Gaussian random variable; for any two points (x, y) and (x′, y′) on the surface, the correlation of their elevations is then described by a function h of their separation.
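For orientation, the textbook result for such a Gaussian surface is that the coherent (specular) intensity equals the smooth-surface value attenuated by a Rayleigh roughness factor (standard form, not the paper's new approximate expressions):

```latex
\langle I_{\mathrm{coh}}\rangle \;=\; I_{0}\,
\exp\!\left[-\left(\frac{4\pi\,\sigma\cos\theta_{i}}{\lambda}\right)^{2}\right],
```

where σ is the RMS roughness, θᵢ the angle of incidence, and λ the wavelength.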
Abstract:
Rapid advancements in multi-core processor architectures coupled with low-cost, low-latency, high-bandwidth interconnects have made clusters of multi-core machines a common computing resource. Unfortunately, writing good parallel programs that efficiently utilize all the resources in such a cluster is still a major challenge. Various programming languages have been proposed as a solution to this problem, but they are yet to be adopted widely for performance-critical code, mainly due to relatively immature software frameworks and the effort involved in rewriting existing code in a new language. In this paper, we motivate and describe our initial study in exploring CUDA as a programming language for a cluster of multi-core machines. We develop CUDA-For-Clusters (CFC), a framework that transparently orchestrates execution of CUDA kernels on a cluster of multi-core machines. The well-structured nature of a CUDA kernel and the growing popularity, support, and stability of the CUDA software stack collectively make CUDA a good candidate programming language for a cluster. CFC uses a mixture of source-to-source compiler transformations, a work-distribution runtime, and a lightweight software distributed shared memory to manage parallel executions. Initial results running several standard CUDA benchmark programs show speedups of up to 7.5X on an 8-node cluster, opening up an interesting direction of research for further investigation.