985 results for Kernel function


Relevance: 60.00%

Abstract:

This article studies the evolution of city sizes in the states of northeastern Brazil for the years 1990, 2000, and 2010 through the empirical regularity known as Zipf's law, which can be represented by means of the Pareto distribution. An analysis of the dynamics of the population distribution over time showed that urban growth preserved the hierarchical persistence of the cities of Salvador, Fortaleza, and Recife, while São Luís occupied fourth place in the ranking of the largest cities, a position that persisted over the last two decades. Zipf's law did not hold when the cities of the Northeast were considered jointly, which may be due to the lower degree of urban development of the cities in this region. When the states were analyzed separately, Zipf's law was also not observed, although Gibrat's law, which postulates that city growth is independent of city size, was verified. Finally, it is believed that the installation of the mining and metallurgical complex in Maranhão contributed to development and to the reduction of intra-city urban inequality in this area.
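
As a minimal sketch of the rank-size regression that underlies this kind of Zipf's-law test (an illustration under assumed inputs, not the authors' estimation procedure; populations is a hypothetical array of city populations for one census year):

    import numpy as np

    def zipf_exponent(populations):
        """OLS fit of log(rank) = c - alpha*log(size); Zipf's law corresponds to alpha close to 1."""
        sizes = np.sort(np.asarray(populations, dtype=float))[::-1]  # largest city gets rank 1
        ranks = np.arange(1, len(sizes) + 1)
        slope, _ = np.polyfit(np.log(sizes), np.log(ranks), 1)
        return -slope

Under Gibrat's law, by contrast, one would test whether the growth rate log(size_2010 / size_2000) is uncorrelated with the initial log(size_2000).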

Relevance: 60.00%

Abstract:

In this work we study a Hammerstein generalized integral equation u(t)=∫_{-∞}^{+∞}k(t,s) f(s,u(s),u′(s),...,u^{(m)}(s))ds, where k:ℝ²→ℝ is a W^{m,∞}(ℝ²) kernel function, m∈ℕ, and f:ℝ^{m+2}→ℝ is an L¹-Carathéodory function. To the best of our knowledge, this paper is the first to consider discontinuous nonlinearities with dependence on derivatives, without monotonicity or asymptotic assumptions, on the whole real line. Our method is applied to a fourth-order nonlinear boundary value problem, which models moderately large deflections of infinite nonlinear beams resting on elastic foundations under localized external loads.
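
For orientation, a schematic reduction of a fourth-order problem on the whole real line to the Hammerstein form above (the beam-on-elastic-foundation model shown here is an assumed illustration, not necessarily the authors' exact equation): an equation EI u''''(t) + c u(t) = f(t, u(t), u′(t)) on ℝ, with EI, c > 0 and u decaying at ±∞, is equivalent to

    u(t) = ∫_{-∞}^{+∞} G(t,s) f(s, u(s), u′(s)) ds,

where G(t,s) is the Green's function of the linear operator EI d⁴/dt⁴ + c on ℝ, playing the role of the kernel k(t,s).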

Relevance: 40.00%

Abstract:

Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for approximating confidence intervals in the estimation of the occurrence rate function. We show that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former approach uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
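
A schematic moment-type version of such a smoothed occurrence rate estimator (an assumed illustration, not the paper's exact estimator): Gaussian-kernel smoothing of all observed event times, divided by the number of subjects still under observation at each time point.

    import numpy as np

    def smoothed_rate(event_times_per_subject, censor_times, t_grid, bandwidth):
        """event_times_per_subject: list of arrays of recurrent event times, one per subject;
        censor_times: end of follow-up for each subject; t_grid: evaluation times."""
        censor_times = np.asarray(censor_times, dtype=float)
        all_events = np.concatenate([np.asarray(t, dtype=float) for t in event_times_per_subject])
        rate = np.zeros(len(t_grid))
        for k, t in enumerate(t_grid):
            at_risk = np.sum(censor_times >= t)  # subjects still observed at time t
            if at_risk == 0:
                continue
            u = (t - all_events) / bandwidth
            rate[k] = np.sum(np.exp(-0.5 * u ** 2)) / (bandwidth * np.sqrt(2 * np.pi) * at_risk)
        return rate

The nicks mentioned above arise essentially because the at-risk count in the denominator drops abruptly at each censoring time while the smoothed numerator changes continuously.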

Relevance: 30.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space - classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm - using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
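
A rough sketch of one such kernel-matrix learning step under simplifying assumptions (hypothetical helper names; restricted to nonnegative combination weights so the combined matrix stays positive semidefinite, whereas the general setting described above instead imposes an explicit semidefinite constraint): maximize the alignment of the training block of K = Σ_i μ_i K_i with the label matrix y yᵀ.

    import numpy as np
    import cvxpy as cp

    def learn_kernel_weights(Ks, y, n_train, trace_budget=1.0):
        """Ks: list of (n x n) candidate kernel matrices over train+test points;
        y: +/-1 labels for the first n_train points."""
        mu = cp.Variable(len(Ks), nonneg=True)
        K = sum(mu[i] * Ks[i] for i in range(len(Ks)))             # affine in mu
        Y = np.outer(y, y)                                          # label matrix y y^T
        alignment = cp.sum(cp.multiply(K[:n_train, :n_train], Y))   # <K_train, y y^T>
        cp.Problem(cp.Maximize(alignment), [cp.trace(K) == trace_budget]).solve()
        weights = mu.value
        return weights, sum(w * Ki for w, Ki in zip(weights, Ks))

The learned combination covers both the training and test blocks of the kernel matrix, which is what makes the transductive use described above possible.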

Relevance: 30.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space -- classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semi-definite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -- using the labelled part of the data one can learn an embedding also for the unlabelled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method to learn the 2-norm soft margin parameter in support vector machines, solving another important open problem. Finally, the novel approach presented in the paper is supported by positive empirical results.

Relevance: 30.00%

Abstract:

Resolving a noted open problem, we show that the Undirected Feedback Vertex Set problem, parameterized by the size of the solution set of vertices, is in the parameterized complexity class Poly(k), that is, polynomial-time pre-processing is sufficient to reduce an initial problem instance (G, k) to a decision-equivalent simplified instance (G', k') where k' ≤ k, and the number of vertices of G' is bounded by a polynomial function of k. Our main result shows an O(k^11) kernelization bound.

Relevance: 30.00%

Abstract:

Error estimates for the error reproducing kernel method (ERKM) are provided. The ERKM is a mesh-free functional approximation scheme [A. Shaw, D. Roy, A NURBS-based error reproducing kernel method with applications in solid mechanics, Computational Mechanics (2006), to appear (available online)], wherein a targeted function and its derivatives are first approximated via non-uniform rational B-splines (NURBS) basis functions. Errors in the NURBS approximation are then reproduced via a family of non-NURBS basis functions, constructed using a polynomial reproduction condition, and added to the NURBS approximation of the function obtained in the first step. In addition to the derivation of error estimates, convergence studies are undertaken for a couple of test boundary value problems with known exact solutions. The ERKM is next applied to a one-dimensional Burgers equation, where time evolution leads to a breakdown of the continuous solution and the appearance of a shock. Many available mesh-free schemes appear to be unable to capture this shock without numerical instability. However, given that any desired order of continuity is achievable through NURBS approximations, the ERKM can accurately approximate even functions with discontinuous derivatives. Moreover, due to the variation diminishing property of NURBS, it has advantages in representing sharp changes in gradients. This paper focuses on demonstrating this ability of the ERKM via some numerical examples. Comparisons of some of the results with those via the standard form of the reproducing kernel particle method (RKPM) demonstrate the relative numerical advantages and accuracy of the ERKM.

Relevance: 30.00%

Abstract:

The fluctuation of the distance between a fluorescein-tyrosine pair within a single protein complex was directly monitored in real time by photoinduced electron transfer and found to be a stationary, time-reversible, and non-Markovian Gaussian process. Within the generalized Langevin equation formalism, we experimentally determine the memory kernel K(t), which is proportional to the autocorrelation function of the random fluctuating force. K(t) is a power-law decay, t^(-0.51 ± 0.07), over a broad range of time scales (10^-3 to 10 s). Such a long-time memory effect could have implications for protein functions.
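
A minimal sketch of this kind of analysis (an assumed, generic pipeline rather than the authors' code): estimate the normalized autocorrelation of the distance trace sampled at interval dt, then fit a power law K(t) ~ t^(-alpha) over an intermediate window of lags on log-log axes.

    import numpy as np

    def autocorrelation(x):
        x = np.asarray(x, dtype=float) - np.mean(x)
        acf = np.correlate(x, x, mode="full")[len(x) - 1:]
        return acf / acf[0]

    def power_law_exponent(acf, dt, lag_min, lag_max):
        """Fit log(acf) vs log(lag) over lag_min <= lag <= lag_max (lag_min > 0)."""
        lags = np.arange(len(acf)) * dt
        sel = (lags >= lag_min) & (lags <= lag_max) & (acf > 0)
        slope, _ = np.polyfit(np.log(lags[sel]), np.log(acf[sel]), 1)
        return -slope  # alpha in K(t) ~ t^(-alpha); about 0.51 in the experiment above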

Relevance: 30.00%

Abstract:

We derive the heat kernel for arbitrary tensor fields on S^3 and (Euclidean) AdS_3 using a group theoretic approach. We use these results to also obtain the heat kernel on certain quotients of these spaces. In particular, we give a simple, explicit expression for the one loop determinant for a field of arbitrary spin s in thermal AdS_3. We apply this to the calculation of the one loop partition function of N = 1 supergravity on AdS_3. We find that the answer factorizes into left- and right-moving super Virasoro characters built on the SL(2, C) invariant vacuum, as argued by Maloney and Witten on general grounds.

Relevance: 30.00%

Abstract:

The characteristic function for a contraction is a classical complete unitary invariant devised by Sz.-Nagy and Foias. Just as a contraction is related to the Szego kernel k_S(z, w) = (1 - z w̄)^{-1} for |z|, |w| < 1, by means of (1/k_S)(T, T*) ≥ 0, we consider an arbitrary open connected domain Ω in C^n, a kernel k on Ω such that 1/k is a polynomial, and a tuple T = (T_1, T_2, ..., T_n) of commuting bounded operators on a complex separable Hilbert space H such that (1/k)(T, T*) ≥ 0. Under some standard assumptions on k, it turns out that whether a characteristic function can be associated with T or not depends not only on T, but also on the kernel k. We give a necessary and sufficient condition. When this condition is satisfied, a functional model can be constructed. Moreover, the characteristic function is then a complete unitary invariant for a suitable class of tuples T.
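
In the simplest instance (n = 1, Szego kernel), included here for orientation: since 1/k_S(z, w) = 1 - z w̄, the hereditary functional calculus gives (1/k_S)(T, T*) = I - TT*, so the positivity condition (1/k_S)(T, T*) ≥ 0 is exactly the classical requirement that T be a contraction (‖T‖ ≤ 1).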

Relevance: 30.00%

Abstract:

In this paper we study the problem of designing SVM classifiers when the kernel matrix, K, is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can be solved in principle by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditsky and Nemirovski (2011). It achieves an O(1/T^2) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and the saddle point based algorithm outperforms the IP method significantly.
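
A schematic form of the saddle-point reformulation (notation assumed for illustration, not quoted from the paper): with K(u) = Σ_i u_i K_i and Y = diag(y), the robust counterpart of the SVM dual reads

    max_{α ∈ A} min_{u ∈ U}  1^T α - (1/2) α^T Y K(u) Y α,    A = {α : 0 ≤ α ≤ C, y^T α = 0},

which is concave in α for each fixed u and linear (hence convex) in u for each fixed α, so first-order saddle-point schemes of the kind described above apply directly.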

Relevance: 30.00%

Abstract:

The Lovász θ function of a graph is a fundamental tool in combinatorial optimization and approximation algorithms. Computing θ involves solving an SDP and is extremely expensive even for moderately sized graphs. In this paper we establish that the Lovász θ function is equivalent to a kernel learning problem related to one-class SVM. This interesting connection opens up many opportunities bridging graph theoretic algorithms and machine learning. We show that there exist graphs, which we call SVM-θ graphs, on which the Lovász θ function can be approximated well by a one-class SVM. This leads to a novel use of SVM techniques to solve algorithmic problems in large graphs, e.g. identifying a planted clique of size Θ(√n) in a random graph G(n, 1/2). A classic approach to this problem involves computing the θ function; however, it is not scalable due to the SDP computation. We show that the random graph with a planted clique is an example of an SVM-θ graph, and as a consequence an SVM-based approach easily identifies the clique in large graphs and is competitive with the state of the art. Further, we introduce the notion of a ''common orthogonal labelling'', which extends the notion of an ''orthogonal labelling'' of a single graph (used in defining the θ function) to multiple graphs. The problem of finding the optimal common orthogonal labelling is cast as a Multiple Kernel Learning problem and is used to identify a large common dense region in multiple graphs. The proposed algorithm achieves an order of magnitude better scalability compared to the state of the art.
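
For reference, the orthogonal-labelling formulation of the Lovász θ function that underlies this connection: θ(G) = min_{c, {u_i}} max_{i ∈ V} 1/(c^T u_i)^2, where c and the u_i are unit vectors and u_i^T u_j = 0 whenever i and j are non-adjacent in G. A ''common orthogonal labelling'' imposes these orthogonality constraints simultaneously for several graphs.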

Relevance: 30.00%

Abstract:

We consider four-dimensional CFTs which admit a large-N expansion, and whose spectrum contains states whose conformal dimensions do not scale with N. We explicitly reorganise the partition function obtained by exponentiating the one-particle partition function of these states into a heat kernel form for the dual string spectrum on AdS_5. On very general grounds, the heat kernel answer can be expressed in terms of a convolution of the one-particle partition function of the light states in the four-dimensional CFT.

Relevance: 30.00%

Abstract:

The recent focus of flood frequency analysis (FFA) studies has been on the development of methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion process based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data driven and flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak flow characteristics and extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
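
For comparison, a conventional baseline of the kind the D-kernel is meant to improve upon (not the paper's method; peak and volume are hypothetical arrays of flood characteristics): a bivariate Gaussian kernel density estimate of the joint peak-flow/volume distribution with a rule-of-thumb bandwidth.

    import numpy as np
    from scipy.stats import gaussian_kde

    def joint_flood_density(peak, volume, grid_size=100):
        peak = np.asarray(peak, dtype=float)
        volume = np.asarray(volume, dtype=float)
        data = np.vstack([peak, volume])                 # shape (2, n)
        kde = gaussian_kde(data)                         # Scott's rule-of-thumb bandwidth
        grid_p = np.linspace(peak.min(), peak.max(), grid_size)
        grid_v = np.linspace(volume.min(), volume.max(), grid_size)
        P, V = np.meshgrid(grid_p, grid_v)
        density = kde(np.vstack([P.ravel(), V.ravel()])).reshape(P.shape)
        return grid_p, grid_v, density

Fixed-bandwidth estimators of this kind can assign probability mass to infeasible (e.g. negative) flows, the boundary leakage mentioned above that the adaptive D-kernel is designed to avoid.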