960 results for Kernel density estimates
Abstract:
Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations respectively.
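The Ornstein–Uhlenbeck process mentioned above is one of the cases where the transitional density *is* available in closed form, so exact maximum likelihood is tractable: the discretely observed process is a Gaussian AR(1), and the MLE reduces to a linear regression of each observation on its predecessor. A minimal sketch (the parameter values and sampling settings here are illustrative assumptions, not figures from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true parameters of dX = theta*(mu - X) dt + sigma dW
theta, mu, sigma = 2.0, 1.0, 0.5
dt, n = 0.01, 100_000

# Simulate using the exact Gaussian transition density of the OU process:
# X[t+dt] | X[t] ~ N(mu + (X[t]-mu)*a, sigma^2*(1-a^2)/(2*theta)), a = exp(-theta*dt)
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
x = np.empty(n + 1)
x[0] = mu
for i in range(n):
    x[i + 1] = mu + (x[i] - mu) * a + sd * rng.standard_normal()

# Exact MLE: regress x[t+1] on x[t] (AR(1) form), then invert the mapping
X = np.column_stack([x[:-1], np.ones(n)])
(ahat, bhat), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
theta_hat = -np.log(ahat) / dt
mu_hat = bhat / (1 - ahat)
resid = x[1:] - ahat * x[:-1] - bhat
sigma_hat = np.sqrt(resid.var() * 2 * theta_hat / (1 - ahat**2))
print(theta_hat, mu_hat, sigma_hat)
```

For processes without a closed-form transition density, this exact route is unavailable, which is what motivates the approximate procedures the article compares.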
Abstract:
BACKGROUND - High-density lipoprotein (HDL) protects against arterial atherothrombosis, but it is unknown whether it protects against recurrent venous thromboembolism. METHODS AND RESULTS - We studied 772 patients after a first spontaneous venous thromboembolism (average follow-up 48 months) and recorded the end point of symptomatic recurrent venous thromboembolism, which developed in 100 of the 772 patients. The relationship between plasma lipoprotein parameters and recurrence was evaluated. Plasma apolipoproteins AI and B were measured by immunoassays for all subjects. Compared with those without recurrence, patients with recurrence had lower mean (±SD) levels of apolipoprotein AI (1.12±0.22 versus 1.23±0.27 mg/mL, P<0.001) but similar apolipoprotein B levels. The relative risk of recurrence was 0.87 (95% CI, 0.80 to 0.94) for each increase of 0.1 mg/mL in plasma apolipoprotein AI. Compared with patients with apolipoprotein AI levels in the lowest tertile (<1.07 mg/mL), the relative risk of recurrence was 0.46 (95% CI, 0.27 to 0.77) for the highest-tertile patients (apolipoprotein AI >1.30 mg/mL) and 0.78 (95% CI, 0.50 to 1.22) for midtertile patients (apolipoprotein AI of 1.07 to 1.30 mg/mL). Using nuclear magnetic resonance, we determined the levels of 10 major lipoprotein subclasses and HDL cholesterol for 71 patients with recurrence and 142 matched patients without recurrence. We found a strong trend for association between recurrence and low levels of HDL particles and HDL cholesterol. CONCLUSIONS - Patients with high levels of apolipoprotein AI and HDL have a decreased risk of recurrent venous thromboembolism. © 2007 American Heart Association, Inc.
Abstract:
Background-Although dyslipoproteinemia is associated with arterial atherothrombosis, little is known about plasma lipoproteins in venous thrombosis patients. Methods and Results-We determined plasma lipoprotein subclass concentrations using nuclear magnetic resonance spectroscopy and antigenic levels of apolipoproteins AI and B in blood samples from 49 male venous thrombosis patients and matched controls aged <55 years. Venous thrombosis patients had significantly lower levels of HDL particles, large HDL particles, HDL cholesterol, and apolipoprotein AI and significantly higher levels of LDL particles and small LDL particles. The quartile-based odds ratios for decreased HDL particle and apolipoprotein AI levels in patients compared with controls were 6.5 and 6.0 (95% CI, 2.3 to 19 and 2.1 to 17), respectively. Odds ratios for the apolipoprotein B/apolipoprotein AI ratio and the LDL cholesterol/HDL cholesterol ratio were 6.3 and 2.7 (95% CI, 1.9 to 21 and 1.1 to 6.5), respectively. When polymorphisms in genes for hepatic lipase, endothelial lipase, and cholesteryl ester transfer protein were analyzed, patients differed significantly from controls in the allelic frequency for the TaqI B1/B2 polymorphism in cholesteryl ester transfer protein, consistent with the observed pattern of lower HDL and higher LDL. Conclusions-Venous thrombosis in men aged <55 years is associated with dyslipoproteinemia involving lower levels of HDL particles, elevated levels of small LDL particles, and an elevated apolipoprotein B/apolipoprotein AI ratio. This dyslipoproteinemia appears to be associated with a related cholesteryl ester transfer protein genotype difference. © 2005 American Heart Association, Inc.
Abstract:
Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric and positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space - classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data this gives a powerful transductive algorithm -using the labeled part of the data one can learn an embedding also for the unlabeled part. The similarity between test points is inferred from training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft margin parameter in support vector machines, solving an important open problem.
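The central object above, the kernel matrix, is exactly the Gram matrix of inner products of the implicitly embedded points, which is why it must be symmetric positive semidefinite. A minimal sketch (not the paper's SDP learning method, just the geometric fact it rests on) using a Gaussian kernel with an assumed bandwidth:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))  # 20 input points in R^3

# Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - x_j||^2 / 2)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)

# A valid kernel matrix is symmetric positive semidefinite ...
assert np.allclose(K, K.T)
w, U = np.linalg.eigh(K)

# ... and therefore factors as Phi @ Phi.T: the rows of Phi are explicit
# coordinates for the otherwise implicit embedding of the 20 points.
Phi = U * np.sqrt(np.clip(w, 0.0, None))
print(w.min(), np.allclose(Phi @ Phi.T, K))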
Abstract:
Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches include different formulations of objectives and varying regularization strategies. In this paper we present a unifying optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework analytically using a Rademacher complexity bound on the generalization error and empirically in a set of experiments.
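The shared structure across the formulations the paper unifies is a convex combination of base kernels, K = Σ_m μ_m K_m with the weights μ on the simplex, optimized jointly with a regularized risk. A toy sketch with two assumed RBF bandwidths and a grid search standing in for the paper's actual optimization criterion:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

def rbf(A, B, gamma):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Two base kernels (bandwidths 0.1 and 5.0 are arbitrary assumptions);
# MKL seeks simplex weights mu for the combined kernel sum(mu_m * K_m)
Ks = [rbf(X, X, 0.1), rbf(X, X, 5.0)]

def krr_loss(K, y, lam=1e-2):
    # kernel ridge regression training loss, a crude surrogate objective
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return ((K @ alpha - y) ** 2).mean()

# Toy search over the one-dimensional simplex mu = (m, 1 - m)
grid = np.linspace(0.0, 1.0, 11)
best = min(grid, key=lambda m: krr_loss(m * Ks[0] + (1 - m) * Ks[1], y))
print(best)
```

A real MKL solver replaces the grid with a smooth (often dual) optimization over μ, which is what the unified criterion in the paper formalizes.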
Abstract:
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data-dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.
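The data-dependent quantities used in bounds of this kind are typically empirical Rademacher averages: sup over the class of the correlation with random signs, computed on the sample itself. A minimal Monte Carlo sketch for an assumed toy class of threshold functions (not the specific class or algorithm from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-1, 1, n)

# Toy finite class of threshold functions f_t(x) = 1{x <= t}
thresholds = np.linspace(-1, 1, 21)
F = (x[None, :] <= thresholds[:, None]).astype(float)  # shape (|F|, n)

# Monte Carlo estimate of the empirical Rademacher average:
#   R_hat = E_sigma sup_f (1/n) * sum_i sigma_i * f(x_i)
sigmas = rng.choice([-1.0, 1.0], size=(2000, n))
R_hat = np.max(F @ sigmas.T / n, axis=0).mean()
print(R_hat)
```

By Massart's finite-class lemma the expectation is at most sqrt(2 log|F| / n) here, roughly 0.25; the estimate is fully computable from the data, which is the sense of "data-dependent" above.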
Abstract:
We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of complexity. The estimates we establish give optimal rates and are based on a local and empirical version of Rademacher averages, in the sense that the Rademacher averages are computed from the data, on a subset of functions with small empirical error. We present some applications to classification and prediction with convex function classes, and with kernel classes in particular.
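The "local" part of the localized Rademacher average means restricting the sup to the subset of functions with small empirical error, which can only shrink the complexity term and is what yields the faster rates. A toy sketch of the restriction (the class and error threshold are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-1, 1, n)
y = (x <= 0.0).astype(float)  # labels generated by one threshold function

# Toy class of threshold functions and their empirical errors
thresholds = np.linspace(-1, 1, 41)
F = (x[None, :] <= thresholds[:, None]).astype(float)
emp_err = (F != y[None, :]).mean(axis=1)

sigmas = rng.choice([-1.0, 1.0], size=(2000, n))
def rademacher(Fsub):
    # Monte Carlo empirical Rademacher average over a sub-class
    return np.max(Fsub @ sigmas.T / n, axis=0).mean()

# Global average vs. the local average over small-empirical-error functions
global_avg = rademacher(F)
local_avg = rademacher(F[emp_err <= 0.05])
print(global_avg, local_avg)
```

Since the local sup ranges over a subset, the local average is never larger than the global one; the bounds in the paper exploit exactly this gap.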
Abstract:
We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
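The basic Krylov approximation the paper analyzes projects A onto a small Krylov subspace via the Arnoldi process and evaluates φ on the resulting Hessenberg matrix: φ(tA)b ≈ β V_m φ(tH_m) e_1 with β = ||b||. A minimal sketch using the standard Arnoldi approximant (not the Harmonic Ritz variant the paper proposes) on an assumed toy Jacobian, with φ(tH_m) computed via a block matrix-exponential identity:

```python
import numpy as np
from scipy.linalg import expm

def phi1(M):
    # phi(z) = (exp(z) - 1)/z at a matrix argument, via the block identity
    # expm([[M, I], [0, 0]]) = [[expm(M), phi1(M)], [0, I]]
    m = M.shape[0]
    aug = np.zeros((2 * m, 2 * m))
    aug[:m, :m] = M
    aug[:m, m:] = np.eye(m)
    return expm(aug)[:m, m:]

def krylov_phi(A, b, t, m):
    # Arnoldi: orthonormal basis V of K_m(A, b) and Hessenberg H = V.T A V
    # (assumes no breakdown, i.e. the subspace reaches dimension m)
    n = len(b)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = A @ V[:, m - 1]
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    # phi(tA) b ~= beta * V_m * phi(t H_m) * e_1
    return beta * V @ phi1(t * H)[:, 0]

# Toy stiff system: 1D Laplacian standing in for the Jacobian A
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)
approx = krylov_phi(A, b, t=0.1, m=20)
exact = phi1(0.1 * A) @ b
err = np.linalg.norm(approx - exact)
print(err)
```

For large stiff systems the point is that only matrix-vector products with A are needed; the dense `phi1(0.1 * A)` reference above is affordable only because this toy problem is small.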