Abstract:
OBJECTIVE: To compare insulin sensitivity (Si) from a frequently sampled intravenous glucose tolerance test (FSIGT) and subsequent minimal model analyses with surrogate measures of insulin sensitivity and resistance, and to compare features of the metabolic syndrome between Caucasians and Indian Asians living in the UK. SUBJECTS: In all, 27 healthy male volunteers (14 UK Caucasians and 13 UK Indian Asians), with a mean age of 51.2 +/- 1.5 y, BMI of 25.8 +/- 0.6 kg/m(2) and Si of 2.85 +/- 0.37. MEASUREMENTS: Si was determined from an FSIGT with subsequent minimal model analysis. The concentrations of insulin, glucose and nonesterified fatty acids (NEFA) were analysed in fasting plasma and used to calculate surrogate measures of insulin sensitivity (quantitative insulin sensitivity check index (QUICKI), revised QUICKI) and resistance (homeostasis model assessment of insulin resistance (HOMA-IR), fasting insulin resistance index (FIRI), Bennett's index, fasting insulin, insulin-to-glucose ratio). Plasma concentrations of triacylglycerol (TAG), total cholesterol, high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) were also measured in the fasted state. Anthropometric measurements were conducted to determine body-fat distribution. RESULTS: Correlation analysis identified the strongest relationship between Si and the revised QUICKI (r = 0.67; P < 0.001). Significant associations were also observed between Si and QUICKI (r = 0.51; P = 0.007), HOMA-IR (r = -0.50; P = 0.009), FIRI and fasting insulin. The Indian Asian group had lower HDL-C (P = 0.001), a higher waist-hip ratio (P = 0.01) and was significantly less insulin sensitive (Si) than the Caucasian group (P = 0.02). CONCLUSION: The revised QUICKI demonstrated a statistically strong relationship with the minimal model; however, it was unable to differentiate between insulin-sensitive and insulin-resistant groups in this study. Future larger studies in population groups with varying degrees of insulin sensitivity are recommended to investigate the general applicability of the revised QUICKI surrogate technique.
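For context, the fasting surrogate indices named above follow standard published formulas; a minimal Python sketch (function names and example values are illustrative, and the usual unit conventions are assumed: insulin in uU/mL, glucose in mg/dL for QUICKI and mmol/L for HOMA-IR, NEFA in mmol/L):

```python
import math

def quicki(insulin_uU_mL: float, glucose_mg_dL: float) -> float:
    """QUICKI = 1 / (log10(fasting insulin) + log10(fasting glucose))."""
    return 1.0 / (math.log10(insulin_uU_mL) + math.log10(glucose_mg_dL))

def revised_quicki(insulin_uU_mL: float, glucose_mg_dL: float,
                   nefa_mmol_L: float) -> float:
    """Revised QUICKI adds log10 of fasting NEFA to the denominator."""
    return 1.0 / (math.log10(insulin_uU_mL) + math.log10(glucose_mg_dL)
                  + math.log10(nefa_mmol_L))

def homa_ir(insulin_uU_mL: float, glucose_mmol_L: float) -> float:
    """HOMA-IR = fasting glucose [mmol/L] x fasting insulin [uU/mL] / 22.5."""
    return glucose_mmol_L * insulin_uU_mL / 22.5

# Illustrative fasting values, not data from the study.
print(quicki(8.0, 90.0), revised_quicki(8.0, 90.0, 0.5), homa_ir(8.0, 5.0))
```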
Abstract:
A unified approach is proposed for sparse kernel data modelling that includes regression and classification as well as probability density function estimation. The orthogonal-least-squares forward selection method based on the leave-one-out test criteria is presented within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic sparse kernel data modelling approach.
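The forward-selection-with-LOO idea can be illustrated directly. The sketch below is a plain, unoptimised rendition, not the paper's orthogonal-least-squares implementation: it greedily adds the Gaussian kernel column that most reduces the leave-one-out MSE, computed via the standard PRESS identity, and stops when that score no longer improves. All names and settings are illustrative:

```python
import numpy as np

def gaussian_kernel_matrix(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def loo_mse(Phi, y):
    """Leave-one-out MSE of the least-squares fit via the PRESS identity
    e_loo_i = e_i / (1 - h_ii), with H the hat matrix of Phi."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    e = y - Phi @ theta
    H = Phi @ np.linalg.pinv(Phi.T @ Phi) @ Phi.T
    return np.mean((e / (1.0 - np.diag(H))) ** 2)

def forward_select(K, y, max_terms=20):
    """Greedy forward selection of kernel columns minimising the LOO MSE.
    A plain (non-orthogonalised) sketch of the idea; the paper's algorithm
    reaches the same kind of selection far more efficiently."""
    selected, best = [], np.inf
    for _ in range(max_terms):
        scores = [(loo_mse(K[:, selected + [j]], y), j)
                  for j in range(K.shape[1]) if j not in selected]
        score, j = min(scores)
        if score >= best:          # stop when LOO error no longer improves
            break
        best, selected = score, selected + [j]
    return selected

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (100, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(100)
K = gaussian_kernel_matrix(X, X, width=0.8)   # candidate kernels at the data
terms = forward_select(K, y)
print(f"selected {len(terms)} of {K.shape[1]} kernels")
```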
Abstract:
Using the classical Parzen window estimate as the target function, kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Apart from the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with accuracy comparable to that of the full-sample optimised Parzen window density estimate.
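The regression target here is the classical Parzen window estimate evaluated at the training points. A minimal 1-D sketch of that target construction, with an illustrative Gaussian width h:

```python
import numpy as np

def parzen_window(x_eval, data, h):
    """Classical Parzen window (Gaussian kernel) density estimate:
    p_hat(x) = (1/N) * sum_i N(x; x_i, h^2)."""
    d = x_eval[:, None] - data[None, :]           # 1-D data for simplicity
    return np.mean(np.exp(-d**2 / (2*h**2)) / np.sqrt(2*np.pi*h**2), axis=1)

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])
grid = np.linspace(-4, 4, 9)
target = parzen_window(grid, data, h=0.3)   # regression target y_i = p_hat(x_i)
print(np.round(target, 3))
```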
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
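The D-optimality criterion itself is easy to state: greedily add the candidate kernel that maximises log det of the Gram matrix of the selected regressors. The sketch below computes that determinant naively with slogdet rather than incrementally through orthogonal forward selection as the paper does; all settings are illustrative:

```python
import numpy as np

def d_optimal_select(Phi, n_terms):
    """Greedy D-optimality design: repeatedly add the candidate column that
    maximises log det(Phi_S^T Phi_S) of the selected regressor subset."""
    selected = []
    for _ in range(n_terms):
        best_j, best_ld = None, -np.inf
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            S = Phi[:, selected + [j]]
            sign, ld = np.linalg.slogdet(S.T @ S)
            if sign > 0 and ld > best_ld:
                best_ld, best_j = ld, j
        if best_j is None:       # no candidate keeps the design nonsingular
            break
        selected.append(best_j)
    return selected

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 1))
K = np.exp(-(X - X.T)**2 / (2 * 0.5**2))   # Gaussian kernel candidates
print(d_optimal_select(K, n_terms=8))
```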
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model. The chosen illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used to solve the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for parallel solution of the rendering equation. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles. This domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum within a desired truncation error (systematic error), which fixes the number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. The domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. High-performance and Grid computing implementations of the corresponding Monte Carlo scheme are discussed.
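The stratification idea can be illustrated on a simple hemispherical integral. The paper partitions the hemisphere into equal orthogonal spherical triangles; the sketch below uses a simpler, assumed stratification of the standard area-preserving (u, v) parameterisation, which shows the same principle that each equal-probability cell can be sampled, and hence processed, independently:

```python
import numpy as np

def stratified_hemisphere_estimate(f, n_strata, rng):
    """Stratified Monte Carlo estimate of an integral over the unit hemisphere.
    The (u, v) unit square is split into n_strata x n_strata equal cells,
    one uniform sample per cell, mapped to a uniformly distributed direction
    (pdf = 1 / (2*pi) on the hemisphere).  Each cell's estimate is
    independent, so in a parallel setting the cells map naturally onto
    separate workers."""
    k = n_strata
    u = (np.arange(k)[:, None] + rng.random((k, k))) / k   # stratified u
    v = (np.arange(k)[None, :] + rng.random((k, k))) / k   # stratified v
    z = u                                  # cos(theta), uniform in [0, 1]
    r = np.sqrt(np.maximum(0.0, 1.0 - z**2))
    x, y = r * np.cos(2*np.pi*v), r * np.sin(2*np.pi*v)
    return (f(x, y, z) * 2.0 * np.pi).mean()   # divide by the pdf, average

rng = np.random.default_rng(3)
# Integrand cos(theta): the exact value of the integral is pi.
est = stratified_hemisphere_estimate(lambda x, y, z: z, n_strata=32, rng=rng)
print(est, np.pi)
```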
Abstract:
The note proposes an efficient nonlinear identification algorithm by combining a locally regularized orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximal model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious model with excellent generalization performance. The D-optimality design criterion further enhances model efficiency and robustness. An added advantage is that the user only needs to specify a weighting for the D-optimality cost in the combined model selection criterion, after which the entire model construction procedure becomes automatic. The value of this weighting does not critically influence the model selection procedure, and it can be chosen with ease from a wide range of values.
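One way to realise the combined criterion is to score each candidate, after orthogonalising it against the already selected regressors, by an error-reduction term plus a weighted D-optimality reward: since log det of the selected Gram matrix decomposes as the sum of log(w'w) over the orthogonalised terms, the reward is simply beta*log(w'w). The sketch below simplifies the paper's local (per-term) regularisation to a single shared lambda and is illustrative only:

```python
import numpy as np

def ols_doptimality_select(Phi, y, beta=1e-3, lam=1e-6, max_terms=15):
    """Greedy orthogonal forward selection combining a (simplified,
    single-lambda) regularised error-reduction term with a D-optimality
    reward beta*log(w'w).  w is the candidate column orthogonalised
    against the already selected regressors, so the sum of log(w'w) over
    selected terms equals log det of their Gram matrix."""
    W = Phi.astype(float).copy()      # candidates, progressively deflated
    selected = []
    for _ in range(max_terms):
        wy = W.T @ y
        ww = np.einsum('ij,ij->j', W, W)
        score = wy**2 / (ww + lam) + beta * np.log(np.maximum(ww, 1e-300))
        score[selected] = -np.inf
        j = int(np.argmax(score))
        if ww[j] < 1e-10:             # candidate already in the selected span
            break
        selected.append(j)
        w = W[:, j] / np.sqrt(ww[j])  # deflate remaining candidates against w
        W -= np.outer(w, w @ W)
    return selected

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, (120, 1))
y = np.tanh(X[:, 0]) + 0.05 * rng.standard_normal(120)
K = np.exp(-(X - X.T)**2 / (2 * 0.7**2))     # Gaussian kernel candidates
print(ols_doptimality_select(K, y))
```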
Abstract:
Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix that is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.
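The per-kernel covariance tuning can be sketched as follows: given the current residual, search over centre and diagonal-covariance candidates for the regressor most correlated with that residual. A plain random search is used here as an assumed stand-in for the paper's boosting-guided search:

```python
import numpy as np

def tune_kernel(X, r, n_iter=200, rng=None):
    """Tune one Gaussian regressor's centre and diagonal covariance by random
    search, maximising |correlation| between the regressor and residual r.
    A plain random-restart stand-in for the boosting-guided search."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    best, best_score = None, -np.inf
    for _ in range(n_iter):
        mu = X[rng.integers(n)] + 0.1 * rng.standard_normal(d)
        var = np.exp(rng.uniform(np.log(0.05), np.log(5.0), d))  # per-dim var
        phi = np.exp(-0.5 * (((X - mu)**2) / var).sum(axis=1))
        score = abs(np.corrcoef(phi, r)[0, 1])
        if score > best_score:
            best, best_score = (mu, var), score
    return best, best_score

rng = np.random.default_rng(6)
X = rng.standard_normal((80, 2))
r = np.sin(X[:, 0]) - 0.3 * X[:, 0]          # a synthetic residual to fit
(mu, var), score = tune_kernel(X, r, rng=rng)
print(score)
```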
Abstract:
A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting based on the Fisher ratio for class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and the diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio for class separability. An efficient weighted optimization method is developed based on boosting to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to the existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
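For reference, the Fisher ratio used as the class separability measure is the squared difference of class means over the sum of class variances of a one-dimensional model output; a minimal sketch:

```python
import numpy as np

def fisher_ratio(scores, labels):
    """Fisher ratio for class separability of a 1-D feature:
    (m1 - m0)^2 / (var1 + var0)."""
    s0, s1 = scores[labels == 0], scores[labels == 1]
    return (s1.mean() - s0.mean())**2 / (s1.var() + s0.var())

rng = np.random.default_rng(7)
scores = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)])
labels = np.concatenate([np.zeros(50, int), np.ones(50, int)])
print(fisher_ratio(scores, labels))
```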
Abstract:
We propose a simple yet computationally efficient construction algorithm for two-class kernel classifiers. In order to optimise the classifier's generalisation capability, an orthogonal forward selection procedure is used to select kernels one by one by minimising the leave-one-out (LOO) misclassification rate directly. It is shown that the computation of the LOO misclassification rate is very efficient owing to orthogonalisation. Examples are used to demonstrate that the proposed algorithm is a viable alternative for constructing sparse two-class kernel classifiers in terms of performance and computational efficiency.
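The efficiency claim rests on the fact that, for a least-squares classifier, the LOO output of each sample is available in closed form without refitting (the PRESS identity). The direct, non-recursive sketch below shows the identity being exploited; the paper's algorithm evaluates the same kind of quantity recursively through orthogonalisation:

```python
import numpy as np

def loo_misclassification_rate(Phi, y):
    """LOO error of a least-squares classifier with +/-1 targets, computed
    without refitting: e_loo_i = e_i / (1 - h_ii), yhat_loo_i = y_i - e_loo_i.
    A sample is a LOO error when y_i * yhat_loo_i <= 0."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    e = y - Phi @ theta
    h = np.einsum('ij,jk,ik->i', Phi, np.linalg.pinv(Phi.T @ Phi), Phi)
    yhat_loo = y - e / (1.0 - h)
    return np.mean(y * yhat_loo <= 0)

rng = np.random.default_rng(8)
X = rng.standard_normal((100, 1))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
K = np.exp(-(X - X.T)**2 / (2 * 1.0**2))
print(loo_misclassification_rate(K[:, ::10], y))   # 10 kernels as regressors
```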
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new regularized orthogonal weighted least squares parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulae when searching for model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
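AUC itself has a simple rank-statistic form (the Mann-Whitney estimator), which is the quantity the LOO variant tracks; a minimal sketch:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive scores above a random negative,
    with ties counted as one half."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

rng = np.random.default_rng(9)
scores = np.concatenate([rng.normal(1, 1, 30), rng.normal(0, 1, 300)])
labels = np.concatenate([np.ones(30, int), np.zeros(300, int)])
print(auc(scores, labels))   # imbalanced example: 30 positives, 300 negatives
```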
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct a very compact yet accurate density estimate.
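The weight-updating step can be sketched with the classic multiplicative update for nonnegative quadratic programming, valid when the Hessian and linear term are entrywise nonnegative (true for Gaussian-kernel Gram matrices). The unity constraint is imposed here by a final renormalisation, a simplification of the Lagrangian treatment in the literature, so this is an assumed approximation of the paper's exact update:

```python
import numpy as np

def mnqp_weights(B, c, n_iter=2000):
    """Multiplicative update for min_w 0.5*w'Bw - c'w subject to w >= 0,
    valid when B and c have nonnegative entries.  The unity constraint is
    imposed by a final renormalisation (a simplification).  Weights driven
    to (near) zero prune their kernels."""
    w = np.full(len(c), 1.0 / len(c))
    for _ in range(n_iter):
        w = w * c / np.maximum(B @ w, 1e-12)
    w[w < 1e-6] = 0.0
    return w / w.sum()

rng = np.random.default_rng(10)
x = rng.standard_normal(50)
B = np.exp(-(x[:, None] - x[None, :])**2 / 4.0)   # kernel Gram matrix
c = B[:, ::10].mean(axis=1)    # linear term whose minimiser is sparse
w = mnqp_weights(B, c)
print((w > 0).sum(), "of", len(w), "kernels retained")
```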
Abstract:
This paper examines whether shell oxygen isotope ratios (d18Oar) of Unio sp. can be used as a proxy for past discharge of the river Meuse. The proxy was developed from a modern dataset for the reference time interval 1997–2007, which showed a logarithmic relationship between discharge and measured water oxygen isotope ratios (d18Ow). To test this relationship for past time intervals, d18Oar values were measured in the aragonite of the growth increments of four Unio sp. shells: two from a relatively wet period and two from a very dry time interval (1910–1918 and 1969–1977, respectively). Shell d18Oar records were converted into d18Ow values using existing water temperature records. Summer d18Ow values reconstructed from d18Oar of 1910–1918 showed a similar range to the summer d18Ow values for the reference time interval 1997–2007, whilst reconstructed summer d18Ow values for the time interval 1969–1977 were anomalously high. These high d18Ow values suggest that the river Meuse experienced severe summer droughts during the latter time interval. Discharge was then estimated from the reconstructed d18Ow values using the logarithmic relationship between d18Ow and discharge. A comparison of the calculated summer discharge results with observed discharge data showed that Meuse low-discharge events below a threshold value of 6 m3/s can be detected in the reconstructed d18Ow records, but true quantification remains problematic.
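The calibration and inversion step can be sketched with synthetic numbers (the coefficients below are invented, not the Meuse calibration): fit d18Ow = a + b*ln(Q) on reference data, then invert the fit to reconstruct Q from reconstructed d18Ow values:

```python
import numpy as np

# Illustrative only: synthetic numbers, not the study's calibration data.
rng = np.random.default_rng(5)
Q = rng.uniform(20, 800, 60)                      # discharge, m3/s (synthetic)
d18Ow = -9.5 + 0.6 * np.log(Q) + 0.1 * rng.standard_normal(60)
b, a = np.polyfit(np.log(Q), d18Ow, 1)            # slope, intercept
Q_reconstructed = np.exp((d18Ow - a) / b)         # invert the log relationship
print(a, b, np.corrcoef(Q, Q_reconstructed)[0, 1])
```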