852 results for Sensitivity kernel
Abstract:
An automatic algorithm is derived for constructing kernel density estimates based on a regression approach that directly optimizes generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. Local regularization is incorporated into the density construction process to further enforce sparsity. Examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample Parzen window density estimate.
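The full-sample Parzen window estimate referred to above serves as the reference against which the sparse estimate is compared (and, in the related papers below, as the regression target). A minimal numpy sketch of that baseline, using an isotropic Gaussian kernel of user-chosen width h; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def parzen_window(x_train, x_query, h):
    """Classical Parzen window density estimate with Gaussian kernels.

    Every training sample contributes one kernel of common width h,
    and all kernels receive the equal weight 1/N.
    """
    x_train = np.atleast_2d(x_train)          # (N, d)
    x_query = np.atleast_2d(x_query)          # (M, d)
    N, d = x_train.shape
    norm = (2.0 * np.pi * h ** 2) ** (-d / 2.0)
    # Squared distances between every query point and every training sample.
    sq_dist = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    return norm * np.exp(-0.5 * sq_dist / h ** 2).mean(axis=1)

# Example: density of a 1-D standard normal sample evaluated on a grid.
rng = np.random.default_rng(0)
sample = rng.standard_normal((200, 1))
grid = np.linspace(-3, 3, 61).reshape(-1, 1)
density = parzen_window(sample, grid, h=0.3)
```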
Abstract:
This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
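The leave-one-out test score that drives the kernel selection can be evaluated without refitting the model n times: for any linear-in-the-parameters model the LOO residual follows from the ordinary residual and the diagonal of the hat matrix. The sketch below shows only that generic identity; the paper computes the same quantity incrementally inside the orthogonal forward regression, which is not reproduced here, and the single `ridge` term merely stands in for the paper's local regularisation:

```python
import numpy as np

def loo_mse(Phi, y, ridge=0.0):
    """Leave-one-out mean squared error of a linear-in-the-parameters model.

    Phi   : (n, m) design matrix, e.g. responses of m candidate kernels
    y     : (n,)   regression target, e.g. Parzen window values
    ridge : optional single regularisation parameter (a stand-in for the
            paper's local regularisation, which uses one per regressor)
    """
    n, m = Phi.shape
    A = Phi.T @ Phi + ridge * np.eye(m)
    H = Phi @ np.linalg.solve(A, Phi.T)            # hat (smoother) matrix
    residual = y - H @ y                           # ordinary residuals
    loo_residual = residual / (1.0 - np.diag(H))   # LOO identity e_i / (1 - h_ii)
    return float(np.mean(loo_residual ** 2))
```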
Abstract:
Using the classical Parzen window (PW) estimate as the desired response, kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density (SKD) estimates. The proposed algorithm incrementally minimises a leave-one-out test score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights of the selected sparse model are finally updated using the multiplicative nonnegative quadratic programming algorithm, which ensures the nonnegativity and unity constraints for the kernel weights and has the desired ability to reduce the model size further. Except for the kernel width, the proposed method has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Several examples demonstrate the ability of this simple regression-based approach to effectively construct an SKD estimate with comparable accuracy to that of the full-sample optimised PW density estimate.
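The final weight update must keep the kernel weights nonnegative and summing to one. Below is a simplified stand-in for that step: the classical multiplicative update for nonnegative least squares (valid here because the Gram matrix and target vector are elementwise nonnegative), with the unity constraint imposed only by a final renormalisation, whereas the MNQP algorithm described in the abstract handles the sum-to-one constraint inside the update itself:

```python
import numpy as np

def nonneg_kernel_weights(Phi, p, n_iter=500, eps=1e-12):
    """Multiplicative update for min ||p - Phi w||^2 subject to w >= 0.

    Phi : (n, m) kernel design matrix (elementwise nonnegative)
    p   : (n,)   Parzen window values used as the desired response
    Returns the weights renormalised to sum to one; this renormalisation is a
    simplification of the MNQP step described in the abstract.
    """
    n, m = Phi.shape
    w = np.full(m, 1.0 / m)            # strictly positive starting point
    G = Phi.T @ Phi                    # Gram matrix, nonnegative entries
    b = Phi.T @ p                      # nonnegative linear term
    for _ in range(n_iter):
        w *= b / (G @ w + eps)         # multiplicative update keeps w >= 0
    return w / w.sum()                 # impose the unity constraint at the end
```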
Abstract:
Nonlinear system identification is considered using a generalized kernel regression model. Unlike the standard kernel model, which employs a fixed common variance for all the kernel regressors, each kernel regressor in the generalized kernel model has an individually tuned diagonal covariance matrix that is determined by maximizing the correlation between the training data and the regressor using a repeated guided random search based on boosting optimization. An efficient construction algorithm based on orthogonal forward regression with leave-one-out (LOO) test statistic and local regularization (LR) is then used to select a parsimonious generalized kernel regression model from the resulting full regression matrix. The proposed modeling algorithm is fully automatic and the user is not required to specify any criterion to terminate the construction procedure. Experimental results involving two real data sets demonstrate the effectiveness of the proposed nonlinear system identification approach.
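Each regressor of the generalized kernel model has its own centre and diagonal covariance, and the guided random search scores a candidate regressor by its correlation with the training target. A small sketch of the two ingredients such a search would evaluate (the boosting-based optimisation itself is not reproduced, and the names are illustrative):

```python
import numpy as np

def gaussian_regressor(X, centre, diag_var):
    """Gaussian kernel regressor with an individually tuned diagonal covariance.

    X        : (n, d) input data
    centre   : (d,)   kernel mean vector
    diag_var : (d,)   per-dimension variances (the diagonal covariance)
    """
    z = (X - centre) ** 2 / diag_var
    return np.exp(-0.5 * z.sum(axis=1))

def correlation_fitness(phi, y):
    """Absolute correlation between a candidate regressor and the target:
    the quantity the repeated guided random search tries to maximise."""
    phi_c, y_c = phi - phi.mean(), y - y.mean()
    return float(abs(phi_c @ y_c) /
                 (np.linalg.norm(phi_c) * np.linalg.norm(y_c) + 1e-12))
```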
Abstract:
A greedy technique is proposed to construct parsimonious kernel classifiers using the orthogonal forward selection method and boosting based on the Fisher ratio as a class separability measure. Unlike most kernel classification methods, which restrict kernel means to the training input data and use a fixed common variance for all the kernel terms, the proposed technique can tune both the mean vector and diagonal covariance matrix of each individual kernel by incrementally maximizing the Fisher ratio for class separability. An efficient weighted optimization method is developed based on boosting to append kernels one by one in an orthogonal forward selection procedure. Experimental results obtained using this construction technique demonstrate that it offers a viable alternative to the existing state-of-the-art kernel modeling methods for constructing sparse Gaussian radial basis function network classifiers that generalize well.
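The class separability measure driving the construction is the Fisher ratio evaluated on a candidate kernel's response. A minimal sketch of that measure for a two-class problem with ±1 labels (names are illustrative):

```python
import numpy as np

def fisher_ratio(phi, labels):
    """Fisher ratio of a one-dimensional feature (e.g. one kernel's response).

    phi    : (n,) candidate kernel output on the training data
    labels : (n,) class labels, +1 / -1
    """
    pos, neg = phi[labels == 1], phi[labels == -1]
    between = (pos.mean() - neg.mean()) ** 2   # between-class scatter
    within = pos.var() + neg.var()             # within-class scatter
    return between / (within + 1e-12)
```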
Abstract:
We propose a simple yet computationally efficient construction algorithm for two-class kernel classifiers. In order to optimise the classifier's generalisation capability, an orthogonal forward selection procedure is used to select kernels one by one by minimising the leave-one-out (LOO) misclassification rate directly. It is shown that the computation of the LOO misclassification rate is very efficient owing to orthogonalisation. Examples are used to demonstrate that the proposed algorithm is a viable alternative for constructing sparse two-class kernel classifiers in terms of performance and computational efficiency.
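For ±1 class labels the LOO misclassification rate can likewise be read off the LOO residuals: the LOO prediction for sample i is y_i minus its LOO residual, and a misclassification occurs when that prediction disagrees in sign with the label. A sketch using the same hat-matrix identity shown earlier; the paper obtains this quantity more cheaply through orthogonalisation, which this plain version does not reproduce:

```python
import numpy as np

def loo_misclassification_rate(Phi, y, ridge=0.0):
    """LOO misclassification rate of a linear-in-the-parameters classifier.

    Phi : (n, m) responses of the selected kernels
    y   : (n,)   class labels, +1 / -1
    """
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    H = Phi @ np.linalg.solve(A, Phi.T)             # hat matrix
    loo_residual = (y - H @ y) / (1.0 - np.diag(H))
    loo_prediction = y - loo_residual               # leave-one-out output for sample i
    return float(np.mean(y * loo_prediction <= 0))  # sign disagreement counts as an error
```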
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize the model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new regularized orthogonal weighted least squares parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulas in searching model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
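The selection criterion here is the area under the ROC curve rather than accuracy, which makes it insensitive to class imbalance. A minimal sketch of the AUC itself, computed from the Mann-Whitney rank statistic; the paper's actual contribution, the analytic LOO-AUC inside the ROWLS/OFS recursion, is not reproduced here:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic.

    scores : (n,) real-valued classifier outputs
    labels : (n,) class labels, +1 for the positive class, -1 otherwise
    """
    pos = scores[labels == 1]
    neg = scores[labels == -1]
    diff = pos[:, None] - neg[None, :]              # all positive/negative pairs
    # Fraction of pairs ranked correctly, counting ties as one half.
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())
```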
Abstract:
Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of the jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
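In each forward stage only one new weight is estimated against the current residual, and a kernel is accepted only if that weight is positive. The sketch below is a loose stand-in for that stage: a plain single-parameter least-squares estimate replaces the paper's jackknife estimator, candidates are ranked by training-error reduction rather than by the LOO score, and the Gauss-Newton width update is omitted:

```python
import numpy as np

def forward_stage(residual, candidates):
    """One forward-constrained selection stage.

    residual   : (n,)   current residual between the Parzen target and the model
    candidates : (n, m) responses of the remaining candidate kernels
    Returns (index, weight) of the accepted kernel, or (None, 0.0) if no
    candidate passes the positivity check.
    """
    best_index, best_weight, best_drop = None, 0.0, 0.0
    for j in range(candidates.shape[1]):
        phi = candidates[:, j]
        energy = phi @ phi
        if energy <= 0.0:
            continue
        a = (phi @ residual) / energy      # single-parameter least-squares estimate
        if a <= 0.0:                       # positivity constraint on the new weight
            continue
        drop = a * (phi @ residual)        # reduction in the squared training error
        if drop > best_drop:
            best_index, best_weight, best_drop = j, a, drop
    return best_index, best_weight
```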
Abstract:
The production and release of dissolved organic carbon (DOC) from peat soils is thought to be sensitive to changes in climate, specifically changes in temperature and rainfall. However, little is known about the actual rates of net DOC production in response to temperature and water table draw-down, particularly in comparison to carbon dioxide (CO2) fluxes. To explore these relationships, we carried out a laboratory experiment on intact peat soil cores under controlled temperature and water table conditions to determine the impact and interaction of each of these climatic factors on net DOC production. We found a significant interaction (P < 0.001) between temperature, water table draw-down and net DOC production across the whole soil core (0 to −55 cm depth). This corresponded to an increase in the Q10 (i.e. rise in the rate of net DOC production over a 10 °C range) from 1.84 under high water tables and anaerobic conditions to 3.53 under water table draw-down and aerobic conditions between −10 and −40 cm depth. However, increases in net DOC production were only seen after water tables recovered to the surface as secondary changes in soil water chemistry driven by sulphur redox reactions decreased DOC solubility, and therefore DOC concentrations, during periods of water table draw-down. Furthermore, net microbial consumption of DOC was also apparent at −1 cm depth and was an additional cause of declining DOC concentrations during dry periods. Therefore, although increased temperature and decreased rainfall could have a significant effect on net DOC release from peatlands, these climatic effects could be masked by other factors controlling the biological consumption of DOC in addition to soil water chemistry and DOC solubility. These findings highlight both the sensitivity of DOC release from ombrotrophic peat to episodic changes in water table draw-down, and the need to disentangle complex and interacting controls on DOC dynamics to fully understand the impact of environmental change on this system.
Abstract:
Background: Insulin sensitivity (Si) is improved by weight loss and exercise, but the effects of the replacement of saturated fatty acids (SFAs) with monounsaturated fatty acids (MUFAs) or carbohydrates of high glycemic index (HGI) or low glycemic index (LGI) are uncertain. Objective: We conducted a dietary intervention trial to study these effects in participants at risk of developing metabolic syndrome. Design: We conducted a 5-center, parallel design, randomized controlled trial [RISCK (Reading, Imperial, Surrey, Cambridge, and Kings)]. The primary and secondary outcomes were changes in Si (measured by using an intravenous glucose tolerance test) and cardiovascular risk factors. Measurements were made after 4 wk of a high-SFA and HGI (HS/HGI) diet and after a 24-wk intervention with HS/HGI (reference), high-MUFA and HGI (HM/HGI), HM and LGI (HM/LGI), low-fat and HGI (LF/HGI), and LF and LGI (LF/LGI) diets. Results: We analyzed data for 548 of 720 participants who were randomly assigned to treatment. The median Si was 2.7 × 10⁻⁴ mL · μU⁻¹ · min⁻¹ (interquartile range: 2.0, 4.2 × 10⁻⁴ mL · μU⁻¹ · min⁻¹), and unadjusted mean percentage changes (95% CIs) after 24 wk treatment (P = 0.13) were as follows: for the HS/HGI group, −4% (−12.7%, 5.3%); for the HM/HGI group, 2.1% (−5.8%, 10.7%); for the HM/LGI group, −3.5% (−10.6%, 4.3%); for the LF/HGI group, −8.6% (−15.4%, −1.1%); and for the LF/LGI group, 9.9% (2.4%, 18.0%). Total cholesterol (TC), LDL cholesterol, and apolipoprotein B concentrations decreased with SFA reduction. Decreases in TC and LDL-cholesterol concentrations were greater with LGI. Fat reduction lowered HDL cholesterol and apolipoprotein A1 and B concentrations. Conclusions: This study did not support the hypothesis that isoenergetic replacement of SFAs with MUFAs or carbohydrates has a favorable effect on Si. Lowering GI enhanced reductions in TC and LDL-cholesterol concentrations in subjects, with tentative evidence of improvements in Si in the LF-treatment group. This trial was registered at clinicaltrials.gov as ISRCTN29111298.
Abstract:
Substituted amphetamines such as p-chloroamphetamine and the abused drug methylenedioxymethamphetamine cause selective destruction of serotonin axons in rats by unknown mechanisms. Since some serotonin neurones also express neuronal nitric oxide synthase, which has been implicated in neurotoxicity, the present study was undertaken to determine whether nitric oxide synthase-expressing serotonin neurones are selectively vulnerable to methylenedioxymethamphetamine or p-chloroamphetamine. Using double-labeling immunocytochemistry and double in situ hybridization for nitric oxide synthase and the serotonin transporter, it was confirmed that about two-thirds of serotonergic cell bodies in the dorsal raphe nucleus expressed nitric oxide synthase; however, few if any serotonin transporter immunoreactive axons in striatum expressed nitric oxide synthase at detectable levels. Methylenedioxymethamphetamine (30 mg/kg) or p-chloroamphetamine (2 × 10 mg/kg) was administered to Sprague-Dawley rats, and 7 days after drug administration there were modest decreases in the levels of serotonin transporter protein in frontal cortex and striatum, as measured by Western blotting, even though axonal loss could be clearly seen by immunostaining. p-Chloroamphetamine or methylenedioxymethamphetamine administration did not alter the level of nitric oxide synthase in striatum or frontal cortex, determined by Western blotting. Analysis of serotonin neuronal cell bodies 7 days after p-chloroamphetamine treatment revealed a net down-regulation of serotonin transporter mRNA levels, and a profound change in expression of nitric oxide synthase, with 33% of serotonin transporter mRNA positive cells containing nitric oxide synthase mRNA, compared with 65% in control animals. Altogether, these results support the hypothesis that serotonin neurones which express nitric oxide synthase are most vulnerable to substituted amphetamine toxicity, supporting the concept that the selective vulnerability of serotonin neurones has a molecular basis.
Abstract:
Background: Excessive energy intake and obesity lead to the metabolic syndrome (MetS). Dietary saturated fatty acids (SFAs) may be particularly detrimental to insulin sensitivity (SI) and to other components of the MetS. Objective: This study determined the relative efficacy of reducing dietary SFA, by isoenergetic alteration of the quality and quantity of dietary fat, on risk factors associated with MetS. Design: A free-living, single-blinded dietary intervention study. Subjects and Methods: MetS subjects (n=417) from eight European countries completed the randomized dietary intervention study with four isoenergetic diets distinct in fat quantity and quality: high-SFA; high-monounsaturated fatty acid; and two low-fat, high-complex carbohydrate (LFHCC) diets, supplemented with long chain n-3 polyunsaturated fatty acids (LC n-3 PUFAs) (1.2 g per day) or placebo, for 12 weeks. SI estimated from an intravenous glucose tolerance test (IVGTT) was the primary outcome measure. Lipid and inflammatory markers associated with MetS were also determined. Results: In weight-stable subjects, reducing dietary SFA intake had no effect on SI, total and low-density lipoprotein cholesterol concentration, inflammation or blood pressure in the entire cohort. The LFHCC n-3 PUFA diet reduced plasma triacylglycerol (TAG) and non-esterified fatty acid concentrations (P<0.01), particularly in men. Conclusion: There was no effect of reducing SFA on SI in weight-stable obese MetS subjects. LC n-3 PUFA supplementation, in association with a low-fat diet, improved TAG-related MetS risk profiles.
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used for achieving mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
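The zero-norm of the kernel weight vector is not differentiable, so a smooth surrogate is optimised instead. The abstract does not state which approximate function is used, so the surrogate below, 1 − exp(−α|w_i|) summed over the weights, is an assumption chosen only to illustrate the idea:

```python
import numpy as np

def approx_zero_norm(w, alpha=10.0):
    """Smooth surrogate for ||w||_0: sum_i (1 - exp(-alpha * |w_i|)).

    For large alpha each term approaches the indicator [w_i != 0], so the
    sum approaches the true zero-norm of the kernel weight vector.
    """
    return float(np.sum(1.0 - np.exp(-alpha * np.abs(w))))

# Example: a weight vector with two non-zero entries.
w = np.array([0.7, 0.0, 0.3, 0.0, 0.0])
print(approx_zero_norm(w, alpha=50.0))   # close to 2
```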
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most of the SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and it does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which accounts for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive, in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
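Greedy D-optimality selection picks, at each step, the candidate kernel whose component orthogonal to the already selected kernels carries the most energy; this is exactly the choice that maximises the gain in the log-determinant of the selected design (Gram) matrix, and it needs no desired response. A Gram-Schmidt sketch of that selection loop (the modified MNQP weight step is not repeated here):

```python
import numpy as np

def d_optimality_selection(Phi, n_select):
    """Greedy D-optimality kernel selection by orthogonal forward regression.

    At each step the candidate column with the largest energy after
    orthogonalisation against the already selected columns is chosen;
    this maximises the gain in log det of the selected Gram matrix.
    """
    n, m = Phi.shape
    selected, basis = [], []
    for _ in range(n_select):
        best_j, best_energy, best_q = None, 0.0, None
        for j in range(m):
            if j in selected:
                continue
            q = Phi[:, j].astype(float)
            for b in basis:                      # Gram-Schmidt orthogonalisation
                q = q - (b @ q) * b
            energy = q @ q
            if energy > best_energy:
                best_j, best_energy, best_q = j, energy, q
        if best_j is None or best_energy <= 1e-12:
            break                                # remaining candidates add no energy
        selected.append(best_j)
        basis.append(best_q / np.sqrt(best_energy))
    return selected
```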