908 results for partial least-squares regression


Abstract:

In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained comprising a considerably smaller number of parameters compared to the ones generated by means of the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
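The core idea of OLS-style term selection can be illustrated with a minimal sketch (not the paper's implementation): at each step, pick the candidate regressor that most reduces the residual sum of squares of the current least squares fit.

```python
# Hypothetical sketch of forward regressor selection in the spirit of OLS:
# greedily add the column that most reduces the output error variance.
import numpy as np

def forward_select(X, y, n_terms):
    """Greedily pick n_terms columns of X that best explain y."""
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(n_terms):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = np.sum((y - X[:, cols] @ coef) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = 2.0 * X[:, 1] - 3.0 * X[:, 4] + 0.01 * rng.normal(size=100)
print(sorted(forward_select(X, y, 2)))  # recovers the two active terms
```

True OLS orthogonalises the selected regressors (e.g. by modified Gram-Schmidt) so each candidate's error reduction can be scored without refitting; the greedy refit above trades that efficiency for brevity.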

Abstract:

The need for monotone approximation of scattered data often arises in many problems of regression, when the monotonicity is semantically important. One such domain is fuzzy set theory, where membership functions and aggregation operators are order preserving. Least squares polynomial splines provide great flexibility when modeling non-linear functions, but may fail to be monotone. Linear restrictions on spline coefficients provide necessary and sufficient conditions for spline monotonicity. The basis for splines is selected in such a way that these restrictions take an especially simple form. The resulting non-negative least squares problem can be solved by a variety of standard proven techniques. Additional interpolation requirements can also be imposed in the same framework. The method is applied to fuzzy systems, where membership functions and aggregation operators are constructed from empirical data.
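A toy version of the construction (a sketch, not the paper's basis) uses hinge functions, for which non-negative coefficients immediately give a nondecreasing piecewise-linear fit solvable by standard non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

def monotone_fit(x, y, knots):
    # Basis: two signed constant columns (so the intercept stays free
    # under the nonnegativity constraint) plus hinges max(0, x - t);
    # nonnegative hinge coefficients guarantee a nondecreasing fit.
    A = np.column_stack([np.ones_like(x), -np.ones_like(x)] +
                        [np.maximum(0.0, x - t) for t in knots])
    c, _ = nnls(A, y)
    return lambda z: (c[0] - c[1] +
                      sum(ck * np.maximum(0.0, z - t)
                          for ck, t in zip(c[2:], knots)))

x = np.linspace(0, 1, 50)
y = np.clip(x + 0.05 * np.sin(20 * x), 0, 1)   # wiggly but increasing trend
f = monotone_fit(x, y, knots=np.linspace(0, 1, 8))
vals = f(np.linspace(0, 1, 100))
print(np.all(np.diff(vals) >= -1e-9))  # the fit is nondecreasing
```

The paper's higher-order spline basis follows the same pattern: choose basis functions so that monotonicity reduces to sign constraints on coefficients, then hand the problem to an NNLS solver.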

Abstract:

Splines with free knots have been extensively studied in regard to calculating the optimal knot positions. The dependence of the accuracy of approximation on the knot distribution is highly nonlinear, and optimisation techniques face a difficult problem of multiple local minima. The domain of the problem is a simplex, which adds to the complexity. We have applied a recently developed cutting angle method of deterministic global optimisation, which allows one to solve a wide class of optimisation problems on a simplex. The results of the cutting angle method are subsequently improved by a local discrete gradient method. The resulting algorithm is sufficiently fast and guarantees that the global minimum has been reached. The results of numerical experiments are presented.
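The objective being minimised over knot positions can be sketched as follows (the cutting angle and discrete gradient solvers themselves are not in standard libraries; this only shows the nonlinear dependence of the fit error on the knots):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def knot_objective(interior_knots, x, y, k=3):
    """Sum of squared residuals of the least squares spline with the
    given interior knots -- the function minimised over knot positions."""
    spl = LSQUnivariateSpline(x, y, t=np.sort(interior_knots), k=k)
    return float(np.sum((spl(x) - y) ** 2))

x = np.linspace(0, 1, 200)
y = np.exp(-40 * (x - 0.3) ** 2)          # sharp feature near x = 0.3
good = knot_objective([0.2, 0.3, 0.4], x, y)
bad = knot_objective([0.7, 0.8, 0.9], x, y)
print(good < bad)  # knots near the feature fit far better
```

Many local minima of this objective exist even in one dimension, which is why a deterministic global method followed by local refinement is attractive.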


Abstract:

In this work simulations of incompressible fluid flows have been done by a Least Squares Finite Element Method (LSFEM) using velocity-pressure-vorticity and velocity-pressure-stress formulations, named the u-p-ω and u-p-τ formulations, respectively. These formulations are preferred because the resulting equations are partial differential equations of first order, which is convenient for implementation by LSFEM. The main purposes of this work are the numerical computation of laminar, transitional and turbulent fluid flows through the application of large eddy simulation (LES) methodology using the LSFEM. The Navier-Stokes equations in u-p-ω and u-p-τ formulations are filtered and the eddy viscosity model of Smagorinsky is used for modeling the sub-grid-scale stresses. Some benchmark problems are solved to validate the numerical code, and the preliminary results are presented and compared with available results from the literature. Copyright © 2005 by ABCM.
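The Smagorinsky closure used for the sub-grid stresses has a compact form, nu_t = (Cs * Delta)^2 |S| with |S| = sqrt(2 S_ij S_ij); a minimal finite-difference sketch (not the authors' LSFEM code) is:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, dy, Cs=0.17):
    """Smagorinsky sub-grid eddy viscosity on a 2-D grid:
    nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij)."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)           # symmetric strain-rate tensor
    Smag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    delta = np.sqrt(dx * dy)            # filter width from the cell size
    return (Cs * delta) ** 2 * Smag

# sanity check: pure shear u = y, v = 0 gives |S| = 1 everywhere
yv = np.linspace(0, 1, 32)
xv = np.linspace(0, 1, 32)
u = np.tile(yv[:, None], (1, 32))
v = np.zeros_like(u)
nu = smagorinsky_nu_t(u, v, xv[1] - xv[0], yv[1] - yv[0])
```

The constant Cs and the filter-width definition vary between implementations; the values above are common illustrative choices.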

Abstract:

2000 Mathematics Subject Classification: Primary: 62M10, 62J02, 62F12, 62M05, 62P05, 62P10; secondary: 60G46, 60F15.

Abstract:

This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects.
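The estimator's mechanics can be sketched in a few lines (an illustration of the idea, not the authors' exact code): the point estimate is the usual inverse-variance weighted mean, but the standard error is rescaled by the WLS residual mean square, a multiplicative over-dispersion factor, so the interval widens under excess heterogeneity instead of assuming it away.

```python
import numpy as np

def uwls(effects, ses):
    """Unrestricted weighted least squares meta-average (sketch).
    Equivalent to WLS regression of the effects on a constant with
    weights 1/se^2 and an unrestricted error variance."""
    w = 1.0 / ses ** 2
    est = np.sum(w * effects) / np.sum(w)          # fixed-effect point estimate
    mse = np.sum(w * (effects - est) ** 2) / (len(effects) - 1)
    se = np.sqrt(mse / np.sum(w))                  # over-dispersion-scaled SE
    return est, se

# hypothetical study effects and standard errors
effects = np.array([0.30, 0.45, 0.25, 0.50])
ses = np.array([0.10, 0.15, 0.12, 0.20])
est, se = uwls(effects, ses)
print(est, se)
```

When the studies are homogeneous the residual mean square is near one and the result coincides with fixed-effect meta-analysis, matching the claim in the abstract.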

Abstract:

An unstructured mesh finite volume discretisation method for simulating diffusion in anisotropic media in two-dimensional space is discussed. This technique is considered as an extension of the fully implicit hybrid control-volume finite-element method and it retains the local continuity of the flux at the control volume faces. A least squares function reconstruction technique together with a new flux decomposition strategy is used to obtain an accurate flux approximation at the control volume face, ensuring that the overall accuracy of the spatial discretisation maintains second order. This paper highlights that the new technique coincides with the traditional shape function technique when the correction term is neglected and that it significantly increases the accuracy of the previous linear scheme on coarse meshes when applied to media that exhibit very strong to extreme anisotropy ratios. It is concluded that the method can be used on both regular and irregular meshes, and appears independent of the mesh quality.
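The least squares reconstruction step common to such schemes can be sketched as follows (a generic cell-centred gradient reconstruction, not the paper's full flux decomposition): solve a small least squares system built from offsets to neighbouring cell centres.

```python
import numpy as np

def ls_gradient(xc, uc, neighbor_x, neighbor_u):
    """Least squares gradient at a cell centre from neighbouring cell
    values: minimise ||dX @ g - du|| where each row of dX is the offset
    to a neighbour centre and du the corresponding value difference."""
    dX = neighbor_x - xc
    du = neighbor_u - uc
    g, *_ = np.linalg.lstsq(dX, du, rcond=None)
    return g

# a linear field u = 2x + 3y is recovered exactly, as second-order
# accuracy of the spatial discretisation requires
xc = np.array([0.0, 0.0]); uc = 0.0
nx = np.array([[1.0, 0.2], [-0.5, 1.0], [0.3, -0.8], [-1.0, -0.4]])
nu_vals = 2 * nx[:, 0] + 3 * nx[:, 1]
g = ls_gradient(xc, uc, nx, nu_vals)
print(g)
```

Exact reconstruction of linear fields on arbitrary neighbour layouts is what makes the least squares approach robust on irregular meshes.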

Abstract:

A novel voltammetric method for simultaneous determination of the glucocorticoid residues prednisone, prednisolone, and dexamethasone was developed. All three compounds were reduced at a mercury electrode in a Britton-Robinson buffer (pH 3.78), and well-defined voltammetric waves were observed. However, the voltammograms of these three compounds overlapped severely and showed nonlinear character, and thus it was difficult to analyze the compounds individually in their mixtures. In this work, two chemometrics methods, principal component regression (PCR) and partial least squares (PLS), were applied to resolve the overlapped voltammograms, and calibration models were established for simultaneous determination of these compounds. Under the optimum experimental conditions, the limits of detection (LOD) were 5.6, 8.3, and 16.8 µg L−1 for prednisone, prednisolone, and dexamethasone, respectively. The proposed method was also applied for the determination of these glucocorticoid residues in rabbit plasma and human urine samples with satisfactory results.

Abstract:

A simple and sensitive spectrophotometric method for the simultaneous determination of acesulfame-K, sodium cyclamate and saccharin sodium sweeteners in foodstuff samples has been researched and developed. This analytical method relies on the different kinetic rates of the analytes in their oxidative reaction with KMnO4 to produce the green manganate product in an alkaline solution. As the kinetic rates of acesulfame-K, sodium cyclamate and saccharin sodium were similar and their kinetic data seriously overlapped, chemometrics methods, such as partial least squares (PLS), principal component regression (PCR) and classical least squares (CLS), were applied to resolve the kinetic data. The results showed that the PLS prediction model performed somewhat better than the PCR and CLS models. The proposed method was then applied for the determination of the three sweeteners in foodstuff samples, and the results compared well with those obtained by the reference HPLC method.
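Of the three chemometric methods compared, CLS is the simplest: it assumes the mixture signal is a known linear combination of pure-component profiles and recovers the concentrations by ordinary least squares. A sketch with hypothetical kinetic traces (not the paper's data):

```python
import numpy as np

def cls_fit_predict(K, a_mixture):
    """Classical least squares: mixture signal a = K.T @ c, where the
    rows of K are the pure-component kinetic profiles; recover the
    concentration vector c by least squares."""
    c, *_ = np.linalg.lstsq(K.T, a_mixture, rcond=None)
    return c

t = np.linspace(0, 1, 50)
# hypothetical first-order kinetic traces with similar rate constants,
# mimicking the "seriously overlapped" kinetic data in the abstract
K = np.stack([1 - np.exp(-k * t) for k in (3.0, 3.5, 4.0)])
c_true = np.array([0.2, 0.5, 0.3])
a = K.T @ c_true                      # noiseless mixture trace
print(cls_fit_predict(K, a))
```

CLS needs the pure profiles explicitly, and its accuracy degrades as they become collinear; PLS and PCR avoid that requirement by building latent factors directly from calibration mixtures, which is consistent with PLS performing best here.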

Abstract:

A fast and accurate procedure has been researched and developed for the simultaneous determination of maltol and ethyl maltol, based on their reaction with iron(III) in the presence of o-phenanthroline in sulfuric acid medium. This reaction was the basis for an indirect kinetic spectrophotometric method, which followed the development of the pink ferroin product (λmax = 524 nm). The kinetic data were collected in the 370–900 nm range over 0–30 s. The optimized method indicates that individual analytes followed Beer’s law in the concentration range of 4.0–76.0 mg L−1 for both maltol and ethyl maltol. The LOD values of 1.6 mg L−1 for maltol and 1.4 mg L−1 for ethyl maltol agree well with those obtained by the alternative high performance liquid chromatography with ultraviolet detection (HPLC-UV). Three chemometrics methods, principal component regression (PCR), partial least squares (PLS) and principal component analysis–radial basis function–artificial neural networks (PC–RBF–ANN), were used to resolve the measured data with small kinetic differences between the two analytes as reflected by the development of the pink ferroin product. All three performed satisfactorily in the case of the synthetic verification samples, and in their application for the prediction of the analytes in several food products. The figures of merit for the analytes based on the multivariate models agreed well with those from the alternative HPLC-UV method involving the same samples.
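The PCR step used alongside PLS can be sketched generically (an illustration with synthetic profiles, not the paper's kinetic data): project mean-centred signals onto the leading principal components, then regress the responses on the scores.

```python
import numpy as np

def pcr_fit(X, Y, n_components):
    """Principal component regression (sketch): SVD of the mean-centred
    data gives the loadings; regress the centred responses on the scores."""
    Xm, Ym = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - Xm, Y - Ym
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                 # principal component loadings
    B, *_ = np.linalg.lstsq(Xc @ V, Yc, rcond=None)
    return lambda Xnew: (Xnew - Xm) @ V @ B + Ym

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 80)
# two overlapped component profiles, standing in for the small kinetic
# differences between maltol and ethyl maltol
S = np.stack([np.exp(-((grid - c) / 0.2) ** 2) for c in (0.45, 0.55)])
C = rng.uniform(0.1, 1.0, size=(30, 2))
X = C @ S + 0.005 * rng.normal(size=(30, 80))
predict = pcr_fit(X, C, n_components=2)
```

Truncating to the leading components discards measurement noise in the minor directions, which is what lets PCR cope with nearly identical kinetic curves.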

Abstract:

A spectrophotometric method for the simultaneous determination of the important pharmaceuticals, pefloxacin and its structurally similar metabolite, norfloxacin, is described for the first time. The analysis is based on the monitoring of a kinetic spectrophotometric reaction of the two analytes with potassium permanganate as the oxidant. The measurement of the reaction process followed the absorbance decrease of potassium permanganate at 526 nm, and the accompanying increase of the product, potassium manganate, at 608 nm. It was essential to use multivariate calibrations to overcome severe spectral overlaps and similarities in reaction kinetics. Calibration curves for the individual analytes showed linear relationships over the concentration ranges of 1.0–11.5 mg L−1 at 526 and 608 nm for pefloxacin, and 0.15–1.8 mg L−1 at 526 and 608 nm for norfloxacin. Various multivariate calibration models were applied, at the two analytical wavelengths, for the simultaneous prediction of the two analytes, including classical least squares (CLS), principal component regression (PCR), partial least squares (PLS), radial basis function-artificial neural network (RBF-ANN) and principal component-radial basis function-artificial neural network (PC-RBF-ANN). PLS and PC-RBF-ANN calibrations with the data collected at 526 nm were the preferred methods (%RPET ≈ 5), with LODs for pefloxacin and norfloxacin of 0.36 and 0.06 mg L−1, respectively. Then, the proposed method was applied successfully for the simultaneous determination of pefloxacin and norfloxacin present in pharmaceutical and human plasma samples. The results compared well with those from the alternative analysis by HPLC.
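The RBF-ANN family of models referenced here can be reduced to a minimal sketch (a generic radial basis function network on toy data, not the paper's calibration): Gaussian hidden units at fixed centres with the output layer fitted by linear least squares.

```python
import numpy as np

def rbf_ann_fit(X, Y, centers, width):
    """Minimal RBF network (sketch): Gaussian hidden units at fixed
    centres; only the linear output weights are trained, by least squares."""
    def phi(A):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        H = np.exp(-d2 / (2.0 * width ** 2))
        return np.column_stack([H, np.ones(len(A))])   # plus a bias unit
    W, *_ = np.linalg.lstsq(phi(X), Y, rcond=None)
    return lambda A: phi(A) @ W

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 1))
Y = np.sin(3 * X[:, 0])                     # a smooth nonlinear target
model = rbf_ann_fit(X, Y, centers=np.linspace(-1, 1, 12)[:, None], width=0.3)
err = np.max(np.abs(model(X) - Y))
```

The PC-RBF-ANN variant in the abstract first compresses the spectra with PCA and feeds the scores to such a network, which reduces the input dimension and the number of centres needed.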

Abstract:

The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
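The setup can be reproduced in miniature (an illustrative instance, not the paper's experiments): estimate a derivative by least squares on a truncated Taylor expansion and inspect the smallest singular value that appears in the denominator of the error bound.

```python
import numpy as np

def ls_derivative(f, x0, hs):
    """Estimate f'(x0) by least squares on the truncated Taylor model
    f(x0+h) - f(x0) ~ h f'(x0) + (h^2/2) f''(x0)."""
    A = np.column_stack([hs, hs ** 2 / 2.0])   # Taylor design matrix
    b = f(x0 + hs) - f(x0)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    smin = np.linalg.svd(A, compute_uv=False)[-1]
    return coef[0], smin

hs = np.array([-0.02, -0.01, 0.01, 0.02])
d, smin = ls_derivative(np.exp, 1.0, hs)
print(abs(d - np.e), smin)
```

With small stencils the smallest singular value is tiny (here of order h^2), so the theoretical bound, which divides by it, is enormous, while the actual error is minute. This is exactly the pessimism-by-orders-of-magnitude that the abstract sets out to explain and correct.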