61 results for least squares learning


Relevance:

100.00%

Publisher:

Abstract:

This paper presents a statistical model of the thermal behaviour of an overhead line, based on lab tests and field measurements. The model uses Partial Least Squares (PLS) multiple regression and is applied to Dynamic Line Rating (DLR) in a wind-intensive area. DLR provides extra capacity to the line, over the traditional seasonal static rating, which makes it possible to defer reinforcing the existing network or building new lines. The proposed PLS model has a number of appealing features: the model is linear, so it is straightforward to use for predicting the line rating for future periods from the available weather forecast. Unlike the available physical models, the proposed model does not require any physical parameters of the line, which avoids the inaccuracies resulting from errors and/or variations in these parameters. The developed model is compared with a physical model, the CIGRE model, and shows very good accuracy in predicting the conductor temperature as well as in determining the line rating for future time periods.
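The predictor described here is a linear PLS regression. As a rough illustration only (not the authors' model; the feature names and data are invented), a minimal single-response PLS fit via the NIPALS recursion can be sketched as:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) regression; returns (x_mean, y_mean, b)
    so that y_hat = (X_new - x_mean) @ b + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                      # direction of max covariance with y
        if np.linalg.norm(w) < 1e-12:      # residual already fully explained
            break
        w /= np.linalg.norm(w)
        t = Xk @ w                         # score vector
        tt = t @ t
        p = Xk.T @ t / tt                  # X loading
        q = (yk @ t) / tt                  # y loading
        Xk = Xk - np.outer(t, p)           # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    b = W @ np.linalg.solve(P.T @ W, Q)    # regression coefficients
    return x_mean, y_mean, b

def pls1_predict(model, X_new):
    x_mean, y_mean, b = model
    return (X_new - x_mean) @ b + y_mean
```

With as many components as features this coincides with ordinary least squares on centred data; fewer components act as a regularizer, which is what makes PLS attractive for correlated weather inputs.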

Relevance:

100.00%

Publisher:

Abstract:

Sparse-representation-based visual tracking approaches have attracted increasing interest in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization, the candidate with the lowest reconstruction error, when reconstructed using only the target templates and the associated coefficients, is taken as the tracking result. Despite the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits the use of these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates into the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint and obtain the representation coefficients with a weighted least squares (WLS) method. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
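The paper uses structurally random projections, which admit fast transforms; as a generic stand-in, a dense Gaussian random projection illustrates the distance-preservation property being relied on:

```python
import numpy as np

def random_projection(X, k, rng):
    """Project the rows of X from d down to k dimensions with a Gaussian
    random matrix, scaled so pairwise distances are preserved in expectation
    (Johnson-Lindenstrauss)."""
    d = X.shape[1]
    R = rng.normal(size=(d, k)) / np.sqrt(k)
    return X @ R
```

For n points, k = O(log n / eps^2) dimensions suffice to keep every pairwise distance within a factor 1 ± eps with high probability; structurally random matrices achieve similar guarantees while allowing the projection to be applied in near-linear time, which is what matters for real-time tracking.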

Relevance:

100.00%

Publisher:

Abstract:

This paper formulates a linear-kernel support vector machine (SVM) as a regularized least-squares (RLS) problem. By defining a set of indicator variables for the errors, the solution to the RLS problem is represented as an equation that relates the error vector to the indicator variables. Through partitioning the training set, the SVM weights and bias are expressed analytically using the support vectors. It is also shown how this approach extends naturally to SVMs with nonlinear kernels while avoiding the need for Lagrange multipliers and duality theory. A fast iterative solution algorithm based on Cholesky decomposition, with permutation of the support vectors, is suggested as a solution method. The properties of our SVM formulation are analyzed and compared with standard SVMs using a simple example that can be illustrated graphically. The correctness and behavior of our solution (derived purely in the primal RLS context) is demonstrated on a set of public benchmarking problems for both linear and nonlinear SVMs.
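At the core of any RLS formulation is a single regularized normal-equation solve. This generic ridge-style solver (not the paper's indicator-variable construction) shows the primal system being referred to:

```python
import numpy as np

def rls_fit(X, y, lam):
    """Regularized least squares:
    w = argmin ||X w - y||^2 + lam * ||w||^2,
    solved via the normal equations (X^T X + lam I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

The matrix X^T X + lam I is symmetric positive definite, which is why a Cholesky factorization, as the paper's fast iterative algorithm exploits, is the natural solver; as lam → 0 the solution approaches the ordinary least-squares fit.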

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a new hierarchical learning structure, namely holistic triple learning (HTL), for extending the binary support vector machine (SVM) to multi-classification problems. For an N-class problem, an HTL constructs a decision tree up to a given depth. A leaf node of the decision tree may be assigned a holistic triple learning unit whose generalisation ability is assessed and approved, while each remaining node in the decision tree accommodates a standard binary SVM classifier. The holistic triple classifier is a regression model trained on three classes, whose training algorithm originates from a recently proposed implementation technique, namely the least-squares support vector machine (LS-SVM). A major novelty of the holistic triple classifier is the reduced number of support vectors in the solution. For the resultant HTL-SVM, an upper bound on the generalisation error can be obtained. The time complexity of training the HTL-SVM is analysed and shown to be comparable to that of training the one-versus-one (1-vs-1) SVM, particularly on small-scale datasets. Empirical studies show that the proposed HTL-SVM achieves competitive classification accuracy with a reduced number of support vectors compared to the popular 1-vs-1 alternative.
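The LS-SVM building block mentioned here trains by solving one linear system rather than a quadratic program. A minimal binary LS-SVM with an RBF kernel, in the textbook regression-on-labels form rather than the HTL-specific triple unit, can be sketched as:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM on +/-1 labels: solve the single linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                 # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf(X_new, X_train, sigma) @ alpha + b)
```

Note that in this plain formulation essentially every alpha is nonzero, i.e. every training point is a support vector; the reduced support-vector count is precisely what the holistic triple classifier is designed to recover.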

Relevance:

100.00%

Publisher:

Abstract:

It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters (e.g. the width of a Gaussian function) of each model term need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them with a gradient-based technique (e.g. Newton's method). Unfortunately, all of these methods still require substantial computation. Recently, the extreme learning machine (ELM) has shown its advantage of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine, achieved by integrating the ELM and leave-one-out (LOO) cross-validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
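The ELM referred to here fixes a random hidden layer and trains only the output weights by least squares, which is what makes it fast but non-sparse. A bare-bones sketch (generic, without the paper's LOO-based construction) looks like:

```python
import numpy as np

def elm_fit(X, y, n_hidden, rng):
    """Extreme learning machine: random input weights and biases are
    fixed; only the output weights beta are trained, by least squares."""
    d = X.shape[1]
    Win = rng.normal(size=(d, n_hidden))   # random, never trained
    bias = rng.normal(size=n_hidden)
    H = np.tanh(X @ Win + bias)            # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return Win, bias, beta

def elm_predict(model, X):
    Win, bias, beta = model
    return np.tanh(X @ Win + bias) @ beta
```

Every random hidden unit ends up with a nonzero output weight in general, illustrating the sparsity problem the paper's two-stage stepwise selection addresses.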

Relevance:

100.00%

Publisher:

Abstract:

The in-line measurement of COD and NH4-N in the WWTP inflow is crucial for the timely monitoring of biological wastewater treatment processes and for the development of advanced control strategies for optimized WWTP operation. As direct measurement of COD and NH4-N requires expensive, high-maintenance in-line probes or analyzers, this paper presents an approach that estimates COD and NH4-N from standard and spectroscopic in-line inflow measurement systems using machine learning techniques. The results show that COD can be estimated from standard in-line measurements alone using Random Forest regression, with a normalized MSE of 0.3, which is sufficiently accurate for practical applications. In the case of NH4-N, a good estimate (normalized MSE of 0.16, obtained with Partial Least Squares regression) is only possible from a combination of standard and spectroscopic in-line measurements. Furthermore, a comparison of regression and classification methods shows that both perform equally well in most cases.

Relevance:

90.00%

Publisher:

Abstract:

This paper proposes an efficient learning mechanism to build fuzzy rule-based systems through the construction of sparse least-squares support vector machines (LS-SVMs). In addition to significantly reduced computational complexity in model training, the resultant LS-SVM-based fuzzy system is sparser while offering satisfactory generalization capability over unseen data. It is well known that LS-SVMs hold a computational advantage over conventional SVMs in model training; however, model sparseness is lost, which is their main drawback and remains an open problem. To tackle the nonsparseness issue, a new regression alternative to the Lagrangian solution of the LS-SVM is first presented. A novel efficient learning mechanism is then proposed to extract a sparse set of support vectors for generating fuzzy IF-THEN rules. This mechanism works in a stepwise subset-selection manner, with a forward expansion phase and a backward exclusion phase in each selection step. The implementation is computationally very efficient owing to a few key techniques that avoid matrix inverse operations and accelerate the training process; this efficiency is also confirmed by a detailed computational complexity analysis. As a result, the proposed approach not only achieves sparseness in the resultant LS-SVM-based fuzzy systems but also significantly reduces the computational effort of model training. Three experimental examples demonstrate the effectiveness and efficiency of the proposed learning mechanism and the sparseness of the obtained LS-SVM-based fuzzy systems, in comparison with other SVM-based learning techniques.

Relevance:

80.00%

Publisher:

Abstract:

Thermocouples are one of the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique that addresses the loss of high-frequency fluctuations based on measurements from two thermocouples. In particular, a difference equation (DE) approach is proposed and compared with existing methods, both in simulation and on experimental test-rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better performance in terms of time-constant estimation without any a priori assumption on the time-constant ratio of the thermocouples.
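A thermocouple behaves, to first order, as a low-pass filter with time constant tau, and its discretized dynamics form a difference equation whose coefficients can be estimated by least squares. The paper's two-thermocouple method identifies the time constants without knowing the true gas temperature; the simplified sketch below assumes the gas temperature is known, just to show the difference-equation identification step:

```python
import numpy as np

def simulate_sensor(T_gas, tau, dt, T0=0.0):
    """First-order sensor response tau*dT/dt = T_gas - T,
    exactly discretized for a zero-order-hold input."""
    a = np.exp(-dt / tau)
    T = np.empty_like(T_gas)
    T[0] = T0
    for k in range(len(T_gas) - 1):
        T[k + 1] = a * T[k] + (1 - a) * T_gas[k]
    return T

def estimate_tau(T_gas, T_meas, dt):
    """Least-squares fit of the difference equation
    T[k+1] = a*T[k] + c*T_gas[k], then recover tau = -dt/ln(a)."""
    Phi = np.column_stack([T_meas[:-1], T_gas[:-1]])
    a, c = np.linalg.lstsq(Phi, T_meas[1:], rcond=None)[0]
    return -dt / np.log(a)
```

With noisy regressors, ordinary least squares becomes biased, which is why the article pairs the DE model with generalized total least squares for the identification.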

Relevance:

80.00%

Publisher:

Abstract:

The characterization of thermocouple sensors for temperature measurement in varying-flow environments is a challenging problem. Recently, the authors introduced novel difference-equation-based algorithms that allow in situ characterization of temperature measurement probes consisting of two thermocouple sensors with differing time constants. In particular, a linear least squares (LS) λ-formulation of the characterization problem, which yields unbiased estimates when identified using generalized total LS, was introduced. These algorithms assume that the time constants do not change during operation and are therefore appropriate for temperature measurement in homogeneous constant-velocity liquid or gas flows. This paper develops an alternative β-formulation of the characterization problem that has the major advantage of allowing exploitation of a priori knowledge of the ratio of the sensor time constants, thereby facilitating the implementation of computationally efficient algorithms that are less sensitive to measurement noise. A number of variants of the β-formulation are developed, and appropriate unbiased estimators are identified. Monte Carlo simulation results are used to support the analysis.

Relevance:

80.00%

Publisher:

Abstract:

An HPLC method has been developed and validated for the determination of spironolactone, 7α-thiomethylspironolactone and canrenone in paediatric plasma samples. The method utilises 200 µl of plasma, and sample preparation involves protein precipitation followed by solid-phase extraction (SPE). Standard curves of peak height ratio (PHR) against concentration were determined by weighted least squares linear regression using a weighting factor of 1/concentration². The developed method was found to be linear over concentration ranges of 30–1000 ng/ml for spironolactone and 25–1000 ng/ml for 7α-thiomethylspironolactone and canrenone. The lower limits of quantification for spironolactone, 7α-thiomethylspironolactone and canrenone were calculated as 28, 20 and 25 ng/ml, respectively. The method was shown to be applicable to the determination of all three analytes in paediatric plasma samples and also in plasma from healthy human volunteers.
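The 1/concentration² weighting downweights the high end of the calibration curve so that relative (rather than absolute) errors are equalized across the range. A minimal weighted least-squares straight-line fit, using illustrative numbers rather than the study's data, can be sketched as:

```python
import numpy as np

def wls_line(conc, phr, weights):
    """Weighted least-squares straight-line fit: minimize
    sum_i w_i * (phr_i - (slope*conc_i + intercept))^2."""
    X = np.column_stack([conc, np.ones_like(conc)])
    W = np.diag(weights)
    slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ phr)
    return slope, intercept

# illustrative calibration levels (ng/ml) and 1/concentration^2 weights
conc = np.array([30.0, 50.0, 100.0, 250.0, 500.0, 1000.0])
weights = 1.0 / conc ** 2
```

With noisy responses whose standard deviation grows in proportion to concentration, this weighting gives each calibration level roughly equal influence on the fitted line.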

Relevance:

80.00%

Publisher:

Abstract:

Objectives: To identify demographic and socioeconomic determinants of need for acute hospital treatment at small-area level; to establish whether there is a relation between poverty and use of inpatient services; and to devise a risk-adjustment formula for distributing public funds for hospital services using, as far as possible, variables that can be updated between censuses. Design: Cross-sectional analysis. Spatial interactive modelling was used to quantify the proximity of the population to health service facilities. Two-stage weighted least squares regression was used to model use against the supply of hospital and community services and a wide range of potential drivers of need, including health and socioeconomic census variables, uptake of income support and family credit, and religious denomination. Setting: Northern Ireland. Main outcome measure: Intensity of use of inpatient services. Results: After the endogeneity of supply and use was taken into account, a statistical model was produced that predicted use from five variables: income support, family credit, elderly people living alone, all-ages standardised mortality ratio, and low birth weight. The main effect of the formula is to move resources from urban to rural areas. Conclusions: This work has produced a population risk-adjustment formula for acute hospital treatment in which four of the five variables can be updated annually rather than relying on census-derived data. Inclusion of the social security data makes a substantial difference to the model and to the results produced by the formula.
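Two-stage least squares handles the endogeneity noted above: supply influences use but is itself set in response to use, so ordinary regression is biased. The unweighted sketch below (generic simulated data, not the study's variables) shows the two stages: regress the endogenous variable on instruments, then regress the outcome on the fitted values:

```python
import numpy as np

def two_stage_ls(y, x_endog, Z):
    """2SLS with one endogenous regressor.
    Stage 1: regress x on the instruments Z (plus constant) -> x_hat.
    Stage 2: OLS of y on [1, x_hat]."""
    n = len(y)
    Z1 = np.column_stack([np.ones(n), Z])
    g, *_ = np.linalg.lstsq(Z1, x_endog, rcond=None)
    x_hat = Z1 @ g                         # exogenous part of x
    X2 = np.column_stack([np.ones(n), x_hat])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta                            # [intercept, slope]
```

Because the fitted values depend only on the instruments, the second-stage slope is purged of the feedback from the outcome, at the cost of requiring instruments that are correlated with supply but not with the unobserved drivers of use.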

Relevance:

80.00%

Publisher:

Abstract:

The identification of nonlinear dynamic systems using linear-in-the-parameters models is studied. A fast recursive algorithm (FRA) is proposed both to select the model structure and to estimate the model parameters. Unlike the orthogonal least squares (OLS) method, the FRA solves the least-squares problem recursively over the model order without requiring matrix decomposition. The computational complexity of both algorithms is analyzed, along with their numerical stability. The new method is shown to require much less computational effort and to be numerically more stable than OLS.

Relevance:

80.00%

Publisher:

Abstract:

We present a fast and efficient hybrid algorithm for selecting exoplanetary candidates from wide-field transit surveys. Our method is based on the widely used SysRem and Box Least-Squares (BLS) algorithms. Patterns of systematic error that are common to all stars on the frame are mapped and eliminated using the SysRem algorithm. The remaining systematic errors, caused by spatially localized flat-fielding and other effects, are quantified using a boxcar-smoothing method. We show that the dimensions of the search-parameter space can be reduced greatly by carrying out an initial BLS search on a coarse grid of reduced dimensions, followed by Newton-Raphson refinement of the transit parameters in the vicinity of the most significant solutions. We illustrate the method's operation by applying it to data from one field of the SuperWASP survey, comprising 2300 observations of 7840 stars brighter than V = 13.0. We identify 11 likely transit candidates. We reject stars that exhibit significant ellipsoidal variations indicative of a stellar-mass companion. We use colours and proper motions from the Two Micron All Sky Survey and USNO-B1.0 surveys to estimate the stellar parameters and the companion radius. We find that two stars showing unambiguous transit signals pass all these tests, and so qualify for detailed high-resolution spectroscopic follow-up.
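The coarse-grid BLS stage phase-folds the light curve at each trial period and scores how well a box-shaped dip fits. A toy version (a crude sketch of the box least-squares idea, not the paper's implementation) with a fixed trial duration:

```python
import numpy as np

def bls_period(t, flux, trial_periods, duration):
    """Toy box least-squares search: for each trial period, phase-fold
    the light curve and slide a box of the given duration over phase,
    keeping the (period, phase) with the largest BLS signal residue."""
    best = (-np.inf, None)
    f = flux - flux.mean()
    for P in trial_periods:
        phase = (t % P) / P
        w = duration / P                       # box width in phase units
        for phi0 in np.arange(0.0, 1.0, w / 2):
            inbox = (phase >= phi0) & (phase < phi0 + w)
            if inbox.sum() == 0 or inbox.sum() == len(t):
                continue
            r = inbox.mean()                   # fraction of points in the box
            s = f[inbox].sum()
            power = s * s / (r * (1 - r))      # BLS signal residue statistic
            if power > best[0]:
                best = (power, P)
    return best[1]
```

At the true period the transit points fold onto one phase window and the statistic spikes; at wrong periods they smear over many phases. The full search is over period, phase, duration, and depth, which is why the coarse-grid-plus-Newton-Raphson refinement in the paper pays off.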

Relevance:

80.00%

Publisher:

Abstract:

An effective ellipsometric technique is proposed and outlined in detail for determining parameters that characterize second-harmonic optical and magneto-optical effects in centrosymmetric media within the electric-dipole approximation. The parameters, which are ratios of components of the nonlinear surface-susceptibility tensors, are obtained from experimental data on the state of polarization of the second-harmonic-generated radiation as a function of the angle between the plane of incidence and the polarization plane of the incident, linearly polarized, fundamental radiation. Experimental details of the technique are described. A corresponding theoretical model is given as an example for a single isotropic surface, assuming polycrystalline samples. The surfaces of air-Au and air-Ni (in magnetized and demagnetized states) have been investigated ex situ in ambient air, and the results are presented. A nonlinear least-squares-minimization fit of the theoretical formulas to the experimental data has been shown to yield realistic, unambiguous results for the ratios corresponding to each of the above materials. Independent methods for verifying the validity of the fitting parameters are also presented. The influence of temporal variations at the surfaces (due to adsorption, contamination, or oxidation) on the state of polarization is also illustrated for the demagnetized air-Ni surface. (C) 2005 Optical Society of America