60 results for Least-squares technique
Abstract:
This paper presents a statistical model for the thermal behaviour of an overhead line, based on laboratory tests and field measurements. The model uses Partial Least Squares (PLS) multiple regression and is applied to Dynamic Line Rating (DLR) in a wind-intensive area. DLR provides extra capacity to the line over the traditional seasonal static rating, making it possible to defer reinforcing the existing network or building new lines. The proposed PLS model has a number of appealing features: it is linear, so it is straightforward to use for predicting the line rating for future periods from available weather forecasts. Unlike existing physical models, the proposed model does not require any physical parameters of the line, which avoids the inaccuracies resulting from errors and/or variations in these parameters. The developed model is compared with a physical model, the CIGRE model, and shows very good accuracy in predicting the conductor temperature as well as in determining the line rating for future time periods.
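As a sketch of the core technique, a single-response PLS regression can be fitted with the NIPALS algorithm and then used for linear prediction. The three "weather inputs" and the coefficients below are made up for illustration; with as many latent components as inputs, PLS reduces to ordinary least squares, so the noiseless toy data are fitted exactly.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """One-response PLS regression via NIPALS; returns coefficients and means."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)        # weight vector
        t = Xc @ w                       # latent scores
        p = Xc.T @ t / (t @ t)           # X loadings
        q = (yc @ t) / (t @ t)           # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - t * q                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)  # fold the latent model into one coefficient vector
    return B, X.mean(axis=0), y.mean()

def pls1_predict(model, Xnew):
    B, xm, ym = model
    return (Xnew - xm) @ B + ym

# toy example: "conductor temperature" as a linear function of three weather inputs
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))             # e.g. ambient temperature, wind speed, solar radiation
y = X @ np.array([1.5, -2.0, 0.7]) + 30.0
model = pls1_fit(X, y, n_components=3)
y_hat = pls1_predict(model, X)
```

In practice fewer components than inputs would be kept, chosen by cross-validation, which is where PLS gains robustness over ordinary regression on collinear weather variables.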
Abstract:
Sparse-representation-based visual tracking approaches have attracted increasing interest in recent years. The main idea is to linearly represent each target candidate using a set of target and trivial templates while imposing a sparsity constraint on the representation coefficients. After the coefficients are obtained using L1-norm minimization methods, the candidate with the lowest error, when reconstructed using only the target templates and the associated coefficients, is taken as the tracking result. Despite the promising performance widely reported, it is unclear whether the performance of these trackers can be maximised. In addition, the computational complexity caused by the dimensionality of the feature space limits these algorithms in real-time applications. In this paper, we propose a real-time visual tracking method based on structurally random projection and weighted least squares techniques. In particular, to enhance the discriminative capability of the tracker, we introduce background templates into the linear representation framework. To handle appearance variations over time, we relax the sparsity constraint using a weighted least squares (WLS) method to obtain the representation coefficients. To further reduce the computational complexity, structurally random projection is used to reduce the dimensionality of the feature space while preserving the pairwise distances between data points. Experimental results show that the proposed approach outperforms several state-of-the-art tracking methods.
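The two ingredients above can be sketched briefly. A dense Gaussian matrix is used here as a simple stand-in for the paper's structurally random projection (which is a structured fast transform with similar distance-preserving behaviour), and the WLS solve stands in for the relaxed, non-sparse coefficient computation; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1024))                 # 50 candidate feature vectors
k = 256
R = rng.normal(size=(1024, k)) / np.sqrt(k)     # dense Gaussian projection (stand-in for
                                                # the structurally random projection)
Z = X @ R                                       # reduced 256-dimensional features

def pairwise_sq_dists(M):
    G = M @ M.T
    d = np.diag(G)
    return d[:, None] + d[None, :] - 2 * G

iu = np.triu_indices(50, 1)
ratio = pairwise_sq_dists(Z)[iu] / pairwise_sq_dists(X)[iu]
# the ratios concentrate around 1: pairwise geometry is approximately preserved

def weighted_least_squares(A, b, w):
    """Relaxed (non-sparse) representation coefficients:
       argmin_x sum_i w_i (b_i - (A x)_i)^2, via row scaling."""
    sw = np.sqrt(w)
    return np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
```

The WLS step has a closed-form solution, which is what makes the relaxed tracker fast compared with iterative L1-norm minimization.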
Abstract:
This paper formulates a linear kernel support vector machine (SVM) as a regularized least-squares (RLS) problem. By defining a set of indicator variables for the errors, the solution to the RLS problem is represented as an equation that relates the error vector to the indicator variables. Through partitioning the training set, the SVM weights and bias are expressed analytically using the support vectors. It is also shown how this approach naturally extends to SVMs with nonlinear kernels while avoiding the need for Lagrange multipliers and duality theory. A fast iterative solution algorithm based on Cholesky decomposition with permutation of the support vectors is suggested as a solution method. The properties of our SVM formulation are analyzed and compared with standard SVMs using a simple example that can be illustrated graphically. The correctness and behavior of our solution (derived purely in the primal context of RLS) is demonstrated using a set of public benchmarking problems for both linear and nonlinear SVMs.
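The primal RLS view of a linear classifier can be sketched in a few lines. This ridge-style solve is a simplification: the paper's formulation additionally introduces error indicator variables and support-vector partitioning, which are not reproduced here, and the toy data are invented for illustration.

```python
import numpy as np

def rls_classifier(X, y, lam=1.0):
    """Linear classifier from regularized least squares:
       minimise ||y - X w - b||^2 + lam ||w||^2, with the bias unregularized."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])    # absorb the bias into the design matrix
    reg = lam * np.eye(d + 1)
    reg[-1, -1] = 0.0                       # do not penalise the bias term
    return np.linalg.solve(Xb.T @ Xb + reg, Xb.T @ y)

# toy two-class problem with labels in {-1, +1}
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+3, 1, size=(20, 2)),
               rng.normal(-3, 1, size=(20, 2))])
y = np.r_[np.ones(20), -np.ones(20)]
w = rls_classifier(X, y)
pred = np.sign(np.hstack([X, np.ones((40, 1))]) @ w)
```

Because the problem is a single linear solve in the primal, no dual variables are needed, which is the point the paper develops further for the kernelised case.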
Resumo:
The use of image processing techniques to assess the performance of airport landing lighting using images of it collected from an aircraft-mounted camera is documented. In order to assess the performance of the lighting, it is necessary to uniquely identify each luminaire within an image and then track the luminaires through the entire sequence and store the relevant information for each luminaire, that is, the total number of pixels that each luminaire covers and the total grey level of these pixels. This pixel grey level can then be used for performance assessment. The authors propose a robust model-based (MB) featurematching technique by which the performance is assessed. The development of this matching technique is the key to the automated performance assessment of airport lighting. The MB matching technique utilises projective geometry in addition to accurate template of the 3D model of a landing-lighting system. The template is projected onto the image data and an optimum match found, using nonlinear least-squares optimisation. The MB matching software is compared with standard feature extraction and tracking techniques known within the community, these being the Kanade–Lucus–Tomasi (KLT) and scaleinvariant feature transform (SIFT) techniques. The new MB matching technique compares favourably with the SIFT and KLT feature-tracking alternatives. As such, it provides a solid foundation to achieve the central aim of this research which is to automatically assess the performance of airport lighting.
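The projection step at the heart of the MB technique can be sketched with a pinhole camera model. The intrinsics, pose, and 3x3 grid of template points below are hypothetical stand-ins for the real landing-lighting template, and the SSD function is the cost that the nonlinear least-squares fit would minimise over the pose parameters.

```python
import numpy as np

# project 3-D template points with a pinhole camera (hypothetical intrinsics and pose)
K = np.array([[800.,   0., 320.],
              [  0., 800., 240.],
              [  0.,   0.,   1.]])
R = np.eye(3)
tvec = np.array([0., 0., 10.])
template = np.array([[x, y, 0.] for x in range(3) for y in range(3)], dtype=float)

P = K @ np.hstack([R, tvec[:, None]])           # 3x4 projection matrix
homog = np.hstack([template, np.ones((9, 1))])
proj = (P @ homog.T).T
uv = proj[:, :2] / proj[:, 2:3]                 # perspective divide to pixel coordinates

def ssd(uv_model, uv_detected):
    """Sum-of-squared-distances cost that the nonlinear LS pose fit minimises."""
    return ((uv_model - uv_detected) ** 2).sum()
```

In the full method, the pose (R, tvec) is the unknown being optimised so that the projected template best overlaps the detected luminaires.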
Abstract:
Thermocouples are one of the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example, in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique to address the loss of high-frequency fluctuations based on measurements from two thermocouples. In particular, a difference equation (DE) approach is proposed and compared with existing methods both in simulation and on experimental test rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better performance in terms of time constant estimation without any a priori assumption on the time constant ratios of the thermocouples.
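The difference-equation idea can be illustrated on a single simulated first-order sensor. This is a deliberately simplified, noise-free sketch with made-up numbers: the paper's method uses two thermocouples and generalized total least squares precisely because ordinary LS becomes biased once both regressor and response are noisy.

```python
import numpy as np

dt, tau = 0.01, 0.2                      # sample period and true time constant (s)
a_true = np.exp(-dt / tau)
rng = np.random.default_rng(2)
Tg = np.cumsum(rng.normal(size=2000)) * 0.1 + 300.0   # fluctuating gas temperature
Tm = np.empty_like(Tg)
Tm[0] = Tg[0]
for k in range(len(Tg) - 1):             # first-order sensor dynamics
    Tm[k + 1] = a_true * Tm[k] + (1 - a_true) * Tg[k]

# least-squares fit of the difference equation Tm[k+1] = a Tm[k] + (1 - a) Tg[k],
# rearranged as (Tm[k+1] - Tg[k]) = a (Tm[k] - Tg[k])
A = np.column_stack([Tm[:-1] - Tg[:-1]])
b = Tm[1:] - Tg[:-1]
a_hat = np.linalg.lstsq(A, b, rcond=None)[0][0]
tau_hat = -dt / np.log(a_hat)            # recover the time constant
```

With noisy measurements of both Tm and Tg, the same regression would be identified with generalized total LS instead of ordinary LS to keep the time-constant estimate unbiased.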
Abstract:
An effective ellipsometric technique to determine parameters that characterize second-harmonic optical and magneto-optical effects in centrosymmetric media within the electric-dipole approximation is proposed and outlined in detail. The parameters, which are ratios of components of the nonlinear-surface-susceptibility tensors, are obtained from experimental data related to the state of polarization of the second-harmonic-generated radiation as a function of the angle between the plane of incidence and the polarization plane of the incident, linearly polarized, fundamental radiation. Experimental details of the technique are described. A corresponding theoretical model is given as an example for a single isotropic surface assuming polycrystalline samples. The surfaces of air-Au and air-Ni (in magnetized and demagnetized states) have been investigated ex situ in ambient air, and the results are presented. A nonlinear, least-squares-minimization fitting procedure between experimental data and theoretical formulas has been shown to yield realistic, unambiguous results for the ratios corresponding to each of the above materials. Independent methods for verifying the validity of the fitting parameters are also presented. The influence of temporal variations at the surfaces on the state of polarization (due to adsorption, contamination, or oxidation) is also illustrated for the demagnetized air-Ni surface. (C) 2005 Optical Society of America
Abstract:
This paper proposes a new hierarchical learning structure, namely holistic triple learning (HTL), for extending the binary support vector machine (SVM) to multi-classification problems. For an N-class problem, HTL constructs a decision tree of bounded depth. A leaf node of the decision tree may be assigned a holistic triple learning unit whose generalisation abilities are assessed and approved, while the remaining nodes in the decision tree each accommodate a standard binary SVM classifier. The holistic triple classifier is a regression model trained on three classes, whose training algorithm originates from a recently proposed implementation technique, namely the least-squares support vector machine (LS-SVM). A major novelty of the holistic triple classifier is the reduced number of support vectors in the solution. For the resultant HTL-SVM, an upper bound on the generalisation error can be obtained. The time complexity of training the HTL-SVM is analysed and shown to be comparable to that of training the one-versus-one (1-vs-1) SVM, particularly on small-scale datasets. Empirical studies show that the proposed HTL-SVM achieves competitive classification accuracy with a reduced number of support vectors compared to the popular 1-vs-1 alternative.
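The LS-SVM building block mentioned above reduces training to one linear system. The sketch below uses the function-estimation form of the LS-SVM with a linear kernel and invented toy data; the paper's triple units and tree structure are not reproduced.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Solve the LS-SVM linear system (function-estimation form, linear kernel):
       [[0, 1^T], [1, K + I/gamma]] @ [b; alpha] = [0; y]."""
    n = len(y)
    K = X @ X.T
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma     # ridge term from the squared-error loss
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]                # bias b, dual coefficients alpha

def lssvm_predict(Xtrain, alpha, b, Xnew):
    return np.sign(Xnew @ Xtrain.T @ alpha + b)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(+3, 1, size=(15, 2)),
               rng.normal(-3, 1, size=(15, 2))])
y = np.r_[np.ones(15), -np.ones(15)]
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, alpha, b, X)
```

Note that in a plain LS-SVM every alpha is typically nonzero (no sparsity), which is exactly the issue the HTL construction is designed to mitigate.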
Abstract:
Conventional radial basis function (RBF) network optimization methods, such as orthogonal least squares or the two-stage selection, can produce a sparse network with satisfactory generalization capability. However, the RBF width, as a nonlinear parameter in the network, is not easy to determine. In the aforementioned methods, the width is always pre-determined, either by trial and error or generated randomly. Furthermore, all hidden nodes share the same RBF width. This inevitably reduces the network performance, and more RBF centres may then be needed to meet a desired modelling specification. In this paper we investigate a new two-stage construction algorithm for RBF networks. It utilizes the particle swarm optimization method to search for the optimal RBF centres and their associated widths. Although the new method requires more computation than conventional approaches, it can greatly reduce the model size and improve model generalization performance. The effectiveness of the proposed technique is confirmed by two numerical simulation examples.
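A minimal particle swarm optimiser illustrates how a centre and width can be searched jointly rather than fixed in advance. The swarm hyperparameters (inertia 0.7, acceleration 1.5) and the single-Gaussian target are illustrative choices, not the paper's settings.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=100, seed=4):
    """Minimal particle swarm optimiser for a vector objective f over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# fit the centre and width of a single Gaussian RBF to 1-D data
t = np.linspace(-3, 3, 61)
target = np.exp(-(t - 0.5) ** 2 / (2 * 0.8 ** 2))    # true centre 0.5, width 0.8

def sse(p):
    c, w = p
    return ((np.exp(-(t - c) ** 2 / (2 * w ** 2)) - target) ** 2).sum()

best, err = pso(sse, bounds=[(-3, 3), (0.1, 3)])
```

In the paper's two-stage algorithm this search runs per candidate centre inside a forward-selection loop, so each hidden node can receive its own width.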
Abstract:
It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters (e.g. the width of a Gaussian function) of each model term need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them by a gradient-based technique (e.g. Newton's method). Unfortunately, all of these methods still require substantial computation. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for the automatic construction of a nonlinear system model based on the extreme learning machine. This is achieved by effectively integrating the ELM and leave-one-out (LOO) cross-validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
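The ELM-plus-LOO combination is cheap because, for any linear-in-the-parameters fit, the leave-one-out residuals have a closed form via the hat matrix, so no refitting is needed. The data, hidden-layer size, and weight scaling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=100)

# ELM: fix a random hidden layer, then solve only the output weights by least squares
W = 3.0 * rng.normal(size=(1, 20))          # random input weights (scaled for coverage)
b = rng.normal(size=20)
H = np.tanh(X @ W + b)                      # hidden-layer output matrix

beta, *_ = np.linalg.lstsq(H, y, rcond=None)
resid = y - H @ beta

# closed-form leave-one-out residuals: e_i / (1 - h_ii), no model refitting
hat_diag = np.diag(H @ np.linalg.pinv(H))
loo_resid = resid / (1 - hat_diag)
press = np.mean(loo_resid ** 2)             # LOO criterion for model selection
```

In the proposed construction procedure, a criterion like this PRESS value decides which hidden nodes to keep, giving a compact model without exhaustive retraining.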
Abstract:
We present extensive spectroscopic time-series observations of the multiperiodic, rapidly rotating delta Scuti star tau Pegasi. Information about the oscillations is contained within the patterns of line-profile variation of the star's blended absorption-line spectrum. We introduce the new technique of Doppler deconvolution, with which we extract these patterns by modeling the intrinsic stellar spectrum and the broadening functions for each spectrum in the time series. Frequencies and modes of oscillation are identified from the variations using the technique of Fourier-Doppler imaging and a two-dimensional least-squares cleaning algorithm. We find a rich mode spectrum with degrees up to l = 20 and with frequencies below about 35 cycles day⁻¹. The modes with the largest amplitudes have frequencies that lie within a narrow band. We conclude that the observed spectrum can be explained if the modes of tau Peg propagate in the prograde direction with l ≈ |m| and with frequencies that are about equal in the corotating frame of the star. We discuss the implications of these results for the prospect of delta Scuti seismology.
Abstract:
A study was undertaken to examine a range of sample preparation and near infrared reflectance spectroscopy (NIRS) methodologies, using undried samples, for predicting organic matter digestibility (OMD, g kg⁻¹) and ad libitum intake (g kg⁻¹ W^0.75) of grass silages. A total of eight sample preparation/NIRS scanning methods were examined, involving three extents of silage comminution, two liquid extracts, and scanning via either an external probe (1100-2200 nm) or an internal cell (1100-2500 nm). The spectral data (log 1/R) for each of the eight methods were examined by three regression techniques, each with a range of data transformations. The 136 silages used in the study were obtained from farms across Northern Ireland over a two-year period, and had in vivo OMD (sheep) and ad libitum intake (cattle) determined under uniform conditions. In the comparisons of the eight sample preparation/scanning methods and the differing mathematical treatments of the spectral data, the sample population was divided into calibration (n = 91) and validation (n = 45) sets. The standard error of performance (SEP) on the validation set was used in comparisons of prediction accuracy. Across all eight sample preparation/scanning methods, the modified partial least squares (MPLS) technique generally minimized SEPs for both OMD and intake. The accuracy of prediction also increased with the degree of comminution of the forage and with scanning by internal cell rather than external probe. The system providing the lowest SEP used the MPLS regression technique on spectra from the finely milled material scanned through the internal cell. This resulted in SEP and R² (variance accounted for in the validation set) values of 24 g kg⁻¹ OM and 0.88 for OMD, and 5.37 g kg⁻¹ W^0.75 and 0.77 for intake, respectively.
These data indicate that with appropriate techniques NIRS scanning of undried samples of grass silage can produce predictions of intake and digestibility with accuracies similar to those achieved previously using NIRS with dried samples. (C) 1998 Elsevier Science B.V.
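For reference, the SEP statistic used above is commonly computed as the standard deviation of the validation residuals about their mean (i.e. bias-corrected); definitions vary slightly between chemometrics texts, and the OMD numbers below are made up for illustration.

```python
import numpy as np

def sep(y_ref, y_nirs):
    """Standard error of performance: SD of validation residuals about their mean
       (one common bias-corrected definition)."""
    r = y_nirs - y_ref
    return np.sqrt(np.sum((r - r.mean()) ** 2) / (len(r) - 1))

# e.g. in vivo OMD reference values vs NIRS-predicted values (illustrative numbers)
ref  = np.array([700., 720., 680., 750.])
pred = np.array([701., 721., 683., 753.])
error = sep(ref, pred)
```

Separating the systematic bias (mean residual) from SEP is what lets a calibration be judged on precision and accuracy independently.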
Abstract:
The characterization of thermocouple sensors for temperature measurement in varying-flow environments is a challenging problem. Recently, the authors introduced novel difference-equation-based algorithms that allow in situ characterization of temperature measurement probes consisting of two-thermocouple sensors with differing time constants. In particular, a linear least squares (LS) λ-formulation of the characterization problem, which yields unbiased estimates when identified using generalized total LS, was introduced. These algorithms assume that time constants do not change during operation and are therefore appropriate for temperature measurement in homogeneous constant-velocity liquid or gas flows. This paper develops an alternative β-formulation of the characterization problem that has the major advantage of allowing exploitation of a priori knowledge of the ratio of the sensor time constants, thereby facilitating the implementation of computationally efficient algorithms that are less sensitive to measurement noise. A number of variants of the β-formulation are developed, and appropriate unbiased estimators are identified. Monte Carlo simulation results are used to support the analysis.
Abstract:
An HPLC method has been developed and validated for the determination of spironolactone, 7α-thiomethylspirolactone and canrenone in paediatric plasma samples. The method utilises 200 µl of plasma, and sample preparation involves protein precipitation followed by solid-phase extraction (SPE). Standard curves of peak-height ratio (PHR) against concentration were determined by weighted least squares linear regression using a weighting factor of 1/concentration². The developed method was found to be linear over concentration ranges of 30-1000 ng/ml for spironolactone and 25-1000 ng/ml for 7α-thiomethylspirolactone and canrenone. The lower limits of quantification for spironolactone, 7α-thiomethylspirolactone and canrenone were calculated as 28, 20 and 25 ng/ml, respectively. The method was shown to be applicable to the determination of spironolactone, 7α-thiomethylspirolactone and canrenone in paediatric plasma samples and also in plasma from healthy human volunteers.
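The 1/concentration² weighting in the calibration fit can be sketched directly: scaling each row of the design matrix by the square root of its weight turns the weighted problem into an ordinary least-squares solve. The calibration levels, slope, and intercept below are invented, noiseless numbers for illustration.

```python
import numpy as np

conc = np.array([30., 50., 100., 250., 500., 1000.])   # ng/ml calibration levels
phr = 0.004 * conc + 0.02                              # idealised peak-height ratios

w = 1.0 / conc ** 2                                    # 1/concentration^2 weighting
sw = np.sqrt(w)
A = np.column_stack([conc, np.ones_like(conc)])
slope, intercept = np.linalg.lstsq(A * sw[:, None], phr * sw, rcond=None)[0]

# back-calculate an unknown sample concentration from its peak-height ratio
unknown = (0.42 - intercept) / slope
```

The 1/x² weighting downweights the high-concentration standards, so relative (percentage) accuracy is maintained near the lower limit of quantification, which is why it is the usual choice for bioanalytical calibration curves.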