95 results for least weighted squares


Relevance:

40.00%

Publisher:

Abstract:

As a promising method for pattern recognition and function estimation, least squares support vector machines (LS-SVM) express training as the solution of a linear system rather than the quadratic programming problem required by conventional support vector machines (SVM). In this paper, by using the information provided by the equality constraint, we transform the minimization problem with a single equality constraint in LS-SVM into an unconstrained minimization problem, and then propose reduced formulations for LS-SVM. With this transformation, the conjugate gradient (CG) method, the most time-consuming step in obtaining the numerical solution, needs to be applied only once instead of twice as proposed by Suykens et al. (1999). A comparison of the computational speed of our method with the CG method of Suykens et al. and with the first-order and second-order SMO methods on several benchmark data sets shows a reduction in training time of up to 44%. (C) 2011 Elsevier B.V. All rights reserved.
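The core computational point above — that LS-SVM training amounts to solving one linear system rather than a QP — can be sketched with toy data (the kernel choice, gamma value and labels below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Toy illustration of LS-SVM training as a single linear solve.
# Data, RBF kernel width and gamma are made up, not from the paper.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0] + X[:, 1])          # +/-1 labels

gamma = 10.0                            # regularisation parameter
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                         # RBF kernel matrix

# LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], y))
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

f = K @ alpha + b                       # decision values on the training set
train_acc = np.mean(np.sign(f) == y)
```

For large n an iterative solver such as CG would be applied to this same system; the paper's contribution is arranging the computation so that CG is needed only once.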

Relevance:

40.00%

Publisher:

Abstract:

This study presents a model based on partial least squares (PLS) regression for dynamic line rating (DLR). The model has been verified using data from field measurements, lab tests and outdoor experiments. Outdoor experimentation was conducted both to verify the model-predicted DLR and to provide training data not available from field measurements, mainly heavily loaded conditions. The proposed model, unlike direct measurement-based DLR techniques, enables prediction of the line rating for periods ahead of time whenever a reliable weather forecast is available. The PLS approach yields a very simple statistical model that accurately captures the physical performance of the conductor within a given environment without requiring the predetermination of parameters demanded by many physical modelling techniques. The accuracy of the PLS model has been tested by predicting the conductor temperature for measurement sets other than those used for training. Being a linear model, it is straightforward to estimate the conductor ampacity for a set of predicted weather parameters. The PLS-estimated ampacity has proven its accuracy through an outdoor experiment on a piece of the line conductor in real weather conditions.
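The PLS idea referred to above can be sketched with a minimal NIPALS-style PLS1 fit on invented weather-like predictors (none of the variables, values or the helper name below come from the study):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS-style PLS1 regression sketch (illustrative only)."""
    Xr, yr = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score vector
        p = Xr.T @ t / (t @ t)          # X loading
        qk = yr @ t / (t @ t)           # y loading
        Xr = Xr - np.outer(t, p)        # deflate X
        yr = yr - qk * t                # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q) # regression coefficients
    return B, y.mean() - X.mean(0) @ B

# invented weather-like predictors -> conductor temperature
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
B, b0 = pls1_fit(X, y, n_comp=3)
pred = X @ B + b0
```

With as many components as predictors and noiseless data the fit reproduces ordinary least squares; in practice fewer components are kept, which is what gives PLS its stability.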

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a statistical model, based on lab tests and field measurements, for the thermal behaviour of the line. The model uses Partial Least Squares (PLS) multiple regression and is applied to Dynamic Line Rating (DLR) in a wind-intensive area. DLR provides extra capacity to the line over the traditional seasonal static rating, making it possible to defer the need to reinforce the existing network or to build new lines. The proposed PLS model has a number of appealing features: the model is linear, so it is straightforward to use for predicting the line rating for future periods using the available weather forecast. Unlike the available physical models, the proposed model does not require any physical parameters of the line, which avoids the inaccuracies resulting from errors and/or variations in these parameters. The developed model is compared with a physical model, the CIGRE model, and has shown very good accuracy in predicting the conductor temperature as well as in determining the line rating for future time periods.

Relevance:

40.00%

Publisher:

Abstract:

A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be transformed into a model selection problem, where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular, fast subset selection approaches; however, they may produce only suboptimal models and can become trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and backward stages that avoids repetitive computations by exploiting the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
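A minimal sketch of forward orthogonal least squares selection under the error reduction ratio (ERR) criterion mentioned above — the forward stage only, on a made-up candidate dictionary (the paper's backward refinement and term exchange are not reproduced):

```python
import numpy as np

def forward_ols(P, y, n_terms):
    """Forward orthogonal least squares with the error reduction ratio (ERR).
    Candidate model terms are the columns of P (a toy dictionary here)."""
    n, m = P.shape
    selected, Q = [], np.zeros((n, 0))
    yy = y @ y
    for _ in range(n_terms):
        best = (-1.0, None, None)             # (ERR, index, orthogonalised column)
        for j in range(m):
            if j in selected:
                continue
            q = P[:, j].copy()
            for k in range(Q.shape[1]):       # orthogonalise against chosen terms
                q -= (Q[:, k] @ P[:, j]) / (Q[:, k] @ Q[:, k]) * Q[:, k]
            qq = q @ q
            if qq < 1e-12:
                continue
            err = (q @ y) ** 2 / (qq * yy)    # error reduction ratio
            if err > best[0]:
                best = (err, j, q)
        selected.append(best[1])
        Q = np.column_stack([Q, best[2]])
    return selected

# toy dictionary: only columns 0 and 2 actually generate y
rng = np.random.default_rng(2)
P = rng.normal(size=(100, 6))
y = 3.0 * P[:, 0] - 2.0 * P[:, 2]
chosen = forward_ols(P, y, n_terms=2)
```

Because each candidate is orthogonalised against the already-selected terms, the ERR of each new term measures only the *additional* variance it explains.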

Relevance:

40.00%

Publisher:

Abstract:

This paper formulates a linear kernel support vector machine (SVM) as a regularized least-squares (RLS) problem. By defining a set of indicator variables for the errors, the solution to the RLS problem is represented as an equation that relates the error vector to the indicator variables. Through partitioning of the training set, the SVM weights and bias are expressed analytically using the support vectors. It is also shown how this approach naturally extends to SVMs with nonlinear kernels whilst avoiding the need for Lagrange multipliers and duality theory. A fast iterative solution algorithm based on Cholesky decomposition with permutation of the support vectors is suggested as a solution method. The properties of our SVM formulation are analyzed and compared with standard SVMs using a simple example that can be illustrated graphically. The correctness and behavior of our solution (derived purely in the primal context of RLS) is demonstrated using a set of public benchmarking problems for both linear and nonlinear SVMs.
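A hedged sketch of the Cholesky step on a generic regularised least-squares problem (toy data; this is not the paper's partitioned support-vector algorithm, only the linear-algebra kernel it builds on):

```python
import numpy as np

# Regularised least squares  min ||X w - y||^2 + lam ||w||^2  solved via
# Cholesky factorisation of the SPD normal-equations matrix.
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
lam = 1e-6

A = X.T @ X + lam * np.eye(3)        # symmetric positive definite
L = np.linalg.cholesky(A)            # A = L @ L.T
z = np.linalg.solve(L, X.T @ y)      # forward substitution
w = np.linalg.solve(L.T, z)          # back substitution
```

Cholesky exploits the symmetry and positive-definiteness of the normal-equations matrix, which is why it is the natural factorisation here.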

Relevance:

30.00%

Publisher:

Abstract:

An HPLC method has been developed and validated for the determination of spironolactone, 7α-thiomethylspirolactone and canrenone in paediatric plasma samples. The method utilises 200 µl of plasma, and sample preparation involves protein precipitation followed by solid phase extraction (SPE). Standard curves of peak height ratio (PHR) against concentration were determined by weighted least squares linear regression using a weighting factor of 1/concentration². The developed method was found to be linear over concentration ranges of 30–1000 ng/ml for spironolactone and 25–1000 ng/ml for 7α-thiomethylspirolactone and canrenone. The lower limits of quantification for spironolactone, 7α-thiomethylspirolactone and canrenone were calculated as 28, 20 and 25 ng/ml, respectively. The method was shown to be applicable to the determination of spironolactone, 7α-thiomethylspirolactone and canrenone in paediatric plasma samples and also in plasma from healthy human volunteers.
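The 1/concentration² weighting described above down-weights high-concentration standards so that relative error is roughly uniform across the range; a sketch with invented peak height ratios:

```python
import numpy as np

# Weighted least squares calibration line with weights 1/concentration^2.
# Concentrations and peak height ratios (PHR) below are invented toy values.
conc = np.array([30.0, 100.0, 250.0, 500.0, 750.0, 1000.0])  # ng/ml
phr = 0.004 * conc + 0.01                                    # toy PHR values
w = 1.0 / conc**2                                            # weighting factor

# solve the weighted normal equations for slope and intercept
Xd = np.column_stack([conc, np.ones_like(conc)])
Wm = np.diag(w)
slope, intercept = np.linalg.solve(Xd.T @ Wm @ Xd, Xd.T @ Wm @ phr)
```

The weights keep low-concentration standards from being swamped by the large absolute residuals possible at the top of the calibration range.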

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To identify demographic and socioeconomic determinants of need for acute hospital treatment at small area level. To establish whether there is a relation between poverty and use of inpatient services. To devise a risk adjustment formula for distributing public funds for hospital services using, as far as possible, variables that can be updated between censuses. Design: Cross sectional analysis. Spatial interactive modelling was used to quantify the proximity of the population to health service facilities. Two stage weighted least squares regression was used to model use against supply of hospital and community services and a wide range of potential needs drivers including health, socioeconomic census variables, uptake of income support and family credit, and religious denomination. Setting: Northern Ireland. Main outcome measure: Intensity of use of inpatient services. Results: After endogeneity of supply and use was taken into account, a statistical model was produced that predicted use based on five variables: income support, family credit, elderly people living alone, all ages standardised mortality ratio, and low birth weight. The main effect of the formula produced is to move resources from urban to rural areas. Conclusions: This work has produced a population risk adjustment formula for acute hospital treatment in which four of the five variables can be updated annually rather than relying on census derived data. Inclusion of the social security data makes a substantial difference to the model and to the results produced by the formula.
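Two-stage least squares as used above addresses the endogeneity of supply; a generic 2SLS sketch with synthetic data (the variables are invented and do not correspond to this study's health-service data):

```python
import numpy as np

# Generic two-stage least squares (2SLS) sketch with synthetic data:
# x is endogenous (it shares the confounder u with y) and z is an instrument.
rng = np.random.default_rng(4)
n = 5000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
x = z + u + rng.normal(size=n)             # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)       # true coefficient is 2

# Stage 1: regress x on z, keep fitted values
Z = np.column_stack([z, np.ones(n)])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress y on the fitted values
X2 = np.column_stack([x_hat, np.ones(n)])
beta_2sls = np.linalg.lstsq(X2, y, rcond=None)[0]

# Naive OLS for comparison: biased upward by the confounder
X1 = np.column_stack([x, np.ones(n)])
beta_ols = np.linalg.lstsq(X1, y, rcond=None)[0]
```

The first stage strips out the part of the regressor correlated with the disturbance, which is exactly why the study models use against supply in two stages.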

Relevance:

30.00%

Publisher:

Abstract:

The analysis of chironomid taxa and environmental datasets from 46 New Zealand lakes identified temperature (February mean air temperature) and lake production (chlorophyll a (Chl a)) as the main drivers of chironomid distribution. Temperature was the strongest driver of chironomid distribution and consequently produced the most robust inference models. We present two possible temperature transfer functions from this dataset. The most robust model (weighted averaging-partial least squares (WA-PLS), n = 36) was based on a dataset with the most productive (Chl a > 10 µg l⁻¹) lakes removed. This model produced a coefficient of determination (r²jack) of 0.77, and a root mean squared error of prediction (RMSEPjack) of 1.31 °C. The Chl a transfer function (partial least squares (PLS), n = 37) was far less reliable, with an r²jack of 0.49 and an RMSEPjack of 0.46 log₁₀ µg l⁻¹. Both of these transfer functions could be improved by a revision of the taxonomy for the New Zealand chironomid taxa, particularly the genus Chironomus. The Chironomus morphotype was common in high-altitude, cool, oligotrophic lakes and in lowland, warm, eutrophic lakes. This could reflect the widespread distribution of one eurythermic species, or the collective distribution of a number of different Chironomus species with more limited tolerances. The Chl a transfer function could also be improved by inputting mean Chl a values into the inference model rather than the spot measurements that were available for this study.
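The weighted-averaging core of a WA-PLS transfer function can be sketched with invented taxon counts (WA-PLS additionally applies PLS components on top of this; only plain weighted averaging is shown):

```python
import numpy as np

# Weighted-averaging (WA) transfer-function sketch with invented taxon counts.
# rows = lakes, columns = taxa; temp = observed February mean air temperature.
counts = np.array([[10.0, 0.0, 2.0],
                   [5.0, 5.0, 0.0],
                   [0.0, 8.0, 1.0],
                   [1.0, 2.0, 9.0]])
temp = np.array([8.0, 11.0, 14.0, 17.0])

rel = counts / counts.sum(1, keepdims=True)           # relative abundances
optima = (rel * temp[:, None]).sum(0) / rel.sum(0)    # taxon temperature optima

# inferring temperature for a new (e.g. fossil) assemblage
fossil = np.array([2.0, 6.0, 1.0])
t_inferred = (fossil / fossil.sum()) @ optima
```

Each taxon's optimum is an abundance-weighted mean of the lake temperatures, and the inferred temperature is in turn an abundance-weighted mean of those optima, so reconstructions always fall within the range of the training set.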

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with Takagi-Sugeno (TS) fuzzy model identification of nonlinear systems using fuzzy clustering. In particular, an extended fuzzy Gustafson-Kessel (EGK) clustering algorithm, using robust competitive agglomeration (RCA), is developed for automatically constructing a TS fuzzy model from system input-output data. The EGK algorithm can automatically determine the 'optimal' number of clusters from the training data set. It is shown that the EGK approach is relatively insensitive to initialization and is less susceptible to local minima, a benefit derived from its agglomerate property. This issue is often overlooked in the current literature on nonlinear identification using conventional fuzzy clustering. Furthermore, the robust statistical concepts underlying the EGK algorithm help to alleviate the difficulty of cluster identification in the construction of a TS fuzzy model from noisy training data. A new hybrid identification strategy is then formulated, which combines the EGK algorithm with a locally weighted, least-squares method for the estimation of local sub-model parameters. The efficacy of this new approach is demonstrated through function approximation examples and also by application to the identification of an automatic voltage regulation (AVR) loop for a simulated 3 kVA laboratory micro-machine system.
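The locally weighted least-squares step for the local sub-model parameters can be sketched as follows, with crude Gaussian memberships standing in for the EGK cluster memberships (all data, centres and widths below are invented):

```python
import numpy as np

# Sketch of locally weighted least squares for local sub-model parameters.
# Gaussian memberships around two invented centres stand in for EGK output.
rng = np.random.default_rng(5)
x = rng.uniform(-2, 2, size=200)
y = np.where(x < 0, -1.0 - 2.0 * x, 1.0 + 3.0 * x)   # two local linear regimes

centres = np.array([-1.5, 1.5])
U = np.exp(-2.0 * (x[:, None] - centres[None, :]) ** 2)
U /= U.sum(1, keepdims=True)                          # memberships sum to 1

X = np.column_stack([x, np.ones_like(x)])
params = []
for i in range(2):
    W = np.diag(U[:, i])                              # membership weights
    params.append(np.linalg.solve(X.T @ W @ X, X.T @ W @ y))
slope0, slope1 = params[0][0], params[1][0]
```

Each local model is an ordinary weighted least-squares fit in which the fuzzy memberships act as the weights, so data near a cluster centre dominate that cluster's sub-model.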

Relevance:

30.00%

Publisher:

Abstract:

Estimation and detection of the hemodynamic response (HDR) are of great importance in functional MRI (fMRI) data analysis. In this paper, we propose the use of three H∞ adaptive filters (finite memory, exponentially weighted, and time-varying) for accurate estimation and detection of the HDR. The H∞ approach is used because it safeguards against worst-case disturbances and makes no assumptions on the (statistical) nature of the signals [B. Hassibi and T. Kailath, in Proc. ICASSP, 1995, vol. 2, pp. 949-952; T. Ratnarajah and S. Puthusserypady, in Proc. 8th IEEE Workshop DSP, 1998, pp. 1483-1487]. The performance of the proposed techniques is compared to the conventional t-test method as well as the well-known least mean squares (LMS) and recursive least squares (RLS) algorithms. Extensive numerical simulations show that the proposed methods result in better HDR estimation and activation detection.
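For context, the recursive least squares baseline mentioned above can be sketched as follows (the H∞ filters themselves are not reproduced; the data are synthetic):

```python
import numpy as np

# Standard recursive least squares (RLS) sketch, used here only as the
# baseline comparator named in the abstract; all data are synthetic.
def rls(Phi, y, lam=1.0, delta=100.0):
    n = Phi.shape[1]
    w = np.zeros(n)
    P = delta * np.eye(n)                     # initial inverse correlation
    for phi, d in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)   # gain vector
        w = w + k * (d - phi @ w)             # update the estimate
        P = (P - np.outer(k, phi) @ P) / lam  # update inverse correlation
    return w

rng = np.random.default_rng(6)
Phi = rng.normal(size=(500, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = Phi @ w_true
w_hat = rls(Phi, y)
```

Unlike this least-squares recursion, an H∞ filter bounds the worst-case estimation error rather than minimising an average squared error.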

Relevance:

30.00%

Publisher:

Abstract:

Tropical peatlands represent globally important carbon sinks with a unique biodiversity and are currently threatened by climate change and human activities. It is now imperative that proxy methods are developed to understand the ecohydrological dynamics of these systems and for testing peatland development models. Testate amoebae have been used as environmental indicators in ecological and palaeoecological studies of peatlands, primarily in ombrotrophic Sphagnum-dominated peatlands in the mid- and high-latitudes. We present the first ecological analysis of testate amoebae in a tropical peatland, a nutrient-poor domed bog in western (Peruvian) Amazonia. Litter samples were collected from different hydrological microforms (hummock to pool) along a transect from the edge to the interior of the peatland. We recorded 47 taxa from 21 genera. The most common taxa are Cryptodifflugia oviformis, Euglypha rotunda type, Phryganella acropodia, Pseudodifflugia fulva type and Trinema lineare. One species found only in the southern hemisphere, Argynnia spicata, is present. Arcella spp., Centropyxis aculeata and Lesqueresia spiralis are indicators of pools containing standing water. Canonical correspondence analysis and non-metric multidimensional scaling illustrate that water table depth is a significant control on the distribution of testate amoebae, similar to the results from mid- and high-latitude peatlands. A transfer function model for water table based on weighted averaging partial least-squares (WAPLS) regression is presented and performs well under cross-validation (r²apparent = 0.76, RMSE = 4.29; r²jack = 0.68, RMSEP = 5.18). The transfer function was applied to a 1-m peat core, and sample-specific reconstruction errors were generated using bootstrapping. The reconstruction generally suggests near-surface water tables over the last 3,000 years, with a shift to drier conditions at c. cal. AD 1218-1273.

Relevance:

30.00%

Publisher:

Abstract:

A geostatistical version of the classical Fisher rule (linear discriminant analysis) is presented. This method is applicable when a large dataset of multivariate observations is available within a domain split into several known subdomains, and it assumes that the variograms (or covariance functions) are comparable between subdomains, which differ only in the mean values of the available variables. The method consists of finding the eigen-decomposition of the matrix W⁻¹B, where W is the matrix of sills of all direct- and cross-variograms, and B is the covariance matrix of the vectors of weighted means within each subdomain, obtained by generalized least squares. The method is used to map peat blanket occurrence in Northern Ireland, with data from the Tellus survey, which requires a minimal change to the general recipe: using compositionally-compliant variogram tools and models, and working with log-ratio transformed data.
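The eigen-decomposition step can be sketched on ordinary (non-geostatistical) toy data, with a pooled within-group covariance standing in for the matrix of variogram sills (all values below are invented):

```python
import numpy as np

# Toy sketch of the Fisher-rule step: eigen-decomposition of inv(W) @ B.
# W: pooled within-group covariance (stand-in for the sill matrix);
# B: between-group covariance of the subdomain means.
rng = np.random.default_rng(7)
X0 = rng.normal(size=(100, 2)) + np.array([0.0, 0.0])
X1 = rng.normal(size=(100, 2)) + np.array([3.0, 1.0])

W = 0.5 * (np.cov(X0.T) + np.cov(X1.T))
g = 0.5 * (X0.mean(0) + X1.mean(0))                   # grand mean
B = 0.5 * (np.outer(X0.mean(0) - g, X0.mean(0) - g)
           + np.outer(X1.mean(0) - g, X1.mean(0) - g))

vals, vecs = np.linalg.eig(np.linalg.inv(W) @ B)
axis = vecs[:, np.argmax(vals.real)].real             # discriminant direction
scores0, scores1 = X0 @ axis, X1 @ axis
```

Projecting onto the leading eigenvector of W⁻¹B maximises the ratio of between-group to within-group spread, which is exactly the Fisher criterion.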

Relevance:

20.00%

Publisher:

Abstract:

Thermocouples are among the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique, based on measurements from two thermocouples, to address the loss of high-frequency fluctuations. In particular, a difference equation (DE) approach is proposed and compared with existing methods, both in simulation and on experimental test-rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better performance in terms of time constant estimation without any a priori assumption on the time constant ratios of the thermocouples.
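The errors-in-variables idea behind (generalized) total least squares can be sketched with ordinary TLS via the SVD on synthetic data (the paper's generalized TLS and thermocouple model are not reproduced; this shows only why TLS suits measurements that are noisy on both sides):

```python
import numpy as np

# Ordinary total least squares via SVD: errors in BOTH variables, as when
# both signals come from noisy sensors. Synthetic data with a true gain of 2.
rng = np.random.default_rng(8)
t = rng.uniform(0.0, 1.0, size=200)
x = t + 0.01 * rng.normal(size=200)        # noisy measurement 1
y = 2.0 * t + 0.01 * rng.normal(size=200)  # noisy measurement 2

A = np.column_stack([x, y])
A = A - A.mean(0)                          # centre the data
_, _, Vt = np.linalg.svd(A, full_matrices=False)
a, b = Vt[-1]                              # normal vector of the fitted line
slope_tls = -a / b                         # TLS estimate of the gain
```

Unlike ordinary least squares, which attributes all noise to y and is biased when x is noisy too, TLS minimises the perpendicular distances to the fitted line.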

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Hypertension and cognitive impairment are prevalent in older people. It is known that hypertension is a direct risk factor for vascular dementia and recent studies have suggested hypertension also impacts upon prevalence of Alzheimer's disease. The question is therefore whether treatment of hypertension lowers the rate of cognitive decline. OBJECTIVES: To assess the effects of blood pressure lowering treatments for the prevention of dementia and cognitive decline in patients with hypertension but no history of cerebrovascular disease. SEARCH STRATEGY: The trials were identified through a search of CDCIG's Specialised Register, CENTRAL, MEDLINE, EMBASE, PsycINFO and CINAHL on 27 April 2005. SELECTION CRITERIA: Randomized, double-blind, placebo controlled trials in which pharmacological or non-pharmacological interventions to lower blood pressure were given for at least six months. DATA COLLECTION AND ANALYSIS: Two independent reviewers assessed trial quality and extracted data. The following outcomes were assessed: incidence of dementia, cognitive change from baseline, blood pressure level, incidence and severity of side effects and quality of life. MAIN RESULTS: Three trials including 12,091 hypertensive subjects were identified. Average age was 72.8 years. Participants were recruited from industrialised countries. Mean blood pressure at entry across the studies was 170/84 mmHg. All trials instituted a stepped care approach to hypertension treatment, starting with a calcium-channel blocker, a diuretic or an angiotensin receptor blocker. The combined result of the three trials reporting incidence of dementia indicated no significant difference between treatment and placebo (Odds Ratio (OR) = 0.89, 95% CI 0.69, 1.16). 
Blood pressure reduction resulted in an 11% relative risk reduction of dementia in patients with no prior cerebrovascular disease, but this effect was not statistically significant (p = 0.38) and there was considerable heterogeneity between the trials. The combined results from the two trials reporting change in Mini Mental State Examination (MMSE) did not indicate a benefit from treatment (Weighted Mean Difference (WMD) = 0.10, 95% CI -0.03, 0.23). Both systolic and diastolic blood pressure levels were reduced significantly in the two trials assessing this outcome (WMD = -7.53, 95% CI -8.28, -6.77 for systolic blood pressure; WMD = -3.87, 95% CI -4.25, -3.50 for diastolic blood pressure). Two trials reported adverse effects requiring discontinuation of treatment and the combined results indicated a significant benefit from placebo (OR = 1.18, 95% CI 1.06, 1.30). When analysed separately, however, more patients on placebo in SCOPE were likely to discontinue treatment due to side effects; the converse was true in SHEP 1991. Quality of life data could not be analysed in the three studies. There was difficulty with the control group in this review as many of the control subjects received antihypertensive treatment because their blood pressures exceeded pre-set values. In most cases the study became a comparison between the study drug and a usual antihypertensive regimen. AUTHORS' CONCLUSIONS: There was no convincing evidence from the trials identified that blood pressure lowering prevents the development of dementia or cognitive impairment in hypertensive patients with no apparent prior cerebrovascular disease. There were significant problems identified with analysing the data, however, due to the number of patients lost to follow-up and the number of placebo patients given active treatment. This introduced bias.
More robust results may be obtained by analysing one year data to reduce differential drop-out or by conducting a meta-analysis using individual patient data.