888 results for Least-squares technique


Relevance:

100.00%

Publisher:

Abstract:

Among the positioning systems that compose GNSS (Global Navigation Satellite System), GPS can provide low-, medium-, and high-precision positioning data. However, GPS observables may be subject to many different types of errors. These systematic errors can degrade the accuracy of the positioning provided by GPS and are mainly related to GPS satellite orbits, multipath, and atmospheric effects. In order to mitigate these errors, a semiparametric model and the penalized least squares technique were employed in this study. This is similar to changing the stochastic model, in which error functions are incorporated, and the results are similar to those obtained when the functional model is changed instead. Using this method, it was shown that the ambiguities and the estimated station coordinates were more reliable and accurate than those produced by a conventional least squares methodology.
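The abstract does not spell out the estimator, but the penalized least squares idea it describes can be sketched as follows. The augmented system below, the identity penalty in any test call, and the smoothing parameter lam are illustrative assumptions, not the authors' actual GPS model:

```python
import numpy as np

def semiparametric_pls(A, y, R, lam):
    """Estimate (x, g) in y = A x + g + e by minimising
    ||y - A x - g||^2 + lam * g' R g  (penalized least squares).

    A   : n x p design matrix (parametric part, e.g. station coordinates)
    y   : n GPS observations
    R   : n x n roughness penalty acting on the systematic-error signal g
    lam : smoothing parameter trading data fit against smoothness of g
    """
    n, p = A.shape
    # Stationarity conditions give the augmented normal equations
    #   [A'A        A'   ] [x]   [A'y]
    #   [A   I + lam * R ] [g] = [ y ]
    lhs = np.block([[A.T @ A, A.T],
                    [A, np.eye(n) + lam * R]])
    rhs = np.concatenate([A.T @ y, y])
    z = np.linalg.solve(lhs, rhs)
    return z[:p], z[p:]
```

As lam grows, the systematic part g is forced toward zero and the estimator reduces to ordinary least squares, which is the conventional baseline the abstract compares against.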

Relevance:

100.00%

Publisher:

Abstract:

The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes depends on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. It is applied here to six mouse class II alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek), including peptides of up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistics (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistics (r2 and SEE) ranged between 0.98 and 0.995 and between 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
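As a minimal sketch of the PLS-with-LOO-CV machinery used above (with scikit-learn's PLS implementation rather than SYBYL, and with the iterative self-consistent subsequence selection omitted entirely), the cross-validated q2 could be computed like this; the occupancy-style encoding of X is an assumption about the descriptors:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def loo_q2(X, y, n_components):
    """Leave-one-out cross-validated q2 for a PLS model.  X is assumed
    to encode amino-acid occupancy per peptide position (the additive
    method's descriptors) and y the measured binding affinities."""
    preds = np.empty_like(y, dtype=float)
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components)
        model.fit(X[train], y[train])
        preds[test] = model.predict(X[test]).ravel()
    press = np.sum((y - preds) ** 2)        # predictive residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss_tot
```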

Relevance:

100.00%

Publisher:

Abstract:

An unstructured mesh finite volume discretisation method for simulating diffusion in anisotropic media in two-dimensional space is discussed. This technique is considered as an extension of the fully implicit hybrid control-volume finite-element method and it retains the local continuity of the flux at the control volume faces. A least squares function reconstruction technique together with a new flux decomposition strategy is used to obtain an accurate flux approximation at the control volume face, ensuring that the overall accuracy of the spatial discretisation maintains second order. This paper highlights that the new technique coincides with the traditional shape function technique when the correction term is neglected and that it significantly increases the accuracy of the previous linear scheme on coarse meshes when applied to media that exhibit very strong to extreme anisotropy ratios. It is concluded that the method can be used on both regular and irregular meshes, and appears independent of the mesh quality.
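The least squares reconstruction ingredient can be illustrated in isolation. The sketch below recovers a cell-centred gradient from neighbouring cell values via first-order Taylor constraints; the paper's flux decomposition and correction term are not reproduced:

```python
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least squares reconstruction of the solution gradient at a
    control-volume centre xc from neighbouring centre values (2-D).

    Imposes the first-order Taylor constraints
        grad(u) . (xn_i - xc) ~= un_i - uc
    in the least squares sense; a generic reconstruction sketch."""
    d = xn - xc                      # k x 2 displacement vectors
    du = un - uc                     # k value differences
    grad, *_ = np.linalg.lstsq(d, du, rcond=None)
    return grad

# A linear field u = 3x - 2y is recovered exactly from four neighbours
xc, uc = np.array([0.0, 0.0]), 0.0
xn = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.5], [0.5, -1.0]])
un = 3.0 * xn[:, 0] - 2.0 * xn[:, 1]
print(ls_gradient(xc, uc, xn, un))   # -> approximately [ 3. -2.]
```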

Relevance:

100.00%

Publisher:

Abstract:

Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, for which a greater quantity of data is generally available. Consequently, available water quality data sets span only relatively short time scales, unlike water quantity data, and the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. The assessment of model uncertainty is therefore an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches, namely ordinary least squares regression, weighted least squares regression, and Bayesian weighted least squares regression, for the estimation of uncertainty associated with pollutant build-up prediction using limited data sets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates; the stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the Bayesian approach, combined with the Monte Carlo simulation technique, provides a powerful tool that makes the best use of the available knowledge in the prediction and thereby presents a practical solution to counteract the limitations otherwise imposed on water quality modelling.
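A compact illustration of the first two estimators in that hierarchy, under standard linear-model assumptions (the design matrix and weights are placeholders; the Bayesian weighted variant, which additionally places a prior on the coefficients, is only noted in a comment):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: minimise ||y - X b||^2 with fixed inputs."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

def wls(X, y, w):
    """Weighted least squares: minimise sum_i w_i (y_i - x_i' b)^2,
    i.e. OLS on rows scaled by sqrt(w_i); larger w_i marks a more
    trusted observation."""
    sw = np.sqrt(w)
    return ols(X * sw[:, None], y * sw)

# The Bayesian weighted variant additionally treats b as random with a
# prior and yields a posterior distribution (explored, e.g., by Monte
# Carlo sampling) instead of a single point estimate.
```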

Relevance:

100.00%

Publisher:

Abstract:

A weighted-least-squares method using a sensitivity-analysis technique is proposed for the estimation of parameters in water-distribution systems. The parameters considered are the Hazen-Williams coefficients of the pipes. The objective function used is the sum of the weighted squares of the differences between the computed and the observed values of the variables. The weighted-least-squares method can elegantly handle multiple loading conditions with mixed types of measurements (such as heads and consumptions), different sets and numbers of measurements for each loading condition, and modifications in the network configuration due to the inclusion or exclusion of pipes affected by valve operations in each loading condition. Uncertainty in the parameter estimates can also be obtained. The method is applied to the estimation of parameters in a metropolitan urban water-distribution system in India.
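A hedged sketch of such a sensitivity-based weighted least squares calibration loop (in Gauss-Newton form) is given below; the `simulate` function is a hypothetical stand-in for a full hydraulic network solver, which the paper assumes but this sketch does not implement:

```python
import numpy as np

def wls_calibrate(simulate, C0, observed, w, n_iter=20, eps=1e-6):
    """Gauss-Newton weighted least squares calibration sketch.

    simulate : maps parameters C (e.g. Hazen-Williams coefficients) to
               simulated measurements (heads/consumptions); a stand-in
               for the hydraulic network solver, not implemented here
    C0       : initial parameter guess
    observed : measured values (possibly from several loading conditions)
    w        : measurement weights (diagonal of the weight matrix)
    """
    C = np.asarray(C0, dtype=float).copy()
    for _ in range(n_iter):
        s0 = simulate(C)
        r = observed - s0
        # Sensitivity (Jacobian) matrix by finite differences
        J = np.empty((len(r), len(C)))
        for j in range(len(C)):
            Cp = C.copy()
            Cp[j] += eps
            J[:, j] = (simulate(Cp) - s0) / eps
        # Weighted normal equations: (J' W J) dC = J' W r
        JW = J.T * w
        dC = np.linalg.solve(JW @ J, JW @ r)
        C += dC
        if np.linalg.norm(dC) < 1e-10:
            break
    return C
```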

Relevance:

100.00%

Publisher:

Abstract:

A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. The approach is based on the least squares QR (LSQR) decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are demonstrated using a numerical blood-vessel phantom, where the initial pressure is exactly known for quantitative comparison. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
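SciPy's lsqr exposes exactly this Tikhonov damping, so the core solve can be sketched as below; the random system and the fixed damp value are toy assumptions, and the paper's fast search for the optimal parameter within the reduced LSQR system is not reproduced:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Toy stand-ins: A maps initial pressure to measured data, b is the data.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 100))
x_true = np.zeros(100)
x_true[40:60] = 1.0                        # toy "initial pressure" profile
b = A @ x_true + 0.05 * rng.standard_normal(200)

# lsqr's damp argument solves min ||A x - b||^2 + damp^2 ||x||^2,
# i.e. Tikhonov regularization; here damp is fixed rather than optimised.
x_rec = lsqr(A, b, damp=1.0)[0]
```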

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a novel approach to solving the ordinal regression problem using Gaussian processes. The proposed approach, probabilistic least squares ordinal regression (PLSOR), obtains the probability distribution over ordinal labels using a particular likelihood function. It performs model selection (hyperparameter optimization) using the leave-one-out cross-validation (LOO-CV) technique. PLSOR has the conceptual simplicity and ease of implementation of the least squares approach. Unlike existing Gaussian process ordinal regression (GPOR) approaches, PLSOR does not use any approximation techniques for inference. We compare the proposed approach with state-of-the-art GPOR approaches on synthetic and benchmark data sets. Experimental results show the competitiveness of the proposed approach.
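The sketch below illustrates only the LOO-CV model-selection ingredient for a Gaussian process style kernel regressor, using the closed-form leave-one-out residuals; PLSOR's ordinal likelihood is not attempted, and the toy data and candidate kernel widths are assumptions:

```python
import numpy as np

def loo_cv_error(K, y, noise):
    """Closed-form leave-one-out residuals for a GP/kernel ridge
    regressor with kernel matrix K and noise variance `noise`."""
    Kinv = np.linalg.inv(K + noise * np.eye(len(y)))
    alpha = Kinv @ y
    loo_residuals = alpha / np.diag(Kinv)   # e_i = [K^-1 y]_i / [K^-1]_ii
    return np.mean(loo_residuals ** 2)

def rbf(X, width):
    """RBF kernel matrix for 1-D inputs X of shape (n, 1)."""
    d2 = (X - X.T) ** 2
    return np.exp(-d2 / (2 * width ** 2))

# Select the kernel width minimising the LOO error on toy data
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(4 * X[:, 0]) + 0.1 * np.random.default_rng(2).standard_normal(40)
widths = [0.05, 0.1, 0.2, 0.5]
best = min(widths, key=lambda w: loo_cv_error(rbf(X, w), y, noise=0.01))
print(best)
```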

Relevance:

100.00%

Publisher:

Abstract:

A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be transformed into a model selection problem, where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may produce only suboptimal models and can become trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
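For orientation, here is a minimal classical forward orthogonal least squares pass driven by the error reduction ratio; the paper's actual contribution (the backward refinement stage and the term-exchange scheme) is deliberately not attempted here:

```python
import numpy as np

def forward_ols(P, y, n_terms):
    """Forward orthogonal least squares subset selection driven by the
    error reduction ratio (ERR).  Forward stage only.

    P : n x M matrix of candidate regressors (columns)
    y : n target vector
    """
    selected = []
    residual = P.astype(float).copy()   # columns get orthogonalised in place
    for _ in range(n_terms):
        # ERR of each remaining candidate: (q'y)^2 / ((q'q)(y'y))
        num = (residual.T @ y) ** 2
        den = np.einsum('ij,ij->j', residual, residual) * (y @ y)
        err = np.where(den > 1e-12, num / den, 0.0)
        err[selected] = -np.inf         # never reselect a chosen term
        k = int(np.argmax(err))
        selected.append(k)
        q = residual[:, k].copy()
        # Orthogonalise all candidates against the newly chosen regressor
        residual -= np.outer(q, q @ residual) / (q @ q)
    return selected
```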

Relevance:

100.00%

Publisher:

Abstract:

The l1-norm sparsity constraint is a widely used technique for constructing sparse models. In this contribution, two zero-attracting recursive least squares algorithms, referred to as ZA-RLS-I and ZA-RLS-II, are derived by employing an l1-norm constraint on the parameter vector to promote model sparsity. In order to achieve a closed-form solution, the l1-norm of the parameter vector is approximated by an adaptively weighted l2-norm, in which the weighting factors are set to the inverses of the associated absolute values of the parameter estimates, which are readily available in the adaptive learning environment. ZA-RLS-II is computationally more efficient than ZA-RLS-I, exploiting known results from linear algebra as well as the sparsity of the system. The proposed algorithms are proven to converge, and adaptive sparse channel estimation is used to demonstrate the effectiveness of the proposed approach.
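The reweighting trick can be shown in a batch setting. The published algorithms are recursive (per-sample updates); the sketch below conveys only the adaptively weighted l2 approximation of the l1 penalty, with all data and the regularization weight lam as toy assumptions:

```python
import numpy as np

def zero_attracting_ls(X, y, lam, n_iter=10, eps=1e-6):
    """Batch illustration of approximating the l1 penalty as
    ||theta||_1 ~= sum_i theta_i^2 / |theta_i|, with the denominator
    frozen at the previous estimate."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        # Weighted l2 penalty: D = diag(1 / (|theta_i| + eps))
        D = np.diag(1.0 / (np.abs(theta) + eps))
        # Solve min ||y - X theta||^2 + lam * theta' D theta
        theta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return theta
```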

Relevance:

100.00%

Publisher:

Abstract:

The representation of interfaces by means of the algebraic moving-least-squares (AMLS) technique is addressed. This technique, in which the interface is represented by an unconnected set of points, is interesting for evolving fluid interfaces since there is no surface connectivity. The position of the surface points can thus be updated without concerns about the quality of any surface triangulation. We introduce a novel AMLS technique especially designed for evolving-interface applications, which we denote RAMLS (for Robust AMLS). The main advantages with respect to previous AMLS techniques are increased robustness, computational efficiency, and freedom from user-tuned parameters. Further, we propose a new front-tracking method based on the Lagrangian advection of the unconnected point set that defines the RAMLS surface. We assume that a background Eulerian grid is defined with some grid spacing h. The advection of the point set makes the surface evolve in time. The point cloud can be regenerated at any time (in particular, we regenerate it each time step) by intersecting the gridlines with the evolved surface, which guarantees that the density of points on the surface is always well balanced. The intersection algorithm is essentially a ray-tracing algorithm, well studied in computer graphics, in which a line (ray) is traced so as to detect all intersections with a surface. Also, the tracing of each gridline is independent and can thus be performed in parallel. Several tests are reported, assessing first the accuracy of the proposed RAMLS technique and then that of the front-tracking method based on it. Comparison with previous Eulerian, Lagrangian and hybrid techniques encourages further development of the proposed method for fluid mechanics applications. (C) 2008 Elsevier Inc. All rights reserved.
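One generic moving least squares building block, far simpler than the RAMLS surface definition itself, is a local weighted fit to an unconnected point cloud; the Gaussian weighting and line model below are illustrative assumptions:

```python
import numpy as np

def mls_local_fit(points, x0, h):
    """Local moving least squares line fit to an unconnected 2-D point
    cloud near x0; h plays the role of the support radius (comparable
    to the grid spacing above).

    Fits y ~ a + b * (x - x0) with Gaussian proximity weights."""
    x, y = points[:, 0], points[:, 1]
    w = np.exp(-((x - x0[0]) ** 2) / h ** 2)          # proximity weights
    A = np.column_stack([np.ones_like(x), x - x0[0]])
    # Weighted normal equations: (A' W A) c = A' W y
    AW = A.T * w
    a, b = np.linalg.solve(AW @ A, AW @ y)
    return a, b      # local height and slope of the reconstructed curve
```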

Relevance:

100.00%

Publisher:

Abstract:

In this letter, a speech recognition algorithm based on the least-squares method is presented. In particular, the intention is to exemplify how such a traditional numerical technique can be applied to solve a signal processing problem that is usually treated with more elaborate formulations.
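The letter's exact formulation is not reproduced here; one plausible minimal reading, in which each word class is represented by a small template subspace and a frame is assigned to the class with the smallest least squares residual, might look like this (the templates dictionary and the subspace interpretation are assumptions):

```python
import numpy as np

def classify_frame(feature, templates):
    """Classify a speech feature vector by its least squares residual
    against per-class template subspaces.

    templates : dict mapping a class label (e.g. a word) to a d x k
                matrix whose columns span that class's feature subspace
    """
    def residual(T):
        # Least squares fit of the frame onto the class subspace
        coeff, *_ = np.linalg.lstsq(T, feature, rcond=None)
        return np.linalg.norm(feature - T @ coeff)
    # Pick the class whose subspace explains the frame best
    return min(templates, key=lambda label: residual(templates[label]))
```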

Relevance:

100.00%

Publisher:

Abstract:

Fission product yields are fundamental parameters for several nuclear engineering calculations, in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take the complete uncertainty data into account. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. We then focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat.

Introduction

Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis in which different sources of uncertainty are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated shallowly, because their effects were considered of second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
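The Monte Carlo sampling step described above can be sketched generically; the linear "decay heat" response and all arrays below are toy assumptions standing in for the inventory codes (ACAB, ALEPH-2) actually used:

```python
import numpy as np

def mc_decay_heat(y_mean, y_cov, response, n_samples=10_000, seed=0):
    """Monte Carlo propagation of correlated fission yield
    uncertainties into a derived scalar, here a crude linear stand-in
    for decay heat.

    y_mean   : mean fission yields
    y_cov    : full yield covariance matrix (the new information
               whose effect is studied above)
    response : per-nuclide contribution to the response of interest
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(y_mean, y_cov, size=n_samples)
    heat = samples @ response
    # Off-diagonal terms of y_cov directly widen or narrow this spread
    return heat.mean(), heat.std()
```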

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to present a new global optimization method for determining all the optima of the least squares method (LSM) problem for pairwise comparison matrices. Such matrices are used, e.g., in the Analytic Hierarchy Process (AHP). Unlike some other distance-minimizing methods, LSM is usually hard to solve because of the corresponding nonlinear and non-convex objective function. It is found that the optimization problem can be reduced to solving a system of polynomial equations. The homotopy method, an efficient technique for solving nonlinear systems, is applied. The paper ends with two numerical examples having multiple global and local minima.
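The LSM objective itself is simple to state, as sketched below. Note that the multistart local search shown here is NOT the paper's homotopy method for the polynomial system; it is only a cheap way to expose the multiple local minima the paper discusses:

```python
import numpy as np
from scipy.optimize import minimize

def lsm_objective(w, A):
    """Least squares objective for a pairwise comparison matrix A:
    sum_ij (a_ij - w_i / w_j)^2."""
    ratio = np.outer(w, 1.0 / w)
    return np.sum((A - ratio) ** 2)

def lsm_multistart(A, n_starts=50, seed=0):
    """Multistart local optimisation sketch (illustrative substitute
    for the homotopy approach)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        w0 = rng.uniform(0.1, 1.0, n)
        res = minimize(lsm_objective, w0, args=(A,),
                       bounds=[(1e-6, None)] * n)
        if best is None or res.fun < best.fun:
            best = res
    w = best.x / best.x.sum()        # normalise the weight vector
    return w, best.fun
```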

Relevance:

100.00%

Publisher:

Abstract:

The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
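Under the usual reading of such schemes, the gradient estimate and the smallest singular value appearing in the error bound's denominator can be computed together; the construction below (first-order truncation, generic sample points) is an assumption about the setup, and the paper's empirical bound correction is not reproduced:

```python
import numpy as np

def ls_gradient_and_sigma(x0, f0, X, F):
    """Least squares gradient estimate from a truncated first-order
    Taylor expansion, returning also the smallest singular value of
    the least squares matrix.

    x0, f0 : base point and the function value there
    X, F   : sample points (one per row) and their function values
    """
    D = X - x0                                    # displacement matrix
    g, *_ = np.linalg.lstsq(D, F - f0, rcond=None)
    sigma_min = np.linalg.svd(D, compute_uv=False)[-1]
    return g, sigma_min                           # small sigma_min -> loose bound
```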