894 results for Least mean squares methods
Abstract:
A simple method was proposed for the determination of paracetamol and ibuprofen in tablets, based on UV measurements and partial least squares. The procedure was performed at pH 10.5, in the concentration ranges 3.00-15.00 µg ml⁻¹ (paracetamol) and 2.40-12.00 µg ml⁻¹ (ibuprofen). The model was able to predict paracetamol and ibuprofen in synthetic mixtures with root mean square errors of prediction of 0.12 and 0.17 µg ml⁻¹, respectively. Figures of merit (sensitivity, limit of detection and precision) were also estimated. The results achieved for the determination of these drugs in pharmaceutical formulations were in agreement with label claims and were verified by HPLC.
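As a minimal illustration of the calibration idea above, the sketch below fits a two-response partial least squares model with scikit-learn and reports root mean square errors of prediction; the spectra, concentrations and number of latent variables are placeholder assumptions, not data from the study.

```python
# Sketch: two-analyte PLS calibration (placeholder spectra/concentrations, not the study's data)
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((40, 120))                                   # UV spectra (samples x wavelengths)
Y = rng.uniform([3.0, 2.4], [15.0, 12.0], size=(40, 2))     # paracetamol, ibuprofen (ug/ml)

X_cal, X_val, Y_cal, Y_val = train_test_split(X, Y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=4)                         # latent variables chosen by CV in practice
pls.fit(X_cal, Y_cal)

rmsep = np.sqrt(np.mean((pls.predict(X_val) - Y_val) ** 2, axis=0))
print("RMSEP (paracetamol, ibuprofen):", rmsep)
```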
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors when choosing amongst different network architectures (M. Stone, "Cross validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 117-147, 1974). Based upon the minimization of an LOO criterion, either the mean square of the LOO errors or the LOO misclassification rate, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems respectively. The proposed backward elimination procedures exploit an orthogonalization procedure to ensure orthogonality between the subspace spanned by the pruned model and the deleted regressor. It is then shown that the LOO criteria used in both algorithms can be calculated via an analytic recursive formula, derived in this contribution, without actually splitting the estimation data set, so as to reduce computational expense. Compared with most other model construction methods, the proposed algorithms are advantageous in several respects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic, without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification demonstrate that the proposed algorithms are viable post-processing methods for pruning a model to gain extra sparsity and improved generalization.
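The sketch below illustrates the general idea of LOO-driven backward elimination for a linear-in-the-parameters model, using the closed-form LOO residual e_i/(1 - h_ii) so that no data splitting is required; it is a generic illustration under simplifying assumptions, not the authors' orthogonalized recursive formulation.

```python
# Sketch: backward elimination driven by the analytic LOO error of a linear least squares fit
import numpy as np

def loo_mse(X, y):
    """LOO mean squared error via e_loo_i = e_i / (1 - h_ii), no data splitting."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)          # leverages (hat-matrix diagonal)
    return np.mean((resid / (1.0 - h)) ** 2)

def backward_eliminate(X, y):
    """Greedily drop regressors while the LOO error keeps improving."""
    cols = list(range(X.shape[1]))
    best = loo_mse(X[:, cols], y)
    improved = True
    while improved and len(cols) > 1:
        improved = False
        for j in list(cols):
            trial = [c for c in cols if c != j]
            score = loo_mse(X[:, trial], y)
            if score < best:
                best, cols, improved = score, trial, True
                break
    return cols, best

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=60)     # only columns 0 and 3 are informative
print(backward_eliminate(X, y))
```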
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
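A minimal sketch of the univariate ingredient: a model expanded on Bernstein polynomial basis functions (nonnegative and summing to one) with the weights learnt by ordinary least squares. The degree and data below are illustrative assumptions, not the paper's construction.

```python
# Sketch: univariate Bernstein polynomial basis with weights learnt by least squares
import numpy as np
from math import comb

def bernstein_basis(x, degree):
    """Bernstein basis of given degree evaluated at x in [0, 1] (nonnegative, sums to 1)."""
    x = np.asarray(x)
    return np.stack([comb(degree, k) * x**k * (1 - x) ** (degree - k)
                     for k in range(degree + 1)], axis=1)

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)   # illustrative target

B = bernstein_basis(x, degree=6)                             # design matrix of basis functions
w, *_ = np.linalg.lstsq(B, y, rcond=None)                    # network weights by least squares
print("training RMSE:", np.sqrt(np.mean((y - B @ w) ** 2)))
```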
Abstract:
Effects of sire breed-grazing system and environmental factors on the first activities after birth of high-grade Nellore and crossbred Canchim x Nellore, Angus x Nellore, and Simmental x Nellore calves raised in intensive production systems, and of high-grade Nellore calves raised in an extensive production system, were studied. During 2 years, 185 calves were observed from birth until the end of the first suckling, and the following variables were recorded: duration of maternal attention (cow to calf) during the first 15 min after calving, latency to the first attempt to stand up, latency to stand up, latency to first suckling, duration of first suckling, and the interval from standing to suckling. Data were analyzed by least squares methods, with models that included the fixed effects of year and time of year of birth (March-April (early autumn) and May-June (late autumn)), sire breed-grazing system (Sy), sex of calf (Se), category of cow (primiparous and pluriparous), time of birth, the Sy x Se, year x Sy and year x time of year interactions, and the covariates weight of calf, rainfall, air temperature and relative humidity on the day of birth. Calves born from 6:00 to 8:00 h presented the longest latencies to first stand up (40.3 +/- 5.1 min), and the shortest latencies occurred in calves born from 14:00 to 16:00 h (15.8 +/- 2.7 min) (P < 0.01). Primiparous cows provided longer attention toward the calf in the first 15 min after birth than pluriparous cows (13.0 +/- 0.7 min versus 11.1 +/- 0.5 min; P < 0.05). This attention was also shorter in early autumn (11.0 +/- 0.5 min) and longer in late autumn (13.1 +/- 0.8 min) (P < 0.05). With respect to sire breed-grazing system, Nellore calves raised intensively took longer to stand and to suckle after birth than crossbred calves also raised intensively (P < 0.01). However, grazing system did not affect (P > 0.05) any behavioural variable studied. As regards sex differences, female calves took less time (P < 0.01) to suckle after standing than male calves. The results showed that even purebred or crossbred Bos indicus calves in subtropical environments need extra care when born on rainy days, especially during the first hours of the day. (C) 2006 Elsevier B.V. All rights reserved.
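The least squares (fixed-effects) analysis described above can be sketched with a formula-based linear model; the sketch below uses statsmodels on simulated placeholder data with hypothetical variable names, purely to illustrate the model structure, not the study's data set.

```python
# Sketch: least squares fixed-effects model on simulated placeholder data (hypothetical names)
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 185
df = pd.DataFrame({
    "latency_stand": rng.gamma(4.0, 7.0, n),                         # minutes (simulated)
    "year": rng.choice(["y1", "y2"], n),
    "season": rng.choice(["early_autumn", "late_autumn"], n),
    "sire_system": rng.choice(["Nellore_int", "Cross_int", "Nellore_ext"], n),
    "sex": rng.choice(["M", "F"], n),
    "parity": rng.choice(["primiparous", "pluriparous"], n),
    "birth_weight": rng.normal(33, 4, n),
    "rainfall": rng.exponential(5, n),
})

model = smf.ols(
    "latency_stand ~ C(year) + C(season) + C(sire_system) * C(sex)"
    " + C(parity) + birth_weight + rainfall",
    data=df,
).fit()
print(model.summary().tables[1])
```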
Abstract:
Aerodynamic balances are employed in wind tunnels to estimate the forces and moments acting on the model under test. This paper proposes a methodology for the assessment of uncertainty in the calibration of an internal multi-component aerodynamic balance. In order to obtain a suitable model to provide aerodynamic loads from the balance sensor responses, a calibration is performed prior to the tests by applying known weights to the balance. A multivariate polynomial fitting by the least squares method is used to interpolate the calibration data points. The uncertainties of both the applied loads and the readings of the sensors are considered in the regression. The data reduction includes the estimation of the calibration coefficients, the predicted values of the load components and their corresponding uncertainties, as well as the goodness of fit.
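A hedged sketch of the calibration step: the applied loads are regressed on polynomial terms of the sensor responses by least squares. The quadratic model, matrix sizes and simulated responses below are illustrative assumptions, and the uncertainty propagation described in the paper is not reproduced.

```python
# Sketch: multivariate quadratic least squares calibration of a six-component balance (illustrative)
import numpy as np
from itertools import combinations_with_replacement

def poly_design(R, degree=2):
    """Constant, linear and quadratic/cross terms of the sensor responses."""
    n, m = R.shape
    cols = [np.ones(n)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(m), d):
            cols.append(np.prod(R[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(4)
loads = rng.uniform(-100, 100, size=(50, 6))                 # applied forces and moments
readings = loads @ rng.normal(size=(6, 6)) * 1e-3            # simulated sensor responses
readings += 1e-5 * rng.normal(size=readings.shape)           # measurement noise

A = poly_design(readings)
coef, *_ = np.linalg.lstsq(A, loads, rcond=None)             # calibration coefficients
rms = np.sqrt(np.mean((A @ coef - loads) ** 2, axis=0))
print("RMS residual per load component:", rms)
```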
Abstract:
The objective of this paper is to present a methodology for estimating transmission line parameters. The method is applied to a single-phase transmission line using the method of least squares. The longitudinal and transversal parameters of the line are obtained as a function of a set of measurements of currents and voltages (as well as their derivatives with respect to time) at the terminals of the line during a phase-to-ground short circuit near the load. The method is based on the assumption that the transmission line can be represented by a single π circuit. The results show that the precision of the method depends on the length of the line, with better performance for short and medium-length lines. © 2012 IEEE.
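A simplified sketch of the estimation idea: with sampled terminal quantities and their time derivatives, the line parameters enter linearly and can be recovered by least squares. For brevity only the series (longitudinal) R and L are estimated from synthetic signals; the shunt branch of the π circuit and the fault scenario are not reproduced here.

```python
# Sketch: series R and L of a line recovered by least squares from synthetic terminal signals
import numpy as np

rng = np.random.default_rng(5)
fs = 10_000.0                                    # sampling rate [Hz]
t = np.arange(0, 0.1, 1 / fs)
R_true, L_true = 0.5, 2e-3                       # ohm, henry (illustrative values)

i = 100 * np.sin(2 * np.pi * 60 * t) + 30 * np.exp(-t / 0.02)          # fault-like current
di_dt = np.gradient(i, t)
v_drop = R_true * i + L_true * di_dt + 0.5 * rng.normal(size=t.size)   # measured voltage drop

A = np.column_stack([i, di_dt])                  # parameters enter linearly
(R_est, L_est), *_ = np.linalg.lstsq(A, v_drop, rcond=None)
print(f"R = {R_est:.3f} ohm, L = {L_est * 1e3:.3f} mH")
```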
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em Agronomia (Energia na Agricultura) - FCA
Abstract:
Two of the main objectives of petrophysical well-log interpretation are the determination of the boundaries between geological layers and of the contacts between fluids. For this purpose, the induction log has some important properties: it is sensitive to the type of fluid and to its distribution in the pore space, and its record can be modeled with satisfactory precision as a convolution between the formation conductivity and the tool response function. The first property ensures a good characterization of the reservoirs and, at the same time, highlights the contacts between fluids, which allows a basic zoning of the well log. The second property follows from the quasi-linear relationship between the induction log and the formation conductivity, which makes it possible to use linear systems theory and, in particular, to design digital filters adapted to the deconvolution of the original signal. The idea in this work is to produce an algorithm capable of identifying the contacts between the layers crossed by the well from the apparent conductivity read by the induction log. To simplify the problem, the formation model assumes a plane-parallel distribution of homogeneous layers. This model corresponds to a rectangular profile for the formation conductivity. Using the digitized input log, the inflection points are obtained numerically from the extrema of the first derivative. This generates a first approximation of the real formation profile. This estimated profile is then convolved with the tool response function, generating an apparent conductivity log. A constrained least-squares cost function is defined in terms of the difference between the measured and the estimated apparent conductivity. Minimizing the cost function yields the conductivities of the layers. The optimization problem of finding the best rectangular profile for the induction data is linear in the amplitudes (layer conductivities), but nonlinear in the estimation of the contacts between layers. In this case the amplitudes are estimated linearly by least squares while keeping the contacts fixed. In a second step the amplitudes are kept fixed and small changes in the layer boundaries are computed using a linearized approximation. This process is iterative, producing successive refinements until a convergence criterion is satisfied. The algorithm is applied to synthetic and real data, demonstrating the robustness of the method.
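The linear amplitude step can be sketched as follows: with the layer boundaries held fixed, the indicator function of each layer is convolved with an assumed tool response, and the layer conductivities follow from least squares against the apparent-conductivity log. The Gaussian response and synthetic log below are illustrative assumptions, not the tool response or data used in the work.

```python
# Sketch: amplitude step -- layer conductivities by least squares with boundaries held fixed
import numpy as np

def tool_response(length=21, width=3.0):
    """Assumed (Gaussian) vertical response of the induction tool, normalized."""
    z = np.arange(length) - length // 2
    g = np.exp(-0.5 * (z / width) ** 2)
    return g / g.sum()

def layer_design_matrix(n_samples, boundaries, response):
    """Columns: indicator of each layer convolved with the tool response."""
    edges = [0, *boundaries, n_samples]
    cols = []
    for top, base in zip(edges[:-1], edges[1:]):
        box = np.zeros(n_samples)
        box[top:base] = 1.0
        cols.append(np.convolve(box, response, mode="same"))
    return np.column_stack(cols)

# Synthetic rectangular profile with three layers; boundaries assumed known here
n = 200
true_sigma = np.concatenate([np.full(70, 0.5), np.full(60, 2.0), np.full(70, 0.8)])
resp = tool_response()
apparent = np.convolve(true_sigma, resp, mode="same")
apparent += 0.02 * np.random.default_rng(6).normal(size=n)

A = layer_design_matrix(n, boundaries=[70, 130], response=resp)
sigma_layers, *_ = np.linalg.lstsq(A, apparent, rcond=None)
print("estimated layer conductivities:", np.round(sigma_layers, 2))
```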
Abstract:
We present here a methodology for the rapid interpretation of aeromagnetic data in three dimensions. An estimation of the x, y and z coordinates of prismatic elements is obtained through the application of Euler's homogeneous equation to the data. In this application, only the total magnetic field and its derivatives are required. These components can be measured or calculated from the total field data. In the use of Euler's homogeneous equation, the structural index, the coordinates of the corners of the prism and the depth to the top of the prism are the unknowns. Inversion of the data by classical least-squares methods renders the problem ill-conditioned. However, the inverse problem can be stabilized by the introduction of a priori information on the parameter vector together with a weighting matrix. The algorithm was tested with synthetic and real data in a low magnetic latitude region and the results were satisfactory. The applicability of the theorem and the ambiguity caused by the lack of information about the direction of total magnetization, inherent in all automatic methods, are also discussed. As an application, an area within the Solimões basin was chosen to test the method. Since 1977, the Solimões basin has become a center of exploration activity, motivated by the first discovery of gas-bearing sandstones within the Monte Alegre formation. Since then, seismic investigations and drilling have been carried out in the region. A knowledge of basement structures is of great importance for locating oil traps and understanding the tectonic history of this region. Through the application of this method, a preliminary estimate of the areal distribution and depth of intrabasement and sedimentary magnetic sources was obtained.
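A hedged sketch of the core least-squares solve in Euler deconvolution: Euler's homogeneity equation is rearranged so that the source coordinates and the background field appear linearly, and the system is solved for one window of grid points. A small damping term stands in for the a priori information and weighting matrix used in the paper; the inputs are assumed gradients on a grid window.

```python
# Sketch: damped least squares solve of Euler's homogeneity equation for one data window
import numpy as np

def euler_solve(x, y, z, T, Tx, Ty, Tz, struct_index, damping=1e-6):
    """Estimate source position (x0, y0, z0) and background field B from
    (x - x0)Tx + (y - y0)Ty + (z - z0)Tz = N (B - T), rearranged so the
    unknowns appear linearly. `damping` is a simple stand-in for the paper's
    a priori information and weighting matrix."""
    A = np.column_stack([Tx, Ty, Tz, struct_index * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + struct_index * T
    lhs = A.T @ A + damping * np.eye(4)          # damped normal equations
    rhs = A.T @ b
    x0, y0, z0, B = np.linalg.solve(lhs, rhs)
    return x0, y0, z0, B
```

In practice the solve is repeated over moving windows of the gridded field, with the structural index characterizing the assumed source geometry.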
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Purpose: To determine the proportion of blindness and investigate the relationships between risk factors based on clinical characteristics and development of blindness in patients with primary open-angle glaucoma (POAG) treated for at least 15 years. Methods: A retrospective observational chart review was performed with 403 patients referred to a tertiary level hospital, each with a diagnosis of primary open-angle glaucoma, treated for at least 15 years. Blindness attributable to glaucoma was defined based on visual acuity and/or visual field tests. Variables considered to be possible risk factors for blindness were evaluated using odds ratio (OR), confidence interval (95% CI), and univariate and multivariate analyses. Results: Thirty-one patients became blind [13/53 (24.5%) - unilaterally and 18/53 (34%) - bilaterally] during the follow-up period of treatment (19.5 +/- 4.6 years, range 15-31 years). Multivariate statistics with regression analysis revealed that persistency on initial therapy <= 6 months was significantly associated with blindness, both unilateral (OR: 8.4; 95% CI: 1.3-56.4) and bilateral (OR: 7.2; 95% CI: 1.3-39.6). Other potential factors such as race, age, gender or number of medications were not associated with blindness. Conclusion: Blindness from primary open-angle glaucoma was not uncommon in this population of treated patients after the long follow-up period proposed. Persistence rates with the first therapy, as measured by a medical decision to change, were low. Persistence <= 6 months was statistically associated with the development of unilateral and bilateral blindness from glaucoma.
Abstract:
A new method for the analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer, and the fit is performed directly to the scattering data, also introducing a structure factor when required. The cross-section structure (the electron density profile in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile. The optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness constraint. With the new approach, it is possible to optimize simultaneously the form factor, the structure factor and several other parameters in the model. The applicability of the method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously, and the results provide new insight into this very well known system.
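The basic ingredient can be sketched as a smoothness-constrained least squares fit of Gaussian coefficients, solved here as an augmented least squares problem with a second-difference penalty. Centres, widths, the target profile and the penalty weight are illustrative; the paper fits the scattering data directly and selects the weight by the point-of-inflection method.

```python
# Sketch: Gaussian-basis profile with a second-difference smoothness penalty (augmented least squares)
import numpy as np

z = np.linspace(-30, 30, 200)                    # distance across the bilayer (arbitrary units)
centres = np.linspace(-25, 25, 15)
width = 4.0
G = np.exp(-0.5 * ((z[:, None] - centres[None, :]) / width) ** 2)   # Gaussian basis matrix

# Illustrative "true" profile: head-group maxima and a methyl trough
target = np.exp(-0.5 * ((np.abs(z) - 18) / 3) ** 2) - 0.6 * np.exp(-0.5 * (z / 4) ** 2)
data = target + 0.03 * np.random.default_rng(7).normal(size=z.size)

lam = 5.0                                        # smoothness weight (illustrative)
D = np.diff(np.eye(len(centres)), n=2, axis=0)   # second-difference operator on the coefficients
A = np.vstack([G, np.sqrt(lam) * D])
b = np.concatenate([data, np.zeros(D.shape[0])])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
profile = G @ coef                               # smoothed profile
print("fit RMS:", np.sqrt(np.mean((profile - data) ** 2)))
```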
Abstract:
Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. It is shown that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
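A generic moment-type sketch of kernel smoothing for recurrent event data with independent censoring: pooled event times are smoothed with a Gaussian kernel and normalized by the number of subjects still under observation. The data, kernel and bandwidth are placeholders, and the paper's exact estimators and bootstrap bandwidth selection are not reproduced.

```python
# Sketch: kernel-smoothed occurrence rate from pooled recurrent events with censoring
import numpy as np

def rate_estimate(event_times, censor_times, grid, bandwidth):
    """event_times: list of per-subject event-time arrays; censor_times: end of observation."""
    censor_times = np.asarray(censor_times)
    at_risk = (censor_times[None, :] >= grid[:, None]).sum(axis=1)   # subjects still observed
    pooled = np.concatenate(event_times)
    k = np.exp(-0.5 * ((grid[:, None] - pooled[None, :]) / bandwidth) ** 2)
    k /= bandwidth * np.sqrt(2 * np.pi)                              # Gaussian kernel weights
    return k.sum(axis=1) / np.maximum(at_risk, 1)

rng = np.random.default_rng(8)
censor = rng.uniform(5, 10, size=50)                                 # simulated censoring times
events = [np.sort(rng.uniform(0, c, rng.poisson(2 * c))) for c in censor]  # ~2 events per time unit
grid = np.linspace(0, 10, 101)
print(rate_estimate(events, censor, grid, bandwidth=0.5)[:5])
```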
Abstract:
Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present our validations using four experiments: (1) a leave-one-out experiment, (2) an experiment evaluating the present approach for handling pathology, (3) an experiment evaluating the present approach for handling outliers, and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
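The least trimmed squares ingredient can be sketched on a toy linear model: fit by least squares, keep the h points with the smallest residuals, and refit until stable, so that a fraction of gross outliers cannot distort the estimate. The outlier rate and data below are illustrative, not the paper's three-stage surface reconstruction.

```python
# Sketch: least trimmed squares on a toy linear model (C-step style refitting)
import numpy as np

def lts_fit(X, y, outlier_rate=0.2, n_iter=20):
    """Refit by least squares on the h points with smallest residuals."""
    n = len(y)
    h = int(np.ceil((1.0 - outlier_rate) * n))       # size of the trimmed subset
    subset = np.arange(n)
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X[subset], y[subset], rcond=None)
        resid = np.abs(y - X @ beta)
        subset = np.argsort(resid)[:h]               # keep the best-fitting points
    return beta, subset

rng = np.random.default_rng(9)
X = np.column_stack([np.ones(100), rng.uniform(-5, 5, 100)])
y = 2.0 + 0.5 * X[:, 1] + 0.1 * rng.normal(size=100)
y[:15] += 10 * rng.normal(size=15)                   # gross outliers
beta, inliers = lts_fit(X, y, outlier_rate=0.2)
print("robust (intercept, slope):", np.round(beta, 2))
```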