940 results for LEAST-SQUARES METHODS


Relevance:

100.00%

Publisher:

Abstract:

Inverse diffraction consists in determining the field distribution on a boundary surface from knowledge of the distribution on a surface situated within the domain where the wave propagates. This problem is a good example for illustrating the use of least-squares methods (also called regularization methods) for solving linear ill-posed inverse problems. We focus on obtaining error bounds for regularized solutions and show that the stability of the restored field far from the boundary surface is quite satisfactory: the error is proportional to ε^α (α ≃ 1), ε being the error in the data (Hölder continuity). However, the error in the restored field on the boundary surface is only proportional to an inverse power of |ln ε| (logarithmic continuity). Such poor continuity implies some limitations on the resolution achievable in practice; in this case, the resolution limit is seen to be about half a wavelength. Copyright © 1981 by The Institute of Electrical and Electronics Engineers, Inc.
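The stabilizing role of regularization can be illustrated with a generic Tikhonov least-squares toy problem (a minimal numpy sketch of the technique class, not the paper's diffraction operator; the Hilbert matrix stands in for an ill-conditioned forward map):

```python
import numpy as np

# Ill-conditioned forward operator: the 8x8 Hilbert matrix
# (condition number ~ 1.5e10) stands in for the diffraction operator.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)

rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # data with a small error

def tikhonov(A, b, mu):
    """Regularized least squares: argmin ||A x - b||^2 + mu ||x||^2."""
    return np.linalg.solve(A.T @ A + mu * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.solve(A, b)   # unregularized: data error hugely amplified
x_reg = tikhonov(A, b, 1e-6)      # regularized: stable reconstruction
```

Even a tiny data error destroys the naive solution, while the regularized one stays close to the true field, which is the stability behaviour the error bounds quantify.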

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems whose basis functions are Bezier-Bernstein polynomials. The algorithm is general in that it copes with n-dimensional inputs by using an additive decomposition construction to overcome the curse of dimensionality associated with large n. For completeness, the construction also introduces univariate Bezier-Bernstein polynomial functions. Like B-spline-based neurofuzzy systems, Bezier-Bernstein polynomial networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The modeling network is based on the additive decomposition approach together with two separate basis-function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt by conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
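As a minimal illustration of the ingredients involved (my own sketch, not the paper's construction algorithm): univariate Bernstein polynomials are nonnegative and sum to one, and a model that is linear in the weights can be fitted by ordinary least squares:

```python
import numpy as np
from math import comb

def bernstein_basis(x, n):
    """Design matrix of the n+1 Bernstein polynomials
    B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k) for x in [0, 1]."""
    x = np.asarray(x)[:, None]
    k = np.arange(n + 1)[None, :]
    c = np.array([comb(n, j) for j in range(n + 1)])
    return c * x**k * (1 - x)**(n - k)

x = np.linspace(0.0, 1.0, 50)
Phi = bernstein_basis(x, 7)

# Two properties that let the basis be read as fuzzy memberships:
assert (Phi >= 0).all()                    # nonnegativity
assert np.allclose(Phi.sum(axis=1), 1.0)   # partition of unity

# The model output is linear in the weights, so least squares applies.
y = np.sin(np.pi * x)                      # toy nonlinear target
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

With degree 7 the fitted curve tracks the toy target closely, showing why least squares suffices once the basis is fixed.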

Relevance:

100.00%

Publisher:

Abstract:

The effects of sire breed-grazing system and environmental factors on the first postnatal activities of high grade Nellore and crossbred Canchim x Nellore, Angus x Nellore, and Simmental x Nellore calves raised in intensive production systems, and of high grade Nellore calves raised in an extensive production system, were studied. During 2 years, 185 calves were observed from birth until the end of first suckling, and the following variables were estimated: duration of maternal attention (cow to calf) during the first 15 min after calving, latency to first attempt to stand up, latency to stand up, latency to first suckling, duration of first suckling, and the interval from standing to suckling. Data were analyzed by least squares methods, with models that included fixed effects of year and time of year of birth (March-April (early autumn) and May-June (late autumn)), sire breed-grazing system (Sy), sex of calf (Se), category of cow (primiparous and pluriparous), time of birth, the Sy x Se, year x Sy and year x time of year interactions, and the covariates weight of calf, rainfall, air temperature and relative humidity on the day of birth. Calves born from 6:00 to 8:00 h presented the longest latencies to first stand up (40.3 +/- 5.1 min) and those born from 14:00 to 16:00 h the shortest (15.8 +/- 2.7 min) (P < 0.01). Primiparous cows provided longer attention toward the calf in the first 15 min after birth than pluriparous cows (13.0 +/- 0.7 min versus 11.1 +/- 0.5 min; P < 0.05). This attention was also shorter in early autumn (11.0 +/- 0.5 min) and longer in late autumn (13.1 +/- 0.8 min) (P < 0.05). Relative to sire breed-grazing system, Nellore calves raised intensively took longer to stand and to suckle after birth than crossbred calves also raised intensively (P < 0.01). However, grazing system did not affect (P > 0.05) any behaviour variable studied.
As regards sex differences, female calves took less time (P < 0.01) to suckle after standing than male calves. Results showed that both purebred and crossbred Bos indicus calves in subtropical environments need extra care when born on rainy days, especially during the first hours of the day. (C) 2006 Elsevier B.V. All rights reserved.
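A generic sketch of this kind of least-squares analysis (all variable names and effect sizes are invented, not the study's data): one dummy-coded fixed effect plus a centred covariate, with coefficients estimated by ordinary least squares.

```python
import numpy as np

# Simulated data: a season fixed effect and a birth-weight covariate
# acting on a latency outcome (hypothetical numbers throughout).
rng = np.random.default_rng(6)
n = 185
season = rng.integers(0, 2, n)           # 0 = early autumn, 1 = late autumn
weight = rng.normal(35.0, 4.0, n)        # calf birth weight (kg)
latency = 20.0 + 5.0 * season + 0.3 * (weight - 35.0) + rng.normal(0.0, 3.0, n)

# Design matrix: intercept, season dummy, centred covariate.
X = np.column_stack([np.ones(n), season, weight - 35.0])
beta, *_ = np.linalg.lstsq(X, latency, rcond=None)
# beta[0]: baseline mean, beta[1]: season effect, beta[2]: weight slope
```

The fitted coefficients recover the simulated effects, which is the least-squares machinery behind the fixed-effects models reported above.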

Relevance:

100.00%

Publisher:

Abstract:

Aerodynamic balances are employed in wind tunnels to estimate the forces and moments acting on the model under test. This paper proposes a methodology for the assessment of uncertainty in the calibration of an internal multi-component aerodynamic balance. In order to obtain a suitable model to provide aerodynamic loads from the balance sensor responses, a calibration is performed prior to the tests by applying known weights to the balance. A multivariate polynomial fitting by the least squares method is used to interpolate the calibration data points. The uncertainties of both the applied loads and the readings of the sensors are considered in the regression. The data reduction includes the estimation of the calibration coefficients, the predicted values of the load components and their corresponding uncertainties, as well as the goodness of fit.
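The calibration step can be sketched generically as follows (a toy two-component example with invented response coefficients, not the paper's balance model): known loads are applied, a full quadratic polynomial in the sensor readings is assumed, and each load component gets its own least-squares fit.

```python
import numpy as np

# Hypothetical two-component balance: loads (F1, F2), readings (r1, r2).
rng = np.random.default_rng(1)

def design(r1, r2):
    """Quadratic design matrix: 1, r1, r2, r1^2, r1*r2, r2^2."""
    return np.column_stack([np.ones_like(r1), r1, r2,
                            r1**2, r1 * r2, r2**2])

# Simulated calibration loadings and mildly nonlinear sensor responses.
F1 = rng.uniform(-10.0, 10.0, 40)
F2 = rng.uniform(-10.0, 10.0, 40)
r1 = 0.50 * F1 + 0.01 * F2 + 0.002 * F1**2 + 0.001 * rng.standard_normal(40)
r2 = 0.45 * F2 - 0.02 * F1 + 0.001 * rng.standard_normal(40)

X = design(r1, r2)
c1, *_ = np.linalg.lstsq(X, F1, rcond=None)   # coefficients for F1
c2, *_ = np.linalg.lstsq(X, F2, rcond=None)   # coefficients for F2

# Loads predicted back from the sensor readings.
F1_hat, F2_hat = X @ c1, X @ c2
```

In the paper the regression additionally propagates the uncertainties of the applied loads and of the readings; this sketch shows only the polynomial fitting step.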

Relevance:

100.00%

Publisher:

Abstract:

The objective of this paper is to present a methodology to estimate transmission line parameters, applied to a single-phase transmission line using the method of least squares. The longitudinal and transversal parameters of the line are obtained as a function of a set of measurements of currents and voltages (as well as their time derivatives) at the line terminals during a phase-to-ground short circuit near the load. The method is based on the assumption that the transmission line can be represented by a single π circuit. The results show that the precision of the method depends on the length of the line, with better performance for short and medium-length lines. © 2012 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

We present here a methodology for the rapid interpretation of aeromagnetic data in three dimensions. An estimate of the x, y and z coordinates of prismatic elements is obtained by applying Euler's homogeneous equation to the data. In this application, only the total magnetic field and its derivatives are needed; these components can be measured or calculated from the total field data. In Euler's homogeneous equation, the structural index, the coordinates of the corners of the prism and the depth to the top of the prism form the unknown parameter vector. Inversion of the data by classical least-squares methods renders the problem ill-conditioned. However, the inverse problem can be stabilized by introducing a priori information on the parameter vector together with a weighting matrix. The algorithm was tested with synthetic and real data in a low magnetic latitude region and the results were satisfactory. The applicability of the theorem and the ambiguity caused by the lack of information about the direction of total magnetization, inherent in all automatic methods, are also discussed. As an application, an area within the Solimões basin was chosen to test the method. Since 1977, the Solimões basin has become a center of exploration activity, motivated by the first discovery of gas-bearing sandstones within the Monte Alegre formation. Since then, seismic investigations and drilling have been carried out in the region. Knowledge of basement structures is of great importance in locating oil traps and in understanding the tectonic history of this region. Through the application of this method, a preliminary estimate of the areal distribution and depth of interbasement and sedimentary magnetic sources was obtained.
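A schematic illustration of why the equation leads to a least-squares problem (a synthetic 2-D point-source example of my own, not the paper's stabilized 3-D inversion): for a fixed structural index N, Euler's homogeneous equation is linear in the source coordinates and the background level.

```python
import numpy as np

# Synthetic anomaly of a 2-D point-like source at (x0, z0): the field
# T = k / r^2 is homogeneous of degree -N with N = 2, so it satisfies
# (x - x0) dT/dx + (z - z0) dT/dz = -N (T - B), background B = 0 here.
N = 2.0
x0_true, z0_true = 50.0, 10.0
x = np.linspace(0.0, 100.0, 201)
z = 0.0                                   # profile observed at the surface
r2 = (x - x0_true)**2 + (z - z0_true)**2
T = 1.0e4 / r2
dTdx = -2.0e4 * (x - x0_true) / r2**2
dTdz = -2.0e4 * (z - z0_true) / r2**2

# Rearranged: x0*dTdx + z0*dTdz + N*B = x*dTdx + z*dTdz + N*T,
# which is linear in the unknowns (x0, z0, B).
A = np.column_stack([dTdx, dTdz, np.full_like(x, N)])
b = x * dTdx + z * dTdz + N * T
(x0_est, z0_est, B_est), *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy field data this small system becomes ill-conditioned, which is where the a priori information and weighting matrix described above come in.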

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

A new method for analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer and the fit is performed directly to the scattering data, introducing a structure factor when required. The cross-section structure (the electron density profile in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile. The optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness. With the new approach, it is possible to optimize simultaneously the form factor, structure factor and several other parameters in the model. The applicability of this method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously, and the results provide new insight into this very well known system.
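A stripped-down version of the idea (fixed Gaussian centres and a common width, amplitudes free; my illustrative sketch, not the authors' routine): the profile is a sum of Gaussians, so the amplitudes enter linearly and can be found by least squares with a smoothness penalty on their second differences.

```python
import numpy as np

# Basis of 25 Gaussians with fixed centres and common width.
x = np.linspace(-1.0, 1.0, 200)
centers = np.linspace(-1.0, 1.0, 25)
sigma = 0.08
G = np.exp(-(x[:, None] - centers[None, :])**2 / (2.0 * sigma**2))

# Toy "electron density profile" and noisy data.
rho_true = np.exp(-x**2 / 0.1) - 0.5 * np.exp(-x**2 / 0.02)
rng = np.random.default_rng(7)
y = rho_true + 0.005 * rng.standard_normal(x.size)

# Smoothness-constrained least squares:
# minimise ||G a - y||^2 + lam ||D2 a||^2, D2 = second differences.
lam = 1e-4
D2 = np.diff(np.eye(25), n=2, axis=0)
A_aug = np.vstack([G, np.sqrt(lam) * D2])
b_aug = np.concatenate([y, np.zeros(D2.shape[0])])
a, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
rho_fit = G @ a
```

The penalty weight `lam` plays the role that the point-of-inflection method selects automatically in the paper; here it is simply fixed by hand.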

Relevance:

100.00%

Publisher:

Abstract:

Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. We show that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks occurring at the censoring times, whereas there is no such problem with the least squares method. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
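A toy version of the smoothing step (all processes fully observed on [0, tau], so no censoring adjustment is needed; an illustrative sketch rather than the estimators studied in the paper): pool the event times of n subjects and kernel-smooth them.

```python
import numpy as np

# Simulate n homogeneous Poisson processes with rate 2 on [0, tau]:
# counts are Poisson(rate*tau) and, given the count, event times are
# uniform on [0, tau].
rng = np.random.default_rng(3)
n, tau, rate = 200, 10.0, 2.0
counts = rng.poisson(rate * tau, size=n)
times = rng.uniform(0.0, tau, size=counts.sum())   # pooled event times

def rate_hat(t, h=0.5):
    """Kernel estimate of the occurrence rate at times t (Gaussian kernel):
    rate_hat(t) = (1 / (n h)) * sum_ij K((t - t_ij) / h)."""
    u = (np.asarray(t)[None, :] - times[:, None]) / h
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return K.sum(axis=0) / (n * h)

est = rate_hat([3.0, 5.0, 7.0])   # interior points, away from boundaries
```

With censoring, each kernel contribution must be weighted by the number of subjects still under observation, which is where the nicks discussed above can arise.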

Relevance:

100.00%

Publisher:

Abstract:

The paper uses paired comparison-based scoring procedures to determine the results of Swiss system chess team tournaments. We present the main challenges of ranking in these tournaments, the features of individual and team competitions, and the failures of the official lexicographic orders. The tournament is represented as a ranking problem, and our model is discussed with respect to the properties of the score, generalised row sum and least squares methods. The proposed method is illustrated with a detailed analysis of two recent chess team European championships. Final rankings are compared through their distances and visualized by multidimensional scaling (MDS). Differences from the official ranking are revealed by a decomposition of the least squares method. Rankings are evaluated by prediction accuracy, retrodictive performance, and stability.
The paper argues for the use of the least squares method with an appropriate generalised results matrix favouring match points.

Relevance:

100.00%

Publisher:

Abstract:

The paper uses paired comparison-based scoring procedures for ranking the participants of a Swiss system chess team tournament. We present the main challenges of ranking in Swiss systems, the features of individual and team competitions, and the failures of the official lexicographic orders. The tournament is represented as a ranking problem, and our model is discussed with respect to the properties of the score, generalized row sum and least squares methods. The proposed procedure is illustrated with a detailed analysis of two recent chess team European championships. Final rankings are compared by their distances and visualized with multidimensional scaling (MDS). Differences from the official ranking are revealed by the decomposition of the least squares method. Rankings are evaluated by prediction accuracy, retrodictive performance, and stability. The paper argues for the use of the least squares method with a results matrix favoring match points.
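The core of the least squares ranking method can be sketched in a few lines (a toy four-player example of my own; `results` maps an ordered pair (i, j) to the points by which i outscored j): ratings minimise the squared deviations between rating differences and match results, which leads to a Laplacian linear system.

```python
import numpy as np

# Toy match results: (i, j) -> points by which i outscored j (invented).
results = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 0.0, (2, 3): 2.0}
n = 4
L = np.zeros((n, n))        # Laplacian of the comparison graph
s = np.zeros(n)             # right-hand side of the normal equations
for (i, j), a in results.items():
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0
    s[i] += a;      s[j] -= a

# Least squares ratings: argmin sum_(i,j) (r_i - r_j - a_ij)^2 with
# sum(r) = 0, obtained via the pseudoinverse of the singular Laplacian.
r = np.linalg.pinv(L) @ s
```

The zero-sum normalisation fixes the additive degree of freedom; the resulting ratings order the players even when, as here, not every pair has played (the non-round-robin situation of a Swiss system).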

Relevance:

100.00%

Publisher:

Abstract:

Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identify symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted, in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters.
The recommended approaches for symptom cluster identification using non-multivariate-normal data were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms unaffected by the correlations between factors. Symptoms could be associated with multiple clusters as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
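The extraction step can be illustrated with the eigenvalues of a correlation matrix (principal components as a simplified stand-in for principal axis factoring, on simulated data with two latent clusters; not the study's data):

```python
import numpy as np

# Simulated ratings: six "symptoms" driven by two latent factors.
rng = np.random.default_rng(4)
n = 300
f1, f2 = rng.standard_normal((2, n))
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.standard_normal((n, 6))

# Eigenvalues of the correlation matrix form the scree used to decide
# how many factors (symptom clusters) to retain.
R = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]     # descending (scree) order
```

Two eigenvalues dominate and the rest collapse toward zero, which is the elbow that the scree plot and Minimum Average Partial procedure formalise.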

Relevance:

100.00%

Publisher:

Abstract:

The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called "omics" disciplines of the biological sciences. Such variability is uncovered by multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistics-based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n≪p constraint and, as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This problem might be solved using a statistical model-based approach where not only is the importance of each individual protein explicit, but the proteins are combined into a readily interpretable classification rule without relying on a black-box approach. Here we incorporate the statistical dimension reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
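A numpy-only sketch of the overall pipeline shape (simulated data; PCA followed by a simple nearest-centroid rule stands in for the PLS/SVM combinations compared in the work): reduce the n≪p data to a few component scores, then classify in the reduced space.

```python
import numpy as np

# Simulated n << p "proteomic" data: 60 samples, 300 features, with a
# class-mean shift in 10 features (all sizes and shifts are invented).
rng = np.random.default_rng(5)
n, p = 60, 300
labels = np.repeat([0, 1], n // 2)
X = rng.standard_normal((n, p))
X[labels == 1, :10] += 3.0

# Dimension reduction: PCA by SVD of the centred data, keep 3 scores.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# Nearest-centroid classification in the reduced space.
centroids = np.array([scores[labels == k].mean(axis=0) for k in (0, 1)])
d = np.linalg.norm(scores[:, None, :] - centroids[None, :, :], axis=2)
accuracy = (d.argmin(axis=1) == labels).mean()   # resubstitution accuracy
```

Unlike a black-box classifier, the component loadings `Vt[:3]` show directly which features drive the rule, which is the interpretability point made above.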