918 results for Least squares method
Abstract:
Levels of lignin and hydroxycinnamic acid wall components in three genera of forage grasses (Lolium, Festuca and Dactylis) have been accurately predicted by Fourier-transform infrared spectroscopy using partial least squares models correlated to analytical measurements. Different models were derived that predicted the concentrations of acid detergent lignin, total hydroxycinnamic acids, total ferulate monomers plus dimers, p-coumarate and ferulate dimers in independent spectral test data from methanol-extracted samples of perennial forage grass with accuracies of 92.8%, 86.5%, 86.1%, 59.7% and 84.7%, respectively. Analysis of model projection scores showed that the models relied generally on spectral features that are known absorptions of these compounds. Acid detergent lignin was predicted in samples of two species of energy grass (Phalaris arundinacea and Panicum virgatum) with an accuracy of 84.5%.
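As a rough illustration of the kind of partial least squares (PLS) calibration described above, the sketch below fits a PLS regression model mapping spectra to a single measured component; the synthetic data, number of components, and use of scikit-learn are assumptions for demonstration, not the authors' pipeline.

```python
# Illustrative sketch (not the authors' code): fitting a PLS regression model that
# maps FTIR spectra to a measured wall component such as acid detergent lignin.
# Data, component count and train/test split are assumed for demonstration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 600                          # hypothetical data set size
X = rng.normal(size=(n_samples, n_wavenumbers))              # FTIR absorbances (synthetic)
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)  # lignin proxy

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)                          # component count chosen arbitrarily
pls.fit(X_train, y_train)
print("R^2 on held-out spectra:", pls.score(X_test, y_test))
```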
Abstract:
* Supported by the Army Research Office under grant DAAD-19-02-10059.
Abstract:
2000 Mathematics Subject Classification: Primary: 62M10, 62J02, 62F12, 62M05, 62P05, 62P10; secondary: 60G46, 60F15.
Abstract:
2000 Mathematics Subject Classification: 65C05
Abstract:
2010 Mathematics Subject Classification: 62P15.
Abstract:
The Analytic Hierarchy Process (AHP) is one of the most popular methods used in multi-attribute decision making. It provides ratio-scale measurements of the priorities of elements on the various levels of a hierarchy. These priorities are obtained through pairwise comparisons of the elements on one level with reference to each element on the immediately higher level. The Eigenvector Method (EM) and several distance-minimizing methods, such as the Least Squares Method (LSM), the Logarithmic Least Squares Method (LLSM), the Weighted Least Squares Method (WLSM) and the Chi Squares Method (χ²M), are among the tools for computing the priorities of the alternatives. This paper studies a method for generating all the solutions of the LSM problem for 3 × 3 matrices. We observe non-uniqueness and rank reversals by presenting numerical results.
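As a minimal sketch of the LSM objective referred to above, the example below minimizes the sum of squared differences between the entries of a 3 × 3 pairwise comparison matrix and the ratios of the priority weights; the example matrix and the use of scipy.optimize.minimize are illustrative assumptions, not the paper's procedure for enumerating all solutions.

```python
# Illustrative sketch of the Least Squares Method (LSM) for a 3x3 pairwise
# comparison matrix A: minimize sum_{i,j} (a_ij - w_i / w_j)^2 subject to
# w_i > 0 and sum(w) = 1. Note: LSM can have several local minima, which is
# the non-uniqueness the paper studies; this solver returns only one of them.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 2.0, 6.0],
              [0.5, 1.0, 3.0],
              [1/6, 1/3, 1.0]])                    # hypothetical comparison matrix

def lsm_objective(w):
    ratios = np.outer(w, 1.0 / w)                  # matrix of ratios w_i / w_j
    return np.sum((A - ratios) ** 2)

res = minimize(lsm_objective, x0=np.full(3, 1/3),
               bounds=[(1e-6, 1.0)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("LSM priority vector:", res.x / res.x.sum())
```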
Abstract:
Urban tourists are recognized as one of the fastest growing segments in today's tourism markets. Monterrey, Mexico, one of the main urban destinations in the country, is currently seeking to improve its competitiveness. This research aimed to find evidence of a causal relationship between travel motivation and the perceived destination image, two variables that are important because of their influence on visitors' satisfaction. A literature review enabled the proposal of a survey instrument with theoretically based constructs, which was administered to a representative sample. Using regression and structural equation modelling by partial least squares (PLS), the main components of both variables were identified and an explanatory model of the perceived destination image as a function of travel motivation was obtained. Finally, based on the results, recommendations for the management of the urban destination are given.
Abstract:
A finite-strain solid–shell element is proposed. It is based on least-squares in-plane assumed strains and assumed natural transverse shear and normal strains. The singular value decomposition (SVD) is used to define local (integration-point) orthogonal frames of reference solely from the Jacobian matrix. The complete finite-strain formulation is derived and tested. Assumed strains obtained from least-squares fitting are an alternative to enhanced-assumed-strain (EAS) formulations and, in contrast with these, the result is an element that satisfies the patch test. There are no additional degrees of freedom, as there are in the enhanced-assumed-strain case, not even by means of static condensation. Least-squares fitting produces invariant finite-strain elements which are free of shear locking and amenable to incorporation in large-scale codes. With that goal, we use automatically generated code produced by AceGen and Mathematica. All benchmarks show excellent results, similar to the best available shell and hybrid solid elements, with significantly lower computational cost.
Abstract:
Two novelties are introduced: (i) a finite-strain semi-implicit integration algorithm compatible with current element technologies and (ii) its application to assumed-strain hexahedra. The Löwdin algorithm is adopted to obtain evolving frames applicable to finite-strain anisotropy, and a weighted least-squares algorithm is used to determine the mixed strain. Löwdin frames are very convenient for modelling anisotropic materials. Weighted least-squares fitting circumvents the use of internal degrees of freedom. Heterogeneity of element technologies introduces apparently incompatible constitutive requirements: assumed-strain and enhanced-strain elements can be formulated in terms of either the deformation gradient or the Green–Lagrange strain, many of the high-performance shell formulations are corotational, and constitutive constraints (such as incompressibility, plane stress and zero normal stress in shells) also depend on the specific element formulation. We propose a unified integration algorithm compatible with possibly all element technologies. To assess its validity, a least-squares-based hexahedral element is implemented and tested in depth. Basic linear problems as well as five finite-strain examples are inspected for correctness and competitive accuracy.
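A generic weighted least-squares fit of the kind mentioned above, which determines a field from sampled values without introducing internal degrees of freedom, can be sketched as follows; the sample points, weights, and polynomial basis are assumptions for illustration and not the element formulation itself.

```python
# Illustrative sketch of a weighted least-squares fit: approximate a field sampled
# at a few points by a low-order polynomial, weighting each sample. Points, weights
# and basis are assumptions for demonstration only.
import numpy as np

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])        # sample locations
f = np.array([1.0, 1.2, 1.9, 2.4, 3.1])          # sampled field values
w = np.array([1.0, 2.0, 2.0, 2.0, 1.0])          # weights (e.g. integration weights)

# Basis: [1, x, x^2]; solve min_c sum_i w_i (f_i - B_i c)^2 via the normal equations.
B = np.vander(x, 3, increasing=True)
W = np.diag(w)
coeffs = np.linalg.solve(B.T @ W @ B, B.T @ W @ f)
print("weighted least-squares coefficients:", coeffs)
```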
Abstract:
This paper compares the performance of the complex nonlinear least squares algorithm implemented in the LEVM/LEVMW software with the performance of a genetic algorithm in the characterization of an electrical impedance of known topology. The effect of the number of measured frequency points and of measurement uncertainty on the estimation of circuit parameters is presented. The analysis is performed on the equivalent circuit impedance of a humidity sensor.
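A minimal sketch of complex nonlinear least-squares fitting of an equivalent-circuit impedance is shown below; the assumed circuit (a series resistor with a parallel RC branch), the synthetic data, and the use of scipy.optimize.least_squares stand in for the LEVM/LEVMW software and the sensor model, which are not detailed here.

```python
# Illustrative sketch (not LEVM/LEVMW): complex nonlinear least squares fit of an
# assumed equivalent circuit Z(f) = Rs + R / (1 + j*2*pi*f*R*C) to synthetic data.
import numpy as np
from scipy.optimize import least_squares

def z_model(p, f):
    rs, r, c = p
    return rs + r / (1 + 1j * 2 * np.pi * f * r * c)

f = np.logspace(1, 5, 30)                              # measured frequency points (assumed)
true_params = np.array([50.0, 1e4, 1e-7])              # "true" Rs, R, C for the synthetic data
noise = 1 + 0.01 * np.random.default_rng(1).normal(size=f.size)
z_meas = z_model(true_params, f) * noise

def residuals(p):
    d = z_model(p, f) - z_meas
    return np.concatenate([d.real, d.imag])            # stack real and imaginary parts

fit = least_squares(residuals, x0=[10.0, 1e3, 1e-8])
print("estimated Rs, R, C:", fit.x)
```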
Abstract:
Context. We present spectroscopic ground-based observations of the early Be star HD 49330 obtained simultaneously with the CoRoT-LRA1 run, just before the burst observed in the CoRoT data. Aims. Ground-based spectroscopic observations of the early Be star HD 49330 obtained during the precursor phase and just before the start of an outburst allow us to disentangle stellar and circumstellar contributions and to identify modes of stellar pulsation in this rapidly rotating star. Methods. Time series analysis (TSA) is performed on photospheric line profiles of He I and Si III by means of the least squares method. Results. We find two main frequencies, f1 = 11.86 c d⁻¹ and f2 = 16.89 c d⁻¹, which can be associated with high-order p-mode pulsations. We also detect a frequency f3 = 1.51 c d⁻¹ which can be associated with a low-order g-mode. Moreover, we show that the stellar line profile variability changed over the spectroscopic run. These results are in agreement with the results of the CoRoT data analysis, as shown in Huat et al. (2009). Conclusions. Our study of mid- and short-term spectroscopic variability allows the identification of p- and g-modes in HD 49330. It also allows us to display changes in the line profile variability before the start of an outburst. This brings new constraints for the seismic modelling of this star.
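The least-squares time series analysis mentioned above can be sketched as fitting sinusoids at trial frequencies and keeping those that minimize the residuals; the synthetic time series and frequency grid below are assumptions for illustration, not the HD 49330 observations.

```python
# Illustrative sketch of a least-squares frequency search: for each trial frequency,
# fit A*sin(2*pi*f*t) + B*cos(2*pi*f*t) + C by linear least squares and record the
# residual; minima indicate candidate pulsation frequencies. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 20, 400))               # observation times in days (assumed)
y = 0.01 * np.sin(2 * np.pi * 11.86 * t) + 0.002 * rng.normal(size=t.size)

freqs = np.linspace(0.5, 20.0, 4000)               # trial frequencies in c/d
resid = np.empty_like(freqs)
for k, f in enumerate(freqs):
    M = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, res, *_ = np.linalg.lstsq(M, y, rcond=None)
    resid[k] = res[0] if res.size else np.sum((y - M @ coef) ** 2)

print("best frequency (c/d):", freqs[np.argmin(resid)])
```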
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries, in particular from explosions. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel for its simplicity and sufficiency for practical engineering design problems. The code uses a finite-volume formulation of the unsteady Euler equations with a second-order explicit Runge-Kutta Godunov (MUSCL) scheme. Gradients are calculated using a least-squares method with a minmod limiter. The flux solvers used are AUSM, AUSMDV and EFM. No fluid-structure coupling or chemical reactions are allowed, but the gas models can be perfect gas and JWL or JWLB for the explosive products. This report also describes the code's 'octree' mesh adaptation capability and the point-inclusion query procedures for the VCE geometry engine. Finally, some space is also devoted to describing code parallelization using the shared-memory OpenMP paradigm. The user manual for the code is to be found in the companion report 2007/13.
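The least-squares gradient calculation with a minmod limiter mentioned above can be illustrated in a deliberately simplified form: solve a small least-squares system from neighbour cell-centre differences, then limit the slopes; the cell layout and the limiter form below are generic assumptions, not OctVCE's implementation.

```python
# Illustrative sketch (not OctVCE): least-squares gradient reconstruction at a cell
# from its neighbours, followed by a minmod-style limiting of the directional slopes.
import numpy as np

def ls_gradient(xc, uc, xn, un):
    """Least-squares gradient of u at cell centre xc from neighbour centres xn."""
    d = xn - xc                        # displacement vectors to neighbours (m x 2)
    du = un - uc                       # value differences (m,)
    g, *_ = np.linalg.lstsq(d, du, rcond=None)
    return g

def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

# Hypothetical cell with four unit-distance neighbours on a Cartesian mesh.
xc, uc = np.array([0.0, 0.0]), 1.0
xn = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
un = np.array([1.5, 0.6, 1.2, 0.9])

g = ls_gradient(xc, uc, xn, un)
gx = minmod(un[0] - uc, uc - un[1])    # limited slope in x (one-sided differences)
gy = minmod(un[2] - uc, uc - un[3])    # limited slope in y
print("unlimited gradient:", g, " limited gradient:", [gx, gy])
```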
Abstract:
The mathematical model of a real system provides knowledge of its dynamic behaviour and is commonly used in engineering problems. Sometimes the parameters used by the model are unknown or imprecise. Ageing and material wear are factors to take into account, since they can change the behaviour of the real system and may make a new estimation of its parameters necessary. To solve this problem, software developed by MathWorks, namely Matlab and Simulink, is used together with the open-source Arduino hardware platform. From data acquired from the real system, curve fitting by the least squares method is applied in order to bring the simulated model closer to the model of the real system. The developed system allows new parameter values to be obtained in a simple and effective way, with a view to a better approximation of the real system under study. The solution found is validated using different input signals applied to the system, and its results are compared with the results of the new model obtained. The performance of the solution is evaluated by means of the sum of squared errors between results obtained by simulation and results obtained experimentally from the real system.
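Independently of the Matlab/Simulink/Arduino toolchain, the curve-fitting step described above can be illustrated by a least-squares fit of a first-order step response to measured samples; the model form, parameter values, and synthetic data below are assumptions for demonstration.

```python
# Illustrative sketch (not the Matlab/Simulink implementation): least-squares fit of
# a first-order step response y(t) = K * (1 - exp(-t / tau)) to measured samples,
# with the sum of squared errors used as the quality measure.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, K, tau):
    return K * (1.0 - np.exp(-t / tau))

t = np.linspace(0, 5, 100)                                   # time samples (assumed)
y_meas = step_response(t, 2.0, 0.8) + 0.05 * np.random.default_rng(2).normal(size=t.size)

(K_est, tau_est), _ = curve_fit(step_response, t, y_meas, p0=[1.0, 1.0])
sse = np.sum((y_meas - step_response(t, K_est, tau_est)) ** 2)   # sum of squared errors
print(f"K = {K_est:.3f}, tau = {tau_est:.3f}, SSE = {sse:.4f}")
```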
Abstract:
To determine whether the slope of a maximal bronchial challenge test (in which FEV1 falls by over 50%) could be extrapolated from a standard bronchial challenge test (in which FEV1 falls by up to 20%), 14 asthmatic children performed a single maximal bronchial challenge test with methacholine (dose range: 0.097–30.08 µmol) by the dosimeter method. Maximal dose-response curves were included according to the following criteria: (1) at least one more dose beyond a fall in FEV1 ≥ 20%; and (2) a maximal fall in FEV1 ≥ 50%. PD20 FEV1 was calculated, and the slopes of the early part of the dose-response curve (standard dose-response slopes) and of the entire curve (maximal dose-response slopes) were calculated by two methods: the two-point slope (DRR) and the least squares slope (LSS), in %FEV1 · µmol⁻¹. Maximal dose-response slopes were compared with the corresponding standard dose-response slopes by a paired Student's t test after logarithmic transformation of the data; the goodness of fit of the LSS was also determined. Maximal dose-response slopes were significantly different (p < 0.0001) from those calculated on the early part of the curve: DRR20% (91.2 ± 2.7 ΔFEV1% · µmol⁻¹) was 2.88 times higher than DRR50% (31.6 ± 3.4 ΔFEV1% · µmol⁻¹), and LSS20% (89.1 ± 2.8 ΔFEV1% · µmol⁻¹) was 3.10 times higher than LSS50% (28.8 ± 1.5 ΔFEV1% · µmol⁻¹). The goodness of fit of LSS50% was significant in all cases, whereas LSS20% failed to be significant in one case. These results suggest that maximal dose-response slopes cannot be predicted from the data of standard bronchial challenge tests.
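As an illustration of the two slope measures discussed above, the sketch below computes a two-point slope (DRR) and a least-squares slope (LSS) for a cumulative dose versus fall-in-FEV1 curve; the dose and response values are synthetic and do not reproduce the study data.

```python
# Illustrative sketch: two-point dose-response slope (DRR) and least-squares slope
# (LSS) of a cumulative dose vs. fall-in-FEV1 curve. Values are synthetic.
import numpy as np

dose = np.array([0.097, 0.39, 1.56, 6.25, 15.0, 30.08])     # µmol (assumed)
fall_fev1 = np.array([2.0, 5.0, 12.0, 24.0, 38.0, 55.0])    # % fall in FEV1 (assumed)

# Two-point slope: last point minus first point, divided by the dose interval.
drr = (fall_fev1[-1] - fall_fev1[0]) / (dose[-1] - dose[0])

# Least-squares slope: simple linear regression of fall in FEV1 on dose.
slope, intercept = np.polyfit(dose, fall_fev1, 1)

print(f"DRR = {drr:.2f} %FEV1 per µmol")
print(f"LSS = {slope:.2f} %FEV1 per µmol (intercept {intercept:.2f})")
```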
Abstract:
Geographic information systems give us the possibility to analyze, produce, and edit geographic information. However, these systems fall short in the analysis and support of complex spatial problems. Therefore, when a spatial problem, such as land use management, requires a multi-criteria perspective, multi-criteria decision analysis is embedded into spatial decision support systems. The analytic hierarchy process is one of many multi-criteria decision analysis methods that can be used to support these complex problems. Using its capabilities, we develop a spatial decision support system to support land use management. Land use management can involve a broad spectrum of spatial decision problems. The developed decision support system had to accept as input various formats and types of data, in raster or vector format, and the vector data could be of polygon, line, or point type. The support system was designed to perform its analysis for the Zambezi River Valley in Mozambique, the study area. The possible solutions for the emerging problems had to cover the entire region. This required the system to process large sets of data and to adjust constantly to the needs of new problems. The developed decision support system is able to process thousands of alternatives using the analytic hierarchy process and to produce an output suitability map for the problems faced.