900 results for weighted least squares

Relevance:

100.00%

Publisher:

Abstract:

High-throughput techniques are necessary to efficiently screen potential lignocellulosic feedstocks for the production of renewable fuels, chemicals, and bio-based materials, reducing experimental time and expense while supplanting tedious, destructive methods. The ratio of lignin syringyl (S) to guaiacyl (G) monomers is routinely quantified as a way to probe biomass recalcitrance. Mid-infrared and Raman spectroscopy have been shown to produce robust partial least squares models for predicting lignin S/G ratios in a diverse group of Acacia and eucalypt trees. The most accurate Raman model has now been used to predict the S/G ratio of 269 unknown Acacia and eucalypt feedstocks. This study demonstrates the application of a partial least squares model, built from Raman spectral data and lignin S/G ratios measured by pyrolysis/molecular beam mass spectrometry (pyMBMS), to the prediction of S/G ratios in an unknown data set. The predicted S/G ratios calculated by the model were averaged by plant species, and the means did not differ from the pyMBMS ratios at the 95% confidence level. Pairwise comparisons within each data set were employed to assess statistical differences between biomass species. While some pairwise comparisons failed to differentiate between species, the acacias in both data sets clearly display significant differences in their S/G composition which distinguish them from the eucalypts. This research shows the power of Raman spectroscopy to supplant tedious, destructive methods for the evaluation of the lignin S/G ratio of diverse plant biomass materials. © 2015, The Author(s).
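A calibration of this kind can be sketched with a minimal NIPALS-style PLS1 regression in plain NumPy. The synthetic "spectra" below stand in for the Raman data, and all sizes and variable names are illustrative assumptions, not values from the study:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS-style PLS1: returns regression coefficients b."""
    X = X - X.mean(axis=0)          # centre predictors
    y = y - y.mean()                # centre response
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)      # weight vector
        t = X @ w                   # scores
        tt = t @ t
        p = X.T @ t / tt            # X loadings
        q = (y @ t) / tt            # y loading
        X = X - np.outer(t, p)      # deflate before the next component
        y = y - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # Coefficients mapping centred X to centred y: b = W (P^T W)^{-1} q
    return W @ np.linalg.solve(P.T @ W, Q)

# Synthetic "spectra": 60 samples x 120 channels driven by 2 latent factors.
rng = np.random.default_rng(0)
T = rng.normal(size=(60, 2))
loadings = rng.normal(size=(2, 120))
X = T @ loadings + 0.01 * rng.normal(size=(60, 120))
y = T @ np.array([1.5, -0.8])       # a stand-in "S/G ratio" response

b = pls1_fit(X, y, n_components=2)
y_hat = (X - X.mean(axis=0)) @ b + y.mean()
```

Because the response here is driven by the same two latent factors as the spectra, two PLS components recover it almost exactly, which is the property the calibration models in the abstract exploit.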

Urban tourists are one of the fastest-growing segments in today's tourism markets. Monterrey (Mexico), one of the country's main urban destinations, is currently seeking to improve its competitiveness. This research set out to find evidence of a causal relationship between travel motivation and the perceived destination image, two variables that are important because of their influence on visitor satisfaction. A literature review supported the proposal of theoretically grounded constructs, integrated into an instrument for collecting data through a survey of a representative sample. Using regression and structural equation modelling by partial least squares (PLS), the main components of both variables were identified and an explanatory model of the perceived destination image as a function of travel motivation was obtained. Finally, recommendations for managing the urban destination are given on the basis of the results.

A finite-strain solid–shell element is proposed. It is based on least-squares in-plane assumed strains and assumed natural transverse shear and normal strains. The singular value decomposition (SVD) is used to define local (integration-point) orthonormal frames of reference solely from the Jacobian matrix. The complete finite-strain formulation is derived and tested. Assumed strains obtained from least-squares fitting are an alternative to enhanced-assumed-strain (EAS) formulations and, in contrast with these, yield an element that satisfies the patch test. No additional degrees of freedom are introduced, as is the case with EAS formulations, even by means of static condensation. Least-squares fitting produces invariant finite-strain elements which are free of shear locking and amenable to incorporation in large-scale codes. With that goal, we use code generated automatically by AceGen and Mathematica. All benchmarks show excellent results, similar to the best available shell and hybrid solid elements, at significantly lower computational cost.
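The SVD construction of a local frame can be illustrated in a few lines. The 3×2 Jacobian below is a generic surface Jacobian chosen for illustration, not taken from the element formulation itself:

```python
import numpy as np

def local_frame(J):
    """Orthonormal frame at an integration point from a 3x2 Jacobian.

    The first two columns of U span the tangent plane of the mid-surface
    and the third column is the unit normal; the frame is defined solely
    by J, as in SVD-based solid-shell formulations.
    """
    U, _, _ = np.linalg.svd(J)      # full SVD: J = U S V^T, U is 3x3
    return U

# Jacobian of a slightly warped patch at one integration point (illustrative).
J = np.array([[1.0, 0.1],
              [0.0, 1.0],
              [0.2, 0.0]])
R = local_frame(J)
```

By construction `R` is orthogonal and its third column is orthogonal to both columns of `J`, so it serves as a frame of reference that rotates consistently with the element geometry.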


This paper compares the performance of the complex nonlinear least squares algorithm implemented in the LEVM/LEVMW software with the performance of a genetic algorithm in the characterization of an electrical impedance of known topology. The effect of the number of measured frequency points and of measurement uncertainty on the estimation of circuit parameters is presented. The analysis is performed on the equivalent circuit impedance of a humidity sensor.
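A complex nonlinear least-squares fit of this kind can be reproduced with SciPy. The simple parallel R-C sensor model and the log-scaled parameters below are assumptions made for illustration, not the equivalent circuit used in the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(log_p, w):
    """Impedance of a parallel R-C branch; parameters given in log space."""
    R, C = np.exp(log_p)
    return R / (1 + 1j * w * R * C)

def residuals(log_p, w, z_meas):
    # Stack real and imaginary parts so the solver sees a real residual vector.
    d = z_model(log_p, w) - z_meas
    return np.concatenate([d.real, d.imag])

w = 2 * np.pi * np.logspace(1, 5, 40)          # measured frequency points
true = np.log([1e3, 1e-7])                     # R = 1 kOhm, C = 100 nF
z_meas = z_model(true, w)                      # noise-free "measurements"

fit = least_squares(residuals, x0=np.log([300.0, 1e-8]), args=(w, z_meas))
R_est, C_est = np.exp(fit.x)
```

The effect of the number of frequency points and of added measurement noise, the comparison studied in the paper, can then be explored by shortening `w` or perturbing `z_meas`.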

The central aim of the research undertaken in this PhD thesis is the development of a model for simulating water droplet movement on a leaf surface and the comparison of the model's behaviour with experimental observations. A series of five papers is presented to explain systematically how this droplet modelling work was realised. Knowing the path of a droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially, a laser scanner is used to capture the surface characteristics of two types of leaves in the form of a large scattered data set. After the leaf surface boundary is identified, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach to leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed by numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces, including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory, in the context of droplet movement, to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface.
Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
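The flavour of such a surface fit can be sketched with SciPy's off-the-shelf scattered-data interpolators. A synthetic smooth "leaf" height function stands in for the scanned data here; the thesis builds its own Clough-Tocher/RBF hybrid on a triangulation, so this is only an analogy using the two ingredient methods:

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator, RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(300, 2))      # scattered "scan" points
height = np.sin(np.pi * pts[:, 0]) * np.cos(np.pi * pts[:, 1])  # leaf height

# Piecewise-cubic C1 surface on the Delaunay triangulation of the points.
ct = CloughTocher2DInterpolator(pts, height)

# Smooth radial-basis surface through the same data.
rbf = RBFInterpolator(pts, height)

query = np.array([[0.4, 0.6]])                  # interior evaluation point
z_ct = ct(query)[0]
z_rbf = rbf(query)[0]
z_true = np.sin(np.pi * 0.4) * np.cos(np.pi * 0.6)
```

Both interpolants reproduce the underlying smooth surface closely away from the boundary, which is the property a droplet simulation needs from its virtual leaf.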

Due to knowledge gaps in urban stormwater quality processes, an in-depth understanding of model uncertainty can enhance decision making. Uncertainty in stormwater quality models can originate from a range of sources, such as the complexity of urban rainfall-runoff-stormwater pollutant processes and the paucity of observed data. Unfortunately, studies of epistemic uncertainty, which arises from the simplification of reality, are limited, and such uncertainty is often deemed largely unquantifiable. This paper presents a statistical modelling framework for ascertaining the epistemic uncertainty associated with pollutant wash-off under a regression modelling paradigm, using Ordinary Least Squares Regression (OLSR) and Weighted Least Squares Regression (WLSR) methods with a Bayesian/Gibbs sampling statistical approach. The study results confirmed that WLSR, assuming probability-distributed data, provides more realistic uncertainty estimates of the observed and predicted wash-off values than OLSR modelling. It was also noted that the Bayesian/Gibbs sampling approach is superior to the classical statistical and deterministic approaches most commonly adopted in water quality modelling. The study outcomes confirmed that the prediction error associated with wash-off replication is relatively high due to limited data availability. The uncertainty analysis also highlighted the variability of the wash-off modelling coefficient k as a function of complex physical processes, primarily influenced by surface characteristics and rainfall intensity.
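The core contrast between OLSR and WLSR can be written down in a few lines of NumPy. The toy wash-off regression and the weights below are illustrative, not the study's data:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares via numpy's lstsq."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def wls(X, y, w):
    """Weighted least squares: solve (X^T W X) beta = X^T W y."""
    Xw = X * w[:, None]            # scale each row by its weight
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

# Toy wash-off regression: y = b0 + b1 * rainfall_intensity.
X = np.column_stack([np.ones(6), np.array([5., 10., 20., 40., 60., 80.])])
beta_true = np.array([0.3, 0.05])
y = X @ beta_true                  # noise-free observations

# Observations at high intensity are assumed noisier, so they get lower weight.
w = np.array([4.0, 4.0, 2.0, 1.0, 0.5, 0.5])

b_ols = ols(X, y)
b_wls = wls(X, y, w)
```

On noise-free data both estimators agree; once heteroscedastic noise is added, the weighted fit down-weights the unreliable points and its parameter uncertainty intervals become more realistic, which is the effect the study quantifies with Gibbs sampling.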

A model comprising several servers, each equipped with its own queue and with possibly different service speeds, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive at a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is the one that balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for nonexponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms utilize server idle-time measurements which are sent periodically to the central job scheduler. A model is developed for these measurements, and the result mentioned above is used to cast the problem as one of finding a projection of the root of an affine function when only noisy values of the function can be observed.
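The optimisation being described can be illustrated numerically for two M/M/1 servers (the rates below are arbitrary illustrative choices): a brute-force search finds the split of the generic stream that minimises the mean number of jobs in the system, and the resulting optimum is interior, not a naive all-or-nothing allocation:

```python
import numpy as np

mu  = np.array([2.0, 1.0])    # service speeds of the two servers
lam = np.array([0.5, 0.2])    # dedicated Poisson arrival rates
Lam = 0.6                     # rate of the generic (splittable) stream

def mean_jobs(p):
    """Total mean number in system for split (p, 1-p), two M/M/1 queues."""
    rho = (lam + np.array([p, 1.0 - p]) * Lam) / mu   # server utilisations
    if np.any(rho >= 1.0):
        return np.inf                                  # unstable split
    return float(np.sum(rho / (1.0 - rho)))

# Brute-force search over the splitting probability of the generic stream.
grid = np.linspace(0.0, 1.0, 10001)
costs = np.array([mean_jobs(p) for p in grid])
p_opt = grid[np.argmin(costs)]

# Idle fractions at the optimum: unequal, reflecting the unequal speeds.
idle = 1.0 - (lam + np.array([p_opt, 1.0 - p_opt]) * Lam) / mu
```

By Little's law, minimising the mean number in system at fixed total arrival rate minimises the average response time; the unequal idle fractions at the optimum hint at the weighted (rather than plain) idle-time balance characterised in the abstract.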

The weighted-least-squares method based on the Gauss-Newton minimization technique is used for parameter estimation in water distribution networks. The parameters considered are element resistances (single and/or group resistances, Hazen-Williams coefficients, pump specifications) and consumptions (for single or multiple loading conditions). The measurements considered are nodal pressure heads, pipe flows, head losses in pipes, and consumptions/inflows. An important feature of the study is a detailed consideration of the influence of different choices of weights on parameter estimation, for error-free data, noisy data, and noisy data that include bad data. The method is applied to three different networks, including a real-life problem.
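The estimation scheme can be sketched for a single pipe whose head loss follows h = r·qⁿ. The two-parameter setup and the numbers below are illustrative, not one of the paper's networks:

```python
import numpy as np

def gauss_newton_wls(q, h_meas, w, r0, n0, iters=30):
    """Weighted Gauss-Newton fit of h = r * q**n to measured head losses."""
    theta = np.array([r0, n0], dtype=float)
    for _ in range(iters):
        r, n = theta
        res = r * q**n - h_meas                   # weighted residual model
        J = np.column_stack([q**n,                # dh/dr
                             r * q**n * np.log(q)])  # dh/dn
        Jw = J * w[:, None]
        # Gauss-Newton step from the weighted normal equations.
        theta -= np.linalg.solve(J.T @ Jw, Jw.T @ res)
    return theta

q = np.array([1.2, 1.6, 2.0, 2.5, 3.0])   # measured pipe flows
w = np.array([1.0, 1.0, 2.0, 2.0, 4.0])   # measurement weights
h_meas = 4.0 * q**1.852                    # noise-free head losses

r_est, n_est = gauss_newton_wls(q, h_meas, w, r0=3.5, n0=1.7)
```

On error-free data the iteration recovers the resistance and exponent exactly; with noisy or bad data the choice of weights `w` governs which measurements dominate the estimate, which is the sensitivity the paper examines in detail.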

A chlorine residual must be maintained at all points of a distribution system that is supplied with chlorine as a disinfectant. The propagation and level of chlorine in a distribution system are affected by both bulk and pipe-wall reactions, and field determination of the wall reaction parameter is known to be difficult. The source strength of chlorine required to maintain a specified chlorine residual at a target node is also an important parameter. The inverse model presented in the paper determines these water quality parameters, associated with different reaction kinetics, for single pipes or for groups of pipes. The weighted-least-squares method based on the Gauss-Newton minimization technique is used for the estimation of these parameters. The validation and application of the inverse model are illustrated with an example pipe distribution system under steady state. A generalized procedure for handling noisy and bad (abnormal) data is suggested, which can be used to estimate these parameters more accurately. The developed inverse model is useful for water supply agencies to calibrate their water distribution systems and to improve their operational strategies for maintaining water quality.
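For the bulk-reaction part, a minimal version of such an estimate is a weighted linear fit of first-order decay data. The numbers below are illustrative; the paper's inverse model additionally handles wall reactions and network transport:

```python
import numpy as np

# First-order bulk decay: C(t) = C0 * exp(-kb * t)  ->  ln C = ln C0 - kb * t
t = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 24.0])    # sample times, hours
C = 1.2 * np.exp(-0.1 * t)                         # mg/L, noise-free samples

# Weighted linear least squares in log space; late, low-concentration samples
# are down-weighted since their relative error is usually larger in practice.
w = np.array([4.0, 4.0, 2.0, 2.0, 1.0, 1.0])
X = np.column_stack([np.ones_like(t), -t])         # design for [ln C0, kb]
Xw = X * w[:, None]
lnC0, kb = np.linalg.solve(X.T @ Xw, Xw.T @ np.log(C))
C0 = np.exp(lnC0)
```

On clean data the fit returns the initial concentration and the bulk decay coefficient exactly; the same weighted machinery, applied through a network hydraulic model, is what the Gauss-Newton inverse model iterates.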

Monthly estimates of the abundance of yellowfin tuna by age group and region within the eastern Pacific Ocean during 1970-1988 are made using purse-seine catch rates, length-frequency samples, and results from cohort analysis. The number of individuals of each age group caught in each logged purse-seine set is estimated using the tonnage from that set and the length-frequency distribution from the "nearest" length-frequency sample(s). Nearest refers to the length-frequency sample(s) closest to the purse-seine set in time, distance, and set type (dolphin associated, floating-object associated, skipjack associated, none of these, and some combinations). Catch rates are initially calculated as the estimated number of individuals of the age group caught per hour of searching. Then, to remove the effects of set type and vessel speed, they are standardized using separate weighted generalized linear models for each age group. The standardized catch rates at the center of each 2.5° quadrangle-month are estimated using locally weighted least-squares regressions on latitude, longitude and date, and then combined into larger regions. Catch rates within these regions are converted to numbers of yellowfin using the mean age composition from cohort analysis. The variances of the abundance estimates within regions are large for 0-, 1-, and 5-year-olds, but small for 1.5- to 4-year-olds, except during periods of low fishing activity. Mean annual catch-rate estimates for the entire eastern Pacific Ocean are significantly positively correlated with mean abundance estimates from cohort analysis for age groups ranging from 1.5 to 4 years old. Catch-rate indices of abundance by age are expected to be useful in conjunction with data on reproductive biology to estimate total egg production within regions. The estimates may also be useful in understanding geographic and temporal variations in age-specific availability to purse seiners, as well as age-specific movements.
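The locally weighted least-squares smoothing step can be sketched as a tricube-weighted local linear regression. The study regresses on latitude, longitude and date; a one-dimensional toy series is used here for brevity:

```python
import numpy as np

def loess_point(x0, x, y, span=0.5):
    """Locally weighted linear least-squares estimate at x0 (LOESS-style)."""
    n_local = max(2, int(span * len(x)))       # size of the local window
    d = np.abs(x - x0)
    idx = np.argsort(d)[:n_local]              # nearest neighbours of x0
    h = d[idx].max()
    w = (1 - (d[idx] / h) ** 3) ** 3           # tricube weights
    X = np.column_stack([np.ones(n_local), x[idx] - x0])
    Xw = X * w[:, None]
    beta = np.linalg.solve(X.T @ Xw, Xw.T @ y[idx])
    return beta[0]                             # local fit evaluated at x0

x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x                              # exactly linear "catch rates"
est = loess_point(4.3, x, y)
```

On exactly linear data the local fit reproduces the underlying trend regardless of the weights; on real catch-rate surfaces the tricube weighting lets the estimate follow smooth spatial and temporal variation without imposing a global form.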

This work presents a theoretical and numerical study of the errors that occur in gradient computations on unstructured meshes built from the Voronoi diagram, meshes that are also generated by the Delaunay triangulation. The meshes adopted in this work were Cartesian meshes and triangular meshes, the latter generated by dividing a square into two or four equal triangles. For this analysis, three distinct methodologies for computing gradients were chosen: the Green-Gauss method, the least-squares (minimum squared residual) method, and the corrected projected-gradient average method. The text follows two main threads: showing that the error equations given by the gradients can be similar, but with opposite signs, at computation points in neighbouring volumes; and showing that the order of the error of the analytical equations can be improved on uniform meshes compared with non-uniform ones in one-dimensional cases, and when analysed at the face of such neighbouring volumes in two-dimensional cases.
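The Green-Gauss construction referred to above can be checked on a single square control volume. The unit cell and the linear test field below are illustrative; for a linear field the method is exact, so any discrepancy on real meshes is the discretization error the study analyses:

```python
import numpy as np

# Unit square cell centred at the origin: faces E, N, W, S.
normals = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])  # outward units
centres = 0.5 * normals                                          # face centres
lengths = np.ones(4)          # each face has length 1
area = 1.0                    # cell area

phi = lambda p: 2.0 * p[0] + 3.0 * p[1]          # linear scalar field

# Green-Gauss: grad(phi) ~ (1/A) * sum over faces of phi_f * n_f * L_f
phi_f = np.array([phi(c) for c in centres])
grad = (phi_f[:, None] * normals * lengths[:, None]).sum(axis=0) / area
```

The recovered gradient is exactly (2, 3); on non-uniform Voronoi cells the face values are interpolated rather than exact, and the sign-opposed error terms in neighbouring volumes described above appear.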

The discrimination of phases that are practically indistinguishable under the reflected-light optical microscope or the scanning electron microscope (SEM) is one of the classic problems of ore microscopy. To address this problem, the technique of co-localized microscopy has recently been employed, which combines two microscopy modalities: optical microscopy and scanning electron microscopy. The goal of the technique is to provide a multimodal microscopy image, making it possible to identify, in mineral samples, phases that could not be distinguished using a single modality, thereby overcoming the individual limitations of the two systems. The registration method previously available in the literature for fusing optical and SEM images is a laborious procedure that is extremely dependent on operator interaction, since it involves calibrating the system against a standard grid in every image-acquisition routine. For this reason the existing technique is impractical. This work proposes a methodology for automating the registration of optical and SEM microscopy images, so as to refine and simplify the use of co-localized microscopy. The proposed method can be divided into two procedures: obtaining the transformation, and registering the images using this transformation. Obtaining the transformation involves, first, pre-processing the image pairs to perform a coarse registration between the images of each pair. Next, homologous points are obtained in the optical and SEM images. For this, two methods were used: the first based on the SIFT algorithm, and the second defined by scanning for the maximum value of the correlation coefficient. In the following step the transformation is computed.

Two distinct approaches were employed: the local weighted mean (LWM) and weighted least squares with orthogonal polynomials (MQPPO). LWM takes as input so-called pseudo-homologous points, points that are forcibly distributed in a regular pattern over the reference image and that reveal, in the image to be registered, the local relative displacements between the images. These pseudo-homologous points can be obtained either by SIFT or by the correlation-coefficient method. MQPPO, on the other hand, takes a set of points with their natural distribution. The registration results were evaluated using the correlation between the registered images as the metric. The proposed SIFT-LWM and SIFT-correlation variants gave results slightly better than those of the method based on the standard grid and LWM. The proposal thus not only drastically reduces operator intervention but also yields more accurate results. On the other hand, the method based on the transformation provided by weighted least squares with orthogonal polynomials produced results inferior to those of the standard-grid method.
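The transformation-estimation step can be sketched as a weighted least-squares fit of an affine mapping to homologous point pairs. Plain monomials are used instead of orthogonal polynomials, and the points and weights are synthetic, so this is only a simplified stand-in for the MQPPO approach:

```python
import numpy as np

def fit_affine_wls(src, dst, w):
    """Weighted LS fit of dst ~ src @ A + b from homologous point pairs."""
    X = np.column_stack([src, np.ones(len(src))])   # design rows [x, y, 1]
    Xw = X * w[:, None]
    # Weighted normal equations, solved once for both output coordinates.
    return np.linalg.solve(X.T @ Xw, Xw.T @ dst)    # shape (3, 2)

rng = np.random.default_rng(2)
src = rng.uniform(0, 512, size=(40, 2))             # optical-image points
A_true = np.array([[1.02, 0.05], [-0.04, 0.98]])    # slight scale/shear
b_true = np.array([12.0, -7.5])                     # translation
dst = src @ A_true + b_true                         # matching SEM points

w = np.ones(40)                                     # e.g. match confidences
P = fit_affine_wls(src, dst, w)
dst_hat = np.column_stack([src, np.ones(40)]) @ P
```

With clean correspondences the affine parameters are recovered exactly; in practice the weights would down-weight dubious SIFT or correlation matches, and a higher-order polynomial basis would capture the non-affine distortions between the modalities.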

Liu, Yonghuai. Automatic 3D free-form shape matching using the graduated assignment algorithm. Pattern Recognition, vol. 38, no. 10, pp. 1615–1631, 2005.

Accurate knowledge of traffic demands in a communication network enables or enhances a variety of traffic engineering and network management tasks of paramount importance for operational networks. Directly measuring a complete set of these demands is prohibitively expensive because of the huge amounts of data that must be collected and the performance impact that such measurements would impose on the regular behavior of the network. As a consequence, we must rely on statistical techniques to produce estimates of actual traffic demands from partial information. The performance of such techniques is limited, however, both by the partial information they rely on and by the heavy computations they incur, which constrain their convergence behavior. In this paper we study strategies to improve the convergence of a powerful statistical technique based on an iterative Expectation-Maximization (EM) algorithm. First, we analyze modeling approaches to generating starting points. We call these starting points informed priors, since they are obtained using actual network information such as packet traces and SNMP link counts. Second, we provide a very fast variant of the EM algorithm which extends its computation range, increasing its accuracy and decreasing its dependence on the quality of the starting point. Finally, we study the convergence characteristics of our EM algorithm and compare it against a recently proposed Weighted Least Squares approach.
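The flavour of the EM iteration for this problem can be conveyed by the classic multiplicative update for Poisson origin-destination flows observed through link counts. The three-link toy network and the uniform starting point below are illustrative choices, not the informed priors or the fast EM variant of the paper:

```python
import numpy as np

# Routing matrix: rows = links, columns = origin-destination flows.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])      # true OD demands
y = A @ x_true                     # observed SNMP-style link counts

# EM-style multiplicative update for Poisson OD flows: each demand is
# rescaled by the ratio of observed to expected counts on its links.
x = np.ones(2)                     # starting point (an uninformed prior)
for _ in range(500):
    ratio = y / (A @ x)            # observed over expected link counts
    x *= (A.T @ ratio) / A.sum(axis=0)

x_hat = x
```

After a few hundred iterations the expected link counts match the observations; the quality of the starting point controls how quickly this happens, which is precisely why informed priors pay off in larger, underdetermined networks.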