893 results for principal component regression
Abstract:
Flos Chrysanthemum is a generic name for a particular group of edible plants, which also have medicinal properties. There are, in fact, twenty to thirty different cultivars, which are commonly used in beverages and for medicinal purposes. In this work, four Flos Chrysanthemum cultivars, Hangju, Taiju, Gongju, and Boju, were collected, and chromatographic fingerprints were used to distinguish and assess these cultivars for quality control purposes. Chromatographic fingerprints contain chemical information but often suffer from baseline drift and peak shifts, which complicate data processing; adaptive iteratively reweighted penalized least squares and correlation optimized warping were therefore applied to correct the baselines and align the fingerprint peaks. The adjusted data were submitted to unsupervised and supervised pattern recognition methods. Principal component analysis was used to qualitatively differentiate the Flos Chrysanthemum cultivars. Partial least squares, continuum power regression, and K-nearest neighbors were used to predict the unknown samples. Finally, the elliptic joint confidence region method was used to evaluate the prediction ability of these models. The partial least squares and continuum power regression methods were shown to best represent the experimental results.
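As a rough illustration of the unsupervised step described above, here is a minimal PCA sketch in NumPy. The data are synthetic stand-ins for fingerprint intensities, not the study's measurements; in the paper the groups would be cultivars and the columns chromatographic peaks.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered data onto its leading principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Two synthetic "cultivar" groups whose mean intensity differs in every channel
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.1, size=(10, 5))
group_b = rng.normal(1.0, 0.1, size=(10, 5))
scores = pca_scores(np.vstack([group_a, group_b]))
# PC1 should separate the two groups cleanly
separation = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
```

Plotting the first two score columns against each other is the usual way such qualitative differentiation is visualised.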
Abstract:
A combined data matrix consisting of high performance liquid chromatography-diode array detector (HPLC-DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than those obtained from either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated with the use of principal component analysis (PCA) and linear discriminant analysis (LDA); essentially, the qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Another three methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN), and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct; prediction set data). Additionally, multiple linear regression (MLR) was utilized to explore any relationship between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals as well as some metallic pollutants were related to the organic compounds on the basis of their concentrations.
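The KNN step above can be sketched as a plain nearest-neighbour vote on a toy "combined matrix"; this is a generic implementation on synthetic data, not the authors' code, and the three classes stand in for the three provinces.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test row by majority vote among its k nearest training rows."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
        nearest = y_train[np.argsort(dist)[:k]]         # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Toy fused matrix: rows = samples, columns = HPLC peaks plus metal levels
rng = np.random.default_rng(1)
X_train = np.vstack([rng.normal(m, 0.2, size=(15, 6)) for m in (0.0, 1.0, 2.0)])
y_train = np.repeat([0, 1, 2], 15)
X_test = np.array([[0.1] * 6, [1.05] * 6, [1.9] * 6])
print(knn_predict(X_train, y_train, X_test))  # → [0 1 2]
```

In practice the two instrument blocks would be scaled before concatenation so that neither dominates the distance metric.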
Abstract:
In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of the process data enables better control of the processes to obtain a high quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, showing hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical; thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of the dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet.
With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects of either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
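A PLS calibration model of the kind mentioned above (spectra in, coating thickness out) can be sketched with the NIPALS algorithm. The data below are synthetic and the component count is illustrative; this is a generic PLS1 implementation, not the thesis code.

```python
import numpy as np

def pls1(X, y, n_components=5):
    """PLS1 regression fitted with the NIPALS algorithm; returns a predictor."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector
        t = Xc @ w                        # latent scores
        p = Xc.T @ t / (t @ t)            # X loadings
        q = (yc @ t) / (t @ t)            # y loading
        Xc = Xc - np.outer(t, p)          # deflate X
        yc = yc - q * t                   # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # regression vector
    return lambda Xn: (Xn - x_mean) @ B + y_mean

# Synthetic "spectra" with a noiseless linear response
rng = np.random.default_rng(2)
X = rng.normal(size=(40, 20))
beta = np.zeros(20); beta[3], beta[7] = 1.0, -0.5
y = X @ beta
predict = pls1(X, y)
r2 = 1 - np.sum((predict(X) - y) ** 2) / np.sum((y - y.mean()) ** 2)
```

In a real calibration the number of components would be chosen by cross-validation rather than fixed in advance.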
Abstract:
The goal of this research was to establish the necessary conditions under which individuals are prepared to commit themselves to quality assurance work in the organisation of a Polytechnic. The conditions were studied using four main concepts: awareness of quality, commitment to the organisation, leadership, and work welfare. First, individuals were asked to describe these four concepts. Then, relationships between the concepts were analysed in order to establish the conditions for the commitment of an individual towards quality assurance work (QA). The study group comprised the entire personnel of Helsinki Polytechnic, of which 341 (44.5%) individuals participated. Mixed methods were used as the methodological base. A questionnaire and interviews were used as the research methods. The data from the interviews were used for the validation of the results, as well as for completing the analysis. The results of these interviews and analyses were integrated using the concurrent nested design method. In addition, the questionnaire was used to separately analyse the impressions and meanings of the awareness of quality and leadership, because, according to the pre-understanding, impressions of phenomena expressed in terms of reality have an influence on the commitment to QA. In addition to statistical figures, principal component analysis was used as a description method. For comparisons between groups, one-way analysis of variance and effect size analysis were used. As explanatory methods, forward regression analysis and structural modelling were applied. As a result of the research it was found that 51% of the conditions necessary for a commitment to QA were explained by an individual's experience/belief that QA was a method of development, that QA was possible to participate in, and that the meaning of quality included both product and process qualities.
If analysed separately, other main concepts (commitment to the organisation, leadership and work welfare) played only a small part in explaining an individual’s commitment. In the context of this research, a structural path model of the main concepts was built. In the model, the concepts were interconnected by paths created as a result of a literature search covering the main concepts, as well as a result of an analysis of the empirical material of this thesis work. The path model explained 46% of the necessary conditions under which individuals are prepared to commit themselves to QA. The most important path for achieving a commitment stemmed from product and system quality emanating from the new goals of the Polytechnic, moved through the individual’s experience that QA is a method of the total development of quality and ended in a commitment to QA. The second most important path stemmed from the individual’s experience of belonging to a supportive work community, moved through the supportive value of the job and through affective commitment to the organisation and ended in a commitment to QA. The third path stemmed from an individual’s experiences in participating in QA, moved through collective system quality and through these to the supportive value of the job to affective commitment to the organisation and ended in a commitment to QA. The final path in the path model stemmed from leadership by empowerment, moved through collective system quality, the supportive value of the job and an affective commitment to the organisation, and again, ended in a commitment to QA. As a result of the research, it was found that the individual’s functional department was an important factor in explaining the differences between groups. Therefore, it was found that understanding the processing of part cultures in the organisation is important when developing QA. Likewise, learning-teaching paradigms proved to be a differentiating factor. 
Individuals thinking according to the humanistic-constructivistic paradigm showed more commitment to QA than technological-rational thinkers. Also, it was proved that the QA training program did not increase commitment, as the path model demonstrated that those who participated in training showed 34% commitment, whereas those who did not showed 55% commitment. As a summary of the results it can be said that the necessary conditions under which individuals are prepared to commit themselves to QA cannot be treated in a reductionistic way. Instead, the conditions must be treated as one totality, with all the main concepts interacting simultaneously. Also, the theoretical framework of quality must include its dynamic aspect, which means the development of the work of the individual and learning through auditing. In addition, this dynamism includes the reflection of the paradigm of the functions of the individual as well as that of all parts of the organisation. It is important to understand and manage the various ways of thinking and the cultural differences produced by the fragmentation of the organisation. Finally, it seems possible that the path model can be generalised for use in any organisation development project where the personnel should be committed.
Abstract:
A commercial non-specific gas sensor array system was evaluated in terms of its capability to monitor the odour abatement performance of a biofiltration system developed for treating emissions from a commercial piggery building. The biofiltration system was a modular system comprising an inlet ducting system, humidifier, and closed-bed biofilter. It also included a gravimetric moisture monitoring and water application system for precise control of the moisture content of an organic woodchip medium. Principal component analysis (PCA) of the sensor array measurements indicated that the biofilter outlet air was significantly different from both the inlet air of the system and the post-humidifier air. Data pre-processing techniques including normalising and outlier handling were applied to improve the odour discrimination performance of the non-specific gas sensor array. To develop an odour quantification model from the sensor array responses, PCA regression, artificial neural network (ANN), and partial least squares (PLS) modelling techniques were applied. The correlation coefficient (r²) values of the PCA, ANN, and PLS models were 0.44, 0.62, and 0.79, respectively.
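Principal component regression, the first of the three modelling techniques compared above, is PCA on the predictors followed by least squares on the scores. A minimal sketch on synthetic sensor responses (not the study's data, and with an arbitrary component count):

```python
import numpy as np

def pcr_fit(X, y, n_components=2):
    """Principal component regression: PCA on X, then least squares on the scores."""
    x_mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
    V = Vt[:n_components].T                       # retained loadings
    T = (X - x_mean) @ V                          # PC scores
    A = np.column_stack([np.ones(len(T)), T])     # add intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xn: np.column_stack([np.ones(len(Xn)), (Xn - x_mean) @ V]) @ coef

# Toy sensor-array responses driven by a single underlying odour level
rng = np.random.default_rng(3)
odour = rng.uniform(0, 10, size=30)
X = np.outer(odour, rng.uniform(0.5, 1.5, size=8)) + rng.normal(0, 0.1, (30, 8))
predict = pcr_fit(X, odour, n_components=2)
r2 = 1 - np.sum((predict(X) - odour) ** 2) / np.sum((odour - odour.mean()) ** 2)
```

Regressing on a few scores rather than all eight raw channels is what makes PCR robust to the collinearity typical of non-specific sensor arrays.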
Abstract:
Nanotechnology is a new technology which is generating a lot of interest among academics, practitioners, and scientists, and critical research is being carried out in this area all over the world. Governments are creating policy initiatives to promote developments in nanoscale science and technology, and private investment is also on a rising trend. A large number of academic institutions and national laboratories have set up research centres working on the multiple applications of nanotechnology. A wide range of applications is claimed for nanotechnology, from materials, chemicals, textiles, and semiconductors to drug delivery systems and diagnostics. Nanotechnology is considered to be the next big wave of technology after information technology and biotechnology; in fact, it holds the promise of advances that exceed those achieved in recent decades in computers and biotechnology. Much of the interest in nanotechnology may also stem from the enormous monetary benefits expected from nanotechnology-based products: according to the NSF, revenues from nanotechnology could reach $1 trillion by 2015. However, much of these benefits are projections. Realizing the claimed benefits requires the successful development of nanoscience and nanotechnology research efforts; that is, the journey from invention to innovation has to be completed. For this to happen, the technology has to flow from laboratory to market, and nanoscience and nanotechnology research efforts have to come out in the form of new products, new processes, and new platforms. India has also started its Nanoscience and Nanotechnology development program under its 10th Five Year Plan, and funds worth Rs. one billion have been allocated for Nanoscience and Nanotechnology research and development. The aim of this paper is to assess Nanoscience and Nanotechnology initiatives in India. We propose a conceptual model derived from the resource-based view of innovation.
We have developed a structured questionnaire to measure the constructs in the conceptual model. Responses have been collected from 115 scientists and engineers working in the field of Nanoscience and Nanotechnology. The responses have been analyzed further by using Principal Component Analysis, Cluster Analysis and Regression Analysis.
Abstract:
Modeling and forecasting of implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require accurate volatility estimates. However, this has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options present two patterns, the volatility smirk (skew) and the volatility term structure, which examined together form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS, extending the Dumas, Fleming and Whaley (DFW) (1998) framework; using moneyness in the implied forward price and OTM put-call options on the FTSE100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variation in the rich IVS. It is then found that three factors can explain about 69-88% of the variance in the IVS: on average, 56% is explained by the level factor, 15% by the term-structure factor, and an additional 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the Hibbert et al. (2008) model. The results show a strongly negative asymmetric return-volatility relationship at various quantiles of the IV distributions, monotonically increasing from the median quantile to the uppermost quantile (i.e., 95%); OLS therefore underestimates this relationship at upper quantiles.
Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices rank in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new-VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are then backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index; implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Finally, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaptions markets.
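The quantile-regression idea above can be sketched by minimising the pinball (check) loss; here with plain subgradient descent on synthetic data rather than the linear-programming solvers used in practice, and with made-up coefficients:

```python
import numpy as np

def quantile_regression(X, y, tau, lr=0.01, n_iter=3000):
    """Linear quantile regression fitted by subgradient descent on the pinball loss."""
    A = np.column_stack([np.ones(len(X)), X])      # intercept + predictors
    beta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        e = y - A @ beta
        # subgradient of mean pinball loss: tau above the line, tau-1 below
        grad = -A.T @ np.where(e > 0, tau, tau - 1) / len(y)
        beta -= lr * grad
    return beta

rng = np.random.default_rng(6)
x = rng.uniform(-1, 1, 500)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 500)
beta50 = quantile_regression(x[:, None], y, tau=0.50)   # median regression
beta95 = quantile_regression(x[:, None], y, tau=0.95)   # upper-tail regression
```

Comparing the fitted lines across tau values is exactly how an asymmetric relationship that OLS averages away becomes visible at the upper quantiles.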
Abstract:
Detecting and quantifying the presence of human-induced climate change in regional hydrology is important for studying the impacts of such changes on the water resources systems as well as for reliable future projections and policy making for adaptation. In this article a formal fingerprint-based detection and attribution analysis has been attempted to study the changes in the observed monsoon precipitation and streamflow in the rain-fed Mahanadi River Basin in India, considering the variability across different climate models. This is achieved through the use of observations, several climate model runs, a principal component analysis and regression based statistical downscaling technique, and a Genetic Programming based rainfall-runoff model. It is found that the decreases in observed hydrological variables across the second half of the 20th century lie outside the range that is expected from natural internal variability of climate alone at 95% statistical confidence level, for most of the climate models considered. For several climate models, such changes are consistent with those expected from anthropogenic emissions of greenhouse gases. However, unequivocal attribution to human-induced climate change cannot be claimed across all the climate models and uncertainties in our detection procedure, arising out of various sources including the use of models, cannot be ruled out. Changes in solar irradiance and volcanic activities are considered as other plausible natural external causes of climate change. Time evolution of the anthropogenic climate change "signal" in the hydrological observations, above the natural internal climate variability "noise", shows that the detection of the signal is achieved earlier in streamflow as compared to precipitation for most of the climate models, suggesting larger impacts of human-induced climate change on streamflow than precipitation at the river basin scale.
Abstract:
The objective in this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution based on the existing low spatial resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta where MODIS-derived inundation extent has been estimated at a 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA). This is particularly effective to understand the inundation variability, interpolate in time, or fill in missing values. Two innovative methods are developed (linear regression and matrix inversion) both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data. The downscaled fields show the expected space-time behaviors from MODIS. A 20-yr dataset of the inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to other variables than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water Ocean Topography (SWOT) mission to develop and test the instrument simulator as well as to select the calibration validation sites (with high space-time inundation variability). In addition, once SWOT observations are available, the downscaled methodology will be calibrated on them in order to downscale the GIEMS datasets and to extend the SWOT benefits back in time to 1993.
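The PCA representation underlying both downscaling methods above can be illustrated by a truncated-SVD reconstruction of a synthetic space-time field; keeping only the leading components captures the dominant inundation variability and is also what enables time interpolation and gap filling. This is illustrative only, not the GIEMS/MODIS processing.

```python
import numpy as np

def truncated_reconstruction(X, k):
    """Reconstruct a (time x pixel) field from its leading k principal components."""
    mean = X.mean(axis=0)                              # temporal mean per pixel
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]

# Synthetic inundation field: one seasonal cycle modulated across 50 pixels, plus noise
t = np.linspace(0, 4 * np.pi, 60)
pattern = np.linspace(0.2, 1.0, 50)
X = np.outer(np.sin(t), pattern) + np.random.default_rng(4).normal(0, 0.05, (60, 50))
X_hat = truncated_reconstruction(X, k=1)
rmse = np.sqrt(np.mean((X_hat - X) ** 2))
```

The residual after reconstruction is essentially the noise floor, which is why a low-rank PCA basis is a natural vehicle for relating a coarse field to a finer one.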
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting the station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) GCM simulations. The principal component analysis (PCA) and fuzzy c-means clustering method (FCM) are used as part of downscaling model to reduce the dimensionality of the dataset and identify the clusters in the data, respectively. Evaluation of the performances of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) compared to those by spatial interpolation technique used in earlier studies.
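Multiple linear regression, the baseline downscaling technique above, reduces to ordinary least squares with an intercept. A minimal sketch on synthetic predictors; the columns merely stand in for variables such as sea level pressure or humidity, and the coefficients are invented:

```python
import numpy as np

def mlr_fit(X, y):
    """Multiple linear regression via ordinary least squares, with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [intercept, slope_1, ..., slope_p]

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))                      # 4 placeholder predictors
true_coef = np.array([2.0, 0.5, -1.0, 0.0, 1.5])   # intercept + 4 slopes
y = np.column_stack([np.ones(100), X]) @ true_coef + rng.normal(0, 0.01, 100)
coef = mlr_fit(X, y)
```

In a downscaling model the predictors would first be reduced (e.g. by PCA, as in the paper) because reanalysis variables are strongly collinear.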
Abstract:
Gene microarray technology is highly effective in screening for differential gene expression and has hence become a popular tool in the molecular investigation of cancer. When applied to tumours, molecular characteristics may be correlated with clinical features such as response to chemotherapy. Exploitation of the huge amount of data generated by microarrays is difficult, however, and constitutes a major challenge in the advancement of this methodology. Independent component analysis (ICA), a modern statistical method, allows us to better understand data in such complex and noisy measurement environments. The technique has the potential to significantly increase the quality of the resulting data and improve the biological validity of subsequent analysis. We performed microarray experiments on 31 postmenopausal endometrial biopsies, comprising 11 benign and 20 malignant samples. We compared ICA to the established methods of principal component analysis (PCA), Cyber-T, and SAM. We show that ICA generated patterns that clearly characterized the malignant samples studied, in contrast to PCA. Moreover, ICA improved the biological validity of the genes identified as differentially expressed in endometrial carcinoma, compared to those found by Cyber-T and SAM. In particular, several genes involved in lipid metabolism that are differentially expressed in endometrial carcinoma were only found using this method. This report highlights the potential of ICA in the analysis of microarray data.
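The contrast between PCA and ICA drawn above comes down to decorrelation versus statistical independence. A compact symmetric FastICA sketch (tanh nonlinearity, NumPy only) on two synthetic mixed signals; this is a generic textbook implementation, not the analysis pipeline of the study:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA (tanh contrast) on mean-centered, whitened data."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(cov)
    Z = (E / np.sqrt(d)) @ E.T @ X                 # ZCA whitening
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[0], X.shape[0]))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)            # symmetric decorrelation
        W = u @ vt
    return W @ Z                                   # estimated sources

# Two independent sources (sine and square wave) mixed linearly
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t + 0.5))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
recovered = fastica(A @ S)
```

ICA recovers the sources only up to sign, scale, and order, which is why comparisons against PCA are usually made via correlations with known patterns.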
Abstract:
This research describes three studies on the use of chemometric methods for the classification and characterization of edible vegetable oils and their quality parameters, using Fourier transform mid-infrared absorption spectrometry and near-infrared spectrometry, and for monitoring the quality and oxidative stability of yoghurt using molecular fluorescence spectrometry. The first and second studies address the classification and characterization of quality parameters of edible vegetable oils using Fourier transform mid-infrared (FT-MIR) and near-infrared (NIR) spectrometry. The Kennard-Stone algorithm was used to select the validation set after principal component analysis (PCA). Discrimination among canola, sunflower, corn, and soybean oils was investigated using SVM-DA, SIMCA, and PLS-DA. Prediction of the quality parameters, refractive index and relative density, was investigated using the multivariate calibration methods partial least squares (PLS), iPLS, and SVM on the FT-MIR and NIR data. Several kinds of preprocessing, first derivative, multiplicative scatter correction (MSC), mean centering, orthogonal signal correction (OSC), and standard normal variate (SNV), were evaluated, using the root mean square errors of cross-validation (RMSECV) and of prediction (RMSEP) as assessment criteria. The methodology developed for determining refractive index and relative density and for classifying the vegetable oils is fast and straightforward. 
The third study assesses the oxidative stability and quality of yoghurt stored at 4°C, either exposed to direct light or kept in the dark, using parallel factor analysis (PARAFAC) of the luminescence of three fluorophores present in yoghurt, at least one of which is strongly related to the storage conditions. The fluorescence signals were identified from the emission and excitation spectra of the pure fluorescent substances, suggested to be vitamin A, tryptophan, and riboflavin. Regression models based on the PARAFAC scores for riboflavin were built using the scores obtained on the first day as the dependent variable and the scores obtained during storage as the independent variable. The decay of the analytical curve over the course of the experiment was evident; the riboflavin content can therefore be considered a good indicator of yoghurt stability. It can thus be concluded that fluorescence spectroscopy combined with chemometric methods is a fast approach for monitoring the oxidative stability and quality of yoghurt.
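The Kennard-Stone selection used in the first two studies can be sketched as follows; it greedily picks samples that are maximally spread in feature space, so the calibration (or validation) set covers the experimental domain. Toy one-dimensional data, generic implementation:

```python
import numpy as np

def kennard_stone(X, n_select):
    """Kennard-Stone: pick maximally spread samples for a calibration set."""
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    # start with the two most distant samples
    selected = [int(i) for i in np.unravel_index(np.argmax(dist), dist.shape)]
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        # choose the candidate farthest from its nearest already-selected sample
        nearest = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(nearest))])
    return selected

X = np.array([[0.0], [0.1], [0.5], [0.9], [1.0]])
print(kennard_stone(X, 3))  # → [0, 4, 2]
```

Running it after PCA, as in the studies, simply means applying the same procedure to the score matrix instead of the raw spectra.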
Abstract:
The sexual dimorphism exhibited by polyphenic males in some species of the genus Ptychoderes involves variation in the rostrum, antennae, and ventrites. Polyphenism may be an important component of the evolutionary process through morphological and behavioural novelties. The aim of this study was to determine variation in morphometric characters, male polyphenism, variation in structures with known sexual dimorphism, and possible allometric patterns, and to test these inferences for Ptychoderes by mapping sexual dimorphism and male polyphenism onto a phylogenetic reconstruction of Ptychoderes using Mesquite 2.04. Twenty-three morphometric variables were measured in 510 specimens, with the following analyses performed: cluster analysis, principal component analysis (PCA), canonical variate analysis (CVA), and reduced major axis regression (RMA). Each type of dimorphism was mapped onto a previous phylogeny as two separate states using parsimony. In all species, sexual dimorphism showed significant differences between the sexes in the antennal segments (II-X). Rostrum length and ventrite V were confirmed as indicators of sexual dimorphism (except in P. jordani). The only species without polyphenic males was P. depressus. In the other species, large and small males differed significantly in many variables, with both similarities and differences. In the PCA, the first component (PC1) accounted for a high percentage of the variance in all species; it showed loadings of the same sign, suggesting size-related differences, in P. jordani, P. depressus, P. virgatus, P. mixtus, and P. callosus, and loadings of both signs, suggesting shape-related differences (allometry), in P. viridanus, P. antiquus, P. elongates, and P. nebulosus. PC2 showed positive and negative loadings in all species, a probable allometric component. 
The CVA confirmed the groups, large males, small males, and females, where these occurred. We found different allometric patterns in all species, with differences and similarities among them. All these results confirm the hypothesis of male polyphenism and sexual dimorphism in Ptychoderes. Analysis of the allometric patterns of sexual dimorphism revealed positive allometry for rostrum length (CR1) in males and females, and for the ventrites in males only. Positive allometric patterns related to polyphenism in the antennomeres were confirmed for large and small males of almost all species, except P. nebulosus. The ancestors of clades in the Ptychoderes phylogeny were inferred to have polyphenic males (except P. depressus), with rostrum, antennal, and ventrite variables indicative of sexual dimorphism with positive allometry. These patterns could be linked to the guarding of females performed by large males during oviposition.
Abstract:
A central question in Neuroscience is how the nervous system generates the spatiotemporal commands needed to realize complex gestures, such as handwriting. A key postulate is that the central nervous system (CNS) builds up complex movements from a set of simpler motor primitives or control modules. In this study we examined the control modules underlying the generation of muscle activations when performing different types of movement: discrete, point-to-point movements in eight different directions and continuous figure-eight movements in both the normal, upright orientation and rotated 90 degrees. To test for the effects of biomechanical constraints, movements were performed in the frontal-parallel or sagittal planes, corresponding to two different nominal flexion/abduction postures of the shoulder. In all cases we measured limb kinematics and surface electromyographic (EMG) signals for seven different muscles acting around the shoulder. We first performed principal component analysis (PCA) of the EMG signals on a movement-by-movement basis. We found a surprisingly consistent pattern of muscle groupings across movement types and movement planes, although we could detect systematic differences between the PCs derived from movements performed in each shoulder posture and between the principal components associated with the different orientations of the figure. Unexpectedly, we found no systematic differences between the figure eights and the point-to-point movements. The first three principal components could be associated with a general co-contraction of all seven muscles plus two patterns of reciprocal activation.
From these results, we surmise that both "discrete-rhythmic movements" such as the figure eight, and discrete point-to-point movement may be constructed from three different fundamental modules, one regulating the impedance of the limb over the time span of the movement and two others operating to generate movement, one aligned with the vertical and the other aligned with the horizontal.
Abstract:
The aim of this work was to build a model using multivariate regression tools to predict the methyl ester content and, simultaneously, physico-chemical properties of mixtures of soybean oil and soybean biodiesel. The model was based on correlating the properties of interest with the attenuated total reflectance mid-infrared spectra of the mixtures. High performance liquid chromatography (HPLC) was used to determine the methyl ester contents, a possible alternative to the reference methods based on gas chromatography (EN 14103 and EN 14105). The physico-chemical properties selected were refractive index, specific mass, and viscosity. For the study, 11 mixtures with different proportions of soybean biodiesel and soybean oil (0-100% by mass of soybean biodiesel) were prepared in quintuplicate, giving 55 samples in total. The infrared region studied was the range from 3801 to 650 cm-1. The spectra were preprocessed by multiplicative scatter correction (MSC) followed by mean centering (MC); the properties of interest were autoscaled. Principal component analysis (PCA) was then applied to reduce the dimensionality of the data and detect outliers; when outliers were detected, the sample was discarded. The original data were split by the Kennard-Stone algorithm into a calibration set, for building the model, and a validation set, for checking its reliability. 
The results showed that the model built with PLS2 (partial least squares) fitted the refractive index and specific mass data well, with randomly distributed errors indicating homoscedastic residuals; in other words, the model predicted the specific mass and refractive index at the 95% confidence level. The accuracy of the model was also assessed by estimating the regression parameters, slope and intercept, with the elliptical joint confidence region (EJCR). The results confirmed that the MIR-PLS model developed was able to predict the refractive index and specific mass simultaneously. For the methyl ester contents determined by HPLC, a MIR-PLS model was also developed to correlate these values with the MIR spectra, but the quality of the fit was not as good. Even so, it was possible to show that the data can be modelled and correlated with the infrared spectra using multivariate calibration.