6 results for Process control Statistical methods
in Repositório Institucional da Universidade de Aveiro - Portugal
Abstract:
The work reported in this thesis aimed at applying the methodology known as metabonomics to the detailed study of a particular type of beer and its quality control, based on the use of multivariate analysis (MVA) to extract meaningful information from given analytical data sets. In Chapter 1, a detailed description of beer is given, covering the brewing process, the main characteristics and typical composition of beer, beer stability, and the analytical techniques commonly used for beer analysis. The fundamentals of the analytical methods employed here, namely nuclear magnetic resonance (NMR) spectroscopy, gas chromatography-mass spectrometry (GC-MS) and mid-infrared (MIR) spectroscopy, together with a description of the metabonomics methodology, are briefly presented in Chapter 2. In Chapter 3, the application of high resolution NMR to characterize the chemical composition of a lager beer is described. The 1H NMR spectrum obtained by direct analysis of beer shows a high degree of complexity, confirming the great potential of NMR spectroscopy for detecting a wide variety of families of compounds in a single run. Spectral assignment was carried out by 2D NMR, resulting in the identification of about 40 compounds, including alcohols, amino acids, organic acids, nucleosides and sugars. In the second part of Chapter 3, the compositional variability of beer was assessed. For that purpose, metabonomics was applied to 1H NMR data (NMR/MVA) to evaluate variability between beers of the same brand (lager), produced nationally but differing in brewing site and date of production. Differences between brewing sites and/or dates were observed, reflecting compositional differences related to particular processing steps, including mashing, fermentation and maturation. Chapter 4 describes the quantification of organic acids in beer by NMR, for which different quantitative methods were developed and compared: direct integration of NMR signals (against an internal reference or against an external electronic reference, the ERETIC method) and quantitative statistical methods based on partial least squares (PLS) regression. PLS1 regression models were built using different quantitative methods as reference: capillary electrophoresis with direct and indirect detection, and enzymatic assays. It was found that NMR integration results generally agree with those obtained by the best-performing PLS models, although some overestimation was observed for malic and pyruvic acids, along with an apparent underestimation for citric acid. Finally, Chapter 5 describes metabonomic studies performed to better understand the forced aging process of beer (18 days at 45 ºC). The aging of lager beer was followed by i) NMR, ii) GC-MS, and iii) MIR spectroscopy. MVA of each analytical data set revealed a clear separation between different aging days for both NMR and GC-MS data, enabling the identification of compounds closely related to the aging process: 5-hydroxymethylfurfural (5-HMF), organic acids, γ-aminobutyric acid (GABA), proline and the linear/branched dextrin ratio (NMR domain), and 5-HMF, furfural, diethyl succinate and phenylacetaldehyde (known aging markers) and, for the first time, 2,3-dihydro-3,5-dihydroxy-6-methyl-4(H)-pyran-4-one (DDMP) and maltoxazine (GC-MS domain). For MIR/MVA, no aging trend could be measured, a result reflecting the need for further experimental optimization. Correlation between the NMR and GC-MS data was performed by outer product analysis (OPA) and statistical heterospectroscopy (SHY), enabling the identification of further compounds (11 compounds, 5 of which remain unassigned) highly related to the aging process. Correlation between sensory characteristics and the NMR and GC-MS data was also assessed through PLS1 regression models using the sensory response as reference.
The results showed good relationships between the analytical response and the sensory response, particularly for the aromatic region of the NMR spectra and for the GC-MS data (r > 0.89). However, the predictive power of all the PLS1 regression models built was relatively low, possibly reflecting the low number of samples/tasters employed, an aspect to improve in future studies.
Abstract:
The main objective of this work was to monitor a set of physical-chemical properties of heavy oil process streams through nuclear magnetic resonance spectroscopy, in order to propose a procedure for analysis and online data processing for process control. Different statistical methods that relate the results obtained by nuclear magnetic resonance spectroscopy to those obtained by the conventional standard methods during the characterization of the different streams were implemented, in order to develop models for predicting these same properties. Real-time knowledge of these physical-chemical properties of petroleum fractions is very important for enhancing refinery operations, ensuring that they are technically, economically and environmentally sound. The first part of this work involved the determination of many physical-chemical properties at the Matosinhos refinery, following standard methods used to evaluate and characterize the light vacuum gas oil, heavy vacuum gas oil and fuel oil fractions. Kinematic viscosity, density, sulfur content, flash point, carbon residue, P-value, and atmospheric and vacuum distillations were the properties analysed. Besides the analysis by standard methods, the same samples were analysed by nuclear magnetic resonance spectroscopy. The second part of this work concerned the application of multivariate statistical methods that correlate the physical-chemical properties with the quantitative information acquired by nuclear magnetic resonance spectroscopy. Several methods were applied, including principal component analysis, principal component regression, partial least squares and artificial neural networks. Principal component analysis was used to reduce the number of predictive variables and to transform them into new variables, the principal components. These principal components were used as inputs to the principal component regression and artificial neural network models.
For the partial least squares model, the original data were used as input. Taking into account the performance of the developed models, assessed through selected statistical performance indexes, it was possible to conclude that principal component regression led to the worst performance. Better results were achieved with the partial least squares and artificial neural network models; however, it was the artificial neural network model that gave the best predictions for almost all of the properties analysed. From the results obtained, it was possible to conclude that nuclear magnetic resonance spectroscopy combined with multivariate statistical methods can be used to predict physical-chemical properties of petroleum fractions. It has been shown that this technique can be considered a potential alternative to the conventional standard methods, with very promising results.
Abstract:
This thesis reports the application of metabolomics to human tissues and biofluids (blood plasma and urine) to unveil the metabolic signature of primary lung cancer. In Chapter 1, a brief introduction to lung cancer epidemiology and pathogenesis is presented, together with a review of the main metabolic dysregulations known to be associated with cancer. The metabolomics approach is also described, addressing the analytical and statistical methods employed, as well as the current state of the art of its application to clinical lung cancer studies. Chapter 2 provides the experimental details of this work, regarding the subjects enrolled, sample collection and analysis, and data processing. In Chapter 3, the metabolic characterization of intact lung tissues (from 56 patients) by proton High Resolution Magic Angle Spinning (HRMAS) Nuclear Magnetic Resonance (NMR) spectroscopy is described. After careful assessment of acquisition conditions and thorough spectral assignment (over 50 metabolites identified), the metabolic profiles of tumour and adjacent control tissues were compared through multivariate analysis. The two tissue classes could be discriminated with 97% accuracy, with 13 metabolites significantly accounting for this discrimination: glucose and acetate (depleted in tumours), together with lactate, alanine, glutamate, GSH, taurine, creatine, phosphocholine, glycerophosphocholine, phosphoethanolamine, uracil nucleotides and peptides (increased in tumours). Some of these variations corroborated typical features of cancer metabolism (e.g., upregulated glycolysis and glutaminolysis), while others suggested that less well-known pathways (e.g., antioxidant protection, protein degradation) play important roles. Another major and novel finding described in this chapter was the dependence of this metabolic signature on tumour histological subtype.
While the main alterations in adenocarcinomas (AdC) related to phospholipid and protein metabolism, squamous cell carcinomas (SqCC) were found to have stronger glycolytic and glutaminolytic profiles, making it possible to build a valid classification model to discriminate these two subtypes. Chapter 4 reports the NMR metabolomic study of blood plasma from over 100 patients and nearly 100 healthy controls, with the multivariate model built affording a classification rate of 87%. The two groups were found to differ significantly in the levels of lactate, pyruvate, acetoacetate, LDL+VLDL lipoproteins and glycoproteins (increased in patients), together with glutamine, histidine, valine, methanol, HDL lipoproteins and two unassigned compounds (decreased in patients). Interestingly, these variations were detected from the initial disease stages, and the magnitude of some of them depended on the histological type, although without allowing AdC vs. SqCC discrimination. Moreover, it is shown in this chapter that age mismatch between the control and cancer groups could not be ruled out as a possible confounding factor; exploratory external validation afforded a classification rate of 85%. The NMR profiling of urine from lung cancer patients and healthy controls is presented in Chapter 5. Compared to plasma, the classification model built with urinary profiles achieved a superior classification rate (97%). After careful assessment of possible bias from gender, age and smoking habits, a set of 19 metabolites was proposed to be cancer-related (of which 3 were unknowns and 6 were partially identified as N-acetylated metabolites). As for plasma, these variations were detected regardless of disease stage and showed some dependency on histological subtype, with the AdC vs. SqCC model built showing modest predictive power.
In addition, preliminary external validation of the urine-based classification model afforded 100% sensitivity and 90% specificity, which are exciting results in terms of potential for future clinical application. Chapter 6 describes the analysis of urine from a subset of patients by a different profiling technique, namely, Ultra-Performance Liquid Chromatography coupled to Mass Spectrometry (UPLC-MS). Although the identification of discriminant metabolites was very limited, multivariate models showed high classification rate and predictive power, thus reinforcing the value of urine in the context of lung cancer diagnosis. Finally, the main conclusions of this thesis are presented in Chapter 7, highlighting the potential of integrated metabolomics of tissues and biofluids to improve current understanding of lung cancer altered metabolism and to reveal new marker profiles with diagnostic value.
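The external-validation figures quoted above — sensitivity (patients correctly flagged) and specificity (controls correctly cleared) — can be illustrated with a generic linear discriminant classifier on synthetic "metabolite profiles". Every number and variable below is invented, and LDA is used only as a stand-in for the thesis' multivariate models:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Toy "metabolite profiles": 120 controls and 120 patients, 30 variables,
# with a mean shift in 5 variables for the patient class (all invented).
n, p = 120, 30
controls = rng.standard_normal((n, p))
patients = rng.standard_normal((n, p))
patients[:, :5] += 1.5            # hypothetical cancer-related shift

X = np.vstack([controls, patients])
y = np.array([0] * n + [1] * n)   # 1 = patient, 0 = control

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
pred = LinearDiscriminantAnalysis().fit(X_tr, y_tr).predict(X_te)

tp = np.sum((pred == 1) & (y_te == 1))   # patients correctly flagged
tn = np.sum((pred == 0) & (y_te == 0))   # controls correctly cleared
fn = np.sum((pred == 0) & (y_te == 1))
fp = np.sum((pred == 1) & (y_te == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

Holding out a test set, as done here, mirrors the external validation step: both metrics are computed only on samples the model never saw during fitting.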
Abstract:
Distributed Embedded Systems (DES) have been used over the last years in many application domains, from robotics to industrial process control, avionics and automotive applications, and this trend is expected to continue in the coming years. Dependability is an important property in these application domains, since services must be executed in a timely and predictable way; otherwise, economic damage may occur or human lives may be put at risk. At design time it is impossible to foresee all fault scenarios, due to the non-determinism of the surrounding environment, so fault-tolerance mechanisms must be included. Additionally, some of these applications require high bandwidth, which may also be used for system evolution, adding new functionality. Flexibility is an important system property, since it allows adaptation to the surrounding conditions and requirements, also contributing to simpler maintenance and repair. Furthermore, in embedded systems, flexibility is also important because it enables better use of the often scarce existing resources. An obvious way to increase both the bandwidth and the fault tolerance of distributed embedded systems is to replicate the system buses. Some existing solutions, both commercial and academic, propose bus replication to increase bandwidth or to increase fault tolerance. However, almost invariably, the purpose is only one of the two; solutions that provide both higher bandwidth and increased fault tolerance are rare. One of these rare examples is FlexRay, with the limitation that only two buses are allowed.
This thesis presents and discusses a proposal to use bus replication in a flexible way, with the dual goal of increasing bandwidth and fault tolerance. The flexibility of the proposed protocols also allows dynamic management of the network topology, with the number of buses limited only by the hardware/software. The proposals of this thesis were validated using the CAN (Controller Area Network) fieldbus, chosen because of its widespread market adoption. More specifically, the proposed solutions were implemented and validated using a paradigm that combines flexibility with event-triggered and time-triggered communication: FTT (Flexible Time-Triggered). Nevertheless, a generalization to native CAN is also presented and discussed. The inclusion of bus-replication mechanisms requires changing the previous master-node replication and replacement protocols, as well as defining new protocols for this purpose. This work takes advantage of the centralized architecture and of master-node replication to support bus replication in an efficient and flexible way. If a fault occurs in a bus (or buses) that could cause a system failure, the protocols and components proposed in this thesis make the system react by switching to a degraded operation mode. Messages that were being transmitted on the buses where the fault occurred are rerouted to the other buses. Master-node replication is based on a leader-followers strategy, in which the leader controls the whole system while the followers serve as backup nodes. If an error occurs in the leader node, one of the follower nodes takes over control of the system transparently, keeping the same functionality. The proposals of this thesis were also generalized to native CAN, for which two additional components were proposed.
It is thus possible to have the same bus-level fault-tolerance capabilities together with dynamic management of the network topology. All the proposals of this thesis were implemented and evaluated. An initial single-bus implementation was evaluated in a real application, a robotic soccer team in which the FTT-CAN protocol was used for motion control and odometry. The evaluation of the multi-bus system was carried out on a laboratory test platform. For this purpose, a fault-injection system was developed to impose faults on the buses and on the master nodes, together with a delay-measurement system to measure the response time after the occurrence of a fault.
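The leader-followers master replication idea described above can be caricatured in a few lines of Python. This is an illustrative toy model only — names and structure are invented, and it is in no way the thesis' FTT-CAN implementation:

```python
class MasterNode:
    """One master replica; at any moment exactly one live node leads."""
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.is_leader = False

class ReplicatedMaster:
    """Toy leader-followers scheme: the leader drives the system while
    followers act as backups; on leader failure the first live follower
    is promoted and keeps the same functionality."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.nodes[0].is_leader = True

    def leader(self):
        return next(n for n in self.nodes if n.is_leader)

    def tick(self):
        # One control cycle: detect a failed leader and fail over.
        if not self.leader().alive:
            self.leader().is_leader = False
            next(n for n in self.nodes if n.alive).is_leader = True
        return self.leader().name

masters = ReplicatedMaster([MasterNode("M0"), MasterNode("M1"), MasterNode("M2")])
print(masters.tick())              # M0 leads
masters.nodes[0].alive = False     # inject a fault in the leader
print(masters.tick())              # M1 takes over transparently
```

In the real protocols the failure is detected by the followers observing bus traffic within bounded time, rather than by inspecting a flag; the toy keeps only the promotion logic.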
Abstract:
In the competitive, globalized economic context in which corporations operate, a need emerges for constant evolution to keep up with the changes the environment imposes on them, aiming at sustainability and longevity. The economic and financial evolution of corporations can promote the development of a nation, even as increased market competition forces them to invest in new relationships with their environment, seeking to improve their performance levels as measured by new economic and financial instruments. Corporate investment grade thus becomes relevant, since it can generate confidence in new investments and is seen as a synonym for a strong economy. Regarding its objective, this thesis aimed at developing an economic and financial indicator to gauge the credit rating that corporations present in their corporate structure, by means of a set of economic and financial ratios related to liquidity, profitability, indebtedness and return, derived from the financial statements of the corporations studied. This study is of an applied nature, with a descriptive objective and a bibliographic design; in the scope of its research problem it is quantitative, covering a population of 70 Brazilian corporations recognized by the international rating agencies Standard & Poor's, Moody's and Fitch Ratings, which held corporate investment grade in 2008. As for the statistical methods and procedures, descriptive analysis was first used to summarize the data; correlation analysis was then performed using Pearson's linear correlation coefficient, followed by regression analysis.
Next, factor analysis was used to build the model, Cronbach's alpha to attest its reliability, and discriminant analysis to classify the quartiles. The conclusions of the study are based on the results of this sequence of statistical treatments, which initially show a predominantly weak correlation; this does not, however, invalidate the Pearson correlation, since all coefficients were significant (p < 0.05). In the regression analysis, all models presented satisfactory results, with a strong correlation clearly present. The reliability of the corporate investment grade model derived from the factor analysis was attested by a Cronbach's alpha coefficient of 0.768, which indicates satisfactory internal consistency for the study. Over the longitudinal base from 2008 to 2010, the investment grade indicator showed an accuracy ranging from 95.72% to 98.33%. It can therefore be concluded that the indicator created in this study is fit to be used as a basis for defining the investment grade of business corporations.
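Cronbach's alpha, used above to attest the reliability of the indicator (a coefficient of 0.768), has a simple closed form. A minimal implementation on synthetic data — hypothetical financial ratios, not the study's sample — might look like:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical example: 6 financial ratios observed for 100 firms, all
# loading on one common factor (synthetic values, not the study's data).
rng = np.random.default_rng(3)
factor = rng.standard_normal(100)
ratios = factor[:, None] + 0.8 * rng.standard_normal((100, 6))
print(round(cronbach_alpha(ratios), 3))
```

The closer the items track a single underlying factor, the closer alpha gets to 1; values around 0.7-0.8, as reported above, are conventionally read as satisfactory internal consistency.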
Abstract:
Urban soil quality may be severely affected by hydrophobic organic contaminants (HOCs), impairing environmental quality and human health. A comprehensive study was conducted in two contrasting Portuguese urban areas (Lisbon and Viseu) in order to assess the levels and potential risks of these contaminants, to identify their sources and to study their behaviour in soils. The concentrations of HOCs were related to the size of the city, with much higher contamination levels observed in the Lisbon urban area. Source apportionment was performed by studying the HOC profiles, their relationship with potentially toxic elements, and the general characteristics of the soils, using multivariate statistical methods. Lisbon seems to be affected by nearby sources (traffic, industry and incineration processes), whereas in Viseu atmospheric transport may play an important role. In a first tier of risk assessment (RA), polycyclic aromatic hydrocarbons (PAHs) in Lisbon soils were identified as a potential hazard. The levels of PAHs in street dusts were further studied, clarifying that traffic, tire and pavement debris can be an important source of PAHs to urban soils. Street dusts were also identified as a potential concern for human and environmental health, especially if they reach nearby aquatic bodies. Geostatistical tools were also used, and their usefulness in RA and urban planning was discussed. In order to obtain a more realistic assessment of the risks of HOCs to the environment and human health, it is important to evaluate their available fraction, which is also the fraction most accessible to organisms. Therefore, a review of the processes involved in the availability of PAHs was performed, and the outputs produced by the different chemical methods were evaluated.
The suitability of chemical methods to predict the bioavailability of PAHs in dissimilar naturally contaminated soils has not been demonstrated, and it is especially difficult for high molecular weight compounds. No clear relationship between chemical and biological availability was found in this work. Yet, in spite of the very high total concentrations found in some Lisbon soils, both the water-soluble fraction and the body residues resulting from bioaccumulation assays were generally very low, which may be due to aging phenomena. The soluble fraction of PAHs in soils was found to differ among compounds and to be mostly regulated by soil properties. Regarding the bioaccumulation assays, although no significant relationship was found between soil properties and bioavailability, it was verified that biota-to-soil bioaccumulation factors were sample dependent rather than compound dependent. In conclusion, once the compounds of potential concern are targeted, performing a chemical screening as a first tier can be a simple and effective way to start an RA. However, reliable data are still required to improve the existing models for risk characterization.
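For context on the biota-to-soil bioaccumulation factors mentioned above: a commonly used form for hydrophobic organics normalizes the body residue by the organism's lipid content and the soil concentration by its organic carbon content. Whether the thesis used this exact normalization is an assumption; the sketch below only shows the generic calculation, with invented numbers:

```python
def bsaf(c_biota, lipid_frac, c_soil, oc_frac):
    """Lipid- and organic-carbon-normalized biota-to-soil accumulation
    factor: BSAF = (C_biota / f_lipid) / (C_soil / f_OC)."""
    return (c_biota / lipid_frac) / (c_soil / oc_frac)

# Invented example values (not thesis data): body residue of one PAH in
# soil organisms versus its total concentration in the soil sample.
print(bsaf(c_biota=0.04, lipid_frac=0.01,   # mg/kg organism, lipid fraction
           c_soil=2.0, oc_frac=0.05))       # mg/kg soil, organic C fraction; prints 0.1
```

A BSAF well below 1, as in this toy example, is consistent with the low body residues reported above despite high total soil concentrations.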