921 results for Artificial Information Models


Relevance:

30.00%

Publisher:

Abstract:

Executive Summary
The Pathways Project field studies were targeted at improving the understanding of contaminant transport along different hydrological pathways in Irish catchments, including their associated impacts on water quality and river ecology. The contaminants of interest were phosphorus, nitrogen and sediment. The working Pathways conceptual model included overland flow, interflow, shallow groundwater flow, and deep groundwater flow. This research informed the development of a set of Catchment Management Support Tools (CMSTs) comprising an Exploratory Tool, Catchment Characterization Tool (CCT) and Catchment Modelling Tool (CMT) as outlined in Pathways Project Final Reports Volumes 3 and 4.
In order to inform the CMSTs, four suitable study catchments were selected following an extensive selection process, namely the Mattock catchment, Co. Louth/Meath; the Gortinlieve catchment, Co. Donegal; the Nuenna catchment, Co. Kilkenny; and the Glen Burn catchment, Co. Down. The Nuenna catchment is well drained, as it is underlain by a regionally important karstified limestone aquifer with permeable limestone tills and gravels, while the other three catchments are underlain by poorly productive aquifers and low-permeability clayey tills and are poorly drained.
All catchments were instrumented, and groundwater, surface and near-surface water and aquatic ecology were monitored for a period of two years. Intensive water quality sampling during rainfall events was used to investigate the pathways delivering nutrients. The proportion of flow along each pathway was determined using chemical and physical hydrograph separation techniques, supported by numerical modelling.
The outcome of the field studies broadly supported the use of the initial four-pathway conceptual model used in the Pathways CMT (time-variant model). The artificial drainage network was found to be a significant contributing pathway in the poorly drained catchments, at low flows and during peak flows in wet antecedent conditions. The transition zone (TZ), i.e. the broken up weathered zone at the top of the bedrock, was also found to be an important pathway. It was observed to operate in two contrasting hydrogeological scenarios: in groundwater discharge zones the TZ can be regarded as being part of the shallow groundwater pathway, whereas in groundwater recharge zones it behaves more like interflow.
In the catchments overlying poorly productive aquifers, only a few fractures or fracture zones were found to be hydraulically active and the TZ, where present, was the main groundwater pathway. In the karstified Nuenna catchment, the springs, which are linked to conduits as well as to a diffuse fracture network, delivered the majority of the flow. These findings confirm the two-component groundwater contribution from bedrock but suggest that the size and nature of the hydraulically active fractures and the nature of the TZ are the dominant factors at the scale of a stream flow event.
Diffuse sources of nitrate were found to be typically delivered via the subsurface pathways, especially in the TZ and land drains in the poorly productive aquifer catchments, and via the bedrock groundwater in the Nuenna. Phosphorus was primarily transported via overland flow in both particulate and soluble forms. Where preferential flow paths existed in the soil and subsoil, soluble P, and to a lesser extent particulate P, were also transported via the TZ and in drains and ditches. Arable land was found to be the most important land use for the delivery of sediment, although channel bank and in-stream sources were the most significant in the Glen Burn catchment. Overland flow was found to be the predominant sediment transport pathway in the poorly productive catchments. These findings informed the development of the transport and attenuation equations used in the CCT and CMT. From an assessment of the relationship between physico-chemical and biological conditions, it is suggested that in the Nuenna, Glen Burn and Gortinlieve catchments a relationship may exist between biological water quality and nitrogen concentrations, as well as with P. In the Nuenna, there was also a relationship between macroinvertebrate status and alkalinity.
Further research is recommended on the transport and delivery of phosphorus in groundwater, the transport and attenuation dynamics in the TZ in different hydrogeological settings, and the relationship between macroinvertebrates and co-limiting factors. High-resolution temporal and spatial sampling was found to be important for constraining the conceptual understanding of nutrient and sediment dynamics, and should also be considered in future studies.
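
To make the chemical hydrograph separation mentioned above concrete, a minimal two-component end-member mixing sketch is given below (Python). The discharge and tracer concentrations are hypothetical values chosen for illustration, not Pathways data.

# Two-component chemical hydrograph separation (end-member mixing):
# Q_total * C_stream = Q_event * C_event + Q_pre * C_pre, with Q_total = Q_event + Q_pre
def separate_hydrograph(q_total, c_stream, c_event, c_pre):
    """Return the (event water, pre-event water) components of total discharge."""
    f_event = (c_stream - c_pre) / (c_event - c_pre)   # fraction of event water
    q_event = f_event * q_total
    return q_event, q_total - q_event

# Hypothetical values: discharge in m3/s, conservative tracer (e.g. silica) in mg/l
q_event, q_pre = separate_hydrograph(q_total=2.4, c_stream=3.1, c_event=0.5, c_pre=5.8)
print(f"event water: {q_event:.2f} m3/s, pre-event water: {q_pre:.2f} m3/s")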

Relevance:

30.00%

Publisher:

Abstract:

Titanium alloys exhibit an excellent combination of bio-compatibility, corrosion resistance, strength and toughness. The microstructure of an alloy influences its properties, and the microstructure depends mainly on the alloying elements, the method of production, and mechanical and thermal treatments. The relationships between these variables and the final properties of the alloy are complex and non-linear in nature, which is the biggest hurdle in developing proper correlations between them by conventional methods. Therefore, artificial neural network (ANN) models were developed for treating these complex phenomena in titanium alloys.

In the present work, ANN models were used for the analysis and prediction of the correlations between process parameters, alloying elements, microstructural features, beta transus temperature and mechanical properties in titanium alloys. Sensitivity analyses of the trained neural network models were carried out, resulting in a better understanding of the relationships between inputs and outputs. The model predictions and the analysis agree well with the experimental results. The simulation results show that the average output-prediction error of the models is less than 5% of the prediction range in more than 95% of the cases, which is quite acceptable for all metallurgical purposes.
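
As a sketch of the kind of workflow described in this abstract, the snippet below fits a small feed-forward network to synthetic data and probes input sensitivity by perturbing one input at a time (Python with scikit-learn). The features, data and perturbation size are placeholders, not the authors' dataset or sensitivity method.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for (alloying elements, process parameters) -> mechanical property
X = rng.uniform(size=(200, 4))                        # e.g. Al, V, O content, ageing temperature
y = 800 + 300 * X[:, 0] + 150 * X[:, 1] ** 2 + 20 * rng.normal(size=200)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# Crude sensitivity analysis: change in prediction for a small perturbation of each input
base = X.mean(axis=0, keepdims=True)
for i in range(X.shape[1]):
    x_hi = base.copy(); x_hi[0, i] += 0.05
    delta = model.predict(scaler.transform(x_hi)) - model.predict(scaler.transform(base))
    print(f"input {i}: d(output) ~ {delta[0]:+.1f}")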

Relevance:

30.00%

Publisher:

Abstract:

In many CCTV and sensor-network-based intelligent surveillance systems, a number of attributes or criteria are used to individually evaluate the degree of potential threat posed by a suspect. The outcomes for these attributes generally come from analytical algorithms operating on data that are often pervaded with uncertainty and incompleteness. As a result, such individual threat evaluations are often inconsistent, and they can change as time elapses. Integrating heterogeneous threat evaluations with temporal influence to obtain a better overall evaluation is therefore a challenging issue, and so far it has rarely been considered by existing event-reasoning frameworks under uncertainty in sensor-network-based surveillance. In this paper, we first propose a weighted aggregation operator based on a set of principles that constrain the fusion of individual threat evaluations. Then, we propose a method to integrate the temporal influence on threat evaluation changes. Finally, we demonstrate the usefulness of our system with a decision-support event-modelling framework using an airport security surveillance scenario.
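
A minimal sketch of the two ingredients described above, a weighted fusion of per-attribute threat scores and an exponential down-weighting of older evaluations, is given below (Python). The attributes, weights, scores and half-life are illustrative assumptions, not the operator defined in the paper.

import math

def aggregate_threat(evaluations, weights, now, half_life=60.0):
    """Fuse per-attribute (timestamp, score in [0, 1]) evaluations into one overall score.

    Older evaluations are discounted exponentially; attribute weights express the
    relative reliability of each analytical algorithm."""
    fused, total_w = 0.0, 0.0
    for attr, (t, score) in evaluations.items():
        decay = math.exp(-math.log(2) * (now - t) / half_life)   # halves every half_life seconds
        w = weights[attr] * decay
        fused += w * score
        total_w += w
    return fused / total_w if total_w else 0.0

evals = {"loitering": (100.0, 0.8), "abandoned_bag": (40.0, 0.9), "watchlist_match": (115.0, 0.2)}
weights = {"loitering": 0.5, "abandoned_bag": 1.0, "watchlist_match": 0.8}
print(aggregate_threat(evals, weights, now=120.0))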

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study is to provide an alternative modelling approach, an artificial neural network (ANN) model, to predict the compositional viscosity of binary mixtures of room-temperature ionic liquids (ILs) [Cn-mim][NTf2], with n = 4, 6, 8, 10, in methanol and ethanol over the entire range of molar fraction and a broad range of temperatures from T = 293.0 K to 328.0 K. The results show that the proposed ANN model predicts compositional viscosity successfully and with highly improved accuracy, and also show its potential to be extensively utilized to predict compositional viscosity over a wide range of temperatures and for more complex compositions, i.e., more complex intermolecular interactions between components, for which it would be hard or impossible to establish an analytical model. © 2010 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: The free fatty acid receptors (FFAs), including FFA1 (orphan name: GPR40), FFA2 (GPR43) and FFA3 (GPR41) are G protein-coupled receptors (GPCRs) involved in energy and metabolic homeostasis. Understanding the structural basis of ligand binding at FFAs is an essential step toward designing potent and selective small molecule modulators.

RESULTS: We analyse earlier homology models of FFAs in light of the newly published FFA1 crystal structure co-crystallized with TAK-875, an ago-allosteric ligand, focusing on the architecture of the extracellular binding cavity and agonist-receptor interactions. The previous low-resolution homology models of FFAs were helpful in highlighting the location of the ligand binding site and the key residues for ligand anchoring. However, homology models were not accurate in establishing the nature of all ligand-receptor contacts and the precise ligand-binding mode. From analysis of structural models and mutagenesis, it appears that the position of helices 3, 4 and 5 is crucial in ligand docking. The FFA1-based homology models of FFA2 and FFA3 were constructed and used to compare the FFA subtypes. From docking studies we propose an alternative binding mode for orthosteric agonists at FFA1 and FFA2, involving the interhelical space between helices 4 and 5. This binding mode can explain mutagenesis results for residues at positions 4.56 and 5.42. The novel FFAs structural models highlight higher aromaticity of the FFA2 binding cavity and higher hydrophilicity of the FFA3 binding cavity. The role of the residues at the second extracellular loop used in mutagenesis is reanalysed. The third positively-charged residue in the binding cavity of FFAs, located in helix 2, is identified and predicted to coordinate allosteric modulators.

CONCLUSIONS: The novel structural models of FFAs provide information on specific modes of ligand binding at FFA subtypes and new suggestions for mutagenesis and ligand modification, guiding the development of novel orthosteric and allosteric chemical probes to validate the importance of FFAs in metabolic and inflammatory conditions. Using our FFA homology modelling experience, a strategy to model a GPCR, which is phylogenetically distant from GPCRs with the available crystal structures, is discussed.

Relevance:

30.00%

Publisher:

Abstract:

Bridge construction responds to the need for environmentally friendly design of motorways and facilitates the passage through sensitive natural areas and the bypassing of urban areas. However, according to numerous research studies, bridge construction presents substantial budget overruns. Therefore, it is necessary early in the planning process for the decision makers to have reliable estimates of the final cost based on previously constructed projects. At the same time, the current European financial crisis reduces the available capital for investments and financial institutions are even less willing to finance transportation infrastructure. Consequently, it is even more necessary today to estimate the budget of high-cost construction projects, such as road bridges, with reasonable accuracy, in order for the state funds to be invested with lower risk and the projects to be designed with the highest possible efficiency. In this paper, a Bill-of-Quantities (BoQ) estimation tool for road bridges is developed in order to support the decisions made at the preliminary planning and design stages of highways. Specifically, a Feed-Forward Artificial Neural Network (ANN) with a hidden layer of 10 neurons is trained to predict the superstructure material quantities (concrete, pre-stressed steel and reinforcing steel) using the width of the deck, the adjusted length of span or cantilever and the type of the bridge as input variables. The training dataset includes actual data from 68 recently constructed concrete motorway bridges in Greece. According to the relevant metrics, the developed model captures very well the complex interrelations in the dataset and demonstrates strong generalisation capability. Furthermore, it outperforms the linear regression models developed for the same dataset. Therefore, the proposed cost estimation model stands as a useful and reliable tool for the construction industry as it enables planners to reach informed decisions for technical and economic planning of concrete bridge projects from their early implementation stages.
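
The abstract fixes the network architecture (one hidden layer of 10 neurons) and the three input variables; the sketch below instantiates that architecture on synthetic data with scikit-learn. The quantity formulas and numbers are placeholders, not the Greek bridge dataset.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic stand-in: [deck width (m), adjusted span length (m), bridge type code]
X = np.column_stack([rng.uniform(10, 15, 300), rng.uniform(20, 45, 300), rng.integers(0, 3, 300)])
# Synthetic superstructure quantities: [concrete (m3), prestressing steel (t), reinforcing steel (t)]
Y = np.column_stack([8 * X[:, 0] * X[:, 1], 0.3 * X[:, 1] ** 1.5, 0.11 * X[:, 0] * X[:, 1]])
Y *= 1 + 0.05 * rng.normal(size=Y.shape)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=20000, random_state=0))
model.fit(X, Y)
print(model.predict([[12.0, 35.0, 1]]))   # predicted quantities for one hypothetical bridge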

Relevance:

30.00%

Publisher:

Abstract:

In the last decade, many side-channel attacks have been published in the academic literature detailing how to efficiently extract secret keys by mounting various attacks, such as differential or correlation power analysis, on cryptosystems. Among the most efficient and widely utilized leakage models involved in these attacks are the Hamming weight and Hamming distance models, which give a simple, yet effective, approximation of the power consumption of many real-world systems. These leakage models reflect the number of bits switching, which is assumed to be proportional to the power consumption. However, the actual change in power consumption in a circuit is unlikely to be exactly of that form. We therefore propose a non-linear leakage model, obtained by mapping the existing leakage model through a transform function, so that the changing power consumption is described more precisely and the attack efficiency can be improved considerably. This has the advantage of utilising a non-linear power model while retaining the simplicity of the Hamming weight or distance models. A modified attack architecture is then suggested to yield the correct key efficiently in practice. Finally, an empirical comparison of the attack results is presented.
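
A minimal sketch of the idea, replacing the linear Hamming-weight leakage with a transformed version before computing the correlation used in CPA, is given below (Python/NumPy). The transform, the simulated traces and the exponent 1.3 are illustrative assumptions, not the function proposed in the paper.

import numpy as np

def hamming_weight(values):
    return np.array([bin(v).count("1") for v in values])

def cpa_correlation(traces, intermediates, transform=lambda hw: hw):
    """Pearson correlation of a (possibly non-linear) leakage model against every trace sample."""
    model = transform(hamming_weight(intermediates)).astype(float)
    model -= model.mean()
    t = traces - traces.mean(axis=0)
    return (t * model[:, None]).sum(axis=0) / (
        np.sqrt((t ** 2).sum(axis=0)) * np.sqrt((model ** 2).sum()))

rng = np.random.default_rng(0)
inter = rng.integers(0, 256, size=2000)                        # hypothetical S-box outputs
leak = hamming_weight(inter) ** 1.3                            # assumed non-linear device leakage
traces = leak[:, None] + 2.0 * rng.normal(size=(2000, 50))     # noisy simulated traces

print(np.max(cpa_correlation(traces, inter)))                               # plain Hamming-weight model
print(np.max(cpa_correlation(traces, inter, transform=lambda h: h ** 1.3))) # transformed leakage model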

Relevance:

30.00%

Publisher:

Abstract:

Perfect information is seldom available to man or machines due to uncertainties inherent in real world problems. Uncertainties in geographic information systems (GIS) stem from either vague/ambiguous or imprecise/inaccurate/incomplete information and it is necessary for GIS to develop tools and techniques to manage these uncertainties. There is a widespread agreement in the GIS community that although GIS has the potential to support a wide range of spatial data analysis problems, this potential is often hindered by the lack of consistency and uniformity. Uncertainties come in many shapes and forms, and processing uncertain spatial data requires a practical taxonomy to aid decision makers in choosing the most suitable data modeling and analysis method. In this paper, we: (1) review important developments in handling uncertainties when working with spatial data and GIS applications; (2) propose a taxonomy of models for dealing with uncertainties in GIS; and (3) identify current challenges and future research directions in spatial data analysis and GIS for managing uncertainties.

Relevance:

30.00%

Publisher:

Abstract:

Generating timetables for an institution is a challenging and time-consuming task owing to the different demands on the overall structure of the timetable. In this paper, a new hybrid method combining the great deluge and artificial bee colony algorithms (INMGD-ABC) is proposed to address the university timetabling problem. The artificial bee colony algorithm (ABC) is a population-based method that has been introduced in recent years and has proven successful in solving various optimization problems effectively. However, as with many search-based approaches, there are weaknesses in its exploration and exploitation abilities, which tend to induce slow convergence of the overall search process. Therefore, hybridization is proposed to compensate for the identified weaknesses of the ABC. In addition, inspired by imperialist competitive algorithms, an assimilation policy is implemented in order to improve the global exploration ability of the ABC algorithm, and the Nelder–Mead simplex search method is incorporated within the great deluge algorithm (NMGD) with the aim of enhancing the exploitation ability of the hybrid method in fine-tuning the problem search region. The proposed method is tested on two different benchmark datasets, i.e. examination and course timetabling datasets. A statistical t-test shows that the proposed approach performs significantly better than the basic ABC algorithm. Finally, the experimental results are compared against state-of-the-art methods from the literature; the results obtained are competitive and, in certain cases, achieve some of the current best results in the literature.
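
For readers unfamiliar with the great deluge component, a minimal sketch of its acceptance rule (accept any candidate that improves the current solution or stays within a steadily lowered "water level") is shown below (Python). The cost function, neighbourhood move and decay schedule are toy placeholders, not the INMGD-ABC implementation.

import random

def great_deluge(initial, cost, neighbour, iterations=10000, decay=None):
    """Minimise cost(): accept a candidate if it is no worse than the current solution
    or no worse than the current water level, which is lowered every iteration."""
    current = best = initial
    level = cost(initial)
    if decay is None:
        decay = level / iterations        # linear lowering of the water level
    for _ in range(iterations):
        candidate = neighbour(current)
        if cost(candidate) <= cost(current) or cost(candidate) <= level:
            current = candidate
            if cost(current) < cost(best):
                best = current
        level -= decay
    return best

# Toy usage: minimise a one-dimensional integer cost function
cost = lambda x: (x - 42) ** 2
neighbour = lambda x: x + random.choice([-3, -2, -1, 1, 2, 3])
print(great_deluge(0, cost, neighbour))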

Relevance:

30.00%

Publisher:

Abstract:

Time-domain modelling of single-reed woodwind instruments usually involves a lumped model of the excitation mechanism. The parameters of this lumped model have to be estimated for use in numerical simulations. Several attempts have been made to estimate these parameters, including observations of the mechanics of isolated reeds, measurements under artificial or real playing conditions, and estimations based on numerical simulations. In this study an optimisation routine is presented that can estimate reed-model parameters, given the pressure and flow signals in the mouthpiece. The method is validated by testing it on a series of numerically synthesised data. In order to incorporate the actions of the player in the parameter estimation process, the optimisation routine has to be applied to signals obtained under real playing conditions. The estimated parameters can then be used to resynthesise the pressure and flow signals in the mouthpiece. In the case of measured data, as opposed to numerically synthesised data, special care needs to be taken when modelling the bore of the instrument. In fact, a careful study of various experimental datasets revealed that, for resynthesis to work, the bore termination impedance has to be known very precisely from theory. An example is given in which the above requirement is satisfied and the resynthesised signals closely match the original signals generated by the player.
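
A minimal sketch of this kind of parameter estimation, fitting the parameters of a simulated signal to a measured one by least squares, is given below (Python with SciPy). The decaying-oscillation model and the synthesised "measured" signal are generic placeholders rather than the actual reed model or mouthpiece data.

import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 0.01, 500)

def simulate(params, t):
    """Stand-in for a lumped excitation model: a decaying oscillation whose amplitude,
    frequency and damping play the role of the reed-model parameters."""
    amplitude, freq, decay = params
    return amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

# "Measured" signal synthesised with known parameters plus noise
true_params = (1.0, 1200.0, 80.0)
measured = simulate(true_params, t) + 0.02 * np.random.default_rng(0).normal(size=t.size)

fit = least_squares(lambda p: simulate(p, t) - measured, x0=[0.8, 1180.0, 50.0])
print(fit.x)   # should recover values close to (1.0, 1200.0, 80.0)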

Relevance:

30.00%

Publisher:

Abstract:

Statistical techniques are fundamental in science, and linear regression analysis is perhaps one of the most widely used methodologies. It is well known from the literature that, under certain conditions, linear regression is an extremely powerful statistical tool. Unfortunately, in practice some of those conditions are rarely satisfied and the regression models become ill-posed, making the application of traditional estimation methods unfeasible. This work presents some contributions to maximum entropy theory for the estimation of ill-posed models, in particular for the estimation of linear regression models with small samples affected by collinearity and outliers. The research is developed along three strands: the estimation of technical efficiency with state-contingent production frontiers, the estimation of the ridge parameter in ridge regression and, finally, new developments in maximum entropy estimation. In the estimation of technical efficiency with state-contingent production frontiers, the work carried out shows that maximum entropy estimators perform better than the maximum likelihood estimator. This good performance is notable in models with few observations per state and in models with a large number of states, which are commonly affected by collinearity. It is expected that the use of maximum entropy estimators will contribute to the much-desired increase in empirical work with these production frontiers. In ridge regression the greatest challenge is the estimation of the ridge parameter. Although numerous procedures are available in the literature, none outperforms all the others. In this work a new estimator of the ridge parameter is proposed, which combines ridge trace analysis with maximum entropy estimation. The results obtained in the simulation studies suggest that this new estimator is one of the best procedures in the literature for estimating the ridge parameter. The Leuven maximum entropy estimator is based on the least squares method, on Shannon entropy and on concepts from quantum electrodynamics. This estimator overcomes the main criticism levelled at the generalized maximum entropy estimator, since it dispenses with the supports for the parameters and errors of the regression model. In this work, new contributions to maximum entropy theory for the estimation of ill-posed models are presented, based on the Leuven maximum entropy estimator, information theory and robust regression. The estimators developed show good performance in linear regression models with small samples affected by collinearity and outliers. Finally, some computational code for maximum entropy estimation is presented, thus contributing to an increase in the scarce computational resources currently available.
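
For context, the sketch below shows the classical ridge estimator in which the ridge parameter k appears, together with a simple ridge trace over a grid of k values (Python/NumPy). The data are synthetic and the grid is arbitrary; this is the standard construction, not the maximum-entropy-based estimator proposed in the thesis.

import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 4
X = rng.normal(size=(n, p))
X[:, 3] = X[:, 2] + 0.01 * rng.normal(size=n)      # near-collinear columns
y = X @ np.array([1.0, -2.0, 3.0, 0.0]) + rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator: beta(k) = (X'X + k I)^(-1) X'y."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

# Ridge trace: coefficient paths as k grows, commonly inspected when choosing k
for k in [0.0, 0.01, 0.1, 1.0, 10.0]:
    print(k, np.round(ridge(X, y, k), 3))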

Relevance:

30.00%

Publisher:

Abstract:

The exponential growth of the world population has led to an increase in settlements, often located in areas prone to natural disasters, including earthquakes. Consequently, despite the important advances in the field of natural catastrophe modelling and risk mitigation actions, the overall human losses have continued to increase and unprecedented economic losses have been registered. In the research work presented herein, various areas of earthquake engineering and seismology are thoroughly investigated, and a case study application for mainland Portugal is performed. Seismic risk assessment is a critical link in the reduction of casualties and damages due to earthquakes. Recognition of this relation has led to a rapid rise in demand for accurate, reliable and flexible numerical tools and software. In the present work, an open-source platform for seismic hazard and risk assessment is developed. This software is capable of computing the distribution of losses or damage for an earthquake scenario (deterministic event-based) or earthquake losses due to all the possible seismic events that might occur within a region for a given interval of time (probabilistic event-based). This effort has been developed following an open and transparent philosophy, and it is therefore available to any individual or institution. The estimation of the seismic risk depends mainly on three components: seismic hazard, exposure and vulnerability. The latter component assumes special importance, as by intervening with appropriate retrofitting solutions it may be possible to directly decrease the seismic risk. The employment of analytical methodologies is fundamental in the assessment of structural vulnerability, particularly in regions where post-earthquake building damage data might not be available. Several common methodologies are investigated, and conclusions are drawn regarding the method that can provide an optimal balance between accuracy and computational effort. In addition, a simplified approach based on the displacement-based earthquake loss assessment (DBELA) is proposed, which allows for the rapid estimation of fragility curves, considering a wide spectrum of uncertainties. A novel vulnerability model for the reinforced concrete building stock in Portugal is proposed in this work, using statistical information collected from hundreds of real buildings. An analytical approach based on nonlinear time history analysis is adopted and the impact of a set of key parameters investigated, including the damage state criteria and the chosen intensity measure type. A comprehensive review of previous studies that contributed to the understanding of the seismic hazard and risk for Portugal is presented. An existing seismic source model was employed with recently proposed attenuation models to calculate probabilistic seismic hazard throughout the territory. The latter results are combined with information from the 2011 Building Census and the aforementioned vulnerability model to estimate economic loss maps for a return period of 475 years. These losses are disaggregated across the different building typologies and conclusions are drawn regarding the types of construction most vulnerable to seismic activity.
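
As a rough illustration of how hazard and vulnerability combine in a loss estimate, the sketch below integrates a hypothetical hazard curve with a hypothetical vulnerability function to obtain an average annual loss ratio (Python/NumPy). All numbers are invented for illustration and the discretisation is deliberately simple; this is not the platform or the Portuguese model described in the thesis.

import numpy as np

# Hypothetical hazard curve: annual rate of exceedance of each PGA level (g)
pga = np.array([0.05, 0.1, 0.2, 0.3, 0.4, 0.6])
rate = np.array([0.20, 0.08, 0.02, 0.008, 0.003, 0.0005])

# Hypothetical vulnerability function: mean loss ratio at each PGA level
loss_ratio = np.array([0.00, 0.02, 0.10, 0.25, 0.45, 0.80])

# Average annual loss ratio: occurrence rate of each intensity bin times the mean
# loss ratio at the bin midpoint (a simple discretisation of the risk integral)
occurrence = rate[:-1] - rate[1:]
mid_loss = 0.5 * (loss_ratio[:-1] + loss_ratio[1:])
aal = np.sum(occurrence * mid_loss) + rate[-1] * loss_ratio[-1]
print(f"average annual loss ratio ~ {aal:.4f}")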

Relevance:

30.00%

Publisher:

Abstract:

Despite recent technological innovations, the transport sector continues to have significant impacts on the economy and the environment. Indeed, success in reducing emissions in this sector has fallen short of what is desirable. This is due to different factors, such as urban sprawl and the existence of several barriers to the market penetration of cleaner technologies. Consequently, the "Europe 2020" strategy highlights the need to improve the efficiency of use of the existing road infrastructure. In this context, the main objective of this work is to improve the understanding of how an appropriate route choice can contribute to reducing emissions under different spatial and temporal circumstances. At the same time, it aims to evaluate different traffic management strategies, namely their potential in terms of performance and of energy and environmental efficiency. The integration of empirical and analytical methods to assess the impact of different traffic optimization strategies on CO2 and local pollutant emissions is one of the main contributions of this work. This thesis is divided into two main parts. The first, predominantly empirical, was based on the use of vehicles equipped with a GPS data logger to collect the driving dynamics data needed to calculate emissions. Approximately 13200 km were driven on several routes with different scales and characteristics: an urban area (Aveiro), a metropolitan area (Hampton Roads, VA) and an intercity corridor (Porto-Aveiro). The second part, predominantly analytical, was based on the application of an integrated traffic and emissions simulation platform. Using this platform, performance functions were developed for several segments of the studied networks, which in turn were applied in traffic assignment models. The results from both perspectives showed that fuel consumption and emissions can be significantly reduced through appropriate route choices and advanced traffic management systems. Empirically, it was shown that selecting an appropriate route can contribute to a significant reduction in emissions: potential reductions of up to 25% in CO2 emissions and up to 60% in local pollutants were identified. Through the application of traffic models, it was shown that traffic-related environmental costs can be reduced significantly (by up to 30%) by changing the distribution of flows along a corridor with four alternative routes. However, despite the positive results regarding the potential for emission reductions based on appropriate route choices, some trade-offs and constraints were identified that should be considered in future eco-routing systems. Among these constraints, it should be noted that: i) minimizing different pollutants may imply different routing strategies; ii) minimizing pollutant emissions frequently involves choosing urban routes (in densely populated areas); iii) for higher penetration levels of eco-routing devices, the system-wide environmental impacts may be greater than if drivers were guided by traditional devices focused on minimizing travel time.
This work showed that traffic management strategies aimed at minimizing CO2 emissions are compatible with minimizing travel time. On the other hand, minimizing local pollutants can lead to a considerable increase in travel time. However, given the downward trend in the emission factors of local pollutants, these conflicting objectives are expected to diminish in the medium term. The methodology developed has a high potential for application, whether through mobile devices, infrastructure-to-vehicle communication systems or other advanced traffic management systems.
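
As an illustration of the link performance functions used in the traffic assignment step, the sketch below implements the classical BPR volume-delay form together with a crude, clearly arbitrary emissions proxy (Python). The parameters and the emission factor are placeholders, not the functions calibrated in this work.

def bpr_travel_time(volume, capacity, t_free, alpha=0.15, beta=4.0):
    """Classical BPR volume-delay function: travel time grows with the volume/capacity ratio."""
    return t_free * (1.0 + alpha * (volume / capacity) ** beta)

# Hypothetical link: free-flow travel time 6 min, capacity 1800 veh/h
for v in (600, 1200, 1800, 2200):
    t = bpr_travel_time(v, 1800, 6.0)
    co2 = v * t * 2.5          # arbitrary placeholder emission factor (g CO2 per veh-minute)
    print(f"volume {v} veh/h -> travel time {t:.1f} min, CO2 proxy ~ {co2 / 1000:.1f} kg/h")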

Relevance:

30.00%

Publisher:

Abstract:

The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a trend that has also been observed in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the task of text mining that intends to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget. We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.

Relevance:

30.00%

Publisher:

Abstract:

The main objective of this work was to monitor a set of physical-chemical properties of heavy oil process streams by nuclear magnetic resonance spectroscopy, in order to propose an analysis procedure and online data processing for process control. Different statistical methods, which relate the results obtained by nuclear magnetic resonance spectroscopy to the results obtained by the conventional standard methods during the characterization of the different streams, have been implemented in order to develop models for predicting these same properties. Real-time knowledge of these physical-chemical properties of petroleum fractions is very important for enhancing refinery operations, ensuring technically, economically and environmentally sound operation. The first part of this work involved the determination of several physical-chemical properties, at the Matosinhos refinery, following standard methods used to evaluate and characterize light vacuum gas oil, heavy vacuum gas oil and fuel oil fractions. Kinematic viscosity, density, sulfur content, flash point, carbon residue, P-value and atmospheric and vacuum distillations were the properties analysed. Besides the analysis using the standard methods, the same samples were analysed by nuclear magnetic resonance spectroscopy. The second part of this work concerned the application of multivariate statistical methods that correlate the physical-chemical properties with the quantitative information acquired by nuclear magnetic resonance spectroscopy. Several methods were applied, including principal component analysis, principal component regression, partial least squares and artificial neural networks. Principal component analysis was used to reduce the number of predictive variables and to transform them into new variables, the principal components, which were then used as inputs to the principal component regression and artificial neural network models. For the partial least squares model, the original data were used as input. Taking into account the performance of the developed models, assessed through selected statistical performance indexes, it was possible to conclude that principal component regression led to the worst performance, while better results were achieved with the partial least squares and artificial neural network models; it was with the artificial neural network model that the best predictions were obtained for almost all of the properties analysed. Based on these results, it was possible to conclude that nuclear magnetic resonance spectroscopy combined with multivariate statistical methods can be used to predict physical-chemical properties of petroleum fractions, and this technique can be considered a potential alternative to the conventional standard methods, having produced very promising results.
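
A minimal sketch of the principal component regression and partial least squares steps described above, applied to synthetic stand-ins for the NMR spectra and a measured property, is given below (Python with scikit-learn). The data and the number of components are arbitrary; this is not the refinery dataset or the final models.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for NMR spectra (samples x spectral points) and one measured property
spectra = rng.normal(size=(60, 200))
property_y = spectra[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)

# Principal component regression: PCA scores feed a linear model
pcr = make_pipeline(PCA(n_components=5), LinearRegression()).fit(spectra, property_y)
# Partial least squares regression on the original spectra
pls = PLSRegression(n_components=5).fit(spectra, property_y)

print("PCR R2:", pcr.score(spectra, property_y))
print("PLS R2:", pls.score(spectra, property_y))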