971 results for Off-line TMAH-GC-MS
Abstract:
Doctoral program: Cybernetics and Telecommunications
Abstract:
In this thesis, numerical methods for determining the eigenfunctions, their adjoints and the corresponding eigenvalues of the two-group neutron diffusion equations representing any heterogeneous system are investigated. First, the classical power iteration method is modified so that modes higher than the fundamental one can be calculated. Thereafter, the Explicitly-Restarted Arnoldi method, belonging to the class of Krylov subspace methods, is considered. Although the modified power iteration method is computationally expensive, its main advantage is its robustness: the method always converges to the desired eigenfunctions without the user having to set any parameter of the algorithm. The Arnoldi method, on the other hand, requires some parameters to be defined by the user, but is very efficient at calculating the eigenfunctions of large sparse systems of equations with minimal computational effort. These methods are then used for the off-line analysis of the stability of Boiling Water Reactors. Since several oscillation modes are usually excited (global and regional oscillations) when unstable conditions are encountered, characterizing the stability of the reactor with, for instance, the Decay Ratio as a stability indicator may be difficult unless the contributions of the individual modes are separated from each other. Such a modal decomposition is applied to a stability test performed at the Swedish Ringhals-1 unit in September 2002, after the Arnoldi method was used to pre-calculate the different eigenmodes of the neutron flux throughout the reactor. The modal decomposition clearly demonstrates the excitation of both the global and the regional oscillations. Furthermore, these oscillations are found to be intermittent, with a time-varying phase shift between the first and second azimuthal modes.
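As an illustration of the two algorithm families compared above, the following sketch (Python, with SciPy's ARPACK routine playing the role of the restarted Arnoldi solver) computes a fundamental and a higher mode of a stand-in operator. The tridiagonal matrix and the Wielandt-style deflation step are textbook placeholders, not the thesis' actual two-group operator or its specific modification of power iteration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs  # implicitly restarted Arnoldi (ARPACK)

def power_iteration(M, tol=1e-8, max_iter=50_000):
    """Dominant eigenpair of M by plain power iteration."""
    x = np.random.default_rng(0).random(M.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = M @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

# Stand-in operator: a 1-D diffusion-like tridiagonal matrix.
n = 50
main, off = np.full(n, 2.0), np.full(n - 1, -1.0)
M = sp.diags([off, main, off], [-1, 0, 1], format="csc")

lam1, phi1 = power_iteration(M)
# Wielandt-style deflation: subtract the fundamental mode, then iterate
# again so the next run converges on the next-highest mode.
M_defl = M.toarray() - lam1 * np.outer(phi1, phi1) / (phi1 @ phi1)
lam2, phi2 = power_iteration(M_defl)

# The Arnoldi method returns several modes at once, at much lower cost
# for large sparse systems (here via ARPACK's restarted implementation).
vals, vecs = eigs(M, k=2, which="LM")
```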
Abstract:
This thesis deals with Visual Servoing and the disciplines closely connected to it: projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques, Image-Based Visual Servoing (IBVS). In Image-Based Visual Servoing the robot is driven by an on-line feedback control loop closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system whose single camera is mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and goal system configurations: the Cartesian motion of the robot is thus generated by visual information alone. However, executing a positioning control task by IBVS is not straightforward, because singularity problems may occur and local minima may be reached where the current image is very close to the target one but the 3D positioning task is far from being fulfilled; this happens in particular for large camera displacements, when the initial and goal target views are markedly different. To overcome the singularity and local-minima drawbacks while retaining the good robustness of IBVS with respect to modelling and camera calibration errors, a suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory being a path plus a time law). The generated image-plane paths must be feasible, i.e. compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems. In addition, the planned image trajectories must generate camera velocity screws which are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion-planning algorithm can be devised to generate feasible image-plane trajectories. Since the image paths are generated off-line, the planning parameters can also be tuned so as to keep the target inside the camera field of view even when, in some unfortunate cases, the feature target points would leave the camera images because of the 3D robot motions. To test the validity of the proposed approach, both experimental and simulation results are reported, also taking into account the influence of noise on the path-planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and the feasibility of the proposed approach.
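For context, here is a minimal sketch of the classical IBVS loop that the planning strategy above builds on: each tracked point contributes a 2x6 interaction matrix, and the camera velocity screw is obtained from the stacked pseudo-inverse as v = -λL⁺e. The point coordinates, depths and gain below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 image Jacobian of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1.0 + x**2), y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y**2, -x * y,        -x],
    ])

def ibvs_velocity(points, goals, depths, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) driving the tracked
    features toward their goal image positions."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(goals)).ravel()  # feature error s - s*
    return -gain * np.linalg.pinv(L) @ e   # pseudo-inverse of the stacked Jacobian

# Example: four target points, slightly displaced from their goal view.
goals = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
points = [(gx + 0.05, gy + 0.02) for gx, gy in goals]
v = ibvs_velocity(points, goals, depths=[1.0] * 4)
```

The singularity and local-minima problems discussed in the abstract arise precisely when the stacked matrix L loses rank or when the pseudo-inverse drives the error to a stationary point that does not correspond to the goal pose.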
Abstract:
[EN] Automatic facial analysis abilities are commonly integrated in a system by a previous off-line learning stage. In this paper we argue that a facial analysis system could improve its facial analysis capabilities on the basis of its own experience, similarly to the way a biological system, i.e. the human system, does throughout the years. The approach described, focused on gender classification, updates its knowledge according to the classification results. The gender experiments presented suggest that this approach is promising, even though only a short simulation of what for humans would take years of acquisition experience was performed.
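A hedged sketch of the experience-based updating idea (the nearest-centroid model and the confidence rule are illustrative stand-ins, not the paper's actual classifiers): the system is seeded by an off-line stage and folds confidently self-labelled samples back into its stored knowledge.

```python
import numpy as np

class ExperientialClassifier:
    def __init__(self, centroids):
        # centroids: dict label -> initial feature centroid (off-line stage)
        self.centroids = {k: np.asarray(v, float) for k, v in centroids.items()}
        self.counts = {k: 1 for k in centroids}

    def classify(self, x, margin=0.2):
        x = np.asarray(x, float)
        d = {k: np.linalg.norm(x - c) for k, c in self.centroids.items()}
        best, second = sorted(d, key=d.get)[:2]
        # Update only on confident decisions, so classification errors are
        # less likely to pollute the accumulated knowledge.
        if d[second] - d[best] > margin:
            n = self.counts[best] + 1
            self.centroids[best] += (x - self.centroids[best]) / n
            self.counts[best] = n
        return best

clf = ExperientialClassifier({"male": [0.0, 0.0], "female": [1.0, 1.0]})
print(clf.classify([0.1, -0.1]))  # confident -> also refines the centroid
```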
Abstract:
Background: It has been well known, since the pioneering observation by Jenkins and Dallenbach (Am J Psychol 1924;35:605-12), that a period of sleep provides a specific advantage for the consolidation of newly acquired information. Recent research on the possible enhancing effect of sleep on memory consolidation has focused on procedural memory (part of the non-declarative memory system, according to Squire's taxonomy), as it appears to be the memory sub-system for which the available data are most consistent. The acquisition of a procedural skill follows a typical time course, consisting of substantial practice-dependent learning followed by a slow, off-line improvement. Sleep seems to play a critical role in promoting this slow learning process, by consolidating memory traces and making them more stable and resistant to interference. If sleep is critical for the consolidation of a procedural skill, then an alteration of the organization of sleep should result in a less effective consolidation, and therefore in reduced memory performance. Such an alteration can be experimentally induced, as in a deprivation protocol, or it can be observed naturally in some sleep disorders, for example in narcolepsy. In this research, a group of narcoleptic patients and a group of matched healthy controls were tested on two different procedural abilities, in order to better define the size and time course of the contribution of sleep to memory consolidation. Experimental Procedure: A Texture Discrimination Task (Karni & Sagi, Nature 1993;365:250-2) and a Finger Tapping Task (Walker et al., Neuron 2002;35:205-11) were administered to two independent samples of drug-naive patients with first-diagnosed narcolepsy with cataplexy (International Classification of Sleep Disorders, 2nd ed., 2005) and two samples of matched healthy controls. In the Texture Discrimination Task, subjects (n=22) had to learn to recognize a complex visual array on the screen of a personal computer, while in the Finger Tapping Task (n=14) they had to press a numeric sequence on a standard keyboard as quickly and accurately as possible. Three experimental sessions were scheduled for each participant: a training session, a first retrieval session the next day, and a second retrieval session one week later. To test for possible circadian effects on learning, half of the subjects performed the training session at 11 a.m. and half at 5 p.m. Performance in the training session was taken as a measure of practice-dependent learning, while performance in the subsequent sessions was taken as a measure of the consolidation level achieved after one and seven nights of sleep, respectively. Between the training and the first retrieval session, all participants spent a night in a sleep laboratory and underwent a polygraphic recording. Results and Discussion: In both experimental tasks, while healthy controls improved their performance after one night of undisturbed sleep, narcoleptic patients showed no statistically significant improvement. Despite this, at the second retrieval session both healthy controls and narcoleptics had improved their skills. Narcoleptics improved relatively more than controls between the first and second retrieval sessions in the texture discrimination ability, while their performance remained considerably lower in the motor (FTT) ability.
Sleep parameters showed a greater fragmentation of sleep in the pathological group, and a different distribution of Stage 1 and Stage 2 NREM sleep in the two groups, consistent with the hypothesis of a lower consolidating power of sleep in narcoleptic patients. Moreover, in healthy subjects the REM density of the first part of the night correlated significantly with the amount of improvement achieved at the first retrieval session in the TDT task, supporting the hypothesis that REM sleep plays an important role in the consolidation of visuo-perceptual skills. Taken together, these results argue for a slower, rather than lower, consolidation of procedural skills in narcoleptic patients. Finally, an explanation of the results is proposed, based on the possible role of sleep in counteracting the interference produced by task repetition.
Abstract:
[EN] This paper does not propose a new technique for face representation or classification. Instead, the work described here investigates the evolution of an automatic system which, based on a currently common framework, and starting from an empty memory, modifies its classifiers according to experience. In the experiments we reproduce, up to a certain extent, the process of successive meetings. The results achieved, even though the number of different individuals is still small compared to off-line classifiers, are promising.
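A hedged sketch of the "successive meetings" evolution described above (the descriptor space, distance rule and threshold are illustrative assumptions, not the paper's framework): starting from an empty memory, each face descriptor is either recognized and used to refine a stored model, or enrolled as a new individual.

```python
import numpy as np

class MeetingMemory:
    def __init__(self, new_identity_thresh=0.5):
        self.gallery = {}   # identity id -> running mean descriptor
        self.counts = {}
        self.thresh = new_identity_thresh

    def meet(self, descriptor):
        """Return an identity id, enrolling a new one if nobody is close."""
        x = np.asarray(descriptor, float)
        if self.gallery:
            ident = min(self.gallery,
                        key=lambda k: np.linalg.norm(x - self.gallery[k]))
            if np.linalg.norm(x - self.gallery[ident]) < self.thresh:
                n = self.counts[ident] + 1   # known face: refine its model
                self.gallery[ident] += (x - self.gallery[ident]) / n
                self.counts[ident] = n
                return ident
        ident = len(self.gallery)            # first meeting: enrol
        self.gallery[ident] = x.copy()
        self.counts[ident] = 1
        return ident

mem = MeetingMemory()
print(mem.meet([0.0, 0.0]), mem.meet([0.1, 0.0]), mem.meet([2.0, 2.0]))
# -> 0 0 1: the second sample refines identity 0, the third enrols identity 1
```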
Abstract:
Aims: Evaluation of the ripening of two varieties of Pecorino cheese, ripened either by the traditional method in a plant or in a cave. Different ripening features were analysed in order to evaluate the cave as a possible ripening environment, with the aim of obtaining a distinctive product which could also add value to the cultural heritage of the place where it was originally manufactured. Methods and Results: The chemical-physical features of the Pecorino cheese in the two ripening environments and trials were analysed first, among them pH, weight loss and water activity. Furthermore, the microbial composition was characterized in relation to the two ripening environments, covering a variety of microbial groups: lactic acid bacteria, staphylococci, yeasts, lactococci, enterobacteria and enterococci. An additional analysis for evaluating the suitability of in-cave ripening was the determination of biogenic amines in the Pecorino cheese (2-phenylethylamine, putrescine, cadaverine, histidine, tyramine, spermine and spermidine). Further analyses tracked the evolution of the lipid profile, reporting the concentration of the free fatty acids of the cheese in relation to ripening time, environment and production. The flavour compounds of the Pecorino cheese were analysed by the SPME-GC-MS technique. The results confirmed the trend shown by the short-chain free fatty acids, i.e. the fatty acids most involved in conveying a stronger flavour to the cheese. To assess the proteolytic patterns of the above-mentioned Pecorino cheese in the two ripening environments and trials, SDS-PAGE was applied to the insoluble fraction of the cheese, and the soluble fraction was analysed with the same technique. Furthermore, isolates belonging to the various microbial groups were genotypically characterized by the ITS-PCR technique in order to identify the species. Among the lactobacilli, the species characterized were Lactobacillus brevis, Lactobacillus curvatus and Lactobacillus paraplantarum. Among the lactococci, the predominant species was Lactococcus lactis, derived from the starter used in cheese manufacturing. Among the enterococci, the predominant species were Enterococcus faecium and Enterococcus faecalis; Streptococcus thermophilus and Streptococcus macedonicus were also identified. For the staphylococci, the species identified were Staphylococcus equorum, Staphylococcus saprophyticus and Staphylococcus xylosus. Finally, a sensory analysis was carried out, on the one hand through a consumer test with untrained consumers, and on the other through a panel test with expert assessors. These tests found the cave-ripened Pecorino cheese to be more pleasant than the plant-ripened one. Conclusions: The proposed approach and the analyses carried out showed the cave to be a preferential ripening environment for Pecorino cheese, yielding a more palatable product that is also safer for consumers' health.
Abstract:
The interactions between outdoor bronzes and the environment, which lead to bronze corrosion, require a better understanding in order to design effective conservation strategies in the Cultural Heritage field. In the present work, investigations on real patinas of the outdoor monument to Vittorio Bottego (Parma, Italy) and laboratory studies on accelerated corrosion testing of inhibited (by silane-based films, with and without ceria nanoparticles) and non-inhibited quaternary bronzes are reported and discussed. In particular, a wet&dry ageing method was used both for testing the efficiency of the inhibitor and for patinating bronze coupons before applying the inhibitor. A wide range of spectroscopic techniques was used to characterize the core metal (SEM+EDS, XRF, AAS), the corroded surfaces (SEM+EDS, portable XRF, micro-Raman, ATR-IR, Py-GC-MS) and the ageing solutions (AAS). The main conclusions were: 1. The investigations on the Bottego monument confirmed the differentiation of the corrosion products as a function of the exposure geometry, already observed in previous works, further highlighting the need to take the different surface features into account when selecting conservation procedures such as the application of inhibitors (i.e. the relative Sn enrichment in unsheltered areas requires inhibitors which interact effectively not only with Cu but also with Sn). 2. The ageing (pre-patination) cycle on coupons was able to reproduce the relative Sn enrichment that actually occurs on real patinated surfaces, making the bronze specimens representative of the real support for bronze inhibitors. 3. The non-toxic silane-based inhibitors display a good protective efficiency towards pre-patinated surfaces, unlike other widely used inhibitors such as benzotriazole (BTA) and its derivatives. 4. 3-mercapto-propyl-trimethoxy-silane (PropS-SH) with added CeO2 nanoparticles generally offered better corrosion protection than PropS-SH alone.
Abstract:
Healthcare, Human Computer Interfaces (HCI), security and biometrics are the most promising application scenarios directly involved in the evolution of Body Area Networks (BANs). Both wearable devices and sensors directly integrated in garments envision a world in which each of us is supervised by an invisible assistant monitoring our health and daily-life activities. New opportunities are enabled by improvements in sensor miniaturization and in the transmission efficiency of wireless protocols, which have made it possible to integrate high computational power aboard independent, energy-autonomous, small-form-factor devices. The purposes of the applications are various: (I) data collection for off-line knowledge discovery; (II) notifying users of their activities or of dangerous situations; (III) biofeedback rehabilitation; (IV) remote alarm activation in case the subject needs assistance; (V) introduction of a more natural interaction with the surrounding computerized environment; (VI) user identification by physiological or behavioural characteristics. Telemedicine and mHealth [1] are two of the leading concepts directly related to healthcare. The ability to wear unobtrusive devices supports users' autonomy: a new sense of freedom is given to the user, supported not only psychologically but by a real improvement in safety. Furthermore, the medical community aims at introducing new devices to innovate patient treatment, in particular by extending ambulatory analysis to real-life scenarios through continuous acquisition. The wide diffusion of emerging wellness portable equipment has extended the usability of wearable devices to fitness and training, by monitoring user performance on the working task. Learning the correct execution techniques related to work, sport or music can be supported by an electronic trainer furnishing adequate aid. HCIs made real the concepts of Ubiquitous Computing, Pervasive Computing and Calm Technology, introduced in 1988 by Mark Weiser and John Seely Brown. They promote the creation of pervasive environments that enhance the human experience: context-aware, adaptive and proactive environments serve and help people by becoming sensitive and reactive to their presence, since electronics is ubiquitous and deployed everywhere. In this thesis we pay attention to the integration of all the aspects involved in the development of a BAN. Starting from the choice of sensors, we design the node, configure the radio network, implement real-time data analysis and provide feedback to the user. We present algorithms to be implemented in a wearable assistant for posture and gait analysis and to provide assistance in different walking conditions, preventing falls. Our aim, expressed by the intention to contribute to the development of non-proprietary solutions, drove us to integrate commercial and standard solutions in our devices: we used sensors available on the market, avoiding the design of specialized sensors in ASIC technologies, and employed standard radio protocols and open-source projects whenever possible. The specific contributions of the PhD research activities are presented and discussed in the following. • We designed and built several wireless sensor nodes providing both sensing and actuation capabilities, focusing on flexibility, small form factor and low power consumption. The key idea was to develop a simple and general-purpose architecture for rapid analysis, prototyping and deployment of BAN solutions.
Two different sensing units are integrated: kinematic (3D accelerometer and 3D gyroscope) and kinetic (foot-floor contact pressure forces). Two kinds of feedback were implemented: audio and vibrotactile. • Since the system built is a suitable platform for testing and measuring the features and constraints of a sensor network (radio communication, network protocols, power consumption and autonomy), we compared Bluetooth and ZigBee performance in terms of throughput and energy efficiency; field tests evaluated their usability in the fall-detection scenario. • To prove the flexibility of the architecture designed, we implemented a wearable system for human posture rehabilitation; the application was developed in conjunction with biomedical engineers, who provided the audio algorithms furnishing a biofeedback to the user about his/her stability. • We explored off-line gait analysis of the collected data, developing an algorithm to detect foot inclination in the sagittal plane during walking (a minimal sketch is given after this list). • In collaboration with the Wearable Lab – ETH, Zurich, we developed an algorithm to monitor the user in several walking conditions in which the user carries a load. The remainder of the thesis is organized as follows. Chapter I gives an overview of Body Area Networks (BANs), illustrating the relevant features of this technology and the key challenges still open; it concludes with a short list of real solutions and prototypes proposed by academic research and manufacturers. The domain of posture and gait analysis, the methodologies, and the technologies used to provide real-time feedback on detected events are illustrated in Chapter II. Chapters III and IV present the BANs developed to detect falls and to monitor gait, taking advantage of two inertial measurement units and baropodometric insoles. Chapter V reports an audio-biofeedback system to improve balance, based on information about the user's centre of mass. A walking assistant based on a KNN classifier to detect gait alterations under load carriage is described in Chapter VI.
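A minimal sketch of the sagittal-plane inclination step referenced in the list above: the pitch-rate gyroscope is integrated over time, and the angle is re-zeroed at detected foot-flat instants to contain drift. The foot-flat rule and its threshold are illustrative assumptions, not the thesis' tuned algorithm.

```python
import numpy as np

def sagittal_inclination(gyro_pitch, fs, flat_thresh=0.1):
    """gyro_pitch: pitch angular rate [rad/s]; fs: sampling rate [Hz].
    Returns the foot inclination angle [rad] over time."""
    dt = 1.0 / fs
    angle = np.zeros_like(gyro_pitch)
    for i in range(1, len(gyro_pitch)):
        if abs(gyro_pitch[i]) < flat_thresh:
            angle[i] = 0.0   # foot-flat: re-zero to contain integration drift
        else:
            angle[i] = angle[i - 1] + gyro_pitch[i] * dt
    return angle

# Example on synthetic data: a swing-like burst between two foot-flat phases.
fs = 100
t = np.arange(0, 2, 1 / fs)
rate = np.where((t > 0.5) & (t < 1.0), 2.0 * np.sin(2 * np.pi * (t - 0.5)), 0.0)
theta = sagittal_inclination(rate, fs)
```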
Abstract:
This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used to optimize Engine Control Unit strategies (as regards, for example, the fuel consumption or the performance of the engine), while in the second case it can be used to test the control system. In recent years the use of HIL simulations has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running in a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, includes the possibility of simulating the engine alone, the transmission and vehicle dynamics alone, or the engine together with the vehicle and transmission dynamics, in the latter case allowing the performance and the operating conditions of the Internal Combustion Engine to be evaluated once it is installed on a given vehicle. Furthermore, the simulation tool offers different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (the latter only for off-line applications, because of its higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can easily be adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured both for off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve these objectives, since its graphical programming interface allows flexible and reconfigurable models to be built, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
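To make the Software-In-the-Loop structure concrete, here is a minimal sketch of a fixed-step simulation loop in which a toy plant model exchanges signals with a controller standing in for the ECU; the first-order engine model, the gear ratio and all parameters are illustrative assumptions, not the models developed in this work.

```python
import numpy as np

def plant_step(state, throttle, dt, tau=0.5, mass=1200.0, drag=0.4):
    """One fixed-step update of a toy engine-torque + longitudinal dynamics model."""
    torque, speed = state
    torque += dt * (400.0 * throttle - torque) / tau   # first-order torque lag
    force = torque * 3.5 / 0.3                         # gear ratio / wheel radius
    speed += dt * (force - drag * speed**2) / mass     # vehicle dynamics
    return torque, speed

def controller(speed, target=20.0, kp=0.05):
    """Stand-in for the ECU strategy under test: proportional speed control."""
    return float(np.clip(kp * (target - speed), 0.0, 1.0))

state, dt = (0.0, 0.0), 0.001          # 1 kHz fixed step, as in real-time use
for _ in range(int(10 / dt)):          # 10 s of simulated driving
    throttle = controller(state[1])    # controller reads the "sensor" (speed)
    state = plant_step(state, throttle, dt)
print(f"final speed: {state[1]:.1f} m/s")
```

In an HIL configuration the same plant model would run on a real-time processor, with the throttle read from the actual ECU outputs instead of the software controller above.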
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to exploit the platform parallelism effectively. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to deal effectively with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on problems of practical size, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
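To fix ideas on the problem structure, the sketch below schedules precedence-connected activities on a set of identical finite-capacity processors with a greedy serial scheduler; it only illustrates the constraints, and is deliberately not one of the exact hybrid CP/OR methods developed in the thesis.

```python
from heapq import heappush, heappop

def schedule(durations, precedences, capacity):
    """durations: {task: time}; precedences: list of (before, after) pairs;
    capacity: number of identical processors. Returns {task: start_time}."""
    succs = {t: [] for t in durations}
    indeg = {t: 0 for t in durations}
    for a, b in precedences:
        succs[a].append(b)
        indeg[b] += 1
    ready = [t for t in durations if indeg[t] == 0]
    running, start, time = [], {}, 0       # running: heap of (finish, task)
    while ready or running:
        while ready and len(running) < capacity:   # fill the free processors
            t = ready.pop()
            start[t] = time
            heappush(running, (time + durations[t], t))
        time, done = heappop(running)              # advance to next completion
        for s in succs[done]:                      # release ready successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return start

# Example: a fork-join task graph on a 2-processor "platform".
starts = schedule({"a": 2, "b": 3, "c": 1, "d": 2},
                  [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")], capacity=2)
```

An exact method would instead explore the space of allocation and ordering decisions (e.g. by branch-and-bound or CP tree search with cuts), proving optimality rather than committing greedily.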
Abstract:
Muay Thai, commonly known as "Thai Boxing", is a martial art that falls within the classification of intermittent activities recruiting both energy systems, aerobic and anaerobic; it is also characterized by the fact that fighting at a distance alternates with grappling, called "clinch". Although the popularity of Muay Thai is steadily growing worldwide, as is the number of athletes practising it, research focused on this martial art and studies on the cardiometabolic adjustments, as well as on the temporal patterns with which the specific athletic actions follow one another during a match, are still extremely scarce. The object of our study was the analysis of the temporal structure of the fight, by means of off-line match analysis (visual analysis of the fight), comparing the data obtained between the winner and the loser, and the evaluation of the behaviour of some important metabolic parameters through the measurement of lactate and HR during a real Thai Boxing match. The study was conducted on a group of ten male subjects practising the discipline at a high national level, whose mean ± standard deviation (SD) of age, weight and height was 24.6 ± 4.01 years, 69.4 ± 7 kg and 174.1 ± 4.3 cm. The athletes underwent two tests, on two different days separated by at least three days: in a first preliminary experimental session we determined the maximal oxygen uptake (VO2max) during a treadmill test, with concomitant estimation of the anaerobic threshold (AT) and measurement of the maximal heart rate (HRmax). In a second experimental session we carried out the fighting tests in the gym, and finally we analysed the videos of the matches through match analysis. The match analysis showed that the winners performed a higher number of effective actions (p < 0.05) than the non-winners, thanks to a higher number of combinations (C) and single attacks (A) and a lower number of defences (D) and ineffective techniques. It thus emerged that the level of scoring was almost exclusively due to the effectiveness of the technique and to the tactics of the actions. We then focused our attention on the clinch and on the attacking actions, because we hypothesized that they could be energetically demanding activities, probably responsible for the increase in lactate during the fight; the data analysis, however, revealed no significant correlation between the course of the metabolic data and the attack and grappling phases. Interestingly, our results show that during the active phases of the fight high values of blood lactate and heart rate were reached, respectively 12.55 mmol/L and 182.68 beats/min, well beyond the AT measured in the incremental test, where HR was 168.2 beats/min. In conclusion, Thai Boxing emerges as a discipline characterized by a considerable energetic-metabolic engagement, both aerobic and anaerobic. The predominance of lactacid metabolism is demonstrated by the high lactate values observed in the present study and by the frequency of the attacks (8.6 ± 3.5 s). This study may be used by coaches to design specific training programmes inducing the adaptations typical of Muay Thai.
Abstract:
The measurement of luminosity is an important objective for all Standard Model physics and for the discovery of new physics, since luminosity is related to the cross-section (σ) and to the production rate (R) of a given process by the relation R = L·σ, i.e. L = R/σ. In the ATLAS experiment at the LHC a dedicated luminosity monitor called LUCID (Luminosity measurements Using Cherenkov Integrating Detector) is installed. Thanks to the data acquired during 2010, the off-line evaluation of the performance of LUCID and the implementation of on-line checks on the quality of the collected data were possible. Real data were compared with Monte Carlo data, and the simulations were tuned accordingly to optimize the agreement between the two. The calibration of the relative luminosity, which yields an estimate of the absolute luminosity, was made possible by the so-called Van der Meer scans, through which a precision of 11% was obtained. The analysis of the physics of the Z decay is still in progress, with the goal of using the rate of this process to normalize the luminosity with a precision better than 5%.
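For reference, the normalization relation stated above, written out in generic symbols (a hedged restatement with no experiment-specific numbers): a process observed at rate R with known cross-section σ fixes the luminosity, and the relative uncertainties on R and σ combine in quadrature.

```latex
% Luminosity normalization in generic symbols.
\begin{equation*}
  R = L\,\sigma
  \quad\Longrightarrow\quad
  L = \frac{R}{\sigma},
  \qquad
  \frac{\delta L}{L}
    = \sqrt{\left(\frac{\delta R}{R}\right)^{2}
          + \left(\frac{\delta\sigma}{\sigma}\right)^{2}} .
\end{equation*}
```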
Abstract:
Drying oils, and in particular linseed oil, were the most common binding media employed in painting between the 16th and 19th centuries. Artists usually applied some pre-treatments to the oils to obtain binders with modified properties, such as different handling qualities or colour. Oil processing plays a key role in the subsequent ageing and degradation of linseed oil paints. In this thesis a multi-analytical approach was adopted to investigate the drying, polymerization and oxidative degradation of linseed oil paints. In particular, thermogravimetric analysis (TGA), yielding information on the macromolecular scale, was compared with gas chromatography-mass spectrometry (GC-MS) and direct exposure mass spectrometry (DE-MS), which provide information on the molecular scale. The study was performed on linseed oils and paint reconstructions prepared according to an accurate historical description of the painting techniques of the 19th century. TGA revealed that during ageing the molecular weight of the oils changes and that higher molecular weight fractions form. TGA proved to be an excellent tool for comparing the oils and paint reconstructions: this technique is able to highlight the different physical behaviour of oils processed with different methods, and of paint layers differing in the processed oil and/or the pigment used. GC-MS and DE-MS were used to characterize the soluble, non-polymeric fraction of the oils and paint reconstructions. GC-MS allowed us to calculate the ratios of palmitic to stearic acid (P/S) and of azelaic to palmitic acid (A/P), and to evaluate the effects produced by the oil pre-treatments and by the presence of different pigments. This helps to understand the role of the pre-treatments and of the pigments in the oxidative degradation undergone by siccative oils during ageing. DE-MS enabled the various molecular weight fractions of the samples to be studied simultaneously, and thus helped to highlight the presence of the oxidation and hydrolysis reactions, and of the formation of carboxylates, that occur during ageing and with the changing of the oil pre-treatments and the pigments. The combination of thermal analysis with molecular techniques such as GC-MS, DE-MS and FTIR enabled a model to be developed for unravelling two crucial issues: 1) how oil pre-treatments produce binders with different physical-chemical qualities, and how this can influence the ageing of an oil paint film; 2) what role the interaction between oil and pigments plays in the ageing and degradation process.
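As a minimal illustration of the two diagnostic ratios mentioned above, the sketch below computes P/S and A/P from integrated GC-MS peak areas; the area values are placeholders, not data from the thesis.

```python
def oil_ratios(areas):
    """areas: dict of integrated peak areas for palmitic (P), stearic (S)
    and azelaic (A) acids. P/S points to the oil type, while A/P tracks
    the extent of oxidative degradation of the paint film."""
    return {"P/S": areas["P"] / areas["S"], "A/P": areas["A"] / areas["P"]}

# Placeholder peak areas, purely for illustration.
print(oil_ratios({"P": 1.8e6, "S": 1.2e6, "A": 2.1e6}))
```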