20 results for Analytical approach

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 70.00%

Abstract:

Drying oils, and in particular linseed oil, were the most common binding media employed in painting between the 16th and 19th centuries. Artists usually pre-treated the oils to obtain binders with modified properties, such as different handling qualities or colour. Oil processing plays a key role in the subsequent ageing and degradation of linseed oil paints. In this thesis a multi-analytical approach was adopted to investigate the drying, polymerization and oxidative degradation of linseed oil paints. In particular, thermogravimetric analysis (TGA), yielding information on the macromolecular scale, was compared with gas chromatography-mass spectrometry (GC-MS) and direct exposure mass spectrometry (DE-MS), which provide information on the molecular scale. The study was performed on linseed oils and paint reconstructions prepared according to an accurate historical description of 19th-century painting techniques. TGA revealed that the molecular weight of the oils changes during ageing and that higher molecular weight fractions form. TGA proved to be an excellent tool for comparing the oils and paint reconstructions: it highlights the different physical behaviour of oils processed by different methods, and distinguishes paint layers on the basis of the processed oil and/or the pigment used. GC-MS and DE-MS were used to characterise the soluble, non-polymeric fraction of the oils and paint reconstructions. GC-MS allowed us to calculate the ratios of palmitic to stearic acid (P/S) and of azelaic to palmitic acid (A/P), and to evaluate the effects produced by the oil pre-treatments and by the presence of different pigments. This helps in understanding the role of the pre-treatments and of the pigments in the oxidative degradation undergone by siccative oils during ageing. DE-MS enabled the various molecular weight fractions of the samples to be studied simultaneously, and thus helped to highlight the oxidation and hydrolysis reactions, and the formation of carboxylates, that occur during ageing and that vary with the oil pre-treatment and the pigments. The combination of thermal analysis with molecular techniques such as GC-MS, DE-MS and FTIR enabled a model to be developed for unravelling some crucial issues: 1) how oil pre-treatments produce binders with different physico-chemical qualities, and how this can influence the ageing of an oil paint film; 2) what role the interaction between oil and pigments plays in the ageing and degradation process.
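
As a minimal illustration of the fatty acid ratios mentioned above, the P/S and A/P values follow directly from integrated GC-MS peak areas; the sketch below uses hypothetical areas, not data from the thesis.

```python
# Hypothetical integrated GC-MS peak areas for fatty acid methyl esters.
peak_areas = {
    "palmitic": 152_000,   # P, saturated C16
    "stearic":  98_000,    # S, saturated C18
    "azelaic":  210_000,   # A, C9 diacid from oxidative chain cleavage
}

p_over_s = peak_areas["palmitic"] / peak_areas["stearic"]
a_over_p = peak_areas["azelaic"] / peak_areas["palmitic"]

# P/S is commonly used as a marker of the oil type (it is little affected
# by ageing), while A/P tracks the extent of oxidative degradation.
print(f"P/S = {p_over_s:.2f}, A/P = {a_over_p:.2f}")
```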

Relevance: 70.00%

Abstract:

This thesis reports an integrated analytical approach for the study of the physicochemical and biological properties of new synthetic bile acid (BA) analogues, agonists of the FXR and TGR5 receptors. Structure-activity data were compared with those previously obtained, using the same experimental protocols, on synthetic and naturally occurring BA. The new synthetic BA analogues are classified in different groups, according also to their potency as FXR and TGR5 agonists: unconjugated and steroid-modified BA, and side chain-modified BA, including taurine or glycine conjugates and pseudo-conjugates (sulphonate and sulphate analogues). In order to investigate the relationship between structure and activity, the synthetic analogues were submitted to a physicochemical characterization and to a preliminary screening of their pharmacokinetics and metabolism using a bile fistula rat model. Sensitive and accurate analytical methods were developed for the qualitative and quantitative analysis of BA in the biological fluids and in the samples used for the physicochemical studies. High performance liquid chromatography coupled with electrospray tandem mass spectrometry, with efficient chromatographic separation of all the studied BA and their metabolites, was optimized and validated. Analytical strategies for the identification of the BA and their minor metabolites were developed. Taurine and glycine conjugates were identified in MS/MS by monitoring the specific ion transitions in multiple reaction monitoring (MRM) mode, while all the other metabolites (sulphate, glucuronide, dehydroxylated, decarboxylated or oxo) were monitored in selected ion recording (SIR) mode with a negative ESI interface by following their characteristic ions. Accurate and precise data were achieved for the main physicochemical properties, including solubility, detergency, lipophilicity and albumin binding. These studies have shown that minor structural modifications greatly affect the pharmacokinetics and metabolism of the new analogues with respect to the natural BA and, in turn, their site of action, particularly where their receptors are located in the enterohepatic circulation.

Relevance: 70.00%

Abstract:

Hydrogen sulfide (H2S) is a widely recognized gasotransmitter, with key roles in physiological and pathological processes. The accurate quantification of H2S and reactive sulfur species (RSS) may hold important implications for the diagnosis and prognosis of various diseases. However, the quantification of H2S species in biological matrices is still a challenge. Among the sulfide detection methods, monobromobimane (MBB) derivatization coupled with reversed-phase high-performance liquid chromatography (RP-HPLC) is one of the most frequently reported. However, it involves a complex and time-consuming sample preparation, which may alter the actual H2S level, and a survey of previously published works shows that a quantitative validation has not yet been described. In this study, we developed and validated an improved analytical protocol for the MBB RP-HPLC method. The main parameters, such as MBB concentration, temperature, reaction time, and sample handling, were optimized, and the calibration method was further validated using leave-one-out cross-validation (CV) and tested in a clinical setting. The method shows high sensitivity and allows the quantification of H2S species, with a limit of detection (LOD) of 0.5 µM and a limit of quantification (LOQ) of 0.9 µM. Additionally, the model was successfully applied to the measurement of H2S levels in the serum of patients subjected to inhalation of H2S-rich vapors. A dedicated procedure was also established for measuring H2S release with the modified MBB HPLC-FLD method: the proposed analytical approach demonstrated the slow-release kinetics of H2S from multilayer silk-fibroin scaffolds loaded with different H2S donor concentrations relative to the weight of the PLGA nanofiber. Finally, sulfide measurements were explored using size exclusion chromatography with fluorescence/ultraviolet detection and inductively coupled plasma-mass spectrometry (SEC-FLD/UV-ICP/MS), as a preliminary study to assess the feasibility of a separation-detection-quantification platform for the analysis of biological samples and the quantification of sulfur species.
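
A brief sketch of the validation arithmetic described above, assuming a linear calibration curve: leave-one-out cross-validation back-predicts each standard from a fit on the others, and LOD/LOQ follow from the ICH-style 3.3σ/slope and 10σ/slope rules. All calibration points below are invented for illustration.

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.5, 5.0, 10.0, 25.0])    # µM standards (invented)
signal = np.array([1.1, 2.0, 5.2, 9.8, 20.5, 50.9])  # peak areas (a.u., invented)

# Leave-one-out CV: refit the line without point i, back-predict its concentration.
press = 0.0
for i in range(len(conc)):
    keep = np.arange(len(conc)) != i
    slope, intercept = np.polyfit(conc[keep], signal[keep], 1)
    pred = (signal[i] - intercept) / slope
    press += (pred - conc[i]) ** 2
rmse_cv = np.sqrt(press / len(conc))

# LOD/LOQ from the residual standard deviation of the full fit.
slope, intercept = np.polyfit(conc, signal, 1)
resid_sd = np.std(signal - (slope * conc + intercept), ddof=2)
print(f"LOO-CV RMSE = {rmse_cv:.3f} µM")
print(f"LOD ≈ {3.3 * resid_sd / slope:.2f} µM, LOQ ≈ {10 * resid_sd / slope:.2f} µM")
```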

Relevance: 60.00%

Abstract:

In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by considering two additive operators: the first represents its nonuniform Fourier transform approximation, and the second suppresses aliasing. The first operator can be analytically characterized and computed fast by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
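
As a toy illustration of the idea, not the thesis algorithm: a dense frequency-warping operator can be built by sampling the Fourier kernel at warped frequencies, weighted by the square root of the map's derivative; for a smooth, measure-preserving map the operator is nearly unitary, so its adjoint acts as an approximate inverse. The warping map below is an arbitrary choice.

```python
import numpy as np

def warping_operator(N, w, dw):
    """Dense N x N frequency-warping operator: row k samples the Fourier
    kernel at the warped frequency w(k/N), weighted by sqrt(dw) so that a
    smooth measure-preserving map yields a nearly unitary matrix."""
    k = np.arange(N) / N
    n = np.arange(N)
    return (np.sqrt(dw(k))[:, None]
            * np.exp(-2j * np.pi * np.outer(w(k), n)) / np.sqrt(N))

# Example smooth warping map on [0, 1) and its derivative.
w  = lambda x: x + 0.03 * np.sin(2 * np.pi * x)
dw = lambda x: 1 + 0.06 * np.pi * np.cos(2 * np.pi * x)

N = 512
W = warping_operator(N, w, dw)
x = np.random.randn(N)
err = np.linalg.norm(W.conj().T @ (W @ x) - x) / np.linalg.norm(x)
print(f"relative error of adjoint-based inversion: {err:.2e}")  # small, not zero
```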

Relevance: 60.00%

Abstract:

The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. Contact models can be subdivided into two categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear differential-algebraic equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, the Hertz and Kalker theories allow the shape of the contact patch and the tangential forces, respectively, to be evaluated. The semi-elastic approach also considers the wheel and the rail as rigid bodies; however, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry). The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, the Hertz and Kalker theories are used to evaluate the shape of the contact patch and the tangential forces. Both multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient, because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed: wheel and rail have to be considered as elastic bodies governed by the Navier equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy, but requires very large computational costs and memory consumption. Due to this computational load, and referring to the current state of the art, the integration between multibody and differential modeling is almost absent in the literature, especially in the railway field.
However, this integration is very important, because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity are described. Concerning the global models, two new models belonging to the semi-elastic approach are presented; they satisfy the following specifications: 1) the models have to be 3D and consider all six relative degrees of freedom between wheel and rail; 2) they have to handle generic railway tracks and generic wheel and rail profiles; 3) they have to assure a general and accurate handling of multiple contacts, without simplifying hypotheses on the problem geometry; in particular they have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) they have to be implementable directly online within multibody models, without look-up tables; 5) they have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) they have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models is the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This reduction yields the high numerical efficiency that makes the online implementation of the new procedure possible, with performance comparable to that of commercial multibody software; at the same time, the analytical approach assures high accuracy and generality. Concerning the local (or differential) contact models, one new model satisfying the following specifications is presented: 1) the model has to be 3D and consider all six relative degrees of freedom between wheel and rail; 2) it has to handle generic railway tracks and generic wheel and rail profiles; 3) it has to assure a general and accurate handling of multiple contacts, without simplifying hypotheses on the problem geometry; in particular it has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) it has to be implementable directly online within multibody models; 5) it has to assure high numerical efficiency and reduced memory consumption, in order to achieve a good integration between multibody and differential modeling; 6) it has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model are the contact modeling itself (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem.
Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modeling. The contact models were then inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature. The model of the whole railway vehicle (multibody model and contact model) was implemented in the Matlab/Simulink environment. The multibody model was implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models were implemented as C S-functions, a Matlab architecture that efficiently connects the Matlab/Simulink and C/C++ environments. A 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) was then also implemented in Simpack Rail, a widely tested and validated commercial multibody software for railway vehicles. Finally, numerical simulations of the vehicle dynamics were carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, I would like to thank Trenitalia and the Regione Toscana for the support provided during the whole Ph.D. activity, as well as INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
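
As a minimal illustration of the semi-elastic idea (normal contact force as a function of indentation), the sketch below uses the classical Hertz sphere-on-plane solution as a stand-in for the real wheel-rail elliptical contact; the radii and material constants are illustrative, not taken from the thesis.

```python
import numpy as np

def hertz_normal_force(delta, R, E1, nu1, E2, nu2):
    """Hertz sphere-on-plane contact: normal force from indentation delta,
    F = (4/3) * E_star * sqrt(R) * delta^(3/2)."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    return (4.0 / 3.0) * E_star * np.sqrt(R) * delta**1.5

# Illustrative values: steel wheel/rail pair, 0.46 m rolling radius.
E, nu = 210e9, 0.28
for delta in (1e-5, 5e-5, 1e-4):   # indentation [m]
    F = hertz_normal_force(delta, R=0.46, E1=E, nu1=nu, E2=E, nu2=nu)
    # Contact patch radius in this model would be a = sqrt(R * delta).
    print(f"delta = {delta:.0e} m -> F = {F/1e3:.1f} kN")
```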

Relevance: 60.00%

Abstract:

In territories where food production is scattered across many small or medium-size, or even domestic, farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore change widely during the year, according to the production processes periodically carried out. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well; the increase in the feedstock flexibility of gasification units is therefore nowadays seen as a further paramount step towards their wide diffusion in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of main concern for this purpose and are discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work is divided into two main parts. The first one focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermochemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials. To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to connect the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach is the use of the ratio of kinetic constants to determine the oxygen distribution among the different oxidation reactions (regarding the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through this assumption the energy and mass balances involved in the process algorithm are linked together as well. Moreover, the main advantage of this analytical tool is the ease with which the input data corresponding to a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermochemical conversion properties can be obtained, mainly based on its chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on the analysis of the fundamental thermo-physical and thermochemical mechanisms which are supposed to regulate the main solid conversion steps involved in the gasification process.
Gasification units were schematically subdivided into four reaction zones, respectively corresponding to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, and temperature is therefore the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time especially depends on particle size. Biomass heating is almost totally achieved by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the very nature of the biomass (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time can depend on any of these factors, according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to the estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be accomplished. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit seems suitable for more than one biomass species. Nevertheless, since the reactor diameters were found to be quite similar for all the examined materials, a single unit could be envisaged for all of them by adopting the largest diameter and by combining the maximum heights of each reaction zone, as calculated for the different biomasses; a total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the material being gasified. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate could also be regulated for each gasified material (on which the process temperatures depend), so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without appreciably changing the fluid dynamic behaviour of the unit or the air/biomass ratio.
The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are assumed to be used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can simultaneously be present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out. Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and of their own consumption of energy and materials. Subsequently, the designed units were combined in different overall gas cleaning line arrangements (paths), following some technical constraints mainly determined from the same performance analysis of the cleaning units and from the likely synergistic effects of contaminants on the correct working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in the design of the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. To this end, a catalytic tar cracking unit was envisaged as the only solution to be adopted, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, and a consequent relevant air consumption for this operation, were calculated in all cases. Other difficulties had to be overcome in the abatement of alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubbing technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines could not be achieved for the two larger plant sizes. Indeed, as the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the large increase in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary material consumption.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter computed on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
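
A tiny sketch of the equilibrium closure mentioned above, assuming the empirical fit ln K = 4276/T - 3.961 that is commonly used for the water-gas shift constant in downdraft-gasifier equilibrium models; the inlet composition and temperature below are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def wgs_equilibrium(T, n_CO, n_H2O, n_CO2, n_H2):
    """Extent of CO + H2O <-> CO2 + H2 at temperature T [K], given inlet
    moles; mole numbers can be used directly because the reaction is
    equimolar (total moles and pressure terms cancel)."""
    K = np.exp(4276.0 / T - 3.961)
    def residual(x):
        return (n_CO2 + x) * (n_H2 + x) - K * (n_CO - x) * (n_H2O - x)
    lo = -min(n_CO2, n_H2) + 1e-9   # x may be negative (reverse shift)
    hi = min(n_CO, n_H2O) - 1e-9
    return brentq(residual, lo, hi)

# Illustrative gasification-zone composition (moles per mole of fuel).
x = wgs_equilibrium(T=1100.0, n_CO=0.9, n_H2O=0.4, n_CO2=0.2, n_H2=0.5)
print(f"shift extent x = {x:+.3f} mol")
```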

Relevance: 60.00%

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, the fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with in-silico preliminary studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, (f) user errors. The effect of each criticality was quantified and verified with an in-vivo preliminary study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forces the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin of the optimal pose, the local approach used sequential alignments of the six degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for the out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as for methodological research studies; the mono-planar analysis may be enough for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics.
Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
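
A toy sketch of the core estimation step described above: aligning a 3D model to its 2D projection by minimizing a reprojection cost over the six pose parameters with a local optimizer, which only converges when started inside the convergence basin. The synthetic point model and camera parameters are my assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
model = rng.normal(size=(60, 3)) * 20.0        # synthetic 3D "bone" points [mm]

def project(points, pose, focal=1000.0, z0=1200.0):
    """Perspective projection after a rigid transform; pose = (tx,ty,tz,rx,ry,rz)."""
    R = Rotation.from_rotvec(pose[3:]).as_matrix()
    p = points @ R.T + pose[:3]
    z = p[:, 2] + z0                           # source-to-object distance z0
    return focal * p[:, :2] / z[:, None]

true_pose = np.array([5.0, -3.0, 8.0, 0.05, -0.1, 0.02])
target = project(model, true_pose)             # simulated fluoroscopic projection

cost = lambda pose: np.sum((project(model, pose) - target) ** 2)
# Local optimization: succeeds here because the start lies inside the
# convergence basin, which is exactly the limitation discussed above.
res = minimize(cost, x0=np.zeros(6), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-12})
print("estimated pose:", np.round(res.x, 3))
```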

Relevance: 60.00%

Abstract:

Colourants are substances used to change the colour of something, and can be classified into three typologies: a) pigments, b) dyes, and c) lakes and hybrid pigments. Their identification is very important when studying cultural heritage: it gives information about the artistic technique, can help in dating, and offers insights into the condition of the object. Besides, the study of degradation phenomena provides a framework for preventive conservation strategies, supplies evidence of the object's original appearance, and contributes to the authentication of works of art. However, the complexity of these systems makes it impossible to achieve a complete understanding using a single technique, making a multi-analytical approach necessary. This work focuses on the set-up and application of advanced spectroscopic methods for the study of colourants in cultural heritage. The first chapter presents the identification of modern synthetic organic pigments using Metal Underlayer-ATR (MU-ATR), and the characterization of synthetic dyes extracted from wool fibres using Thin Layer Chromatography (TLC) coupled to MU-ATR with AgI@Au plates. The second chapter presents the study of the effect of metallic Ag on the photo-oxidation of orpiment, and of the influence of different factors, such as light and relative humidity, using a combination of vibrational and synchrotron radiation-based X-ray microspectroscopy techniques: µ-ATR-FT-IR, µ-Raman, SR-µ-XRF, µ-XANES at the S K-, Ag L3- and As K-edges, and SR-µ-XRD. The third chapter presents the study of metal carboxylates in paintings, specifically the formation of Zn and Pb carboxylates in three different binders: stand linseed oil, whole egg, and beeswax. We used micro-ATR-FT-IR, macro FT-IR in total reflection (rMA-FT-IR), portable near-infrared spectroscopy (NIR), macro X-ray powder diffraction (MA-XRPD), XRPD, and gas chromatography-mass spectrometry (GC-MS). For the data processing, the rMA-FT-IR and NIR data were explored with Principal Component Analysis (PCA).
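
As a small stand-in for the rMA-FT-IR/NIR data exploration mentioned above, the sketch below runs PCA on synthetic spectra; the band positions and noise level are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavenumbers = np.linspace(4000, 400, 900)
band = lambda c, w: np.exp(-0.5 * ((wavenumbers - c) / w) ** 2)

# Two synthetic spectral classes: shared C-H band, distinct carbonyl/amide bands.
group_a = [band(2900, 60) + 0.8 * band(1740, 30) + 0.02 * rng.normal(size=900)
           for _ in range(10)]
group_b = [band(2900, 60) + 0.8 * band(1650, 30) + 0.02 * rng.normal(size=900)
           for _ in range(10)]
X = np.vstack(group_a + group_b)

pca = PCA(n_components=2)
scores = pca.fit_transform(X)                  # PCA centres the data internally
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
# The two groups separate along PC1; the loadings show which bands drive it.
```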

Relevance: 60.00%

Abstract:

This thesis focuses on the Helicon Plasma Thruster (HPT) as a candidate for generating thrust for small satellites and CubeSats. Two main topics are addressed: the development of a Global Model (GM) and of a 3D self-consistent numerical tool. The GM is suitable for the preliminary analysis of HPTs operating with noble gases such as argon, neon, krypton, and xenon, and with alternative propellants such as air and iodine. A lumping methodology is developed to reduce the computational cost of modelling the excited species in the plasma chemistry. The 3D self-consistent numerical tool can treat discharges with a generic 3D geometry and model the actual plasma-antenna coupling. The tool consists of two main modules, an EM module and a FLUID module, which run iteratively until a steady-state solution is reached; a third module is available for solving the plume with a simplified semi-analytical approach, a PIC code, or directly by integration of the fluid equations. Results from both numerical tools are benchmarked against experimental measurements of HPTs and helicon reactors: the GM reproduces the experimental trends with very good qualitative agreement, while the 3D numerical strategy shows excellent agreement between the predicted physical trends and the measured data.
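
A minimal sketch of the kind of balance a global (volume-averaged) model solves: at steady state, volume ionization equals Bohm-flux wall losses, which fixes the electron temperature for a given neutral density and geometry (the electron density cancels, and edge-density factors are omitted). The argon-like rate-coefficient fit and the geometry are illustrative placeholders, not values from the thesis.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.constants import e, atomic_mass

# Cylindrical discharge geometry (illustrative).
R, L = 0.01, 0.10                          # m
V = np.pi * R**2 * L
A = 2 * np.pi * R**2 + 2 * np.pi * R * L   # end walls + lateral wall

M = 40 * atomic_mass                       # argon ion mass [kg]
n_g = 1e20                                 # neutral gas density [m^-3]

def k_iz(Te):
    """Arrhenius-like ionization rate coefficient [m^3/s], rough Ar fit; Te in eV."""
    return 2.3e-14 * Te**0.6 * np.exp(-17.4 / Te)

def u_B(Te):
    """Bohm velocity [m/s]."""
    return np.sqrt(e * Te / M)

# Steady-state particle balance: k_iz(Te) * n_g * V = u_B(Te) * A
balance = lambda Te: k_iz(Te) * n_g * V - u_B(Te) * A
Te = brentq(balance, 0.5, 20.0)
print(f"electron temperature: {Te:.2f} eV")
```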

Relevance: 60.00%

Abstract:

This PhD dissertation presents an in-depth study of the vulnerability of buildings and non-structural elements stemming from the investigation of the 2011 Mw 5.2 Lorca earthquake, one of the most significant earthquakes in Spain: it left nine fatalities due to falling debris from reinforced concrete buildings, 394 injured, and material damage valued at 800 million euros. Within this framework, the most relevant initiatives concerning the vulnerability of the buildings and the exposure of Lorca are studied. This work revealed two lines of research: the elaboration of a rational method to determine the adequacy of a specific fragility curve for the seismic risk study of a particular region, and the relevance of researching the seismic performance of non-structural elements. Consequently, a method to assess and select fragility curves for seismic risk studies from the catalogue of those available in the literature is first elaborated and calibrated by means of a case study. The methodology is based on a multidimensional index and provides a ranking that classifies the curves in terms of adequacy. Its results for the case of Lorca led to the elaboration of new fragility curves for unreinforced masonry buildings. Moreover, a simplified method to account for the unpredictable directionality of the seismic action in the creation of fragility curves is contributed. Secondly, the seismic capacity and demand of the non-structural elements that caused most of the human losses are characterised. Concerning the capacity, an analytical approach derived from theoretical considerations is provided to characterise the complete out-of-plane seismic response curve of unreinforced masonry cantilever walls, together with a simplified and more practical trilinear version of it. Concerning the demand, several methods for characterising the floor response spectra of reinforced concrete buildings are tested through case studies.
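
An illustrative sketch of a multidimensional adequacy index of the kind described above: each candidate fragility curve is scored against several criteria, and the weighted sum gives the ranking. The criteria, scores and weights below are invented placeholders; the thesis defines its own.

```python
import numpy as np

curves = ["curve A", "curve B", "curve C", "curve D"]
# Rows: curves; columns: hypothetical adequacy criteria, e.g. match of
# building typology, seismotectonic similarity, quality of the underlying
# damage data, derivation method. Scores normalized to [0, 1].
scores = np.array([
    [0.9, 0.4, 0.7, 0.8],
    [0.6, 0.9, 0.8, 0.5],
    [0.8, 0.7, 0.5, 0.9],
    [0.3, 0.6, 0.9, 0.7],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])   # must sum to 1

index = scores @ weights                    # weighted adequacy index per curve
for rank, i in enumerate(np.argsort(index)[::-1], start=1):
    print(f"{rank}. {curves[i]}  (index = {index[i]:.2f})")
```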

Relevance: 60.00%

Abstract:

In recent years, the seismic vulnerability of existing masonry buildings has been underscored by the destructive impacts of earthquakes. Fibre Reinforced Cementitious Matrix (FRCM) retrofitting systems have therefore gained prominence due to their high strength-to-weight ratio, compatibility with substrates, and potential reversibility. However, concerns linger regarding the durability of these systems when subjected to long-term environmental conditions. This doctoral dissertation addresses these concerns by studying the effects of mild temperature variations on three FRCM systems featuring basalt, glass, and aramid fibre textiles with lime-based mortar matrices. Various specimens, including mortar triplets, bare textile specimens, FRCM coupons, and single-lap direct shear wallets, were subjected to thermal exposure. A novel approach using embedded thermocouple sensors enabled efficient monitoring and active control of the conditioning process. A shift in the failure modes was observed in the single-lap direct shear tests, alongside a significant impact on the tensile capacity of both textiles and FRCM coupons. Subsequently, the bond test results were used to indirectly calibrate an analytical approach based on mode-II fracture mechanics. A comparison between Cohesive Material Law (CML) functions at various temperatures was conducted for each of the three systems, showing good agreement between the analytical model and the experimental curves. Furthermore, the durability in an alkaline environment of two additional FRCM systems, featuring basalt and glass fibre textiles with lime-based mortars, was studied through an extensive experimental campaign. Tests conducted on single-yarn and textile specimens after exposure at different durations and temperatures revealed a significant impact on tensile capacity. Additionally, FRCM coupons manufactured with the conditioned textile were tested to understand the influence of the aged textile and of the curing environment on the final tensile behaviour. These results contribute significantly to the existing knowledge on FRCM systems and could support the development of a standardized alkaline testing protocol, still lacking in the scientific literature.
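
A small sketch of the mode-II fracture mechanics link used in calibrations of this kind: for a sufficiently long bonded length, the debonding capacity of an externally bonded textile follows from the fracture energy of the cohesive law as P = b·sqrt(2·G_F·E·t). The material values below are illustrative placeholders, not results from the thesis.

```python
import numpy as np

def debonding_capacity(b, t, E, G_F):
    """Mode-II debonding load [N] for a long bonded length:
    P = b * sqrt(2 * G_F * E * t), with G_F the area under the
    cohesive shear stress vs slip law."""
    return b * np.sqrt(2.0 * G_F * E * t)

# Illustrative basalt-textile strip properties.
b   = 50e-3      # bonded width [m]
t   = 0.05e-3    # equivalent textile thickness [m]
E   = 80e9       # textile elastic modulus [Pa]
G_F = 400.0      # mode-II fracture energy [J/m^2]

P = debonding_capacity(b, t, E, G_F)
print(f"predicted debonding load: {P/1e3:.2f} kN")
```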

Relevance: 40.00%

Abstract:

Cleaning is one of the most important and delicate procedures in the restoration process. When developing new cleaning systems, it is fundamental to consider their selectivity towards the layer to be removed, their non-invasiveness towards the layer to be preserved, and their sustainability and non-toxicity. Besides assessing efficacy, it is important to understand the cleaning mechanism through analytical protocols that strike a balance between cost, practicality, and reliable interpretation of results. In this thesis, the development of cleaning systems based on the coupling of electrospun fabrics (ES) and greener organic solvents is proposed. Electrospinning is a versatile technique that allows the production of micro/nanostructured non-woven mats, which have already been used as absorbents in various scientific fields but not, to date, in the restoration field. The systems produced proved to be effective for the removal of dammar varnish from paintings, where the ES act not only as solvent-binding agents but also as adsorbents towards the partially solubilised varnish, drawn up by capillary rise, thus enabling a one-step procedure. They have also been successfully applied to the removal of spray varnish from marble substrates and wall paintings; due to the complexity of these materials, the procedure had to be adapted case by case and mechanical action was still necessary. Depending on the spinning solution, three types of ES mats have been produced: polyamide 6,6, pullulan, and pullulan with melanin nanoparticles. The latter, under irradiation, allows a localised temperature increase that accelerates and facilitates the removal of less soluble layers (e.g. reticulated alkyd-based paints). All the systems produced, and the mock-ups used, were extensively characterised using multi-analytical protocols. Finally, a monitoring protocol and image treatment based on photoluminescence macro-imaging are proposed; this set-up allowed the removal mechanism of dammar varnish to be studied and its residues to be semi-quantified. These initial results form the basis for optimising the acquisition set-up and the data processing.

Relevance: 30.00%

Abstract:

The aim of this thesis is to study how explosive behaviour and geophysical signals in a volcanic conduit are related to the development of overpressure in slug-driven eruptions. A first suite of laboratory experiments on gas slugs ascending in analogue conduits was performed: slugs ascended through a range of analogue liquids and conduit diameters chosen to allow proper scaling to natural volcanoes, and the geometrical variation of the slug in response to the explored variables was parameterised. The volume of the gas slug and the rheology of the liquid phase proved to be the key parameters controlling slug overpressure at burst. Building on these results, a theoretical model to calculate burst overpressure for slug-driven eruptions was developed. The dimensionless approach adopted allowed the model to be applied to predict the bursting pressure of slugs at Stromboli. Comparison of the predicted values with measured data from Stromboli volcano showed that the model can explain the entire spectrum of observed eruptive styles at Stromboli (from low-energy puffing, through normal Strombolian eruptions, up to paroxysmal explosions) as manifestations of a single underlying physical process. Finally, another suite of laboratory experiments was performed to observe the oscillatory pressure and force variations generated during the expansion and bursting of gas slugs ascending in a conduit. Two end-member boundary conditions were imposed at the base of the pipe, simulating slug ascent in a closed-base (zero magma flux) and in an open-base (constant flux) conduit; at the top of the pipe, a range of boundary conditions relevant to a volcanic vent was imposed, from an open to a plugged vent. The results illustrate that changes in the conduit boundary conditions affect the dynamics of slug expansion and burst: an upward flux at the base of the conduit attenuates the magnitude of the pressure transients, while a rheological stiffening in the topmost region of the conduit dramatically changes the magnitude of the observed pressure transients, favouring a sudden and more energetic pressure release into the overlying atmosphere. The implications of these changing boundary conditions for the oscillatory processes generated at the volcanic scale are also discussed.
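
A small sketch of the dimensionless scaling step mentioned above: laboratory and volcanic conduits are commonly compared through numbers such as the inverse viscosity number Nf = ρ·sqrt(g·D³)/μ and the Eötvös number Eo = ρ·g·D²/σ, which govern slug ascent. The fluid properties below are illustrative order-of-magnitude values, not the thesis data.

```python
import numpy as np

def slug_dimensionless_numbers(rho, mu, sigma, D, g=9.81):
    """Inverse viscosity number Nf and Eotvos number Eo for a gas slug
    rising in a liquid-filled conduit of diameter D."""
    Nf = rho * np.sqrt(g * D**3) / mu
    Eo = rho * g * D**2 / sigma
    return Nf, Eo

cases = {
    # name: (density [kg/m^3], viscosity [Pa s], surface tension [N/m], D [m])
    "lab (syrup, 25 mm pipe)":  (1400.0, 5.0,   0.08, 0.025),
    "Stromboli (basalt, ~3 m)": (2600.0, 500.0, 0.4,  3.0),
}
for name, (rho, mu, sigma, D) in cases.items():
    Nf, Eo = slug_dimensionless_numbers(rho, mu, sigma, D)
    print(f"{name}: Nf = {Nf:.1f}, Eo = {Eo:.0f}")
```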

Relevance: 30.00%

Abstract:

This research focuses on the behaviour and collapse of masonry arch bridges. Recent decades have seen increasing interest in this structural type, which is still present and in use despite the passage of time and the changes in means of transport. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, trying to highlight strengths and weaknesses. The methods examined are mainly three: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods derived from two of the fundamental theorems of plastic analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Every method is applied to the case studies through computer-based implementations that allow a user-friendly application of the principles explained. A particular closed-form approach, based on an elasto-plastic material model and developed by some Belgian researchers, is also studied. To compare the three methods, two case studies have been analyzed: i) a generic single-span masonry arch bridge; ii) a real masonry arch bridge, the Clemente Bridge, built over the Savio River in Cesena. All the models used in the analyses are two-dimensional, so that the results are comparable across the different methods. The methods have been compared with each other in terms of collapse load and hinge positions.
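
As a compact illustration of the first of the three methods, the safe-theorem version of thrust line analysis checks whether a line of thrust can be contained within the arch thickness; the sketch below scans crown-thrust values for a semicircular arch under self-weight. The geometry, unit weight and thin-arch approximations are my assumptions, not the thesis implementation.

```python
import numpy as np

# Semicircular arch: centreline radius R, thickness t, depth b, unit weight gamma.
R, t, b, gamma = 5.0, 0.75, 1.0, 20e3
n = 90
th = np.linspace(0.0, np.pi / 2, n + 1)   # half-arch joints, crown (0) to springing
mid = 0.5 * (th[:-1] + th[1:])
w = gamma * b * t * R * np.diff(th)       # voussoir weights (thin-arch approx.)
x_c = R * np.sin(mid)                     # voussoir centroid abscissae from crown

W = np.cumsum(w)                          # weight of segment crown..joint j
Mw = np.cumsum(w * x_c)                   # its moment about the circle centre
W, Mw = np.insert(W, 0, 0.0), np.insert(Mw, 0, 0.0)

W_tot = W[-1]
feasible = []
for H in np.linspace(0.05, 1.0, 400) * W_tot:      # horizontal crown thrust
    for e0 in np.linspace(-t / 2, t / 2, 41):      # thrust eccentricity at crown
        # Radial position of the thrust-line point on each joint, from the
        # moment balance (about the circle centre) of crown thrust + weights.
        r = (H * (R + e0) + Mw) / (W * np.sin(th) + H * np.cos(th))
        if np.all(np.abs(r - R) <= t / 2):         # contained within thickness?
            feasible.append((H, e0))

print(f"admissible thrust lines found: {len(feasible)}")
if feasible:  # by the safe theorem, one admissible line proves stability
    H, e0 = feasible[0]
    print(f"e.g. H = {H/1e3:.1f} kN at crown eccentricity {e0:+.3f} m")
```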

Relevance: 30.00%

Abstract:

This thesis reports an integrated analytical and physicochemical approach for the study of natural substances and new drugs, based on mass spectrometry techniques combined with liquid chromatography. In particular, Chapter 1 concerns the study of berberine, a natural substance with pharmacological activity for the treatment of hepatobiliary and intestinal diseases. The first part focuses on the relationships between the physicochemical properties, pharmacokinetics and metabolism of berberine and its metabolites. For this purpose, a sensitive HPLC-ES-MS/MS method was developed, validated and used to determine these compounds during the physicochemical studies and to measure the plasma levels of berberine and its metabolites, including berberrubine (M1), demethyleneberberine (M3) and jatrorrhizine (M4), in humans. The data show that M1 could undergo efficient intestinal absorption by passive diffusion, owing to a keto-enol tautomerism confirmed by NMR studies and to its higher plasma concentration. In the second part of Chapter 1, the in vivo biodistributions of M1 and BBR in the rat are compared. In Chapter 2, a new HPLC-ES-MS/MS method for the simultaneous determination and quantification of glucosinolates (glucoraphanin, glucoerucin and sinigrin) and isothiocyanates (sulforaphane and erucin) was developed and validated; it has been used for the analysis of functional foods enriched with vegetable extracts. Chapter 3 focuses on a physicochemical study of the interaction between bile acid sequestrants used in the treatment of hypercholesterolemia, including colesevelam and cholestyramine, and obeticholic acid (OCA), a potent agonist of the farnesoid X nuclear receptor (FXR). In particular, a new experimental model for the determination of the equilibrium binding isotherm was developed. Chapter 4 focuses on methodological aspects of a new hard-ionization technique coupled with liquid chromatography (Direct-EI-UHPLC-MS), not yet commercially available and potentially useful for qualitative analysis and for molecules "transparent" to soft ionization techniques. This method was applied to the analysis of several steroid derivatives.
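
As a minimal sketch of the equilibrium binding isotherm determination mentioned in Chapter 3, assuming a Langmuir-type model: the bound amount is fitted against the free equilibrium concentration. The data points below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_free, q_max, K):
    """Bound amount per unit sequestrant vs free concentration."""
    return q_max * K * c_free / (1.0 + K * c_free)

c_free = np.array([0.02, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0])         # mM (invented)
q_bound = np.array([0.08, 0.18, 0.34, 0.58, 1.02, 1.31, 1.62])   # mmol/g (invented)

(q_max, K), _ = curve_fit(langmuir, c_free, q_bound, p0=(2.0, 2.0))
print(f"q_max = {q_max:.2f} mmol/g, K = {K:.2f} 1/mM")
```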