10 results for analytical approach

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 70.00%

Abstract:

Drying oils, and in particular linseed oil, were the most common binding media employed in painting between the 16th and 19th centuries. Artists usually applied pre-treatments to the oils to obtain binders with modified properties, such as different handling qualities or colour. Oil processing plays a key role in the subsequent ageing and degradation of linseed oil paints. In this thesis a multi-analytical approach was adopted to investigate the drying, polymerization and oxidative degradation of linseed oil paints. In particular, thermogravimetric analysis (TGA), yielding information on the macromolecular scale, was compared with gas chromatography-mass spectrometry (GC-MS) and direct exposure mass spectrometry (DE-MS), which provide information on the molecular scale. The study was performed on linseed oils and paint reconstructions prepared according to an accurate historical description of 19th-century painting techniques. TGA revealed that during ageing the molecular weight of the oils changes and that higher molecular weight fractions form. TGA proved to be an excellent tool for comparing the oils and paint reconstructions: it highlights the different physical behaviour of oils processed by different methods, and distinguishes paint layers on the basis of the processed oil and/or the pigment used. GC-MS and DE-MS were used to characterise the soluble, non-polymeric fraction of the oils and paint reconstructions. GC-MS allowed us to calculate the ratios of palmitic to stearic acid (P/S) and of azelaic to palmitic acid (A/P), and to evaluate the effects produced by the oil pre-treatments and by the presence of different pigments. This helps to clarify the role of the pre-treatments and of the pigments in the oxidative degradation undergone by siccative oils during ageing.
DE-MS enabled the various molecular weight fractions of the samples to be studied simultaneously, and thus helped to highlight the oxidation and hydrolysis reactions, and the formation of carboxylates, that occur during ageing and as the oil pre-treatments and pigments change. The combination of thermal analysis with molecular techniques such as GC-MS, DE-MS and FTIR enabled a model to be developed that unravels some crucial issues: 1) how oil pre-treatments produce binders with different physico-chemical qualities, and how this can influence the ageing of an oil paint film; 2) what role the interaction between oil and pigments plays in the ageing and degradation process.
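The diagnostic P/S and A/P ratios mentioned above are simple quotients of integrated chromatographic peak areas. A minimal sketch of that calculation, with hypothetical peak areas rather than data from the thesis:

```python
# Illustrative only: diagnostic fatty acid ratios from GC-MS peak areas.
# The area values below are invented placeholders, not thesis data.

def fatty_acid_ratios(areas):
    """Return (P/S, A/P) ratios from a dict of integrated peak areas."""
    ps = areas["palmitic"] / areas["stearic"]   # P/S: characteristic of the oil type
    ap = areas["azelaic"] / areas["palmitic"]   # A/P: index of oxidative degradation
    return ps, ap

areas = {"palmitic": 1520.0, "stearic": 980.0, "azelaic": 1650.0}
ps, ap = fatty_acid_ratios(areas)
# For linseed oil, P/S is often quoted near 1.5; a high A/P (around 1 or above)
# is commonly read as a sign of a significantly oxidized, aged oil film.
```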

Relevance: 70.00%

Abstract:

This thesis reports an integrated analytical approach for the study of the physicochemical and biological properties of new synthetic bile acid (BA) analogues that act as agonists of the FXR and TGR5 receptors. Structure-activity data were compared with those previously obtained, using the same experimental protocols, on synthetic and naturally occurring BAs. The new synthetic BA analogues are classified into different groups, according also to their potency as FXR and TGR5 agonists: unconjugated and steroid-modified BAs, and side-chain-modified BAs including taurine or glycine conjugates and pseudo-conjugates (sulphonate and sulphate analogues). In order to investigate the relationship between structure and activity, the synthetic analogues were subjected to physicochemical characterization and to a preliminary screening of their pharmacokinetics and metabolism using a bile fistula rat model. Sensitive and accurate analytical methods were developed for the qualitative and quantitative analysis of BAs in biological fluids and in the samples used for the physicochemical studies. High-performance liquid chromatography combined with electrospray tandem mass spectrometry, with efficient chromatographic separation of all the studied BAs and their metabolites, was optimized and validated, and analytical strategies for the identification of the BAs and their minor metabolites were developed. Taurine and glycine conjugates were identified in MS/MS by monitoring the specific ion transitions in multiple reaction monitoring (MRM) mode, while all other metabolites (sulphate, glucuronide, dehydroxylated, decarboxylated or oxo forms) were monitored in selected-ion recording (SIR) mode with a negative ESI interface. Accurate and precise data were obtained on the main physicochemical properties, including solubility, detergency, lipophilicity and albumin binding.
These studies have shown that minor structural modifications greatly affect the pharmacokinetics and metabolism of the new analogues with respect to the natural BAs and, in turn, their site of action, particularly the regions of the enterohepatic circulation where their receptors are located.

Relevance: 60.00%

Abstract:

In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed as the sum of two operators: the first represents its nonuniform Fourier transform approximation, while the second suppresses aliasing. The first operator is analytically characterized and can be computed quickly by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
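The first of the two operators can be illustrated by evaluating a signal's Fourier transform on a warped frequency grid. The sketch below is my own toy illustration of that idea (direct evaluation, no fast algorithm, hypothetical warp map), not the authors' algorithm:

```python
# Toy illustration of a nonuniform (warped-grid) Fourier transform.
# The warping map and test signal are hypothetical.
import cmath
import math

def warped_dft(x, warp):
    """Evaluate the DFT of x on the warped normalized-frequency grid warp(k/N),
    where warp maps [0, 1) onto [0, 1)."""
    N = len(x)
    out = []
    for k in range(N):
        f = warp(k / N)  # warped normalized frequency
        out.append(sum(x[n] * cmath.exp(-2j * math.pi * f * n) for n in range(N)))
    return out

x = [1.0, 2.0, 0.5, -1.0]
# The identity warp reproduces the ordinary uniform DFT.
X_uniform = warped_dft(x, lambda u: u)
# A nonlinear warp redistributes frequency resolution across the band.
X_warped = warped_dft(x, lambda u: u ** 2)
```

A fast implementation would replace the direct sums with interpolation from an FFT, which is the role the abstract assigns to the first operator.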

Relevance: 60.00%

Abstract:

Wheel-rail contact analysis plays a fundamental role in the multibody modelling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to ensure high numerical efficiency (so that it can be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. Contact models can be subdivided into two categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear algebraic-differential equations associated with the constrained multibody system. Indentation between the bodies is not permitted, and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories make it possible to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies; however, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry).
The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories provide the shape of the contact patch and the tangential forces. Both of these multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient, because the physical hypotheses behind the underlying theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered as elastic bodies governed by Navier's equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach ensures high generality and accuracy, but still entails very high computational costs and memory consumption. Because of this computational load and, referring to the current state of the art, the integration between multibody and differential modelling is almost absent in the literature, especially in the railway field. However, this integration is very important, because only differential modelling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modelling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described.
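One common form of the indentation-to-force law used in semi-elastic descriptions is Hertz's solution. As a hedged sketch, the classic sphere-on-plane Hertzian formula is shown below with illustrative steel-on-steel values; the actual wheel-rail geometry is more complex, and this is not the specific law adopted in the thesis:

```python
# Hertzian point contact (sphere on plane), as a simplified illustration of
# computing a normal force from an indentation. Material values are generic
# steel figures, not thesis parameters.
import math

def hertz_normal_force(delta, R, E1, nu1, E2, nu2):
    """Hertz normal force F = (4/3) * E* * sqrt(R) * delta^(3/2)."""
    # Effective (plane-strain) contact modulus E*
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    return (4.0 / 3.0) * E_star * math.sqrt(R) * delta**1.5

# Steel on steel, 0.46 m radius, 0.1 mm indentation:
F = hertz_normal_force(delta=1e-4, R=0.46,
                       E1=2.1e11, nu1=0.3, E2=2.1e11, nu2=0.3)
# F comes out on the order of 1e5 N, a plausible wheel-load magnitude.
```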
Concerning the global models, two new models belonging to the semi-elastic approach will be presented; they satisfy the following specifications: 1) the models have to be three-dimensional and to consider all six relative degrees of freedom between wheel and rail; 2) they have to handle generic railway tracks and generic wheel and rail profiles; 3) they have to ensure a general and accurate handling of multiple contacts, without simplifying hypotheses on the problem geometry; in particular they have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) they have to be implementable directly online within multibody models, without look-up tables; 5) they have to achieve computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) they have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This reduction yields the high numerical efficiency that makes the online implementation of the new procedure possible, with performance comparable to that of commercial multibody software. At the same time, the analytical approach ensures high accuracy and generality.
Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) the model has to be three-dimensional and to consider all six relative degrees of freedom between wheel and rail; 2) it has to handle generic railway tracks and generic wheel and rail profiles; 3) it has to ensure a general and accurate handling of multiple contacts, without simplifying hypotheses on the problem geometry; in particular it has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) it has to be implementable directly online within multibody models; 5) it has to ensure high numerical efficiency and reduced memory consumption, in order to achieve a good integration between multibody and differential modelling (the basis of the local contact models); 6) it has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model concern the contact modelling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem. Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for a good integration between multibody and differential modelling. The contact models were then inserted into a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as a benchmark is the Manchester Wagon, whose physical and geometrical characteristics are readily available in the literature.
The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while the contact models use C S-functions, a Matlab architecture that allows the Matlab/Simulink and C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) was then also implemented in Simpack Rail, a widely tested and validated commercial multibody software package for railway vehicles. Finally, numerical simulations of the vehicle dynamics were carried out on many different railway tracks, with the aim of evaluating the performance of the whole model. The comparison between the results obtained with the Matlab/Simulink model and those obtained with the Simpack Rail model allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, I would like to thank Trenitalia and the Regione Toscana for the support provided throughout the Ph.D. activity. I would also like to thank INTEC GmbH, the company that develops the Simpack Rail software, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.

Relevance: 60.00%

Abstract:

In territories where food production is scattered across many small or medium-sized, or even domestic, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore vary widely over the year, according to the particular production process carried out in each period. Coupling high-efficiency micro-cogeneration units with easily operated biomass conversion equipment able to treat different materials would provide many important advantages to farmers and to the community as well; increasing the feedstock flexibility of gasification units is therefore seen today as a paramount step towards their wide diffusion in rural areas, and as a real necessity for their utilization at small scale. Two research topics were considered of main concern for this purpose, and they are therefore discussed in this work: the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two aspects, the present work is divided into two parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels: wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences that prevent the use of the same conversion unit for different materials.
To this end, a kinetic-free gasification model was initially developed in Excel, considering different values of the air-to-biomass ratio and taking downdraft gasification as the examined application. An attempt was made to relate the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (involving the volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through this equilibrium the energy and mass balances of the process algorithm are linked together as well. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, based mainly on its chemical composition. The model results agreed well with literature and experimental data for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a sizing procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to regulate the main solid conversion steps involved in the gasification process.
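The water-gas shift equilibrium assumed in the gasification zone can be sketched with a temperature-dependent equilibrium constant. The correlation below, ln K = 4577.8/T - 4.33 (T in kelvin), is a form commonly adopted in downdraft gasification equilibrium models; it is my choice for illustration, not necessarily the expression used in the thesis:

```python
# Water-gas shift equilibrium: CO + H2O <-> CO2 + H2.
# Correlation form often used in gasification equilibrium modelling;
# illustrative, not taken from the thesis.
import math

def wgs_equilibrium_constant(T):
    """Equilibrium constant of the water-gas shift reaction at T [K]."""
    return math.exp(4577.8 / T - 4.33)

# K > 1 favours CO2 + H2; the equilibrium shifts back towards CO as T rises.
K_low = wgs_equilibrium_constant(900.0)    # cooler gasification zone
K_high = wgs_equilibrium_constant(1200.0)  # hotter gasification zone
```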
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each step was related to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, so that temperature is the main working parameter controlling it. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is almost entirely achieved by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, the working temperature, the particle size and the nature of the biomass itself (through its pyrolysis heat) all carry comparable weight, so that the corresponding time may be governed by any of these factors, depending on the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units could finally be made. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species.
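Since char gasification is found to be kinetically limited, the corresponding zone residence time scales inversely with an Arrhenius-type rate, which makes it strongly temperature-sensitive. A generic sketch with illustrative (not thesis-specific) kinetic parameters:

```python
# Characteristic time of a kinetically limited conversion step, t ~ 1/k with
# an Arrhenius rate k = A * exp(-Ea / (R*T)). A and Ea below are illustrative
# placeholders, not fitted char gasification kinetics.
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def char_conversion_time(T, A=1.0e4, Ea=1.5e5):
    """Characteristic conversion time [s] at temperature T [K]."""
    k = A * math.exp(-Ea / (R_GAS * T))  # first-order rate constant [1/s]
    return 1.0 / k

# Even a 100 K temperature rise shortens the required char zone several-fold.
t_1000 = char_conversion_time(1000.0)
t_1100 = char_conversion_time(1100.0)
```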
Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be envisaged for all of them, by adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow the complete development of solid conversion in each case, without noticeably changing the fluid dynamic behaviour of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially when multi-fuel gasifiers are used, more substantial gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out.
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines able to remove several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h respectively. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their consumption of energy and materials. The designed units were then combined into different overall gas cleaning line arrangements (paths), following technical constraints mainly derived from the same performance analysis of the cleaning units and from the likely synergic effects of the contaminants on the correct operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in path design was tar removal from the gas stream, to prevent filter plugging and/or line pipe clogging. For this purpose, a catalytic tar cracking unit was envisaged as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high catalyst regeneration frequency, with a consequently significant air consumption for this operation, was calculated in all cases.
Other difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but nevertheless need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of the materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units and choosing corrosion-resistant materials for them, such as ceramics. Beyond these two solutions, which appear unavoidable in gas cleaning line design, fully high-temperature gas cleaning lines also proved unfeasible for the two larger plant sizes. Indeed, since the adopted design procedure precluded the use of temperature control devices, ammonia partial-oxidation units (the only method considered for high-temperature ammonia abatement) were unsuitable for the large-scale units, because of the strong increase in reactor temperature caused by the exothermic reactions involved. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the paths defined for the different plant sizes were compared with each other on the basis of defined operational parameters, including total pressure drop, total energy losses, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of the latter material is also remarkable.
Finally, as an estimate of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter computed from the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
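The closing comparison, enthalpy losses versus gas stream energy content computed from the lower heating value, reduces to simple arithmetic. A sketch with illustrative numbers (the flow, LHV and loss values below are not the thesis' results):

```python
# Relative energy loss of a gas cleaning line: enthalpy loss divided by the
# gas stream energy content (flow * lower heating value). All figures are
# illustrative placeholders.

def relative_energy_loss(enthalpy_loss_kw, gas_flow_nm3_h, lhv_kj_nm3):
    """Fraction of the gas stream energy content lost along the cleaning line."""
    energy_content_kw = gas_flow_nm3_h * lhv_kj_nm3 / 3600.0  # kJ/h -> kW
    return enthalpy_loss_kw / energy_content_kw

# 125 Nm3/h of syngas at 5000 kJ/Nm3 (a typical LHV order for air-blown
# syngas), with 10 kW of enthalpy losses along the line:
frac = relative_energy_loss(10.0, 125.0, 5000.0)
```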

Relevance: 60.00%

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of millimetres/degrees or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated through preliminary in-silico studies as: (a) geometric distortion and calibration errors; (b) 2D image and 3D model resolutions; (c) incorrect contour extraction; (d) bone model symmetries; (e) optimization algorithm limitations; (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified in the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process.
To solve this problem, two different approaches were followed: to enlarge the convergence basin of the optimal pose, the local approach used sequential alignments of the six degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performances of the technique were evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for the out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as in methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semiautomatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
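The "sequential alignment of the degrees of freedom in order of sensitivity" can be pictured as a coordinate-descent scheme: one pose parameter is optimized at a time, most sensitive first. The sketch below is a toy two-parameter version with an invented quadratic cost, not the actual image-matching metric or the thesis' algorithm:

```python
# Toy coordinate-descent illustration of sequential single-axis alignment.
# The cost function and parameters are hypothetical placeholders.

def line_search(cost, pose, axis, step=0.1, n=200):
    """Minimize cost along one pose axis by brute-force sampling."""
    best = min(range(-n, n + 1),
               key=lambda i: cost([p + (i * step if j == axis else 0.0)
                                   for j, p in enumerate(pose)]))
    pose = list(pose)
    pose[axis] += best * step
    return pose

def sequential_alignment(cost, pose, axes_by_sensitivity, sweeps=3):
    """Align each axis in turn, most sensitive first, for a few sweeps."""
    for _ in range(sweeps):
        for axis in axes_by_sensitivity:
            pose = line_search(cost, pose, axis)
    return pose

# Invented cost with a minimum at (2.0, -1.0); axis 1 is the more sensitive.
cost = lambda p: (p[0] - 2.0) ** 2 + 4.0 * (p[1] + 1.0) ** 2
pose = sequential_alignment(cost, [0.0, 0.0], axes_by_sensitivity=[1, 0])
# pose converges to the minimum near (2.0, -1.0)
```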

Relevance: 30.00%

Abstract:

The aim of this thesis is to study how explosive behavior and geophysical signals in a volcanic conduit are related to the development of overpressure in slug-driven eruptions. A first suite of laboratory experiments on gas slugs ascending in analogue conduits was performed. Slugs ascended through a range of analogue liquids and conduit diameters chosen to allow proper scaling to natural volcanoes, and the geometrical variation of the slug in response to the explored variables was parameterised. The volume of the gas slug and the rheology of the liquid phase proved to be the key parameters controlling slug overpressure at burst. Building on these results, a theoretical model to calculate burst overpressure in slug-driven eruptions was developed. The dimensionless approach adopted made it possible to apply the model to predict the bursting pressure of slugs at Stromboli. Comparison of the predicted values with measured data from Stromboli volcano showed that the model can explain the entire spectrum of eruptive styles observed at Stromboli, from low-energy puffing, through normal Strombolian eruptions, up to paroxysmal explosions, as manifestations of a single underlying physical process. Finally, another suite of laboratory experiments was performed to observe the oscillatory pressure and force variations generated during the expansion and bursting of gas slugs ascending in a conduit. Two end-member boundary conditions were imposed at the base of the pipe, simulating slug ascent in a closed-base (zero magma flux) and an open-base (constant flux) conduit. At the top of the pipe, a range of boundary conditions relevant to a volcanic vent was imposed, from open to plugged vent.
The results obtained illustrate that changes in the conduit boundary conditions affect the dynamics of slug expansion and burst: an upward flux at the base of the conduit attenuates the magnitude of the pressure transients, while a rheological stiffening in the top-most region of the conduit changes the magnitude of the observed pressure transients dramatically, favoring a sudden and more energetic pressure release into the overlying atmosphere. Finally, the implications of changing boundary conditions for the oscillatory processes generated at the volcanic scale are also discussed.
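The thesis' burst overpressure model itself is not reproduced in the abstract. As a hedged illustration of the underlying scaling only, consider the simple no-expansion limit: a slug that rises a height dh through magma of density rho without expanding would arrive overpressured by roughly the magmastatic head it traversed, dP ~ rho * g * dh. The numbers below are illustrative, loosely Stromboli-like, and not predictions of the actual model:

```python
# Upper-bound overpressure of a non-expanding rising gas slug (magmastatic
# head it ascended through). Density and ascent height are illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def static_overpressure_bound(rho, dh):
    """Upper-bound burst overpressure [Pa]: dP = rho * g * dh."""
    return rho * G * dh

# Basaltic magma (~2600 kg/m3) over a 100 m rise:
dP = static_overpressure_bound(rho=2600.0, dh=100.0)
# dP is roughly 2.5 MPa; real slugs expand on ascent, so actual burst
# overpressures fall below this bound.
```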

Relevância:

30.00%

Publicador:

Resumo:

This research has focused on the behavior and collapse of masonry arch bridges. Recent decades have seen an increasing interest in this structural type, which is still present and in use despite the passage of time and the evolution of transport means. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, trying to highlight their strengths and weaknesses. Three methods are mainly examined: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods derived from two of the fundamental theorems of plastic analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Each method is applied to the case studies through computer-based representations that allow a user-friendly application of the principles explained. A particular closed-form approach based on an elasto-plastic material model, developed by some Belgian researchers, is also studied. To compare the three methods, two case studies have been analyzed: i) a generic single-span masonry arch bridge; ii) a real masonry arch bridge, the Clemente Bridge, built over the Savio River in Cesena. In the analyses performed, all models are two-dimensional so that results are comparable between the different methods. The methods have been compared with each other in terms of collapse load and hinge positions.
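The core criterion of the Thrust Line Analysis Method is that the arch is safe if a line of thrust in equilibrium with the loads can be contained within the ring thickness. A minimal sketch of that check, using the textbook closed-form case of a parabolic arch under a uniform load per unit span (all numbers are invented and are not the Clemente Bridge's geometry):

```python
# Thrust-line check for a parabolic arch under a uniform load q per unit span.
# Illustrative geometry only (not the Clemente Bridge).
L, r, t = 10.0, 2.5, 0.6   # span, rise, ring thickness (m)
q = 20.0                   # uniform vertical load (kN/m)

def thrust_line_y(x, H):
    """Height of the line of thrust at abscissa x for horizontal thrust H."""
    return q * x * (L - x) / (2.0 * H)

def arch_axis_y(x):
    """Parabolic centre line of the arch."""
    return 4.0 * r * x * (L - x) / L**2

# For H = q*L^2 / (8*r) the thrust line coincides with the arch axis,
# so its eccentricity is zero everywhere and the check trivially passes.
H = q * L**2 / (8.0 * r)
xs = [L * i / 20.0 for i in range(21)]
max_ecc = max(abs(thrust_line_y(x, H) - arch_axis_y(x)) for x in xs)
safe = max_ecc <= t / 2.0
```

For real load cases the thrust line deviates from the axis, and one searches for a value of H that keeps the eccentricity within t/2 at every section; failure to find one indicates collapse by hinge formation.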

Relevância:

30.00%

Publicador:

Resumo:

This thesis reports an integrated analytical and physicochemical approach to the study of natural substances and new drugs, based on mass spectrometry techniques combined with liquid chromatography. In particular, Chapter 1 concerns the study of berberine, a natural substance with pharmacological activity for the treatment of hepatobiliary and intestinal diseases. The first part focuses on the relationships between the physicochemical properties, pharmacokinetics and metabolism of berberine and its metabolites. For this purpose a sensitive HPLC-ES-MS/MS method has been developed, validated and used to determine these compounds during studies of their physicochemical properties, and to measure plasma levels of berberine and its metabolites, including berberrubine (M1), demethyleneberberine (M3) and jatrorrhizine (M4), in humans. Data show that M1 could undergo efficient intestinal absorption by passive diffusion due to a keto-enol tautomerism, confirmed by NMR studies and by its higher plasma concentration. In the second part of Chapter 1, the in vivo biodistribution of M1 and berberine (BBR) in the rat is compared. In Chapter 2 a new HPLC-ES-MS/MS method for the simultaneous determination and quantification of glucosinolates, such as glucoraphanin, glucoerucin and sinigrin, and isothiocyanates, such as sulforaphane and erucin, has been developed and validated. This method has been used for the analysis of functional foods enriched with vegetable extracts. Chapter 3 focuses on a physicochemical study of the interaction between bile acid sequestrants used in the treatment of hypercholesterolemia, including colesevelam and cholestyramine, and obeticholic acid (OCA), a potent agonist of the farnesoid X nuclear receptor (FXR). In particular, a new experimental model for the determination of the equilibrium binding isotherm was developed.
Chapter 4 focuses on methodological aspects of a new hard-ionization interface coupled with liquid chromatography (Direct-EI-UHPLC-MS), not yet commercially available and potentially useful for qualitative analysis and for molecules that are "transparent" to soft-ionization techniques. This method was applied to the analysis of several steroid derivatives.
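Quantification by HPLC-MS/MS of the kind described above typically rests on a calibration curve of analyte/internal-standard peak-area ratio versus concentration. The sketch below shows that back-calculation step with invented standards, not the thesis' actual validation data:

```python
import numpy as np

# Hypothetical calibration standards: concentration (ng/mL) vs
# analyte / internal-standard peak-area ratio (invented numbers).
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
area_ratio = np.array([0.021, 0.102, 0.205, 1.010, 2.020])

slope, intercept = np.polyfit(conc, area_ratio, 1)  # least-squares line

def quantify(measured_ratio):
    """Back-calculate a concentration from a measured peak-area ratio."""
    return (measured_ratio - intercept) / slope

unknown = quantify(0.50)  # e.g. an unknown plasma sample
```

In a validated bioanalytical method this step is complemented by acceptance criteria on linearity, accuracy and precision of the back-calculated standards.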

Relevância:

30.00%

Publicador:

Resumo:

In this thesis, new advances in the development of spectroscopy-based methods for the characterization of heritage materials have been achieved. Concerning FTIR spectroscopy, new approaches aimed at exploiting the near- and far-IR regions for the characterization of inorganic or organic materials have been tested. Paint cross-sections have been analysed by FTIR spectroscopy in the NIR range, and an "ad hoc" chemometric approach has been developed for the elaboration of hyperspectral maps. Moreover, a new method for the characterization of calcite, based on the use of grinding curves, has been set up in both the MIR and FAR regions. Indeed, calcite is a material widely used in cultural heritage, and this spectroscopic approach is an efficient and rapid tool to distinguish between different calcite samples. Different enhanced vibrational techniques for the characterisation of dyed fibres have been tested. First, a SEIRA (Surface-Enhanced Infra-Red Absorption) protocol has been optimised, allowing the analysis of colorant micro-extracts thanks to the enhancement produced by the addition of gold nanoparticles. These preliminary studies led to a new enhanced FTIR method, named ATR/RAIRS, which reaches lower detection limits. Regarding Raman microscopy, the research followed two lines, both aimed at avoiding the use of colloidal solutions. AgI-based supports, obtained by deposition on gold-coated glass slides, have been developed and tested by spotting colorant solutions. A SERS spectrum can be obtained thanks to the photoreduction that the laser may induce on the silver salt. Moreover, these supports can be used for the TLC separation of a mixture of colorants, and analyses by means of both Raman/SERS and ATR-RAIRS can be successfully performed. Finally, a photoreduction method for the "on fiber" analysis of colorants, without the need for any extraction, has been optimised.
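The grinding-curve approach mentioned above relates the shape and relative intensity of calcite's IR bands to particle size. As a purely illustrative sketch, the code below builds a synthetic spectrum from two Lorentzian bands at the standard calcite positions near 874 and 712 cm⁻¹ (the amplitudes and widths are invented, not measured values) and extracts a band-height ratio of the kind such curves track:

```python
import numpy as np

wavenumber = np.linspace(600.0, 1000.0, 2001)  # cm^-1

def lorentz(x, center, amplitude, hwhm):
    """Lorentzian band profile."""
    return amplitude * hwhm**2 / ((x - center)**2 + hwhm**2)

# Synthetic "spectrum": two calcite bands (nu2 ~874, nu4 ~712 cm^-1);
# band positions are standard, intensities are made up for illustration.
spectrum = (lorentz(wavenumber, 874.0, 1.0, 8.0)
            + lorentz(wavenumber, 712.0, 0.4, 6.0))

def band_height(center, window=15.0):
    """Maximum absorbance within +/- window of a nominal band centre."""
    mask = np.abs(wavenumber - center) <= window
    return spectrum[mask].max()

ratio_874_712 = band_height(874.0) / band_height(712.0)
```

Plotting such ratios (or band heights) against grinding time or particle size is what yields the calibration curve used to compare calcite samples.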