27 results for "Development potential"
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In Bosnia and Herzegovina, the development of clear policy objectives and the endorsement of a long-term, coherent and shared agricultural and rural development policy have also been hampered by structural problems, such as the lack of reliable information on population and other relevant issues and the absence of an adequate land registry and cadastre. Moreover, the agricultural and rural sectors in BiH are characterized by many factors that have typically affected transition countries, such as land fragmentation, a lack of agricultural mechanization, outdated production technologies, rural aging, high unemployment and out-migration. In such a framework, the condition and role of women in rural areas have suffered from the lack of gender-disaggregated data and the consequently poor information base, which has led to the exclusion of gender-related questions from the agenda of public institutions and to the absence of targeted policy interventions. The aim of the research is to investigate the role and condition of women in the rural development process of the Republic of Srpska and to analyze the capacity of extension services to stimulate their empowerment. Specific research questions address the status of women in the rural areas of the Republic of Srpska, the role of government in fostering the empowerment of rural women, and the role of the extension service in supporting rural women. The methodology, inspired by the case study method developed by R. Yin, is designed around the three specific research questions, which are used as building blocks. Each of the three research questions is investigated with a combination of methodological tools, including surveys, expert interviews and focus groups, aimed at overcoming the lack of data and knowledge that characterizes the research objectives.
Abstract:
In recent years we have witnessed important changes: the Second Quantum Revolution is in the spotlight of many countries, and it is creating a new generation of technologies. To unlock the potential of the Second Quantum Revolution, several countries have launched strategic plans and research programs that finance and set the pace of research and development of these new technologies (such as the Quantum Flagship and the National Quantum Initiative Act). The increasing pace of technological change is also challenging science education and institutional systems, requiring them to help prepare new generations of experts. This work is situated within physics education research and contributes to this challenge by developing an approach and a course about the Second Quantum Revolution. The aims are to promote quantum literacy and, in particular, to bring out the cultural and educational value of the Second Quantum Revolution. The dissertation is organized in two parts. In the first, we unpack the Second Quantum Revolution from a cultural perspective and shed light on the main revolutionary aspects, which are elevated to the rank of principles and implemented in the design of a course for secondary school students and for prospective and in-service teachers. The design process and the educational reconstruction of the activities are presented, together with the results of a pilot study conducted to investigate the impact of the approach on students' understanding and to gather feedback to refine and improve the instructional materials. The second part explores the Second Quantum Revolution as a context for introducing some basic concepts of quantum physics. We present the results of an implementation with secondary school students that investigates whether and to what extent external representations can promote students' understanding and acceptance of quantum physics as a personally reliable description of the world.
Abstract:
Two major types of B cells, the antibody-producing cells of the immune system, are classically distinguished in the spleen: marginal zone (MZ) and follicular (FO). In addition, FO B cells are subdivided into FO I and FO II cells, based on the amount of surface IgM. MZ B cells, which surround the splenic follicles, rapidly produce IgM in response to blood-borne pathogens without T cell help, while T cell-dependent production of high affinity, isotype-switched antibodies is ascribed to FO I cells. The significance of FO II cells and the mechanism underlying B cell fate choices are unclear. We showed that FO II cells express more Sca1 than FO I cells and originate from a distinct B cell development program, marked by high expression of Sca1. MZ B cells can derive from the “canonical” Sca1lo pathways, as well as from the Sca1hi program, although the Sca1hi program shows a stronger MZ bias than the Sca1lo program, and extensive phenotypic plasticity exists between MZ and FO II, but not between MZ and FO I cells. The Sca1hi program is induced by hematopoietic stress and generates B cells with an Igλ-enriched repertoire. In aged mice, the canonical B cell development pathway is impaired, while the Sca1hi program is increased. Furthermore, we showed that a population of unknown function, defined as Lin-c-kit+Sca1+ (LSK-), contains early lymphoid precursors, with primarily B cell potential in vivo. Our data suggest that LSK- cells may represent a distinct precursor for the Sca1hi program in the bone marrow.
Abstract:
Society's increasing aversion to technological risk requires the development of inherently safer and more environmentally friendly processes, while still assuring the economic competitiveness of industrial activities. The different forms of impact (e.g. environmental, economic and societal) are frequently characterized by conflicting reduction strategies and must be taken into account holistically in order to identify the optimal solutions in process design. Although the literature reports an extensive discussion of strategies and specific principles, quantitative assessment tools are required to identify the marginal improvements of alternative design options, to allow trade-offs among contradictory aspects and to prevent "risk shift". In the present work a set of integrated quantitative tools for design assessment (i.e. a design support system) was developed. The tools were specifically dedicated to the implementation of sustainability and inherent safety in process and plant design activities, with respect to chemical and industrial processes in which substances dangerous to humans and the environment are used or stored. The tools were mainly devoted to application in the "conceptual" and "basic design" stages, when the project is still open to changes (due to the large number of degrees of freedom) that may include strategies to improve sustainability and inherent safety. The set of developed tools covers different phases of the design activities throughout the lifecycle of a project (inventories, process flow diagrams, preliminary plant layout plans). The development of such tools gives a substantial contribution to filling the present gap in the availability of sound supports for implementing safety and sustainability in the early phases of process design. The proposed decision support system was based on the development of a set of leading key performance indicators (KPIs), which ensure the assessment of the economic, societal and environmental impacts of a process (i.e. its sustainability profile). The KPIs were based on impact models (some of them complex), yet they are easy and quick to apply in practice. Their full evaluation is possible even from the limited data available during early process design. Innovative reference criteria were developed to compare and aggregate the KPIs on the basis of the actual site-specific impact burden and the sustainability policy. Particular attention was devoted to the development of reliable criteria and tools for the assessment of inherent safety at different stages of the project lifecycle. The assessment follows an innovative approach to the analysis of inherent safety, based on both the calculation of the expected consequences of potential accidents and the evaluation of the hazards related to equipment. The methodology overcomes several problems present in the methods previously proposed for quantitative inherent safety assessment (use of arbitrary indexes, subjective judgement, built-in assumptions, etc.). A specific procedure was defined for the assessment of the hazards related to the formation of undesired substances in chemical systems undergoing "out of control" conditions. In the assessment of layout plans, "ad hoc" tools were developed to account for the hazard of domino escalation and for safety economics.
The effectiveness and value of the tools were demonstrated through their application to a large number of case studies concerning different kinds of design activities (choice of materials; design of the process, the plant and the layout) and different types of processes and plants (chemical industry, storage facilities, waste disposal). An experimental survey (analysis of the thermal stability of the isomers of nitrobenzaldehyde) provided the input data necessary to demonstrate the method for the inherent safety assessment of materials.
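As an illustrative sketch of how a set of leading KPIs might be normalised against site-specific reference burdens and aggregated into a sustainability profile (the indicator names, reference values and weights below are assumptions for illustration, not the KPIs defined in this work):

```python
# Hypothetical sketch: normalising and aggregating design KPIs into a single
# sustainability score. Indicator names, reference values and weights are
# illustrative assumptions, not the KPIs developed in the thesis.

def aggregate_kpis(kpis, references, weights):
    """Normalise each KPI against a site-specific reference burden and
    combine them with policy-driven weights (lower score = better)."""
    score = 0.0
    for name, value in kpis.items():
        normalised = value / references[name]   # dimensionless impact ratio
        score += weights[name] * normalised
    return score

# Two design alternatives compared on economic, societal and environmental KPIs
design_a = {"economic": 1.2e6, "societal": 0.8, "environmental": 350.0}
design_b = {"economic": 1.4e6, "societal": 0.3, "environmental": 220.0}
references = {"economic": 1.0e6, "societal": 1.0, "environmental": 400.0}
weights = {"economic": 0.4, "societal": 0.3, "environmental": 0.3}  # policy choice

for label, design in [("A", design_a), ("B", design_b)]:
    score = aggregate_kpis(design, references, weights)
    print(f"Design {label}: aggregated impact score = {score:.2f}")
```

In this toy scheme the weights encode the sustainability policy and the references encode the site-specific impact burden; comparing the aggregated scores of alternative designs is one simple way to support the trade-off among conflicting impacts.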
Abstract:
Participation appeared in development discourses for the first time in the 1970s, as a generic call for the involvement of the poor in development initiatives. Over the last three decades, the initial perspectives on participation, intended as a project method for poverty reduction, have evolved into a coherent and articulated theoretical elaboration in which participation figures among the paraphernalia of good governance promotion: participation has acquired the status of a "new orthodoxy". Nevertheless, the experience of implementing participatory approaches in development projects has in most cases been rather disappointing, since the transformative potential of 'participation in development' depends on a series of factors in which every project can differ from the others: the ultimate aim of the approach promoted, its forms and contents and, last but not least, the socio-political context in which the participatory initiative is embedded. In Egypt, the signing of a project agreement between the Arab Republic of Egypt and the Federal Republic of Germany in 1998 inaugurated a Participatory Urban Management Programme (PUMP), to be implemented in Greater Cairo by the German Technical Cooperation (Deutsche Gesellschaft für Technische Zusammenarbeit, GTZ), with the Ministry of Planning (now the Ministry of Local Development) and the Governorates of Giza and Cairo as the main counterparts. Now, ten years after the beginning of the PUMP/PDP and close to its end (December 2010), it is possible to draw some conclusions about the scope, significance and effects of the participatory approach adopted by GTZ and appropriated by the Egyptian counterparts in dealing with the issue of informal areas and, more generally, with urban development. Our analysis follows three sets of questions: the first set regards the way 'participation' has been interpreted and concretised by PUMP and PDP. The second concerns the emancipating potential of the 'participatory approach' and its ability to 'empower' the 'marginalised'. The third focuses, on the one hand, on the efficacy of the GTZ strategy in improving service delivery in informal areas (especially in terms of planning and policies) and, on the other hand, on the potential of the GTZ development intervention to trigger an incremental process of 'democratisation' from below.
Abstract:
The Triplex cell vaccine is a cancer immunopreventive cell vaccine that can almost completely prevent mammary tumor onset in HER-2/neu transgenic mice. A future translation of cancer immunoprevention from preclinical to clinical studies should take several aspects into account. The work reported in this thesis deals with three of these aspects: the vaccine schedule, the activity in a therapeutic set-up, and second-generation DNA vaccines. An important element in determining human acceptance of and compliance with a treatment protocol is the number of vaccinations. In order to improve the vaccination schedule, a minimal protocol was sought, i.e. a schedule consisting of fewer administrations than the standard protocol but with similar efficacy. A candidate optimal protocol was identified with an in silico model, the SimTriplex simulator. The in vivo test of this schedule in HER-2/neu transgenic mice only partially confirmed the in silico predictions. This result shows that in silico models can aid the search for optimal treatment protocols, provided they are further tuned on experimental data. As a further result, this preclinical study highlighted that the kinetics of the antibody response play a major role in determining cancer prevention, leading to the hypothesis of a threshold that must be reached rapidly and maintained for life. Early clinical trials would be performed in a therapeutic, rather than preventive, setting. Thus, the activity of the Triplex vaccine was investigated against experimental lung metastases in HER-2/neu transgenic mice, in order to evaluate whether the immunopreventive Triplex vaccine could also be effective against a pre-existing tumor mass. This preclinical model of aggressive metastatic development showed that the vaccine was also an efficient treatment for the cure of micrometastases. However, the immune mechanisms activated against the tumor mass were not antibody-dependent, i.e. they differed from those preventing the onset of the primary mammary carcinoma. DNA vaccines could be used more easily than cellular ones. A second generation of the Triplex vaccine based on DNA plasmids was evaluated in an aggressive preclinical model (BALBp53neu female mice) and compared with the preventive ability of the cellular Triplex vaccine. The Triplex DNA vaccine was found to be as effective as the Triplex cell vaccine, while exploiting a more restricted immune stimulation.
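A minimal sketch of the antibody-threshold hypothesis mentioned above, assuming a simple rise-and-decay kinetic; this is not the SimTriplex simulator, and the parameters and schedules are purely illustrative:

```python
# Minimal sketch of the antibody-threshold hypothesis: each vaccination boosts
# the antibody titre, which decays between administrations; protection requires
# reaching and holding a threshold. Parameters are illustrative assumptions.

def antibody_level(schedule_days, horizon_days, boost=1.0, decay=0.02):
    """Simple rise-and-decay model: each vaccination adds `boost`,
    the titre decays exponentially with rate `decay` per day."""
    level, levels = 0.0, []
    doses = set(schedule_days)
    for day in range(horizon_days):
        if day in doses:
            level += boost
        level *= (1.0 - decay)
        levels.append(level)
    return levels

def protected_fraction(levels, threshold=2.0):
    """Fraction of the simulated period spent above the protective threshold."""
    return sum(l >= threshold for l in levels) / len(levels)

standard = list(range(0, 365, 21))   # frequent administrations
minimal = list(range(0, 365, 60))    # fewer administrations
for name, sched in [("standard", standard), ("minimal", minimal)]:
    frac = protected_fraction(antibody_level(sched, 365))
    print(f"{name}: {len(sched)} doses, above threshold {frac:.0%} of the time")
```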
Abstract:
The main reasons for the attention focused on ceramics as possible structural materials are their wear resistance and their ability to operate with limited oxidation and ablation at temperatures above 2000°C. Hence, this work is devoted to the study of two classes of materials which can satisfy these requirements: silicon carbide (SiC)-based ceramics for wear applications, and borides and carbides of transition metals for ultra-high temperature applications (UHTCs). SiC-based materials: Silicon carbide is a hard ceramic which finds applications in many industrial sectors, from heat production to automotive engineering and metals processing. In view of new fields of use, SiC-based ceramics were produced with the addition of 10-30 vol% of MoSi2, in order to obtain electroconductive ceramics. MoSi2, indeed, is an intermetallic compound which possesses high-temperature oxidation resistance, high electrical conductivity (resistivity of 21·10⁻⁶ Ω·cm), relatively low density (6.31 g/cm³), a high melting point (2030°C) and high stiffness (440 GPa). The SiC-based ceramics were hot pressed at 1900°C with the addition of Al2O3-Y2O3 or Y2O3-AlN as sintering additives. The microstructure of the composites and of the reference materials, SiC and MoSi2, was studied by means of conventional analytical techniques, such as X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy dispersive spectroscopy (SEM-EDS). The composites showed a homogeneous microstructure, with good dispersion of the secondary phases and low residual porosity. The following thermo-mechanical properties of the SiC-based materials were measured: Vickers hardness (HV), Young's modulus (E), fracture toughness (KIc) and room- to high-temperature flexural strength (σ). The mechanical properties of the composites were compared to those of the two monolithic SiC and MoSi2 materials, and showed higher stiffness, higher fracture toughness and slightly higher flexural strength. Tribological tests were also performed in two configurations, disc-on-pin and slider-on-cylinder, aiming at studying the wear behaviour of SiC-MoSi2 composites with Al2O3 as the counterface material. The tests pointed out that the addition of MoSi2 was detrimental, owing to a lower hardness in comparison with the pure SiC matrix. On the contrary, electrical measurements revealed that the addition of 30 vol% of MoSi2 rendered the composite electroconductive, lowering the electrical resistivity by three orders of magnitude. Ultra High Temperature Ceramics: Carbides, borides and nitrides of transition metals (Ti, Zr, Hf, Ta, Nb, Mo) possess very high melting points and interesting engineering properties, such as high hardness (20-25 GPa), high stiffness (400-500 GPa), flexural strengths that remain unaltered from room temperature to 1500°C, and excellent corrosion resistance in aggressive environments. All these properties make UHTCs potential candidates for the development of manoeuvrable hypersonic flight vehicles with sharp leading edges. For this purpose, Zr- and Hf-carbide and boride materials were produced with the addition of 5-20 vol% of MoSi2. This secondary phase enabled the achievement of fully dense composites at temperatures lower than 2000°C and without the application of pressure. Besides the conventional microstructural analyses, XRD and SEM-EDS, transmission electron microscopy (TEM) was employed to explore the microstructure on a small length scale and disclose the effective densification mechanisms.
A thorough literature analysis revealed that neither detailed TEM work nor reports on densification mechanisms are available for this class of materials, although these are essential to optimize the sintering aids utilized and the processing parameters applied. Microstructural analyses, along with thermodynamic and crystallographic considerations, disclosed the effective role of MoSi2 during the sintering of Zr- and Hf-carbides and borides. Among the investigated mechanical properties (HV, E, KIc, σ from room temperature to 1500°C), the high-temperature flexural strength was improved thanks to the protective and sealing effect of a silica-based glassy phase, especially for the borides. Nanoindentation tests were also performed on HfC-MoSi2 composites in order to extract the hardness and elastic modulus of the single phases. Finally, arc-jet tests on HfC- and HfB2-based composites confirmed the excellent oxidation behaviour of these materials at temperatures exceeding 2000°C; no cracking or spallation occurred, and the modified layer was only 80-90 μm thick.
Abstract:
This doctoral thesis focuses on ground-based measurements of stratospheric nitric acid (HNO3) concentrations obtained by means of the Ground-Based Millimeter-wave Spectrometer (GBMS). Pressure-broadened HNO3 emission spectra are analyzed using a new inversion algorithm developed as part of this thesis work, and the retrieved vertical profiles are extensively compared to satellite-based data. This comparison effort plays a key role in establishing a long-term (1991-2010), global data record of stratospheric HNO3, with an expected impact on studies concerning ozone decline and recovery. The first part of this work is focused on the development of an ad hoc version of the Optimal Estimation Method (Rodgers, 2000) to retrieve HNO3 profiles from the spectra observed with the GBMS. I also performed a comparison between HNO3 vertical profiles retrieved with the OEM and those obtained with the old iterative Matrix Inversion method. Results show no significant differences in the retrieved profiles and error estimates, with the OEM however providing additional information needed to better characterize the retrievals. A final section of this first part of the work is dedicated to a brief review of the application of the OEM to other trace gases observed by the GBMS, namely O3 and N2O. The second part of this study deals with the validation of the HNO3 profiles obtained with the new inversion method. The first step was the validation of GBMS measurements of tropospheric opacity, which are necessary for the calibration of any GBMS spectrum. This was achieved by means of comparisons among correlative measurements of water vapor column content (or Precipitable Water Vapor, PWV) since, in the spectral region observed by the GBMS, the tropospheric opacity is almost entirely due to water vapor absorption. In particular, I compared GBMS PWV measurements collected during the primary field campaign of the ECOWAR project (Bhawar et al., 2008) with simultaneous PWV observations obtained with Vaisala RS92k radiosondes, a Raman lidar, and an IR Fourier transform spectrometer. I found that the GBMS PWV measurements are in good agreement with the other three data sets, exhibiting a mean difference between observations of ~9%. After this initial validation, the GBMS HNO3 retrievals were compared to two sets of satellite data produced by the two NASA/JPL Microwave Limb Sounder (MLS) experiments (aboard the Upper Atmosphere Research Satellite (UARS) from 1991 to 1999, and on the Earth Observing System (EOS) Aura mission from 2004 to date). This part of my thesis falls within GOZCARDS (Global Ozone Chemistry and Related Trace gas Data Records for the Stratosphere), a multi-year project aimed at developing a long-term data record of stratospheric constituents relevant to the issues of ozone decline and expected recovery. This data record will be based mainly on satellite-derived measurements, but ground-based observations will be pivotal for assessing offsets between satellite data sets. Since the GBMS has been operated for more than 15 years, its nitric acid data record offers a unique opportunity for cross-calibrating HNO3 measurements from the two MLS experiments. I compare GBMS HNO3 measurements obtained from the Italian Alpine station of Testa Grigia (45.9° N, 7.7° E, elev. 3500 m) during the period February 2004 - March 2007, and from Thule Air Base, Greenland (76.5° N, 68.8° W) during polar winter 2008/09, with Aura MLS observations.
A similar intercomparison is made between UARS MLS HNO3 measurements and those carried out with the GBMS at the South Pole, Antarctica (90°S), during most of 1993 and 1995. I assess systematic differences between GBMS and both UARS and Aura HNO3 data sets at seven potential temperature levels. Results show that, except for the measurements carried out at Thule, the ground-based and satellite data sets are consistent within the errors at all potential temperature levels.
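For reference, the linear form of the Optimal Estimation retrieval (Rodgers, 2000) underlying the ad hoc GBMS inversion can be written as follows; the notation is the standard one from Rodgers, and the GBMS-specific implementation details are not reproduced here:

```latex
% y: measured spectrum, x_a: a priori profile, K: weighting-function (Jacobian) matrix,
% S_a: a priori covariance, S_\epsilon: measurement-noise covariance.
\hat{x} = x_a + \left( K^{T} S_{\epsilon}^{-1} K + S_a^{-1} \right)^{-1}
                K^{T} S_{\epsilon}^{-1} \left( y - K x_a \right)
% The averaging kernel matrix
% A = \left( K^{T} S_{\epsilon}^{-1} K + S_a^{-1} \right)^{-1} K^{T} S_{\epsilon}^{-1} K
% is the kind of additional diagnostic the OEM provides to characterize the retrievals.
```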
Abstract:
The hierarchical organisation of biological systems plays a crucial role in the pattern formation of gene expression resulting from morphogenetic processes, where the autonomous internal dynamics of cells, as well as cell-to-cell interactions through membranes, are responsible for the emergent peculiar structures of the individual phenotype. Being able to reproduce the system dynamics at the different levels of such a hierarchy can be very useful for studying such a complex phenomenon of self-organisation. The idea is to model the phenomenon in terms of a large and dynamic network of compartments, where the interplay between inter-compartment and intra-compartment events determines the emergent behaviour resulting in the formation of spatial patterns. Based on these premises, the thesis proposes a review of the different approaches already developed for modelling developmental biology problems, as well as of the main models and infrastructures available in the literature for modelling biological systems, analysing their capabilities in tackling multi-compartment / multi-level models. The thesis then introduces a practical framework, MS-BioNET, for modelling and simulating these scenarios by exploiting the potential of multi-level dynamics. This is based on (i) a computational model featuring networks of compartments and an enhanced model of chemical reactions addressing molecule transfer, (ii) a logic-oriented language to flexibly specify complex simulation scenarios, and (iii) a simulation engine based on the many-species/many-channels optimised version of Gillespie's direct method. The thesis finally proposes the adoption of the agent-based model as an approach capable of capturing multi-level dynamics. To overcome the problem of parameter tuning in the model, the simulators are supplied with a module for parameter optimisation. The task is defined as an optimisation problem over the parameter space, in which the objective function to be minimised is the distance between the output of the simulator and a target one. The problem is tackled with a metaheuristic algorithm. As an example of the application of the MS-BioNET framework and of the agent-based model, a model of the first stages of Drosophila melanogaster development is realised. The goal of the model is to generate the early spatial pattern of gap gene expression. The correctness of the models is shown by comparing the simulation results with real gene expression data with spatial and temporal resolution, acquired from freely available on-line sources.
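As a minimal sketch of the stochastic simulation idea, a single-compartment version of Gillespie's direct method is shown below; this is not the many-species/many-channels optimised variant nor the MS-BioNET engine, and the species, reactions and rates are illustrative assumptions:

```python
import random

# Minimal sketch of Gillespie's direct method (single compartment).
# Species, reactions and rate constants are illustrative assumptions.

def gillespie(state, reactions, t_end, seed=42):
    """state: dict species -> count; reactions: list of (propensity_fn, stoichiometry)."""
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, dict(state))]
    while t < t_end:
        propensities = [rate(state) for rate, _ in reactions]
        a0 = sum(propensities)
        if a0 == 0:
            break
        t += rng.expovariate(a0)            # exponential waiting time to next event
        r = rng.uniform(0, a0)              # choose which reaction channel fires
        cumulative, chosen = 0.0, reactions[-1][1]
        for (rate, stoich), a in zip(reactions, propensities):
            cumulative += a
            if r <= cumulative:
                chosen = stoich
                break
        for species, change in chosen.items():
            state[species] += change
        trajectory.append((t, dict(state)))
    return trajectory

# Toy gene-expression model: constant transcription and first-order mRNA decay
state = {"mRNA": 0}
reactions = [
    (lambda s: 0.5, {"mRNA": +1}),               # transcription
    (lambda s: 0.1 * s["mRNA"], {"mRNA": -1}),   # decay
]
for t, s in gillespie(state, reactions, t_end=50.0)[-3:]:
    print(f"t = {t:6.2f}  mRNA = {s['mRNA']}")
```

In a multi-compartment setting such as the one described above, molecule-transfer events between neighbouring compartments would simply be additional reaction channels in the same propensity list.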
Abstract:
The subject of this thesis is multicolour bioluminescence analysis and how it can provide new tools for drug discovery and development. The mechanism of color tuning in bioluminescent reactions is not fully understood yet, but it is the object of intense research and several hypotheses have been put forward. In the past decade, key residues in the active site of the enzyme, or on the surface surrounding the active site, have been identified as responsible for different color emissions. However, since the bioluminescence reaction depends strictly on the interaction between the enzyme and its substrate, D-luciferin, modification of the substrate can also lead to a different emission spectrum. In recent years, firefly luciferase and other luciferases have undergone mutagenesis in order to obtain mutants with different emission characteristics. Thanks to these new discoveries in the bioluminescence field, multicolour luciferases can nowadays be employed in bioanalysis for assay development and imaging purposes. The use of multicolor bioluminescent enzymes has expanded the potential of a range of applications in vitro and in vivo. Multiple analyses and more information can be obtained from the same analytical session, saving cost and time. This thesis focuses on several applications of multicolour bioluminescence for high-throughput screening and in vivo imaging. Multicolor luciferases can be employed as new tools for drug discovery and development, and some examples are provided in the different chapters. New red codon-optimized luciferases have been demonstrated to be improved tools for bioluminescence imaging in small animals, and the possibility of combining red and green luciferases for BLI has been achieved, even if some aspects of the methodology remain challenging and need further improvement. In vivo bioluminescence imaging has progressed rapidly since its first application no more than 15 years ago and is becoming an indispensable tool in pharmacological research. At the same time, the development of more sensitive and better-performing microscopes and low-light imagers, allowing improved visualization and quantification of multicolor signals, would boost research and discoveries in the life sciences in general and in drug discovery and development in particular.
Abstract:
In two Italian sites, training newly grafted trees to multiple axes slightly reduced primary and secondary axis length and increased the number of secondary shoots. Total length, node production and total dry matter gain were proportional to the number of axes. The growth of both primary and secondary shoots, as well as dry matter accumulation, was also found to be well related to rootstock vigour. Great variability in axillary shoot production was recorded among the different environments. Grafted trees had higher primary growth, secondary axis growth and dry matter gain than chip-budded trees. Stem water potential measured in the second year after grafting was not affected by rootstock or by the number of leaders. Measurements performed in New Zealand (Hawke's Bay) during the second year after grafting revealed that both the final length and the growth rate of the primary and secondary axes were related to the rootstock rather than to the training system. Dwarfing rootstocks reduced the number of long vegetative shoots and increased the proportion of less vigorous shoots.
Abstract:
Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios were not considered in the site safety cases. Furthermore, the rapid renewal of industrial technology has created the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are not yet properly identified, may remain unidentified until they take place for the first time. The consideration of atypical scenarios, deviating from normal expectations of unwanted events or worst-case reference scenarios, is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide an easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened the knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool for obtaining a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying accident causes of atypical events, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method. REWI was found to be the most complementary and effective of the three, suggesting that its synergy with DyPASI would be an adequate strategy to improve hazard identification methodologies towards the capture of atypical accident scenarios.
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) the identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) the elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) the implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and it is more easily amenable to advanced timing analysis by construction, regardless of system scale and complexity.
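As an illustrative sketch of the layout-optimisation idea only (not the method or toolchain developed in the thesis): code units that interact frequently are placed so that they do not map to the same cache sets, here with a simple direct-mapped cache model and hypothetical function names, sizes and interaction graph:

```python
# Illustrative sketch: place code units so that functions that interact
# frequently do not contend for the same cache sets. Cache geometry, sizes
# and the interaction graph are assumptions, not the thesis' actual method.

CACHE_SETS = 128
LINE_SIZE = 32            # bytes per cache line

def sets_occupied(address, size):
    """Cache sets touched by a code unit placed at `address` (direct-mapped model)."""
    first = (address // LINE_SIZE) % CACHE_SETS
    lines = -(-size // LINE_SIZE)                # ceiling division
    return {(first + i) % CACHE_SETS for i in range(min(lines, CACHE_SETS))}

def greedy_layout(functions, interactions):
    """Greedily place each function at the lowest address that avoids cache-set
    overlap with the functions it interacts with (falls back to first fit)."""
    sizes = dict(functions)
    layout, next_free = {}, 0
    for name, size in functions:
        address = next_free
        while True:
            mine = sets_occupied(address, size)
            conflicts = any(
                mine & sets_occupied(layout[other], sizes[other])
                for other in interactions.get(name, []) if other in layout
            )
            if not conflicts or address > next_free + CACHE_SETS * LINE_SIZE:
                break
            address += LINE_SIZE                 # slide forward one line and retry
        layout[name] = address
        next_free = max(next_free, address + size)
    return layout

functions = [("control_loop", 512), ("sensor_filter", 384), ("logger", 256)]
interactions = {"sensor_filter": ["control_loop"], "logger": []}
for name, addr in greedy_layout(functions, interactions).items():
    print(f"{name:14s} placed at 0x{addr:06x}")
```

Keeping such placements stable across releases is the kind of property that makes incremental development compatible with a stable timing characterisation.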
Abstract:
Perfluoroalkylated substances are a group of chemicals that have been widely employed during the last 60 years in several applications, spreading and accumulating in the environment due to their extreme resistance to degradation. As a consequence, they have also been found in various types of food as well as in drinking water, proving that they can easily reach humans through the diet. The available information concerning their adverse effects on health has recently increased the interest in these contaminants and highlighted the importance of investigating all the potential sources of human exposure, among which the diet has proved to be the most relevant. This need has been underlined by the European Union through Recommendation 2010/161/EU: in this document, Member States were called upon to monitor the presence of these substances in food and to produce accurate estimations of human exposure. The purpose of the research presented in this thesis, which is the result of a partnership between an Italian and a French laboratory, was to develop reliable tools for the analysis of these pollutants in food, to be used for generating data on potentially contaminated matrices. An efficient method based on liquid chromatography-mass spectrometry for the detection of 16 different perfluorinated compounds in milk was validated in accordance with current European regulatory guidelines (2002/657/EC) and is currently under evaluation for ISO 17025 accreditation. The proposed technique was applied to cow, powdered and human breast milk samples from Italy and France to produce a preliminary monitoring of the presence of these contaminants. In accordance with the above-mentioned European Recommendation, this project also led to the development of a promising technique for the quantification of some precursors of these substances in fish. This method showed very satisfactory performance in terms of linearity and limits of detection, and will be useful for future surveys.
Abstract:
Drug abuse is a major global problem with a strong impact not only on the individual but also on society as a whole. Among the different strategies that can be used to address this issue, an important role is played by the identification of abusers and by proper medical treatment. This kind of therapy should be carefully monitored in order to discourage improper use of the medication and to tailor the dose to the specific needs of the patient. Hence, reliable analytical methods are needed to reveal drug intake and to support physicians in the pharmacological management of drug dependence. In the present Ph.D. thesis, original analytical methods are presented for the determination of drugs with a potential for abuse and of substances used in the pharmacological treatment of drug addiction. In particular, the work focused on the analysis of ketamine, naloxone, long-acting opioids (buprenorphine and methadone), oxycodone, disulfiram and bupropion in human plasma and in dried blood spots. The developed methods are based on high-performance liquid chromatography (HPLC) coupled to various kinds of detectors (mass spectrometer, coulometric detector, diode array detector). For biological sample pre-treatment, different techniques have been exploited, namely solid-phase extraction and microextraction by packed sorbent. All the presented methods have been validated according to official guidelines with good results, and some of them have been successfully applied to the therapeutic drug monitoring of patients under treatment for drug abuse.