Abstract:
Drug abuse is a major global problem with a strong impact not only on the individual but also on society as a whole. Among the different strategies that can be used to address this issue, the identification of abusers and their proper medical treatment play an important role. This kind of therapy should be carefully monitored in order to discourage improper use of the medication and to tailor the dose to the specific needs of the patient. Hence, reliable analytical methods are needed to reveal drug intake and to support physicians in the pharmacological management of drug dependence. In the present Ph.D. thesis, original analytical methods for the determination of drugs with a potential for abuse and of substances used in the pharmacological treatment of drug addiction are presented. In particular, the work focused on the analysis of ketamine, naloxone and long-acting opioids (buprenorphine and methadone), oxycodone, disulfiram and bupropion in human plasma and in dried blood spots. The developed methods are based on high performance liquid chromatography (HPLC) coupled to various kinds of detectors (mass spectrometer, coulometric detector, diode array detector). For biological sample pre-treatment, different techniques were exploited, namely solid phase extraction and microextraction by packed sorbent. All the presented methods have been validated according to official guidelines with good results, and some of them have been successfully applied to the therapeutic drug monitoring of patients under treatment for drug abuse.
Abstract:
For the safety assessment of radioactive waste, the possibility of radionuclide migration has to be considered. Since Np (and also Th, due to the long-lived Th-232) will be responsible for the greatest amount of radioactivity one million years after discharge from the reactor, its (im)mobilization in the geosphere is of great importance. Furthermore, the chemistry of Np(V) is quite similar (but not identical) to the chemistry of Pu(V). Three species of neptunium may be found in the near field of a waste repository, but pentavalent neptunium is the most abundant species under a wide range of natural conditions. Within this work, the interaction of Np(V) with the clay mineral montmorillonite and with melanoidins (as model substances for humic acids) was studied. The sorption of neptunium onto gibbsite, used as a model for montmorillonite, was also investigated. The sorption of neptunium onto γ-alumina and montmorillonite was studied in a parallel doctoral work by S. Dierking. Neptunium is found only in ultra-trace amounts in the environment; therefore, sensitive and specific methods are needed for its determination. The sorption was determined by γ-spectroscopy and LSC over the whole concentration range studied. In addition, the combination of these techniques with ultrafiltration allowed the study of Np(V) complexation with melanoidins. Regrettably, the available speciation methods (e.g. CE-ICP-MS and EXAFS) are not capable of detecting environmentally relevant neptunium concentrations; therefore, a combination of batch experiments and speciation analyses was performed. Further, hybrid clay-based materials (HCM) of montmorillonite and melanoidins were prepared for sorption studies. The formation of the hybrid materials begins in the interlayers of the montmorillonite, and the organic material then spreads over the surface of the mineral. The sorption of Np onto HCM was studied at environmentally relevant concentrations and the results were compared with those predicted by the linear additive model of Samadfam. The sorption of neptunium onto gibbsite was studied in batch experiments and the sorption maximum was found at pH ~8.5. The sorption isotherm pointed to the presence of strong and weak sorption sites on gibbsite. The Np speciation was studied by EXAFS, which showed that the sorbed species was Np(V). The influence of M42-type melanoidins on the sorption of Np(V) onto montmorillonite was also investigated at pH 7. The sorption of the melanoidins was affected by the order in which the components were added and by ionic strength. The sorption of Np was affected by ionic strength, pointing to outer-sphere sorption, whereas the presence of increasing amounts of melanoidins had little influence on Np sorption.
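The abstract attributes the isotherm shape to strong and weak sorption sites; a common way to express such behaviour (not necessarily the exact model used in the thesis) is a two-site Langmuir isotherm:

```latex
q(C) \;=\; \frac{q_{\max,1}\,K_1 C}{1 + K_1 C} \;+\; \frac{q_{\max,2}\,K_2 C}{1 + K_2 C}, \qquad K_1 \gg K_2,
```

where the first term describes the high-affinity (strong) sites and the second the weak sites.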
Abstract:
For the detection of hidden objects by low-frequency electromagnetic imaging the Linear Sampling Method works remarkably well despite the fact that the rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfills the assumptions for the fully justified variant of the Linear Sampling Method, the so-called Factorization Method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds.
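For reference, the standard characterization underlying the Factorization Method (in its classical form, not specific to this work) decides whether a sampling point z lies inside the scatterer D via a range test on the measurement operator F:

```latex
z \in D \quad \Longleftrightarrow \quad \phi_z \in \mathcal{R}\!\left((F^{*}F)^{1/4}\right),
```

where \phi_z is a known test function associated with z. The result described above is that, in the low-frequency limit, the measurement operator satisfies the assumptions needed for this characterization.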
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns the assignment of times and resources to a set of activities, to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
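As a rough illustration of the modular-arithmetic view (a minimal sketch with assumed names and signature, not the thesis' filtering algorithm), a cyclic precedence between two activities can be stated with an integer iteration offset that multiplies the period:

```python
# In a cyclic schedule, a precedence i -> j must hold modulo the period:
# j may start in a later iteration, so the end time of i is shifted back
# by offset_k whole periods.
def modular_precedence_ok(start_i, dur_i, start_j, period, offset_k):
    return start_j >= start_i + dur_i - offset_k * period

# Example: with period 10, activity i (start 8, duration 4) can precede
# activity j starting at time 2 of the next iteration (offset_k = 1).
print(modular_precedence_ok(8, 4, 2, 10, 0))  # False within one iteration
print(modular_precedence_ok(8, 4, 2, 10, 1))  # True across iterations
```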
Abstract:
In this work, new tools for atmospheric pollutant sampling and analysis were applied in order to deepen source apportionment studies. The project was developed mainly through the study of atmospheric emission sources in a suburban area influenced by a municipal solid waste incinerator (MSWI), a medium-sized coastal tourist town and a motorway. Two main research lines were followed. Regarding the first line, the potential of PM samplers coupled with a wind-select sensor was assessed. Results showed that they may be a valid support in source apportionment studies; however, meteorological and territorial conditions can strongly affect the results. Moreover, new markers were investigated, with a particular focus on biomass burning processes. OC proved to be a good indicator of biomass combustion, as did all the organic compounds determined. Among metals, lead and aluminium are well correlated with biomass combustion. Surprisingly, PM was not enriched in potassium during the bonfire event. The second research line consists of the application of Positive Matrix Factorization (PMF), a new statistical tool for data analysis. This technique was applied to datasets with different time resolutions. PMF applied to atmospheric deposition fluxes identified six main sources affecting the area; the incinerator's relative contribution seemed to be negligible. PMF analysis was then applied to PM2.5 collected with samplers coupled with a wind-select sensor. The higher number of environmental indicators determined allowed more detailed results to be obtained on the sources affecting the area. Vehicular traffic proved to be the source of greatest concern for the study area; in this case too, the incinerator's relative contribution seemed to be negligible. Finally, the application of PMF analysis to hourly aerosol data demonstrated that the higher the temporal resolution of the data, the closer the source profiles were to the real ones.
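As a simplified illustration of the factorization idea (using scikit-learn's NMF as a stand-in; true PMF additionally weights each residual by its measurement uncertainty, and the matrix sizes below are placeholders), a samples-by-species matrix can be decomposed into source contributions and source profiles:

```python
# Factor a samples-by-species matrix X into source contributions (G)
# and source (chemical) profiles (F), X ≈ G @ F, with non-negativity.
import numpy as np
from sklearn.decomposition import NMF

X = np.abs(np.random.rand(200, 15))   # placeholder: 200 samples, 15 species
model = NMF(n_components=6, init="nndsvda", max_iter=500)  # six sources, as in the study
G = model.fit_transform(X)            # source contributions per sample
F = model.components_                 # source profiles (species signatures)
```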
Abstract:
Most problems in modern structural design can be described with a set of equations; solutions of these mathematical models can give the engineer and the designer useful information during the design stage. The same holds true for physical chemistry, the branch of chemistry that uses mathematics and physics to explain real chemical phenomena. In this work two very different chemical processes are studied: the dynamics of an artificial molecular motor, and the generation and propagation of nerve signals between excitable cells and tissues such as neurons and axons. These two processes, in spite of their chemical and physical differences, can both be described successfully by partial differential equations, namely the Fokker-Planck equation and the Hodgkin-Huxley model, respectively. With the aid of advanced engineering software, these two processes have been modelled and simulated in order to extract physical information about them and to predict properties that may, in the future, be extremely useful during the design stage of both molecular motors and devices whose operation relies on nerve communication between active fibres.
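For reference, the two models mentioned are commonly written as follows (standard textbook forms; the exact formulations used in the thesis may differ): the Smoluchowski/Fokker-Planck equation for the probability density P(x,t) of an overdamped Brownian motor in a potential U(x), and the Hodgkin-Huxley membrane equation for the potential V(t):

```latex
\frac{\partial P}{\partial t} \;=\; \frac{\partial}{\partial x}\!\left[\frac{1}{\gamma}\frac{\partial U}{\partial x}\,P \;+\; D\,\frac{\partial P}{\partial x}\right],
\qquad
C_m \frac{dV}{dt} \;=\; -\bar{g}_{\mathrm{Na}}\,m^{3}h\,(V - E_{\mathrm{Na}}) \;-\; \bar{g}_{\mathrm{K}}\,n^{4}(V - E_{\mathrm{K}}) \;-\; \bar{g}_{L}(V - E_{L}) \;+\; I_{\mathrm{ext}},
```

where the gating variables m, h and n obey first-order kinetics with voltage-dependent rate constants.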
Abstract:
Foodborne diseases impact human health and economies worldwide in terms of health care and productivity loss. Prevention is necessary, and methods to detect, isolate and quantify foodborne pathogens play a fundamental role, changing continuously to keep pace with the evolution of microorganisms and food production. Official methods are mainly based on growing microorganisms in different media and isolating them on selective agars, followed by confirmation of presumptive colonies through biochemical and serological tests. A complete identification requires from 7 to 10 days. Over the last decades, new molecular techniques based on antibodies and nucleic acids have allowed more accurate typing and faster detection and quantification. The present thesis aims to apply molecular techniques to improve the performance of official methods for two pathogens: Shiga-like Toxin-producing Escherichia coli (STEC) and Listeria monocytogenes. In 2011, a new STEC strain belonging to serogroup O104 caused a large outbreak; therefore, a method to detect and isolate STEC O104 is needed. The first objective of this work is the detection, isolation and identification of STEC O104 in artificially contaminated sprouts. Multiplex PCR assays and anti-O104 antibodies incorporated into reagents for immunomagnetic separation and latex agglutination were employed. Contamination levels of less than 1 CFU/g were detected. Multiplex PCR assays permitted rapid screening of enriched food samples and identification of isolated colonies. Immunomagnetic separation and latex agglutination provided high sensitivity and rapid identification of the O104 antigen, respectively. The development of a rapid method to detect and quantify Listeria monocytogenes, a high-risk pathogen, is the second objective. Detection of 1 CFU/ml and quantification of 10–1,000 CFU/ml in raw milk were achieved in about 3 h by a sample pre-treatment step followed by quantitative PCR. The growth of L. monocytogenes in raw milk was also evaluated.
Abstract:
This work deals with the car sequencing (CS) problem, a combinatorial optimization problem for sequencing mixed-model assembly lines. The aim is to find a production sequence for different variants of a common base product such that work overload of the respective line operators is avoided or minimized. The variants are distinguished by certain options (e.g., sun roof yes/no) and therefore require different processing times at the stations of the line. CS introduces a so-called sequencing rule H:N for each option, which restricts the occurrence of this option to at most H in any N consecutive variants, and it seeks a sequence that leads to no, or a minimum number of, sequencing rule violations. In this work, the suitability of CS for workload-oriented sequencing is analyzed; to this end, its solution quality is compared in experiments to that of the related mixed-model sequencing problem. A new sequencing rule generation approach as well as a new lower bound for the problem are presented. Different exact and heuristic solution methods for CS are developed and their efficiency is shown in experiments. Furthermore, CS is adjusted and applied to a resequencing problem with pull-off tables.
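A minimal sketch of how a sequencing rule H:N can be checked (the violation-counting scheme below is illustrative; the thesis may count or weight violations differently):

```python
def rule_violations(sequence, option, h, n):
    """Count violations of a sequencing rule h:n for one option.

    `sequence` is a list of variants, each a set of option identifiers.
    Every window of n consecutive variants may contain the option at
    most h times; each window exceeding that counts as one violation.
    """
    violations = 0
    for start in range(len(sequence) - n + 1):
        window = sequence[start:start + n]
        if sum(option in variant for variant in window) > h:
            violations += 1
    return violations

# Example: option "sun roof" with rule 1:2 (at most 1 in any 2 consecutive cars).
seq = [{"sun roof"}, {"sun roof"}, set(), {"sun roof"}]
print(rule_violations(seq, "sun roof", h=1, n=2))  # -> 1 (first window violates)
```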
Abstract:
Small molecules affecting biological processes in plants are widely used in agricultural practice as herbicides or plant growth regulators, and in basic plant sciences as probes to study plant physiology. Most of these compounds were identified in large screens by the agrochemical industry or as phytoactive natural products; more recently, novel phytoactive compounds have originated from academic research through chemical screens performed to induce specific phenotypes of interest. The aim of the present PhD thesis is to evaluate different approaches used for the identification of the primary mode of action (MoA) of a phytoactive compound. Based on the methodologies used for MoA identification, three approaches are discerned: a phenotyping approach, an approach based on a genetic screen, and a biochemical screening approach.

Four scientific publications resulting from my work are presented as examples of how a phenotyping approach can successfully be applied to describe the plant MoA of different compounds in detail.

I. A subgroup of cyanoacrylates has been discovered as plant growth inhibitors. A set of bioassays indicated a specific effect on cell division. Cytological investigations of the cell division process in plant cell cultures, studies of microtubule assembly with green fluorescent protein marker lines in vivo, and cross-resistance studies with Eleusine indica plants harbouring a mutation in alpha-tubulin led to the description of alpha-tubulin as a target site of cyanoacrylates (Tresch et al., 2005).

II. The MoA of the herbicide flamprop-m-methyl was previously unknown. The studies described in Tresch et al. (2008) indicate a primary effect on cell division. Detailed studies unravelled a specific effect on mitotic microtubule figures, causing a block in cell division. In contrast to other inhibitors of microtubule rearrangement such as dinitroanilines, flamprop-m-methyl did not influence microtubule assembly in vitro. An influence of flamprop-m-methyl on a target within the cytoskeleton signalling network could be proposed (Tresch et al., 2008).

III. The herbicide endothall is a protein phosphatase inhibitor structurally related to the natural product cantharidin. Bioassay studies indicated a dominant effect on dark-grown cells that was unrelated to the effects observed in the light. Cytological characterisation of the microtubule cytoskeleton in corn tissue and heterotrophic tobacco cells showed a specific effect of endothall on mitotic spindle formation and on the ultrastructure of the nucleus, in combination with a decrease in the proliferation index. The observed effects are similar to those of other protein phosphatase inhibitors such as cantharidin and the structurally different okadaic acid. Additionally, the observed effects show similarities to knock-out lines of the TON1 pathway, a protein phosphatase-regulated signalling pathway. The data presented in Tresch et al. (2011) associate endothall's known in vitro inhibition of protein phosphatases with in vivo effects and suggest an interaction between endothall and the TON1 pathway.

IV. Mefluidide, a plant growth regulator, induces growth retardation and a specific phenotype indicating an inhibition of fatty acid biosynthesis. A test of cuticle functionality suggested a defect in the biosynthesis of very-long-chain fatty acids (VLCFA) or waxes. Metabolic profiling studies showed similarities with different groups of VLCFA synthesis inhibitors. Detailed analyses of VLCFA composition in tissues of duckweed (Lemna paucicostata) indicated a specific inhibition of the known herbicide target 3-ketoacyl-CoA synthase (KCS). Inhibitor studies using a yeast expression system established for plant KCS proteins verified the potency of mefluidide as an inhibitor of plant KCS enzymes. It could be shown that the strength of inhibition varied among different KCS homologues. The Arabidopsis Cer6 protein, which when knocked out induces a plant growth phenotype similar to that caused by mefluidide, was one of the most sensitive KCS enzymes (Tresch et al., 2012).

The findings of my own work were combined with other publications reporting successful identification of the MoA and primary target proteins of different compounds or compound classes. A revised three-tier approach for the MoA identification of phytoactive compounds is proposed. The approach consists of a first level aiming to address compound stability, uniformity of effects in different species, general cytotoxicity, and the effect on common processes such as transcription and translation. Based on these findings, advanced studies can be defined to start the second level of MoA characterisation, either with further phenotypic characterisation, by starting a genetic screen, or by establishing a biochemical screen. At the third level, enzyme assays or protein affinity studies should show the activity of the compound on the hypothesized target and should associate the in vitro effects with the in vivo profile of the compound.
Abstract:
In the past two decades, the work of a growing portion of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only pure tensile forces. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work is devoted to the development of an interval-analysis based procedure for the solution of the direct geometric problem of a generic cable manipulator. Besides allowing a rapid solution of the problem, this technique also guarantees the results against rounding and elimination errors and can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator, whose realization is described in this dissertation together with the auxiliary work done during its design and simulation phases.
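A toy sketch of the interval-arithmetic principle behind such a certified solver (illustrative only; the actual direct-geometry procedure bisects and prunes boxes of candidate poses using enclosures like these):

```python
# Every quantity is an enclosure [lo, hi], so rounding errors can never
# silently discard a valid solution.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def contains(self, value):
        return self.lo <= value <= self.hi

# Enclosure of a squared cable length for an uncertain anchor position:
x = Interval(0.99, 1.01)
y = Interval(1.99, 2.01)
length_sq = x * x + y * y
print(length_sq.contains(5.0))  # True: the nominal value is certified to lie inside
```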
Abstract:
This thesis work aims to develop original analytical methods for the determination of drugs with a potential for abuse, for the analysis of substances used in the pharmacological treatment of drug addiction in biological samples, and for the monitoring of potentially toxic compounds added to street drugs. In fact, reliable analytical techniques can play an important role in this setting: they can be employed to reveal drug intake, allowing the identification of drug users, and to assess drug blood levels, assisting physicians in the management of the treatment. Pharmacological therapy indeed needs to be carefully monitored in order to optimize the dose scheduling according to the specific needs of the patient and to discourage improper use of the medication. In particular, different methods have been developed for the detection of gamma-hydroxybutyric acid (GHB), prescribed for the treatment of alcohol addiction; of glucocorticoids, one of the pharmaceutical classes most abused to enhance sport performance; and of adulterants, pharmacologically active compounds added to illicit drugs for recreational purposes. All the presented methods are based on capillary electrophoresis (CE) and high performance liquid chromatography (HPLC) coupled to various detectors (diode array detector, mass spectrometer). Biological sample pre-treatment was carried out using different extraction techniques, namely liquid-liquid extraction (LLE) and solid phase extraction (SPE). Different matrices have been considered: human plasma, dried blood spots, human urine and simulated street drugs. The developed analytical methods are individually described and discussed in this thesis.
Abstract:
The goal of this thesis is the acceleration of numerical calculations of QCD observables, both at leading order and at next-to-leading order in the coupling constant. In particular, the optimization of helicity and spin summation in the context of VEGAS Monte Carlo algorithms is investigated. In the literature, two such methods are mentioned, but without detailed analyses, and only one of them can be used at next-to-leading order. This work presents a total of five different methods that replace the helicity sums with a Monte Carlo integration. This integration can be combined with the existing phase space integral, in the hope that this causes less overhead than the complete summation. For three of these methods, an extension to existing subtraction terms is developed, which is required to enable next-to-leading order calculations. All methods are analyzed with respect to efficiency, accuracy, and ease of implementation before they are compared with each other. In this process, one method shows clear advantages over all the others.
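A schematic sketch of the basic idea (not the thesis' implementation): the exact sum over the 2^n helicity configurations is replaced by an unbiased Monte Carlo estimate that can be sampled alongside the phase-space points; `matrix_element_sq` is a hypothetical stand-in for a squared helicity amplitude.

```python
import itertools
import random

def helicity_sum_exact(matrix_element_sq, n_particles):
    # Full summation over all 2**n_particles helicity configurations.
    configs = itertools.product((-1, +1), repeat=n_particles)
    return sum(matrix_element_sq(h) for h in configs)

def helicity_sum_mc(matrix_element_sq, n_particles, n_samples):
    # Unbiased Monte Carlo estimate of the same sum: average over random
    # configurations, rescaled by the number of configurations.
    n_configs = 2 ** n_particles
    total = 0.0
    for _ in range(n_samples):
        h = tuple(random.choice((-1, +1)) for _ in range(n_particles))
        total += matrix_element_sq(h)
    return n_configs * total / n_samples
```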
Abstract:
Every year, thousands of surgical treatments are performed in order to repair or, where possible, completely replace organs or tissues affected by degenerative diseases. Patients with these kinds of illnesses wait a long time for a donor who could replace, in a short time, the damaged organ or tissue. The lack of biological alternatives to conventional surgical treatments such as autografts, allografts and xenografts led researchers from different areas to collaborate in finding innovative solutions. This research gave rise to a new discipline that merges knowledge from molecular biology, biomaterials, engineering, biomechanics and, recently, design and architecture. This discipline is named Tissue Engineering (TE) and it represents a step forward towards substitutive or regenerative medicine. One of the major challenges of TE is to design and develop, using a biomimetic approach, an artificial 3D anatomical scaffold suitable for the adhesion of cells that are able to proliferate and differentiate in response to the biological and biophysical stimuli of the specific tissue to be replaced. Nowadays, powerful instruments allow increasingly accurate and well-defined analyses of patients who need more precise diagnoses and treatments. Starting from patient-specific information provided by CT (Computed Tomography), microCT and MRI (Magnetic Resonance Imaging), an image-based approach can be used to reconstruct the site to be replaced. With the aid of recent Additive Manufacturing techniques, which allow three-dimensional objects to be printed with sub-millimetric precision, it is now possible to exercise almost complete control over the parametric characteristics of the scaffold: this is the way to achieve correct cellular regeneration. In this work we focus our attention on a branch of TE known as Bone TE, whose main subject is bone. Bone TE combines osteoconductive and morphological aspects of the scaffold, whose main properties are pore diameter, structure porosity and interconnectivity. Achieving the ideal values of these parameters is the main goal of this work: here we create a simple and interactive biomimetic design process based on 3D CAD modelling and generative algorithms that provides a way to control the main properties and to create a structure morphologically similar to cancellous bone. Two different typologies of scaffold are compared: the first is based on Triply Periodic Minimal Surfaces (TPMS), whose basic crystalline geometries are nowadays used for Bone TE scaffolding; the second is based on Voronoi diagrams, more often used in the design of decorations and jewellery for their capacity to decompose and tessellate a volumetric space with a heterogeneous spatial distribution (frequent in nature). In this work, we show how to manipulate the main properties (pore diameter, structure porosity and interconnectivity) of the TE-oriented scaffold design by implementing generative algorithms: "bringing nature back to nature".
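As a minimal example of the generative, TPMS-based route (the gyroid formula is standard; the grid resolution and threshold below are illustrative values, not the design parameters of the thesis), a voxel model of a gyroid sheet scaffold and an estimate of its porosity can be produced as follows:

```python
# Sample the classic gyroid implicit function on a voxel grid; the solid
# phase is where |f| is below a threshold t, and varying t changes the
# wall thickness and hence the porosity.
import numpy as np

n = 64                                   # voxels per side (illustrative)
x, y, z = np.meshgrid(*[np.linspace(0, 2 * np.pi, n)] * 3, indexing="ij")
gyroid = (np.sin(x) * np.cos(y) +
          np.sin(y) * np.cos(z) +
          np.sin(z) * np.cos(x))

t = 0.4                                  # larger t -> thicker walls, lower porosity
solid = np.abs(gyroid) < t               # boolean voxel model of the scaffold wall
porosity = 1.0 - solid.mean()
print(f"estimated porosity: {porosity:.2f}")
```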
Abstract:
Nowadays communication is switching from a centralized scenario, where communication media like newspapers, radio and TV programs produce information and people are just consumers, to a completely different decentralized scenario, where everyone is potentially an information producer through the use of social networks, blogs and forums that allow real-time worldwide information exchange. These new instruments, as a result of their widespread diffusion, have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques like Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This can be used to determine, for instance, the degree of user satisfaction with products, services, politicians and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All these approaches rely on a Markov Chain based model, which is language independent and whose main strengths are simplicity and generality, making it attractive with respect to previous, more sophisticated techniques. Every technique discussed has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, for both single-domain and cross-domain tasks, in 2-class (i.e. positive and negative) Document Sentiment Classification. There is still room for improvement, however, and this work also indicates the way forward for enhancing performance: a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also address validating these results in tasks with more than two classes.
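The abstract does not detail how the Markov Chain model is built; purely as an illustration of the general idea (and not the thesis' actual model), one language-independent variant trains a word-transition chain per sentiment class and scores a document by the log-likelihood of its transitions under each chain:

```python
from collections import defaultdict
import math

def train_chain(docs):
    # Count word-to-word transitions over all training documents of one class.
    counts, totals = defaultdict(lambda: defaultdict(int)), defaultdict(int)
    for doc in docs:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
            totals[a] += 1
    return counts, totals

def log_likelihood(doc, chain, vocab_size=10_000):
    counts, totals = chain
    words = doc.split()
    score = 0.0
    for a, b in zip(words, words[1:]):
        # add-one smoothing so unseen transitions do not zero the score
        score += math.log((counts[a][b] + 1) / (totals[a] + vocab_size))
    return score

pos_chain = train_chain(["great phone really great battery", "love the screen"])
neg_chain = train_chain(["terrible battery really bad", "hate the screen"])
doc = "really great battery"
label = "positive" if log_likelihood(doc, pos_chain) > log_likelihood(doc, neg_chain) else "negative"
print(label)  # -> positive
```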
Abstract:
In this work we study a polyenergetic and multimaterial model for breast image reconstruction in Digital Tomosynthesis, taking into consideration the variety of materials forming the object and the polyenergetic nature of the X-ray beam. The modelling of the problem leads to the resolution of a high-dimensional nonlinear least-squares problem that, being an ill-posed inverse problem, needs some kind of regularization. We test two main classes of methods: the Levenberg-Marquardt method (together with the Conjugate Gradient method for the computation of the descent direction) and two limited-memory BFGS-like methods (L-BFGS). We perform experiments for different values of the regularization parameter (constant or varying at each iteration), tolerances and stopping conditions. Finally, we analyse the performance of the various methods by comparing relative errors, numbers of iterations, run times and the quality of the reconstructed images.
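A toy illustration of the two classes of solvers compared (not the reconstruction code itself; the forward model, problem sizes and regularization value below are placeholders): a small regularized nonlinear least-squares problem solved both with Levenberg-Marquardt and with L-BFGS.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
b = np.exp(A @ x_true) + 0.01 * rng.standard_normal(40)   # toy nonlinear forward model
lam = 1e-2                                                 # regularization parameter

def residuals(x):
    # data misfit and Tikhonov penalty stacked into one residual vector
    return np.concatenate([np.exp(A @ x) - b, np.sqrt(lam) * x])

def objective(x):
    r = residuals(x)
    return 0.5 * r @ r

x0 = np.zeros(10)
sol_lm = least_squares(residuals, x0, method="lm")          # Levenberg-Marquardt
sol_lbfgs = minimize(objective, x0, method="L-BFGS-B")      # limited-memory BFGS
print(np.linalg.norm(sol_lm.x - x_true), np.linalg.norm(sol_lbfgs.x - x_true))
```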