Abstract:
A direct, extraction-free spectrophotometric method has been developed for the determination of acebutolol hydrochloride (ABH) in pharmaceutical preparations. The method is based on ion-pair complex formation between the drug and two acidic sulphonaphthalein dyes, namely bromocresol green (BCG) and bromothymol blue (BTB). Conformity to Beer's law enabled the assay of the drug in the range 0.5-13.8 µg mL⁻¹ with BCG and 1.8-15.9 µg mL⁻¹ with BTB. Compared with a reference method, the results obtained were of equal accuracy and precision. In addition, these methods were found to be specific for the analysis of acebutolol hydrochloride in the presence of the excipients co-formulated with the drug.
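To make the calibration step concrete, the following is a minimal sketch of how a Beer's-law assay of this kind is applied in practice: fit a straight line to the absorbance of standards within the stated linear range, then invert it for unknown samples. The absorbance values below are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical calibration standards for the BCG ion-pair complex, chosen
# within the stated linear range (0.5-13.8 µg/mL); the paper's actual
# absorbance readings are not reproduced here.
conc = np.array([0.5, 2.0, 4.0, 8.0, 13.8])            # µg/mL
absorbance = np.array([0.04, 0.16, 0.31, 0.63, 1.08])  # illustrative values

# Beer's law predicts A = m*c + b within the linear range; fit the line.
m, b = np.polyfit(conc, absorbance, 1)

def concentration(a):
    """Invert the calibration line to estimate concentration from absorbance."""
    return (a - b) / m

print(f"slope = {m:.4f} mL/µg, intercept = {b:.4f}")
print(f"sample with A = 0.50 -> {concentration(0.50):.2f} µg/mL")
```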
Abstract:
N-methylpyrrolidone (NMP) is a powerful solvent for a variety of chemical processes owing to its versatile chemical properties. It is used in the manufacture of polymers, detergents, pharmaceuticals, rubber and many other chemical substances. However, some of these processes generate a large amount of residue that must be dealt with. Well-known processes, such as the BASF process used in rubber-producing units, attempt to regenerate the solvent at the end of each run; nevertheless, a large amount of NMP-containing residue is still discarded, which over time could cause environmental concerns. In this study, we optimized the regeneration of NMP from the residue of butadiene production. It is shown that at higher temperatures NMP is separated from the residue with close to 90% efficiency, and a solvent-to-residue ratio of 6:1 proved to be the most effective.
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, which corresponds to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, in which ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions for cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternatives. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
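As a concrete illustration of the pairwise approach, the following is a minimal linear, primal-form sketch of the regularized pairwise least-squares idea behind RankRLS. The real method also supports kernels and the matrix-algebra shortcuts for cross-validation and model selection mentioned above; none of that is shown here, and the toy data are invented.

```python
import numpy as np

def rank_rls_fit(X, y, lam=1.0):
    """Linear pairwise least-squares ranker (primal-form sketch).

    Minimizes  sum over all pairs (i, j) of ((y_i - y_j) - (w.x_i - w.x_j))**2
               + lam * ||w||**2,
    using the identity that the pair sum equals 2 * r^T L r with r = y - Xw
    and L = n*I - 1*1^T, the Laplacian of the complete pair graph.
    """
    n, d = X.shape
    L = n * np.eye(n) - np.ones((n, n))
    A = 2.0 * X.T @ L @ X + lam * np.eye(d)
    b = 2.0 * X.T @ L @ y
    return np.linalg.solve(A, b)

# Toy usage: the learned scores matter only through the ordering they induce.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)
w = rank_rls_fit(X, y)
print(np.argsort(-(X @ w))[:5])  # indices of the five top-ranked examples
```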
Abstract:
The major type of non-cellulosic polysaccharides (hemicelluloses) in softwoods, the partly acetylated galactoglucomannans (GGMs), which comprise about 15% of spruce wood, have attracted growing interest because of their potential to become high-value products with applications in many areas. The main objective of this work was to explore the possibilities of extracting galactoglucomannans in native, polymeric form in high yield from spruce wood with pressurised hot water, and to obtain a deeper understanding of the process chemistry involved. Spruce (Picea abies) chips and ground wood particles were extracted using an accelerated solvent extractor (ASE) in the temperature range 160 – 180°C. Detailed chemical analyses were done on both the water extracts and the wood residues. As much as 80 – 90% of the GGMs in spruce wood, i.e. about 13% based on the original wood, could be extracted from ground spruce wood with pure water at 170 – 180°C with an extraction time of 60 min. GGMs comprised about 75% of the extracted carbohydrates and about 60% of the total dissolved solids. Other substances in the water extracts were xylans, arabinogalactans, pectins, lignin and acetic acid. The yields from chips were only about 60% of those from ground wood. Both the GGMs and other non-cellulosic polysaccharides were extensively hydrolysed under severe extraction conditions, when the pH dropped to about 3.5. Addition of sodium bicarbonate increased the yields of polymeric GGMs at low addition levels, 2.5 – 5 mM, where the end pH remained around 3.9. However, at higher addition levels the yields decreased, mainly because the acetyl groups in GGMs were split off, leading to low solubility of the GGMs. Extraction with buffered water in the pH range 3.8 – 4.4 gave total yields similar to those with plain water, but a higher yield of polymeric GGMs. Moreover, at these pH levels the hydrolysis of acetyl groups in GGMs was significantly inhibited. It was concluded that hot-water extraction of polymeric GGMs in good yield (up to 8% of wood) demands appropriate control of pH within a narrow range around 4. These results were supported by a study of the hydrolysis of GGM at constant pH in the range 3.8 – 4.2, in which a kinetic model for the degradation of GGM was developed. The influence of wood particle size on hot-water extraction was studied with particles in the range 0.1 – 2 mm. The smallest particles (< 0.1 mm) gave a 20 – 40% higher total yield than the coarsest particles (1.25 – 2 mm). The difference was greatest at short extraction times. The results indicated that extraction of GGMs and other polysaccharides is limited mainly by mass transfer in the fibre wall, and for coarse wood particles also in the wood matrix. Spruce sapwood, heartwood and thermomechanical pulp were also compared, but only small differences in the yields and composition of the extracts were found. Two methods for isolation and purification of polymeric GGMs, membrane filtration and precipitation in ethanol-water, were compared. Filtration through a series of membranes with different pore sizes separated GGMs of different molar masses, from polymers to oligomers. Polysaccharides with molar mass higher than 4 kDa were precipitated in ethanol-water. GGMs comprised about 80% of the precipitated polysaccharides; the others were mainly arabinoglucuronoxylans and pectins. The ethanol-precipitated GGMs were verified by 13C NMR spectroscopy to be very similar to GGMs extracted from spruce wood in low yield at a much lower temperature, 90°C.
The large body of experimental data obtained could be utilised for further kinetic and economic calculations to optimise the technical hot-water extraction of softwoods.
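The fitted kinetic model itself is not reproduced in this abstract. As a rough illustration of constant-pH degradation kinetics of the kind described, here is a sketch assuming simple first-order, acid-catalysed hydrolysis; the rate parameters are hypothetical placeholders, not the constants fitted in the thesis.

```python
import numpy as np

def ggm_remaining(t_min, pH, k_ref=0.010, pH_ref=4.0, order=1.0):
    """Fraction of polymeric GGM remaining after t_min minutes, assuming
    first-order degradation whose rate scales with hydronium concentration:
        k(pH) = k_ref * (10**(-pH) / 10**(-pH_ref))**order
    All parameter values are hypothetical, for illustration only.
    """
    k = k_ref * (10.0 ** (pH_ref - pH)) ** order   # rate constant, 1/min
    return np.exp(-k * t_min)

for pH in (3.8, 4.0, 4.2):
    print(f"pH {pH}: {ggm_remaining(60, pH):.1%} of polymeric GGM left after 60 min")
```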
Abstract:
The objective of the present study was to test three different procedures for DNA extraction from Melipona quadrifasciata based on existing methods for DNA extraction from Apis, plants and fungi. These methods differ in the concentrations of specific substances in the extraction buffer. The results demonstrate that the method used for Apis is not adequate for DNA extraction from M. quadrifasciata. On the other hand, with minor modifications this method, as well as the methods for plants and fungi, was adequate for DNA extraction from this stingless bee, for both adults and larvae.
Abstract:
The objective of the present study was to develop a simplified, low-cost method for the collection and fixation of pediatric autopsy cells and to determine the quantitative and qualitative adequacy of the extracted DNA. Touch and scrape preparations of pediatric liver cells were obtained from 15 cadavers at autopsy and fixed in 95% ethanol or 3:1 methanol:acetic acid. Material prepared by each fixation procedure was submitted to DNA extraction with the Wizard® genomic DNA purification kit for DNA quantification, and five of the preparations were amplified by multiplex PCR (azoospermia factor genes). The amount of DNA extracted varied from 20 to 8,640 µg, with significant differences between fixation methods. Scrape preparations fixed in 95% ethanol provided the largest amount of extracted DNA. However, the mean for all groups was higher than the quantity needed for PCR (50 ng) or Southern blot (500 ng). There were no qualitative differences among the different materials and fixatives. The same results were also obtained for glass slides stored at room temperature for 6, 12, 18 and 24 months. We conclude that touch and scrape preparations fixed in 95% ethanol are a good source of DNA and present fewer limitations than cell culture, paraffin embedding of tissue, or freezing, which require sterile material, culture medium, laboratory equipment and trained technicians. In addition, they are more practical and less labor-intensive, and can be obtained and stored for a long time at low cost.
Abstract:
Milk and egg matrixes were assayed for aflatoxin M1 (AFM1) and B1 (AFB1), respectively, by AOAC official and modified methods, with detection and quantification by thin-layer chromatography (TLC) and high-performance thin-layer chromatography (HPTLC). The modified methods, Blanc followed by Romer, proved the most appropriate for AFM1 analysis in milk. Both methods reduced emulsion formation, produced cleaner extracts with no streaking spots, and improved precision and accuracy, especially when quantification was performed by HPTLC. The use of a ternary mixture in the Blanc method was advantageous, as the solvent could extract AFM1 directly in the first stage (extraction), leaving other compounds in the binary-mixture layer, avoiding emulsion formation and thus reducing toxin loss. The relative standard deviation (RSD%) values were low, 16 and 7% when TLC and HPTLC were used, with mean recoveries of 94 and 97%, respectively. As far as the egg matrix and final extract are concerned, both methods evaluated for AFB1 need further study. Although that matrix leads to emulsions with consequent loss of toxin, the modified Romer method yielded a reasonably clean extract (mean recovery of 92 and 96% for TLC and HPTLC, respectively). Most of the methods studied did not perform as expected, mainly due to the high content of triglycerides (rich in saturated fatty acids), cholesterol, carotene and proteins in these matrixes. Although most current methodology for AFM1 is based on HPLC, TLC determination (modified Blanc and Romer) of AFM1 and AFB1 is particularly recommended for those inexperienced in food and feed mycotoxin analysis, especially those who cannot afford sophisticated (HPLC, HPTLC) instrumentation.
Abstract:
Solvent extraction of calcium and magnesium impurities from a lithium-rich brine (Ca ~ 2,000 ppm, Mg ~ 50 ppm, Li ~ 30,000 ppm) was investigated using a continuous counter-current solvent extraction mixer-settler set-up. The literature review covers the resources, demand and production methods of Li, followed by the basics of solvent extraction. The experimental section includes batch experiments to determine the pH isotherms of three extractants, D2EHPA, Versatic 10 and LIX 984, at concentrations of 0.52, 0.53 and 0.50 M in kerosene, respectively. Based on the pH isotherms, LIX 984 showed no affinity for extraction of Mg and Ca at pH ≤ 8, while D2EHPA and Versatic 10 were effective in extracting Ca and Mg. Based on the constructed pH isotherms, loading isotherms of D2EHPA (at pH 3.5 and 3.9) and Versatic 10 (at pH 7 and 8) were further investigated. Furthermore, based on the McCabe-Thiele method, two extraction stages and one stripping stage (using HCl at a concentration of 2 M for Versatic 10 and 3 M for D2EHPA) were applied in continuous runs. The merits of Versatic 10 in comparison to D2EHPA are higher selectivity for Ca and Mg, faster phase disengagement, no detrimental change in viscosity due to the sheer amount of metal extracted, and lower acidity in stripping. On the other hand, D2EHPA has lower aqueous solubility and is capable of removing Mg and Ca simultaneously even at higher Ca loading (A/O in continuous runs > 1). In general, a short residence time (~2 min), low temperature (~23 °C), low pH (6.5-7.0 for Versatic 10 and 3.5-3.7 for D2EHPA) and a moderately low A/O ratio (< 1:1) removed 100% of the Ca and nearly 100% of the Mg while keeping Li losses below 4%, much lower than in conventional precipitation, in which 20% of the Li is lost.
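To illustrate the McCabe-Thiele stage estimation mentioned above, the sketch below steps off counter-current extraction stages between an equilibrium isotherm and the operating line, starting from the raffinate end. The Langmuir-type isotherm and the feed, raffinate and A/O values are hypothetical stand-ins, not the measured D2EHPA or Versatic 10 data.

```python
def stages_needed(x_feed, x_raffinate, a_over_o, isotherm, max_stages=20):
    """Count counter-current extraction stages by McCabe-Thiele stepping.

    x: metal concentration in the aqueous phase; y = isotherm(x) is the
    equilibrium loading of the organic phase. With fresh organic entering
    the last stage, the operating line is y = a_over_o * (x - x_raffinate).
    """
    x, stages = x_raffinate, 0
    while x < x_feed and stages < max_stages:
        y_eq = isotherm(x)                  # step up to the equilibrium curve
        x = x_raffinate + y_eq / a_over_o   # step across on the operating line
        stages += 1
    return stages

# Hypothetical Langmuir-type isotherm: y = 10x / (1 + 2x)
langmuir = lambda x: 10.0 * x / (1.0 + 2.0 * x)
print(stages_needed(x_feed=2.0, x_raffinate=0.05, a_over_o=0.8, isotherm=langmuir))
```

With these made-up numbers the stepping yields two theoretical stages, consistent with the two extraction stages used in the continuous runs.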
Abstract:
The physicochemical and biological properties of honey are directly associated with its floral origin. Commonly used methods for identifying the botanical origin of honey involve palynological analysis, chromatographic methods, or direct observation of bee behavior. However, these methods can be insensitive and time-consuming. DNA-based methods have become popular due to their simplicity, speed, and reliability. The main objective of this research is to introduce a protocol for the extraction of DNA from honey and to demonstrate that molecular analysis of the extracted DNA can be used for botanical identification. The original CTAB-based protocol for the extraction of DNA from plants was modified and used for DNA extraction from honey. DNA extraction was carried out on different honey samples with similar results in each replication. The extracted DNA was amplified by PCR using plant-specific primers, confirming that the DNA extracted with the modified protocol is of plant origin, is of good quality for analysis of PCR products, and can be used for the botanical identification of honey.
Abstract:
Climatic impacts of energy-peat extraction are of increasing concern due to EU emissions trading requirements. A new excavation-drier peat extraction method has been developed to reduce the climatic impact and increase the efficiency of peat extraction. To quantify and compare the soil GHG fluxes of the excavation-drier and the traditional milling methods, as well as of the areas from which energy peat is planned to be extracted in the future (extraction reserve area types), soil CO2, CH4 and N2O fluxes were measured during 2006–2007 at three sites in Finland. Within each site, fluxes were measured from drained extraction reserve areas, extraction fields and stockpiles of both methods, and additionally from the biomass driers of the excavation-drier method. Life Cycle Assessment (LCA), described in general terms in ISO Standards 14040:2006 and 14044:2006, was used to assess the long-term (100-year) climatic impact of peatland utilisation with respect to land use and energy production chains in which coal was replaced with peat. Coal was used as the reference since in many cases peat and coal can replace each other in the same power plants. According to this study, the peat extraction method used was of lesser significance for the climatic impact than the extraction reserve area type. However, the excavation-drier method seems to cause a slightly smaller climatic impact than the prevailing milling method.
Abstract:
Several automated reversed-phase HPLC methods have been developed to determine trace concentrations of carbamate pesticides (which are of concern in Ontario environmental samples) in water, utilizing two solid-sorbent extraction techniques. One of the methods is known as 'on-line pre-concentration'. This technique involves passing 100 milliliters of sample water through a 3 cm pre-column, packed with 5 micron ODS sorbent, at flow rates of 5-10 mL/min. By use of a valve apparatus, the HPLC system is then switched to a gradient mobile-phase program consisting of acetonitrile and water. The analytes, Propoxur, Carbofuran, Carbaryl, Propham, Captan, Chlorpropham, Barban, and Butylate, which are pre-concentrated on the pre-column, are eluted and separated on a 25 cm C-8 analytical column and determined by UV absorption at 220 nm. The total analytical time is 60 minutes, and the pre-column can be reused for the analysis of as many as thirty samples. The method is highly sensitive, as 100 percent of the analytes present in the sample can be injected into the HPLC. No breakthrough of any of the analytes was observed, and the minimum detectable concentrations range from 10 to 480 ng/L. The developed method is fully automated for the analysis of one sample. When the above mobile phase is modified with a buffer solution, Aminocarb, Benomyl, and its degradation product, MBC, can also be detected along with the above pesticides, with baseline resolution for all of the analytes. The method can also be easily modified to determine Benomyl and MBC both as solute and as particulate matter. By using a commercially available solid-phase extraction cartridge, in lieu of a pre-column, for the extraction and concentration of analytes, a completely automated method has been developed with the aid of the Waters Millilab Workstation. Sample water is loaded at 10 mL/min through a cartridge, and the concentrated analytes are eluted from the sorbent with acetonitrile. The resulting eluate is blown down under nitrogen, made up to volume with water, and injected into the HPLC. The total analytical time is 90 minutes. Fifty percent of the analytes present in the sample can be injected into the HPLC, and recoveries for the above eight pesticides ranged from 84 to 93 percent. The minimum detectable concentrations range from 20 to 960 ng/L. The developed method is fully automated for the analysis of up to thirty consecutive samples. The method has proven applicable both to pure water samples and to untreated lake water samples.
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow a simultaneous determination of both the anisotropy and the orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method for obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility is retained in the inversion algorithm to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free
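As an illustration of the expansion just described, the sketch below fits a three-term, even-rank Legendre series, R(theta) = c0*P0 + c2*P2(cos theta) + c4*P4(cos theta), to orientation-dependent relaxation rates by linear least squares. The rates and coefficients are synthetic values invented for demonstration, not experimental data.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

theta = np.linspace(0.0, np.pi / 2, 20)      # orientations (rad)
x = np.cos(theta)
P = lambda n: Legendre.basis(n)(x)           # P_n(cos theta), sampled

# Synthetic decay rates from hypothetical coefficients, plus noise.
rng = np.random.default_rng(1)
rates = 10.0 * P(0) + 4.0 * P(2) - 1.5 * P(4) + 0.05 * rng.normal(size=x.size)

# Design matrix with only the even-rank terms appropriate for a
# second-rank tensor interaction; solve by linear least squares.
A = np.column_stack([P(0), P(2), P(4)])
c0, c2, c4 = np.linalg.lstsq(A, rates, rcond=None)[0]
print(f"fitted coefficients: c0={c0:.2f}, c2={c2:.2f}, c4={c4:.2f}")
```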
Abstract:
This research concerns lexicology, lexicography and the teaching/learning of vocabulary. It is part of the project Modélisation ontologique des savoirs lexicographiques en vue de leur application en linguistique appliquée (ontological modelling of lexicographic knowledge for application in applied linguistics), nicknamed Lexitation, which is, to our knowledge, the first attempt to extract lexicographic knowledge (i.e., the declarative and procedural knowledge used by lexicographers) with an experimental method. The project rests on the observation that lexicographic knowledge has a crucial role to play in lexicology, but also in the teaching/learning of vocabulary. In this thesis, we describe the methods and results of our first experiments, carried out using the Think Aloud Protocol (Ericsson and Simon, 1993). We explain the general organisation of the experiments and how the extracted lexicographic knowledge is modelled to form an ontology. Finally, we discuss possible applications of our work to the teaching of vocabulary, particularly in teacher training.
Abstract:
Program documentation helps developers better understand source code during maintenance tasks. However, documentation is not always available, or it may be of poor quality, making redocumentation necessary. In this context, we propose to perform redocumentation by generating comments through extractive summarization techniques. To carry out this task, we began with an empirical study of the quantitative and qualitative aspects of comments. In particular, we studied the distribution of comments over the different types of statements and the frequency with which each type is documented. We also proposed a taxonomy of comments that classifies them by content and quality. Following the results of the empirical study, we decided to summarize Java classes by extracting the comments of their methods and constructors. We defined several heuristics to determine the most relevant comments to extract. We then applied these heuristics to the Java classes of three projects to generate their summaries. Finally, we compared the produced summaries (the generated comments) with reference summaries (the original comments) using the ROUGE metric.
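For reference, ROUGE scores a generated summary by its n-gram overlap with a reference summary. Below is a minimal sketch of ROUGE-1 recall with a hypothetical generated comment and original comment; the thesis presumably used a full ROUGE implementation rather than this toy version.

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: fraction of the reference's n-grams that also
    appear in the candidate, with clipped counts."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    if not ref:
        return 0.0
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

# Hypothetical example: generated class summary vs. original comment.
print(rouge_n_recall("parses the input file and builds a syntax tree",
                     "builds a syntax tree from the parsed input file"))
```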