967 results for Extraction methods
Abstract:
Background Transformed cells of Escherichia coli DH5-α carrying pGFPuv, induced by IPTG (isopropyl-β-D-thiogalactopyranoside), express the green fluorescent protein (GFPuv) during the growth phases. Selective permeation by freezing/thawing/sonication cycles followed by three-phase partitioning (TPP) extraction was compared with the direct application of TPP to the same E. coli culture for releasing GFPuv from the over-expressing cells. Material and Methods Cultures (37°C/100 rpm/24 h; μ = 0.99–1.10 h⁻¹) of transformed (pGFPuv) Escherichia coli DH5-α expressing the green fluorescent protein (GFPuv; absorbance at 394 nm, emission at 509 nm) were sonicated in successive intervals (25 vibrations/pulse) to determine the maximum amount of GFPuv released from the cells. For selective permeation, cells previously frozen at -75°C were subjected to three freeze/thaw cycles (-20°C; 0.83°C/min) interlaid with sonication (3 pulses/6 seconds/25 vibrations). The intracellular permeate containing GFPuv in extraction buffer (TE: 25 mM Tris-HCl, pH 8.0, 1 mM β-mercaptoethanol (β-ME), 0.1 mM PMSF) was subjected to three-phase partitioning with t-butanol and 1.6 M ammonium sulfate. Sonication efficiency was also verified by applying it to cells previously treated by the TPP method. The intracellular releases were pooled and eluted through a methyl HIC column with buffer solution (10 mM Tris-HCl, 10 mM EDTA, pH 8.0). Results The maximum amount released from the cells by sonication was 327.67 μg GFPuv/mL (20.73 μg GFPuv/mg total proteins – BSA) after 9 min of treatment. Selective permeation by three repeated freezing/thawing/sonication cycles yielded a comparable content of 241.19 μg GFPuv/mL (29.74 μg GFPuv/mg BSA). The specific mass of GFPuv released from the same cultures by the three-phase partitioning (TPP) method, relative to total proteins, was higher, ranging between 107.28 μg/mg and 135.10 μg/mg. Conclusions Selective permeation by freezing/thawing/sonication followed by TPP released an amount of GFPuv equivalent to that extracted from the cells directly by TPP, although the selective permeation extracts eluted better through the HIC column.
Abstract:
The objective of this study was to evaluate the accuracy, precision and robustness of two methods for obtaining silage samples, in comparison with extraction of liquor by a manual screw-press. Wet brewery residue, alone or combined with soybean hulls and citrus pulp, was ensiled in laboratory silos. Liquor was extracted by a manual screw-press and a 2-mL aliquot was fixed with 0.4 mL formic acid. Two 10-g silage samples from each silo were diluted in 20 mL of deionized water or of 17% formic acid solution (the alternative methods). Aliquots obtained by the three methods were used to determine the silage contents of fermentation end-products. The accuracy of the alternative methods was evaluated by the mean bias of their estimates relative to those obtained by the manual screw-press, whereas precision was assessed by the root mean square prediction error and the residual error. Robustness was determined by studying the interaction between bias and chemical components, pH, in vitro dry matter digestibility (IVDMD) and buffer capacity. The 17% formic acid method was more accurate for estimating acetic, butyric and lactic acids, although it slightly overestimated propionic acid and underestimated ethanol. The deionized water method overestimated acetic and propionic acids and slightly underestimated ethanol. The 17% formic acid method was more precise than deionized water for estimating all organic acids and ethanol. The robustness of each method with respect to variation in silage chemical composition, IVDMD and pH depends on the fermentation end-product being evaluated. The robustness of the alternative methods appears to be critical for the determination of lactic acid and ethanol contents.
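The accuracy and precision criteria used above reduce to simple statistics on paired measurements. The following is a minimal sketch (not code from the study; the data are hypothetical) of how mean bias, root mean square prediction error and the bias-corrected residual error can be computed for one fermentation end-product:

```python
import numpy as np

def evaluate_method(reference, candidate):
    """Compare a candidate sampling method against a reference (e.g., screw-press liquor).

    Mean bias gauges accuracy; RMSPE and the residual (bias-corrected) error gauge precision.
    """
    reference = np.asarray(reference, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    errors = candidate - reference
    mean_bias = errors.mean()                                      # accuracy
    rmspe = np.sqrt((errors ** 2).mean())                          # overall prediction error
    residual_error = np.sqrt(((errors - mean_bias) ** 2).mean())   # error left after removing bias
    return mean_bias, rmspe, residual_error

# Hypothetical lactic acid contents (% DM) from paired silos
ref = [4.1, 5.3, 3.8, 6.0]
alt = [4.4, 5.1, 4.0, 6.5]
print(evaluate_method(ref, alt))
```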
Abstract:
This study aimed to test different protocols for the extraction of microbial DNA from the coral Mussismilia harttii. Four commercial kits were tested, three of them based on methods for DNA extraction from soil (FastDNA SPIN Kit for Soil, MP Bio; PowerSoil DNA Isolation Kit, MoBio; and ZR Soil Microbe DNA Kit, Zymo Research) and one kit for DNA extraction from plants (UltraClean Plant DNA Isolation Kit, MoBio). Five polyps of the same colony of M. harttii were macerated and aliquots were submitted to DNA extraction with each of the kits. After extraction, the DNA was quantified and PCR-DGGE was used to study the molecular fingerprints of Bacteria and Eukarya. Among the four kits tested, the ZR Soil Microbe DNA Kit was the most efficient with respect to the amount of DNA extracted, yielding about three times more DNA than the other kits. We also observed a higher number and greater intensity of DGGE bands for both Bacteria and Eukarya with the same kit. Considering these results, we suggest that the ZR Soil Microbe DNA Kit is the best suited to the study of the microbial communities of corals.
Abstract:
Introduction Toxoplasmosis may be life-threatening in fetuses and in immune-deficient patients. Conventional laboratory diagnosis of toxoplasmosis is based on the presence of IgM and IgG anti-Toxoplasma gondii antibodies; however, molecular techniques have emerged as alternative tools due to their increased sensitivity. The aim of this study was to compare the performance of 4 PCR-based methods for the laboratory diagnosis of toxoplasmosis. One hundred pregnant women who seroconverted during pregnancy were included in the study. The definition of cases was based on a 12-month follow-up of the infants. Methods Amniotic fluid samples were submitted to DNA extraction and amplification by the following 4 Toxoplasma techniques, all performed with parasite B1 gene primers: conventional PCR, nested-PCR, multiplex-nested-PCR, and real-time PCR. Seven parameters were analyzed: sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), negative likelihood ratio (NLR) and efficiency (Ef). Results Fifty-nine of the 100 infants had toxoplasmosis; 42 (71.2%) had IgM antibodies at birth but were asymptomatic, and the remaining 17 cases had non-detectable IgM antibodies but high IgG antibody titers that were associated with retinochoroiditis in 8 (13.5%) cases, abnormal cranial ultrasound in 5 (8.5%) cases, and signs/symptoms suggestive of infection in 4 (6.8%) cases. The conventional PCR assay detected 50 cases (9 false-negatives), nested-PCR detected 58 cases (1 false-negative and 4 false-positives), multiplex-nested-PCR detected 57 cases (2 false-negatives), and real-time PCR detected 58 cases (1 false-negative). Conclusions The real-time PCR assay was the best-performing technique based on the parameters of Se (98.3%), Sp (100%), PPV (100%), NPV (97.6%), PLR (∞), NLR (0.017), and Ef (99%).
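The seven parameters all derive from a 2x2 contingency table of test results against the case definition. The sketch below shows the standard formulas; the counts used in the example correspond to the real-time PCR figures reported in the abstract (58 detected of 59 cases, no false positives among 41 non-cases), but the code itself is a generic illustration rather than the study's analysis script.

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test parameters from a 2x2 table."""
    se = tp / (tp + fn)                            # sensitivity
    sp = tn / (tn + fp)                            # specificity
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    plr = se / (1 - sp) if sp < 1 else math.inf    # infinite when Sp = 100%
    nlr = (1 - se) / sp
    ef = (tp + tn) / (tp + fp + fn + tn)           # efficiency (overall accuracy)
    return dict(Se=se, Sp=sp, PPV=ppv, NPV=npv, PLR=plr, NLR=nlr, Ef=ef)

# Real-time PCR in this cohort: 58 true positives, 1 false negative,
# 0 false positives, 41 true negatives
print(diagnostic_metrics(tp=58, fp=0, fn=1, tn=41))
```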
Abstract:
Motivation An issue of great current interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new technologies for genome sequencing in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy of 95% in discriminating thermophilic coding sequences. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts for technological applications. A Support Vector Machine-based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
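As an illustration of the sequence-based classification step, the sketch below shows how a Support Vector Machine could be trained on amino acid composition features to separate thermophilic from mesophilic sequences. It is a generic sketch, not the predictor developed in the thesis; the sequences and labels are hypothetical placeholders, and scikit-learn is assumed to be available.

```python
from collections import Counter
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Relative frequency of the 20 standard amino acids in a protein sequence."""
    counts = Counter(seq.upper())
    total = sum(counts[a] for a in AMINO_ACIDS) or 1
    return np.array([counts[a] / total for a in AMINO_ACIDS])

# Hypothetical training set: sequences labeled 1 (thermophilic) or 0 (mesophilic)
sequences = ["MKIEEGKLVIWINGDKGYNGLAEVGKKF", "MSTNPKPQRKTKRNTNRRPQDVKFPGG"]
labels = [1, 0]

X = np.vstack([composition(s) for s in sequences])
y = np.array(labels)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict(composition("MKVLGIDLGTTNS").reshape(1, -1)))
```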
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing its sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. The second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. The third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
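For orientation, the sketch below illustrates the basic subtree (ST) kernel idea referred to above: counting pairs of identical rooted subtrees shared by two trees. It is a naive illustration, not the optimized DAG-based formulation proposed in the thesis; trees are encoded as nested tuples.

```python
from collections import Counter

def collect(tree, bag):
    """Return a canonical string for `tree` and record every rooted subtree in `bag`.

    A tree is a tuple (label, child1, child2, ...); a leaf is just (label,).
    """
    label, *children = tree
    child_reps = [collect(c, bag) for c in children]
    rep = "(" + label + ("" if not child_reps else " " + " ".join(child_reps)) + ")"
    bag.append(rep)
    return rep

def subtree_kernel(t1, t2):
    """Count pairs of identical rooted subtrees shared by t1 and t2 (the ST kernel idea)."""
    bag1, bag2 = [], []
    collect(t1, bag1)
    collect(t2, bag2)
    c1, c2 = Counter(bag1), Counter(bag2)
    return sum(c1[s] * c2[s] for s in c1.keys() & c2.keys())

# Two small parse-like trees sharing the subtrees D, E and (B D E)
t1 = ("A", ("B", ("D",), ("E",)), ("C",))
t2 = ("A", ("B", ("D",), ("E",)), ("F",))
print(subtree_kernel(t1, t2))   # -> 3
```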
Abstract:
Drug abuse is a major global problem with a strong impact not only on the individual but also on society as a whole. Among the different strategies that can be used to address this issue, an important role is played by the identification of abusers and by proper medical treatment. This kind of therapy should be carefully monitored in order to discourage improper use of the medication and to tailor the dose to the specific needs of the patient. Hence, reliable analytical methods are needed to reveal drug intake and to support physicians in the pharmacological management of drug dependence. In the present Ph.D. thesis, original analytical methods for the determination of drugs with a potential for abuse and of substances used in the pharmacological treatment of drug addiction are presented. In particular, the work focused on the analysis of ketamine, naloxone, long-acting opioids (buprenorphine and methadone), oxycodone, disulfiram and bupropion in human plasma and in dried blood spots. The developed methods are based on high-performance liquid chromatography (HPLC) coupled to various kinds of detectors (mass spectrometer, coulometric detector, diode array detector). For biological sample pre-treatment, different techniques were exploited, namely solid-phase extraction and microextraction by packed sorbent. All the presented methods have been validated according to official guidelines with good results, and some of them have been successfully applied to the therapeutic drug monitoring of patients under treatment for drug abuse.
Abstract:
Apart from the article forming the main content, most HTML documents on the WWW contain additional content such as navigation menus, design elements or commercial banners. In the context of several applications it is necessary to draw the distinction between main and additional content automatically. Content extraction and template detection are the two approaches to this task. This thesis gives an extensive overview of existing algorithms from both areas. It contributes an objective way to measure and evaluate the performance of content extraction algorithms under different aspects. These evaluation measures allow the first objective comparison of existing extraction solutions. The newly introduced content code blurring algorithm overcomes several drawbacks of previous approaches and proves to be the best content extraction algorithm at the moment. An analysis of methods to cluster web documents according to their underlying templates is the third major contribution of this thesis. In combination with a localised crawling process, this clustering analysis can be used to automatically create sets of training documents for template detection algorithms. As the whole process can be automated, it makes it possible to perform template detection on a single document, thereby combining the advantages of single- and multi-document algorithms.
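To make the distinction between main and additional content concrete, here is a minimal, generic heuristic: score candidate elements by their text-to-tag ratio and keep the densest one. This is an illustrative sketch only, not the content code blurring algorithm from the thesis; it assumes BeautifulSoup is installed, and the element names and example markup are hypothetical.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def densest_block(html, candidates=("div", "article", "section", "td")):
    """Return the candidate element whose subtree has the highest text-to-tag ratio.

    Boilerplate regions (menus, banners) tend to contain many tags and little text,
    so the main article usually wins under this crude score.
    """
    soup = BeautifulSoup(html, "html.parser")
    best, best_score = None, 0.0
    for el in soup.find_all(list(candidates)):
        text_len = len(el.get_text(" ", strip=True))
        tag_count = 1 + len(el.find_all(True))
        score = text_len / tag_count
        if score > best_score:
            best, best_score = el, score
    return best

html = ("<html><body><div id='nav'><a>Home</a><a>About</a><a>Contact</a></div>"
        "<div id='main'><p>The article text goes here, long enough to dominate the score."
        "</p></div></body></html>")
print(densest_block(html).get("id"))  # -> 'main'
```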
Abstract:
The thesis is concerned with local trigonometric regression methods. The aim was to develop a method for the extraction of cyclical components from time series. The main results of the thesis are the following. First, a generalization of the filter proposed by Christiano and Fitzgerald is furnished for the smoothing of ARIMA(p,d,q) processes. Second, a local trigonometric filter is built and its statistical properties are derived. Third, the convergence properties of trigonometric estimators and the problem of choosing the order of the model are discussed. A large-scale simulation experiment was designed in order to assess the performance of the proposed models and methods. The results show that local trigonometric regression may be a useful tool for periodic time series analysis.
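For intuition, a trigonometric regression extracts a cyclical component by regressing the series on sine and cosine terms at harmonics of a chosen period. The sketch below is a plain global least-squares version on hypothetical data, not the local filter developed in the thesis (a local variant would repeat the fit within a moving, kernel-weighted window).

```python
import numpy as np

def trig_cycle(y, period, order=2):
    """Extract a cyclical component by OLS on a trigonometric basis.

    Design matrix: constant, linear trend, and `order` pairs of cos/sin terms
    at harmonics of `period`. Returns the fitted harmonic part only.
    """
    t = np.arange(len(y), dtype=float)
    cols = [np.ones_like(t), t]
    for k in range(1, order + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X[:, 2:] @ beta[2:]          # harmonic (cyclical) part only

# Hypothetical monthly series: trend + annual cycle + noise
rng = np.random.default_rng(0)
t = np.arange(240)
y = 0.01 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)
cycle = trig_cycle(y, period=12)
```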
Abstract:
This thesis aims at investigating methods and software architectures for discovering the typical and frequently occurring structures used for organizing knowledge on the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics in the Web (i.e., the knowledge soup problem) and the difficulty of drawing relevant boundaries around data so as to capture the meaningful knowledge with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these two problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method provides a solution based on a purely syntactic transformation step of the original source to RDF, followed by a refactoring step whose aim is to add semantics to RDF by selecting meaningful RDF triples. The second method makes it possible to draw boundaries around RDF data in Linked Data by analyzing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes of the path. We then present K~ore, a software architecture conceived to be the basis for developing KP discovery systems and designed according to two software architectural styles, i.e., Component-based and REST. Finally we provide an example of KP reuse based on Aemoo, an exploratory search tool which exploits KPs for performing entity summarization.
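As a rough illustration of the type-path idea, the sketch below walks an RDF graph and tallies (subject type, property, object type) routes. This is a simplified reading for illustration only, not the thesis implementation; it assumes the rdflib library and a hypothetical Linked Data source URL.

```python
from collections import Counter
from rdflib import Graph, RDF

def type_paths(g: Graph) -> Counter:
    """Count one-step 'type paths': (type of subject, property, type of object)."""
    paths = Counter()
    for s, p, o in g:
        if p == RDF.type:
            continue
        for s_type in g.objects(s, RDF.type):
            for o_type in g.objects(o, RDF.type):
                paths[(s_type, p, o_type)] += 1
    return paths

g = Graph()
g.parse("http://example.org/some-linked-data-resource")  # hypothetical URL serving RDF
for (s_t, prop, o_t), n in type_paths(g).most_common(10):
    print(n, s_t, prop, o_t)
```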
Abstract:
Over the past ten years, the cross-correlation of long time series of ambient seismic noise (ASN) has been widely adopted to extract the surface-wave part of the Green's functions (GF). This stochastic procedure relies on the assumption that the ASN wavefield is diffuse and stationary. At frequencies < 1 Hz, the ASN is mainly composed of surface waves, whose origin is attributed to the sea-wave climate. Consequently, marked directional properties may be observed, which calls for accurate investigation of the location and temporal evolution of the ASN sources before attempting any GF retrieval. Within this general context, this thesis is aimed at a thorough investigation of the feasibility and robustness of noise-based methods for the imaging of complex geological structures at the local (∼10-50 km) scale. The study focused on the analysis of an extended (11 months) seismological data set collected at the Larderello-Travale geothermal field (Italy), an area for which the underground geological structures are well constrained thanks to decades of geothermal exploration. Focusing on the secondary microseism band (SM; f > 0.1 Hz), I first investigated the spectral features and the kinematic properties of the noise wavefield using beamforming analysis, highlighting a marked variability with time and frequency. For the 0.1-0.3 Hz frequency band and during spring-summer time, the SM waves propagate with high apparent velocities and from well-defined directions, likely associated with ocean storms in the southern hemisphere. Conversely, at frequencies > 0.3 Hz the distribution of back-azimuths is more scattered, thus indicating that this frequency band is the most appropriate for the application of stochastic techniques. For this latter frequency interval, I tested two correlation-based methods, acting in the time (NCF) and frequency (modified-SPAC) domains, respectively yielding estimates of the group- and phase-velocity dispersions. Velocity data provided by the two methods are markedly discordant; comparison with independent geological and geophysical constraints suggests that the NCF results are more robust and reliable.
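Schematically, the noise cross-correlation step amounts to correlating synchronized records at two stations and stacking over many windows; with sufficient averaging, the stack converges toward the surface-wave part of the inter-station Green's function. The sketch below is a bare-bones illustration on synthetic data, not the processing chain used in the thesis (which would also involve pre-filtering, spectral whitening and amplitude normalization).

```python
import numpy as np

def window_cross_correlation(trace_a, trace_b, max_lag):
    """Normalized cross-correlation of two noise windows, kept within +/- max_lag samples."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    full = np.correlate(a, b, mode="full") / len(a)
    mid = len(full) // 2
    return full[mid - max_lag: mid + max_lag + 1]

def stacked_ncf(window_pairs, max_lag):
    """Stack per-window cross-correlations into a noise correlation function (NCF)."""
    return np.mean([window_cross_correlation(a, b, max_lag) for a, b in window_pairs], axis=0)

# Hypothetical: 20 two-hour windows (1 Hz sampling) recorded at two stations
rng = np.random.default_rng(1)
windows = [(rng.normal(size=7200), rng.normal(size=7200)) for _ in range(20)]
ncf = stacked_ncf(windows, max_lag=600)   # lags of +/- 600 s
```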
Abstract:
This thesis work aims to develop original analytical methods for the determination of drugs with a potential for abuse, for the analysis of substances used in the pharmacological treatment of drug addiction in biological samples, and for the monitoring of potentially toxic compounds added to street drugs. Reliable analytical techniques can indeed play an important role in this setting. They can be employed to reveal drug intake, allowing the identification of drug users, and to assess drug blood levels, assisting physicians in the management of treatment. Pharmacological therapy needs to be carefully monitored in order to optimize the dose scheduling according to the specific needs of the patient and to discourage improper use of the medication. In particular, different methods have been developed for the detection of gamma-hydroxybutyric acid (GHB), prescribed for the treatment of alcohol addiction; of glucocorticoids, one of the pharmaceutical classes most abused to enhance sport performance; and of adulterants, pharmacologically active compounds added to illicit drugs for recreational purposes. All the presented methods are based on capillary electrophoresis (CE) and high-performance liquid chromatography (HPLC) coupled to various detectors (diode array detector, mass spectrometer). Biological sample pre-treatment was carried out using different extraction techniques: liquid-liquid extraction (LLE) and solid-phase extraction (SPE). Different matrices have been considered: human plasma, dried blood spots, human urine, and simulated street drugs. The developed analytical methods are individually described and discussed in this thesis.
Abstract:
Ice cores are valuable climate archives because they preserve atmospheric aerosol. The analysis of chemical compounds present as constituents of atmospheric aerosols in ice cores provides important information on past environmental conditions and climate. For the investigation of the α-dicarbonyls glyoxal and methylglyoxal in ice and snow samples, a new, sensitive method was developed that combines stir bar sorptive extraction (SBSE) with high-performance liquid chromatography-mass spectrometry (HPLC-MS). For the analysis of dicarboxylic acids in ice cores, a further method was developed employing solid-phase extraction with a strong anion exchanger. This method allows the quantification of aliphatic dicarboxylic acids (≥ C6), including pinic acid, as well as aromatic carboxylic acids (such as phthalic acid and vanillic acid), thereby enabling the determination of important marker compounds for biogenic and anthropogenic sources. Using the developed methods, an ice core from the Swiss Alps was analyzed. The resulting concentration profiles of the analytes cover the period from 1942 to 1993. Correlation and principal component analysis showed that the organic compounds in the ice are mainly influenced by forest fires and by anthropogenic pollutant emissions. In contrast, the concentration profiles of some analytes are attributable to mineral dust transport onto the glacier. In addition, a screening of the ice core samples was carried out using ultrahigh-resolution mass spectrometry. In this context, organosulfates and nitrooxy-organosulfates were identified in an ice core for the first time.
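To illustrate the final statistical step, the sketch below shows how correlation and principal component analysis can be applied to annual concentration profiles, so that analytes loading on the same component can be interpreted as sharing a source (e.g., biomass burning versus mineral dust). The data are hypothetical and the code is a generic sketch, not that used in the work; pandas and scikit-learn are assumed.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical annual mean concentrations (ng/g) of selected analytes, 1942-1993
years = np.arange(1942, 1994)
rng = np.random.default_rng(2)
profiles = pd.DataFrame(
    {
        "glyoxal": rng.lognormal(0.0, 0.3, years.size),
        "phthalic_acid": rng.lognormal(0.0, 0.3, years.size),
        "vanillic_acid": rng.lognormal(0.0, 0.3, years.size),
    },
    index=years,
)

print(profiles.corr())  # pairwise correlation of the concentration profiles

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(profiles))
loadings = pd.DataFrame(pca.components_.T, index=profiles.columns, columns=["PC1", "PC2"])
print(loadings)  # analytes loading on the same component suggest a common source
```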
Abstract:
In most pathology laboratories worldwide, formalin-fixed, paraffin-embedded (FFPE) samples are the only tissue specimens available for routine diagnostics. Although commercial kits for diagnostic molecular pathology testing are becoming available, most current diagnostic tests are laboratory-based assays. Thus, there is a need for standardized procedures in molecular pathology, starting from the extraction of nucleic acids. To evaluate the current methods for extracting nucleic acids from FFPE tissues, 13 European laboratories participating in the European FP6 program IMPACTS (www.impactsnetwork.eu) isolated nucleic acids from four diagnostic FFPE tissues using their routine methods, followed by quality assessment. The DNA extraction protocols ranged from homemade protocols to commercial kits. Except for one homemade protocol, the majority gave comparable results in terms of the quality of the extracted DNA, measured by the ability to amplify control gene fragments of different sizes by PCR. For array applications or tests that require an accurately determined DNA input, we recommend using silica-based adsorption columns for DNA recovery. For RNA extraction, the best results were obtained using chromatography-column-based commercial kits, which yielded the highest quantity of RNA and the most readily assayable RNA. Quality testing using RT-PCR gave successful amplification of 200-250 bp PCR products from most tested tissues. Modification of the proteinase K digestion time led to better results, even when commercial kits were applied. The results of the study emphasize the need for quality control of nucleic acid extracts with standardised methods to prevent false-negative results and to allow data comparison among different diagnostic laboratories.
Abstract:
Biliary cast syndrome (BCS) is the presence of casts within the intrahepatic or extrahepatic biliary system after orthotopic liver transplantation. Our work compares two percutaneous methods for BCS treatment: the mechanical cast-extraction (MCE) technique versus the hydraulic cast-extraction (HCE) technique, which uses a rheolytic system.