959 results for CMF, molecular cloud, extraction algorithm
Abstract:
Determination of chlorine using the molecular absorption of aluminum monochloride (AlCl) at the 261.418 nm wavelength was accomplished by high-resolution continuum source molecular absorption spectrometry using a transversely heated graphite tube furnace with an integrated platform. For the analysis, 10 µL of the sample followed by 10 µL of a solution containing the Al-Ag-Sr modifier (1 g L⁻¹ each) were directly injected onto the platform. A spectral interference due to the use of Al-Ag-Sr as a mixed modifier was easily corrected by the least-squares algorithm present in the spectrometer software. The pyrolysis and vaporization temperatures were 500 °C and 2200 °C, respectively. To evaluate the feasibility of a simple procedure for the determination of chlorine in everyday food samples, two different digestion methods were applied: (A) an acid digestion method using HNO3 only at room temperature, and (B) a digestion method with Ag, HNO3 and H2O2, in which chlorine is precipitated as a low-solubility salt (AgCl) that is then dissolved with ammonia solution. The experimental results obtained with method B were in good agreement with the certified values and demonstrated that the proposed method is more accurate than method A, because the formation of silver chloride prevented analyte losses by volatilization. The limit of detection (LOD, 3σ/s) for Cl was 18 µg g⁻¹ for method A and 9 µg g⁻¹ for method B, 1.7 and 3.3 times lower than in published work using inductively coupled plasma optical emission spectrometry; the absolute LODs were 2.4 and 1.2 ng, respectively. (C) 2012 Elsevier B.V. All rights reserved.
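The 3σ/s LOD quoted above follows the standard definition: three times the standard deviation of replicate blank measurements divided by the calibration-curve slope. A minimal sketch (the blank readings and slope below are made-up illustrative numbers, not data from the study):

```python
from statistics import stdev

def limit_of_detection(blank_signals, calibration_slope):
    """LOD = 3 * sigma(blank) / s, with s the calibration-curve slope."""
    return 3 * stdev(blank_signals) / calibration_slope

# Hypothetical blank absorbances and slope (absorbance per µg g⁻¹ of Cl):
blanks = [0.0012, 0.0015, 0.0011, 0.0014, 0.0013,
          0.0016, 0.0012, 0.0013, 0.0015, 0.0014]
slope = 0.00005
print(round(limit_of_detection(blanks, slope), 1))  # LOD in µg g⁻¹
```

With real blank replicates and the measured AlCl calibration slope, the same two lines yield the method's LOD directly.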
Abstract:
Insect cuticular hydrocarbons, including relatively non-volatile chemicals, play important roles in cuticle protection and chemical communication. The conventional procedures for extracting cuticular compounds from insects either require toxic solvents or rely on non-destructive techniques, such as SPME fibers, that do not allow the storage of samples for later analysis. In this study, we describe and test a non-lethal process for extracting cuticular hydrocarbons with styrene-divinylbenzene copolymers, and illustrate the method with two species of bees and one species of beetle. The results demonstrate that these compounds can be efficiently trapped by Chromosorb® (SUPELCO) and that this method can be used as an alternative to existing methods.
Abstract:
Background Identification of nontuberculous mycobacteria (NTM) based on phenotypic tests is time-consuming, labor-intensive, expensive and often provides erroneous or inconclusive results. In the molecular method referred to as PRA-hsp65, a fragment of the hsp65 gene is amplified by PCR and then analyzed by restriction digest; this rapid approach offers the promise of accurate, cost-effective species identification. The aim of this study was to determine whether species identification of NTM using PRA-hsp65 is sufficiently reliable to serve as the routine methodology in a reference laboratory. Results A total of 434 NTM isolates were obtained from 5019 cultures submitted to the Instituto Adolpho Lutz, São Paulo, Brazil, between January 2000 and January 2001. Species identification was performed for all isolates using conventional phenotypic methods and PRA-hsp65. Phenotypic evaluation and PRA-hsp65 were concordant for 321 (74%) isolates, and these assignments were presumed to be correct. For the remaining 113 discordant isolates, definitive identification was obtained by sequencing a 441 bp fragment of hsp65. PRA-hsp65 identified 30 isolates with hsp65 alleles representing 13 previously unreported PRA-hsp65 patterns. Overall, species identification by PRA-hsp65 was significantly more accurate than by phenotypic methods (392 (90.3%) vs. 338 (77.9%), respectively; p < .0001, Fisher's test). Among the 333 isolates representing the most common pathogenic species, PRA-hsp65 provided an incorrect result for only 1.2%. Conclusion PRA-hsp65 is a rapid and highly reliable method and deserves consideration by any clinical microbiology laboratory charged with performing species identification of NTM.
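The accuracy comparison above (392/434 vs. 338/434 correct identifications) can be checked with a two-sided Fisher's exact test on the 2×2 table. The abstract does not say which software was used; a self-contained sketch using only the standard library:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    r1, c1 = a + b, a + c                      # first row and first column totals
    def prob(x):                                # P(top-left cell = x) under H0
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))  # tolerance for float ties

# Rows: PRA-hsp65 vs. phenotypic methods; columns: correct vs. incorrect
p = fisher_exact_two_sided(392, 42, 338, 96)
print(p < 0.0001)
```

Note that the two methods were applied to the same 434 isolates, so a paired test (e.g. McNemar's) would also be defensible; the sketch simply reproduces the test the abstract reports.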
Abstract:
Background The development of protocols for RNA extraction from paraffin-embedded samples facilitates gene expression studies on archival samples with known clinical outcome. Older samples are particularly valuable because they are associated with longer clinical follow-up. RNA extracted from formalin-fixed paraffin-embedded (FFPE) tissue is problematic due to chemical modifications and continued degradation over time. We compared the quantity and quality of RNA extracted by four different protocols from 14 ten-year-old and 14 recently archived (three-to-ten-month-old) FFPE breast cancer tissues. Using three spin column purification-based protocols and one magnetic bead-based protocol, total RNA was extracted in triplicate, generating 336 RNA extraction experiments. RNA fragment size was assayed by reverse transcription-polymerase chain reaction (RT-PCR) for the housekeeping gene glucose-6-phosphate dehydrogenase (G6PD), testing primer sets designed to target RNA fragment sizes of 67 bp, 151 bp, and 242 bp. Results Biologically useful RNA (minimum RNA integrity number, RIN, of 1.4) was extracted in at least one of three attempts of each protocol in 86-100% of the older and 100% of the recently archived ("months-old") samples. Short RNA fragments up to 151 bp were assayable by RT-PCR for G6PD in all ten-year-old and months-old tissues tested, but none of the ten-year-old and only 43% of the months-old samples showed amplification when the targeted fragment was 242 bp. Conclusion All protocols extracted RNA from ten-year-old FFPE samples with a minimum RIN of 1.4. Gene expression of G6PD could be measured in all samples, old and recent, using RT-PCR primers designed for RNA fragments up to 151 bp. RNA quality from ten-year-old FFPE samples was similar to that extracted from months-old samples, but quantity and success rate were generally higher for the months-old group.
We preferred the magnetic bead-based protocol because of its speed and the higher quantity of RNA it extracted, although the quality of its RNA was similar to that of the other protocols. If a chosen protocol fails to extract biologically useful RNA from a given sample in a first attempt, another attempt, and then another protocol, should be tried before excluding the case from molecular analysis.
Abstract:
This study aimed to test different protocols for the extraction of microbial DNA from the coral Mussismilia harttii. Four commercial kits were tested, three of them based on methods for DNA extraction from soil (FastDNA SPIN Kit for Soil, MP Bio; PowerSoil DNA Isolation Kit, MoBio; and ZR Soil Microbe DNA Kit, Zymo Research) and one for DNA extraction from plants (UltraClean Plant DNA Isolation Kit, MoBio). Five polyps of the same colony of M. harttii were macerated, and aliquots were submitted to DNA extraction with each kit. After extraction, the DNA was quantified, and PCR-DGGE was used to study the molecular fingerprints of Bacteria and Eukarya. Among the four kits tested, the ZR Soil Microbe DNA Kit was the most efficient with respect to the amount of DNA extracted, yielding about three times more DNA than the other kits. We also observed a higher number and intensity of DGGE bands for both Bacteria and Eukarya with this kit. Considering these results, we suggest that the ZR Soil Microbe DNA Kit is the best suited to the study of the microbial communities of corals.
Abstract:
Introduction Toxoplasmosis may be life-threatening in fetuses and in immune-deficient patients. Conventional laboratory diagnosis of toxoplasmosis is based on the presence of IgM and IgG anti-Toxoplasma gondii antibodies; however, molecular techniques have emerged as alternative tools due to their increased sensitivity. The aim of this study was to compare the performance of 4 PCR-based methods for the laboratory diagnosis of toxoplasmosis. One hundred pregnant women who seroconverted during pregnancy were included in the study. The definition of cases was based on a 12-month follow-up of the infants. Methods Amniotic fluid samples were submitted to DNA extraction and amplification by the following 4 Toxoplasma techniques performed with parasite B1 gene primers: conventional PCR, nested-PCR, multiplex-nested-PCR, and real-time PCR. Seven parameters were analyzed: sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), negative likelihood ratio (NLR) and efficiency (Ef). Results Fifty-nine of the 100 infants had toxoplasmosis; 42 (71.2%) had IgM antibodies at birth but were asymptomatic, and the remaining 17 cases had non-detectable IgM antibodies but high IgG antibody titers that were associated with retinochoroiditis in 8 (13.5%) cases, abnormal cranial ultrasound in 5 (8.5%) cases, and signs/symptoms suggestive of infection in 4 (6.8%) cases. The conventional PCR assay detected 50 cases (9 false-negatives), nested-PCR detected 58 cases (1 false-negative and 4 false-positives), multiplex-nested-PCR detected 57 cases (2 false-negatives), and real-time PCR detected 58 cases (1 false-negative). Conclusions The real-time PCR assay was the best-performing technique based on the parameters of Se (98.3%), Sp (100%), PPV (100%), NPV (97.6%), PLR (∞), NLR (0.017), and Ef (99%).
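The figures reported for the real-time PCR assay follow from its confusion matrix (58 true positives, 1 false negative, 0 false positives, 41 true negatives among the 100 infants). A minimal sketch of the seven parameters:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test parameters from a 2x2 confusion matrix."""
    se = tp / (tp + fn)                                # sensitivity
    sp = tn / (tn + fp)                                # specificity
    ppv = tp / (tp + fp)                               # positive predictive value
    npv = tn / (tn + fn)                               # negative predictive value
    plr = se / (1 - sp) if sp < 1 else float("inf")    # positive likelihood ratio
    nlr = (1 - se) / sp                                # negative likelihood ratio
    ef = (tp + tn) / (tp + fp + fn + tn)               # efficiency (accuracy)
    return se, sp, ppv, npv, plr, nlr, ef

# Real-time PCR: 58 of 59 cases detected, no false positives among 41 non-cases
se, sp, ppv, npv, plr, nlr, ef = diagnostic_metrics(tp=58, fp=0, fn=1, tn=41)
print(f"Se={se:.1%} Sp={sp:.1%} NPV={npv:.1%} NLR={nlr:.3f} Ef={ef:.0%}")
# prints "Se=98.3% Sp=100.0% NPV=97.6% NLR=0.017 Ef=99%"
```

The infinite PLR arises because the assay produced zero false positives, so Se/(1−Sp) divides by zero, matching the abstract's PLR (∞).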
Abstract:
The automatic extraction of biometric descriptors of anonymous people is a challenging scenario in camera networks. This task is typically accomplished by making use of visual information. Calibrated RGBD sensors make the extraction of point cloud information possible. We present a novel approach to the semantic description and re-identification of people using individual point cloud information. The proposal combines simple geometric features with point cloud features based on surface normals.
Abstract:
Precipitation retrieval over high latitudes, and particularly snowfall retrieval over ice and snow, using satellite-based passive microwave spectrometers is currently an unsolved problem. The challenge results from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work investigates a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on Aqua) were co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products were used as truth to calibrate and validate all the proposed algorithms. The methodological approach can be summarised in two steps. In the first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce adequate results, a more statistically sound approach was attempted: two different techniques, which compute the probability of snowfall above and below a brightness temperature (BT) threshold, were applied to the available data. The first technique is based on a logistic distribution representing the probability of snow given the predictors. The second, termed Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that requires no hypothesis on the shape of the probabilistic model (such as, for instance, the logistic) and only requires the estimation of the BT thresholds.
The results show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
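The form of the first technique can be sketched as a logistic model mapping brightness-temperature predictors to a snowfall probability. The coefficients below are invented for illustration only; in the study they would be fitted to the CloudSat-collocated observations:

```python
from math import exp

def snow_probability(bts, coeffs, intercept):
    """Logistic model: P(snow | BTs) = 1 / (1 + exp(-(b0 + sum(bi * BTi))))."""
    z = intercept + sum(b * bt for b, bt in zip(coeffs, bts))
    return 1.0 / (1.0 + exp(-z))

# Hypothetical coefficients for two microwave channel brightness temperatures (K)
coeffs, intercept = [-0.08, 0.05], 10.0
p = snow_probability([245.0, 230.0], coeffs, intercept)
classified_snowing = p > 0.5   # threshold the probability to get a detection
```

The BMBP alternative replaces this parametric curve with probabilities estimated directly from binary (above/below threshold) channel indicators, at the cost of estimating the BT thresholds themselves.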
Abstract:
The study of protein expression profiles for biomarker discovery in serum and in mammalian cell populations requires the continuous improvement and combination of protein/peptide separation techniques, mass spectrometry, and statistical and bioinformatic approaches. In this thesis work, two different mass spectrometry-based protein profiling strategies were developed and applied to liver diseases and inflammatory bowel diseases (IBDs) for the discovery of new biomarkers. The first, based on bulk solid-phase extraction combined with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) and chemometric analysis of serum samples, was applied to the study of serum protein expression profiles both in IBDs (Crohn's disease and ulcerative colitis) and in liver diseases (cirrhosis, hepatocellular carcinoma, viral hepatitis). The approach allowed the enrichment of serum proteins/peptides, owing to the large interaction surface between analytes and solid phase, with high recovery, owing to the elution step performed directly on the MALDI target plate. Furthermore, the use of a chemometric algorithm for selecting the variables with the highest discriminant power made it possible to identify patterns of 20-30 proteins involved in the differentiation and classification of serum samples from healthy donors and diseased patients. These protein profiles discriminate among the pathologies with excellent classification and prediction abilities. In particular, in the study of inflammatory bowel diseases, after C18 analysis of 129 serum samples from healthy donors, Crohn's disease and ulcerative colitis patients, and inflammatory controls, 90.7% classification ability and 72.9% prediction ability were obtained. In the study of liver diseases (hepatocellular carcinoma, viral hepatitis and cirrhosis), 80.6% prediction ability was achieved using IDA-Cu(II) as the extraction procedure.
The identification of the selected proteins by MALDI-TOF/TOF MS analysis, or by their selective enrichment followed by enzymatic digestion and MS/MS analysis, may give useful information for identifying new biomarkers involved in the diseases. The second mass spectrometry-based protein profiling strategy was based on a label-free liquid chromatography electrospray ionization quadrupole time-of-flight differential analysis approach (LC ESI-QTOF MS), combined with targeted MS/MS analysis of only the identified differences. The strategy was used for biomarker discovery in IBDs, and in particular in Crohn's disease. The enriched serum peptidome and the subcellular fractions of intestinal epithelial cells (IECs) from healthy donors and Crohn's disease patients were analysed. Combining the enrichment step for low molecular weight serum proteins with the LC-MS approach made it possible to identify a pattern of peptides derived from specific exoprotease activity in the coagulation and complement activation pathways. Among these peptides, particularly interesting was the discovery of clusters of peptides from fibrinopeptide A, apolipoproteins E and A4, and complement C3 and C4. Further studies are needed to evaluate the specificity of these clusters and validate the results, in order to develop a rapid serum diagnostic test. The label-free LC ESI-QTOF MS differential analysis of the subcellular fractions of IECs from Crohn's disease patients and healthy donors revealed many proteins that could be involved in the inflammation process. Among them, heat shock protein 70, tryptase alpha-1 precursor and proteins whose upregulation can be explained by the increased activity of IECs in Crohn's disease were identified. Follow-up studies for the validation of the results and the in-depth investigation of the inflammation pathways involved in the disease will be performed.
Both of the developed mass spectrometry-based protein profiling strategies proved to be useful tools for the discovery of disease biomarkers, which need to be validated in further studies.
Abstract:
Phylogeography is a recent field of biological research that links phylogenetics to biogeography by deciphering the imprint that evolutionary history has left on the genetic structure of extant populations. During the cold phases of the successive ice ages, which have drastically shaped species' distributions since the Pliocene, populations of numerous species were isolated in refugia, where many of them evolved into distinct genetic lineages. My dissertation deals with the phylogeography of the Woodland Ringlet (Erebia medusa [Denis and Schiffermüller] 1775) in Central and Eastern Europe. This Palaearctic butterfly species is currently distributed from central France and south-eastern Belgium over large parts of Central Europe and southern Siberia to the Pacific. It is absent from those parts of Europe with Mediterranean, oceanic and boreal climates. It was supposed to be a Siberian faunal element with a rather homogeneous population structure in Central Europe, due to its postglacial expansion out of a single eastern refugium. An existing evolutionary scenario for the Woodland Ringlet in Central and Eastern Europe is based on nuclear data (allozymes). To test whether this is corroborated by organelle evolutionary history, I sequenced two mitochondrial markers (part of the cytochrome oxidase subunit one, and the control region) for populations sampled over the same area. Phylogeography relies largely on the construction of networks of uniparentally inherited haplotypes, which are compared to the geographic haplotype distribution using recently developed methods such as nested clade phylogeographic analysis (NCPA). Several ring-shaped ambiguities (loops) emerged from both haplotype networks in E. medusa; they can be attributed to recombination and homoplasy. Such loops usually prevent the straightforward extraction of the phylogeographic signal contained in a gene tree.
I developed several new approaches to extract phylogeographic information in the presence of loops, considering either homoplasy or recombination. This allowed me to deduce a consistent evolutionary history for the species from the mitochondrial data and also adds plausibility to the occurrence of recombination in E. medusa mitochondria. Although the control region is assumed to lack resolving power in other species, I found considerable genetic variation in this marker in E. medusa, which makes it a useful tool for phylogeographic studies. In combination with the allozyme data, the mitochondrial genome supports the following phylogeographic scenario for E. medusa in Europe: (i) a first vicariance, due to the onset of the Würm glaciation, led to the formation of several major lineages and is mirrored in the NCPA by restricted gene flow; (ii) later, further vicariances led to the formation of two sub-lineages each within the Western and the Eastern lineage during the Last Glacial Maximum or Older Dryas, and the NCPA additionally supports a restriction of gene flow with isolation by distance; (iii) finally, vicariance resulted in two secondary sub-lineages in the area of Germany and perhaps two further secondary sub-lineages in the Czech Republic. The last postglacial warming was accompanied by strong range expansions in most of the genetic lineages. The scenario expected for a presumably Siberian faunal element such as E. medusa is a continuous loss of genetic diversity during postglacial westward expansion. The pattern found in this thesis therefore contradicts a typical Siberian origin of E. medusa. Instead, it corroborates the importance of multiple extra-Mediterranean refugia for the European fauna, as has recently been assumed for other continental species.
Abstract:
Complex network analysis is a very popular topic in computer science. Unfortunately, these networks, extracted from different contexts, are usually very large, and their analysis may be very complicated: the computation of metrics on these structures can be very expensive. Among all metrics, we analyse the extraction of subnetworks called communities: groups of nodes that probably play the same role within the whole structure. Community extraction is an interesting operation in many different fields (biology, economics, ...). In this work we present a parallel community detection algorithm that can operate on networks with huge numbers of nodes and edges. After an introduction to graph theory and high performance computing, we explain our design strategies and our implementation. Then we show a performance evaluation carried out on a distributed-memory architecture, namely the IBM BlueGene/Q supercomputer "Fermi" at the CINECA supercomputing centre, Italy, and we comment on our results.
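The thesis's parallel algorithm is not reproduced here, but a minimal serial sketch of one classic community detection approach (label propagation, made deterministic by sorted iteration and smallest-label tie-breaking) illustrates what "groups of nodes playing the same role" means in practice:

```python
from collections import Counter

def label_propagation(adj, max_iters=100):
    """Each node repeatedly adopts the most common label among its
    neighbours (ties broken by the smallest label) until labels stabilise."""
    labels = {node: node for node in adj}
    for _ in range(max_iters):
        changed = False
        for node in sorted(adj):
            counts = Counter(labels[nb] for nb in adj[node])
            best = max(counts.values())
            new = min(lab for lab, c in counts.items() if c == best)
            if new != labels[node]:
                labels[node] = new
                changed = True
        if not changed:
            break
    return labels

# Toy graph: two disconnected triangles form two communities
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
communities = {}
for node, lab in labels.items():
    communities.setdefault(lab, set()).add(node)
print(sorted(map(sorted, communities.values())))  # prints [[0, 1, 2], [3, 4, 5]]
```

Parallel versions such as the one in the thesis partition the node set across processors and exchange boundary labels between iterations; the per-node update rule is the part that stays embarrassingly parallel.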
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in molecular dynamics simulations. WHAM works in post-processing, in cooperation with another algorithm called umbrella sampling. Umbrella sampling adds a bias to the potential energy of the system in order to force the system to sample a specific region of configurational space; N independent simulations are performed to sample the whole region of interest, and the WHAM algorithm is then used to estimate the original system energy from the N atomic trajectories. The parallelization of WHAM was performed with CUDA, a language for programming the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can significantly speed up WHAM execution compared to previous serial CPU implementations, whereas the CPU code exhibits runtime bottlenecks at very high numbers of iterations. The algorithm was written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfying, showing a performance increase when the model was executed on graphics cards with higher compute capability. Nonetheless, the GPUs used to test the algorithm were quite old and not designed for scientific computing; a further performance increase is likely if the algorithm were executed on GPU clusters with a high level of computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of umbrella sampling and the WHAM algorithm, with their applications to the study of ion channels and to molecular docking (Chapter 1); I then present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
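For reference, these are the standard WHAM self-consistent equations that such an implementation iterates, in the usual notation (not necessarily the thesis's own): n_i(ξ) is the biased histogram count of window i at reaction coordinate ξ, N_i its total sample count, U_i the biasing potential, F_i the window free energy, and β the inverse temperature:

```latex
P(\xi) = \frac{\sum_{i=1}^{N} n_i(\xi)}
              {\sum_{i=1}^{N} N_i \, e^{\beta \left[ F_i - U_i(\xi) \right]}},
\qquad
e^{-\beta F_i} = \sum_{\xi} P(\xi) \, e^{-\beta U_i(\xi)}
```

The two equations are iterated until the F_i converge; the sums over histogram bins and windows are the data-parallel workload that the CUDA implementation maps onto GPU threads.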
Abstract:
In distributed systems such as clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, ranging from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. Relying on a formal model of components, a technique is devised for computing the sequence of actions that deploys a desired configuration. An efficient algorithm, running in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real-life scenarios.
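The thesis's planner handles richer component life-cycles than this, but as a simpler illustration of computing a deployment sequence: when the only constraints are "component X must be deployed before Y", the action sequence reduces to a topological sort of the dependency graph. The component names below are invented for illustration:

```python
from graphlib import TopologicalSorter

# Hypothetical dependencies: each component maps to the components it requires
deps = {
    "webapp":     {"app-server", "database"},
    "app-server": {"runtime"},
    "database":   set(),
    "runtime":    set(),
}

order = list(TopologicalSorter(deps).static_order())

# Sanity check: every component appears after all of its dependencies
for comp, required in deps.items():
    assert all(order.index(r) < order.index(comp) for r in required)
print(order)
```

`TopologicalSorter` also raises `CycleError` on circular dependencies, which in a deployment setting corresponds to a configuration that no sequential plan can realise without additional actions.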
Abstract:
Background. Hereditary cystic kidney diseases are a heterogeneous spectrum of disorders leading to renal failure. Clinical features and family history can help to distinguish the recessive from the dominant diseases, but the differential diagnosis is difficult due to the phenotypic overlap, and molecular diagnosis is often the only way to characterize the different forms. Conventional molecular screening is suitable for small genes but is expensive and time-consuming for large ones. Next Generation Sequencing (NGS) technologies enable massively parallel sequencing of nucleic acid fragments. Purpose. The first purpose was to validate a diagnostic algorithm useful for driving the genetic screening. The second aim was to validate an NGS protocol for the PKHD1 gene. Methods. DNA from 50 patients was submitted to conventional screening of the NPHP1, NPHP5, UMOD, REN and HNF1B genes. Five patients with known mutations in PKHD1 were submitted to NGS to validate the new method, and an ungenotyped proband and his parents were analyzed as a diagnostic application. Results. The conventional molecular screening detected 8 mutations: 1) the novel p.E48K of REN in a patient with cystic nephropathy, hyperuricemia, hyperkalemia and anemia; 2) p.R489X of NPHP5 in a patient with Senior-Loken syndrome; 3) p.R295C of HNF1B in a patient with renal failure and diabetes; 4) the NPHP1 deletion in 3 patients with medullary cysts; 5) the HNF1B deletion in a patient with medullary cysts and renal hypoplasia and in a diabetic patient with liver disease. The NGS of PKHD1 detected all known mutations and two additional variants during the validation. The diagnostic NGS analysis identified the proband's compound heterozygosity, with a maternal frameshift mutation and a paternal missense mutation, in addition to a non-transmitted paternal missense mutation. Conclusions. The results confirm the validity of our diagnostic algorithm and suggest that this NGS protocol could be introduced into clinical practice.
Abstract:
In this work, simulations of liquids at the molecular level were carried out using different multiscale techniques. These allow an effective description of the liquid that requires less computing time and can therefore describe phenomena on longer time and length scales. A key ingredient is a simplified ("coarse-grained") model, obtained through a systematic procedure from simulations of the detailed model, in which selected properties of the detailed model (e.g. pair correlation function, pressure, etc.) are reproduced. Algorithms were investigated that allow the simultaneous coupling of the detailed and the simplified model ("Adaptive Resolution Scheme", AdResS). Here, the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described by the simplified model. To this end, a method ("thermodynamic force") was developed to enable the coupling even when the models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described, which formulates the coupling via a Hamiltonian; in this algorithm, a correction analogous to the thermodynamic force is possible with less computational effort. As an application of these fundamental techniques, path-integral molecular dynamics (MD) simulations of water were investigated. This method makes it possible to include quantum effects of the nuclei (delocalization, zero-point energy) in the simulation. First, a multiscale technique ("force matching") was used to extract an effective interaction from a detailed simulation based on density functional theory.
The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for simultaneous coupling within a single simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (a single point particle). In this way, a water-vacuum interface could be simulated in which only the surface is described by the path-integral model and the rest by the simplified model.