11 results for Collection development (Libraries)--Ireland
at Université de Lausanne, Switzerland
Abstract:
Hemorrhagic fevers caused by arenaviruses are among the most devastating emerging human diseases. Considering the number of individuals affected, the current lack of a licensed vaccine, and the limited therapeutic options, arenaviruses are arguably among the most neglected tropical pathogens, and the development of efficacious anti-arenaviral drugs is a high priority. Over the past years, significant efforts have been undertaken to identify novel potent inhibitors of arenavirus infection. High-throughput screening of small-molecule libraries employing pseudotype platforms led to the discovery of several potent and broadly active inhibitors of arenavirus cell entry that are effective against the major hemorrhagic arenaviruses. Mechanistic studies revealed that these novel entry inhibitors block arenavirus membrane fusion and provided novel insights into the unusual mechanism of this process. The success of these approaches highlights the power of small-molecule screens in antiviral drug discovery and establishes arenavirus membrane fusion as a robust drug target. These broad screens have been complemented by strategies targeting cellular factors involved in productive arenavirus infection. Approaches targeting the cellular protease implicated in maturation of the fusion-active viral envelope glycoprotein identified the proteolytic processing of the arenavirus glycoprotein precursor as a novel and promising target for anti-arenaviral strategies.
Abstract:
In order to study the various health-influencing parameters related to engineered nanoparticles, as well as to soot emitted by diesel engines, there is an urgent need for appropriate sampling devices and methods for cell exposure studies that simulate the respiratory system and facilitate associated biological and toxicological tests. The objective of the present work was the further advancement of a Multiculture Exposure Chamber (MEC) into a dose-controlled system for efficient delivery of nanoparticles to cells. It was validated with various types of nanoparticles (diesel engine soot aggregates, engineered nanoparticles for various applications) and with state-of-the-art nanoparticle measurement instrumentation to assess the local deposition of nanoparticles on the cell cultures. The dose of nanoparticles to which cell cultures are exposed was evaluated during normal operation of the in vitro cell culture exposure chamber, based on measurements of the size-specific nanoparticle collection efficiency of a cell-free device. The average efficiency of nanoparticle delivery in the MEC was approximately 82%. Nanoparticle deposition was demonstrated by Transmission Electron Microscopy (TEM). Analysis and design of the MEC employed Computational Fluid Dynamics (CFD) and true-to-geometry representations of nanoparticles, with the aim of assessing the uniformity of nanoparticle deposition among the culture wells. Final testing of the dose-controlled cell exposure system was performed by exposing A549 lung cell cultures to fluorescently labeled nanoparticles. Delivery of aerosolized nanoparticles was demonstrated by visualizing nanoparticle fluorescence in the cell cultures following exposure. The potential of the aerosolized nanoparticles to generate reactive oxygen species (ROS), e.g. free radicals and peroxides, was also monitored as a measure of oxidative stress, which can cause extensive cellular damage or DNA damage.
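To make the dose evaluation concrete, a collection-efficiency measurement of this kind reduces to comparing size-resolved particle concentrations upstream and downstream of the chamber. The following is a minimal illustrative sketch, assuming hypothetical paired SMPS-style measurements; all numbers and names are made up, not the study's data:

```python
# Illustrative sketch (not from the paper): estimating size-specific and
# average nanoparticle delivery efficiency of an exposure chamber from
# paired upstream/downstream size-distribution measurements.
import numpy as np

# Hypothetical number concentrations per size bin (particles/cm^3)
diameters_nm = np.array([20, 50, 100, 200, 400])
upstream = np.array([1.2e5, 2.5e5, 1.8e5, 6.0e4, 1.0e4])    # chamber inlet
downstream = np.array([2.0e4, 4.2e4, 3.5e4, 1.3e4, 2.5e3])  # chamber outlet

# Size-specific collection efficiency: fraction of particles of each size
# that never reach the outlet, i.e. are deposited inside the chamber.
efficiency = 1.0 - downstream / upstream

# Average efficiency weighted by the upstream size distribution,
# comparable in spirit to the ~82% average delivery reported for the MEC.
average = np.average(efficiency, weights=upstream)

for d, e in zip(diameters_nm, efficiency):
    print(f"{d:4d} nm: {e:.1%}")
print(f"average: {average:.1%}")
```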
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features ("geo-features"). They are well suited to be implemented as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling and prediction to automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, exploratory spatial data analysis (ESDA) is considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with the topical problem of automatic mapping of geospatial data. General regression neural networks are proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to make the software user-friendly and easy to use.
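Since the GRNN is central to the automatic mapping results, a sketch may help: a GRNN is essentially Nadaraya-Watson kernel regression, predicting at a new location the kernel-weighted average of the training values, with a single smoothing parameter tuned by cross-validation. Below is a minimal illustrative sketch for 2-D spatial interpolation, not the Machine Learning Office implementation; data and names are hypothetical:

```python
# Illustrative sketch: a general regression neural network (GRNN),
# i.e. Nadaraya-Watson kernel regression, for spatial interpolation.
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma):
    """Prediction = Gaussian-kernel-weighted average of training values.
    sigma is the single smoothing parameter, typically selected by
    cross-validation (the key step in automatic mapping)."""
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ train_z) / w.sum(axis=1)

# Hypothetical measurements: coordinates (km) and a measured value
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))
z = np.sin(xy[:, 0] / 15.0) + 0.1 * rng.standard_normal(200)

# Predict at a few unsampled locations
grid = np.array([[10.0, 20.0], [50.0, 50.0], [90.0, 80.0]])
print(grnn_predict(xy, z, grid, sigma=5.0))
```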
Abstract:
CD8+ cytolytic T lymphocytes (CTL) are the main effector cells of the adaptive immune system against infection and tumors. The recent identification of molecularly defined human tumor Ags recognized by autologous CTL has opened new opportunities for the development of Ag-specific cancer vaccines. Despite extensive work, however, the number of CTL-defined tumor Ags that are suitable targets for the vaccination of cancer patients is still limited, especially because of the laborious and time-consuming nature of the procedures currently used for their identification. The use of combinatorial peptide libraries in positional scanning format (Positional Scanning Synthetic Combinatorial Libraries, PS-SCL) has recently been proposed as an alternative approach for the identification of these epitopes. To validate this approach, we analyzed in detail the recognition of PS-SCL by tumor-reactive CTL clones specific for multiple well-defined tumor-associated Ags (TAA) as well as by tumor-reactive CTL clones of unknown specificity. The results of these analyses revealed that for all the TAA-specific clones studied, most of the amino acids composing the native antigenic peptide sequences could be identified through the use of PS-SCL. Based on the data obtained from the screening of PS-SCL, we could design peptide analogs of increased antigenicity as well as cross-reactive analog peptides containing multiple amino acid substitutions. In addition, the results of PS-SCL screening combined with a recently developed biometric data analysis (PS-SCL-based biometric database analysis) allowed the identification of the native peptides in public protein databases among the 30 most active sequences, and this was the case for all the TAA studied. More importantly, the screening of PS-SCL with a tumor-reactive CTL clone of unknown specificity resulted in the identification of the actual epitope. Overall, these data encourage the use of PS-SCL not only for the identification and optimization of tumor-associated CTL epitopes, but also for the analysis of degeneracy in T-cell receptor (TCR) recognition of tumor Ags.

CD8+ cytolytic T cells are white blood cells that are the main actors in the fight against infections and tumors. For years, immunologists have sought to identify molecules expressed and presented at the surface of tumors that can be recognized by CD8+ cytolytic T cells, which are then able to kill these tumors specifically. Such molecules are the basis for the development of cancer vaccines, since they could be injected into patients to induce an anti-tumor response. At present, very few molecules capable of stimulating the immune system against tumors are known, because the techniques developed so far for their identification are complex and time-consuming. A new technique based on peptide libraries has recently been proposed for identifying this type of molecule. These libraries represent all possible combinations of the basic building blocks of the molecules of interest. The first part of this study consisted of validating this new technique using CD8+ cytolytic T cells capable of killing tumor cells by recognizing a known molecule present at their surface. We showed that the use of the libraries identifies most of the building blocks of the molecule recognized by the CD8+ cytolytic T cells used. The second part of the study consisted of searching protein databases for potentially active molecules, using a computer program that ranks molecules according to their biological activity. Among the thousands of molecules in the databases, those recognized by our CD8+ cytolytic T cells were found among the most active. More interestingly still, combining these two techniques allowed us to identify the molecule recognized by a population of CD8+ cytolytic T cells with anti-tumor activity but unknown specificity. Our results encourage the use of such libraries to find and optimize molecules specifically recognized by tumor-killing CD8+ cytolytic T cells.
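The biometric database analysis described above amounts to scoring every peptide in a protein database by summing the per-position activities measured in the PS-SCL screen and ranking the results. A minimal sketch of such position-weight scoring follows, with made-up activities and a toy database; this is an illustration of the idea, not the published scoring tool:

```python
# Illustrative sketch (not the published PS-SCL analysis software): rank
# database peptides by summing, at each position, the activity measured
# for the corresponding amino acid in the positional scanning screen.

# Hypothetical per-position activity profile for a 4-mer library:
# activity[position][amino_acid] = stimulatory activity of the sub-library
# with that amino acid fixed at that position.
activity = [
    {"A": 0.1, "L": 0.9, "Y": 0.2},
    {"A": 0.7, "L": 0.1, "Y": 0.3},
    {"A": 0.2, "L": 0.3, "Y": 0.8},
    {"A": 0.6, "L": 0.2, "Y": 0.1},
]

def score(peptide: str) -> float:
    """Sum of positional activities; higher = predicted more stimulatory."""
    return sum(activity[i].get(aa, 0.0) for i, aa in enumerate(peptide))

# Toy "database" of candidate peptides
database = ["LAYA", "AAAA", "YLAL", "LALY"]
for pep in sorted(database, key=score, reverse=True):
    print(pep, round(score(pep), 2))
```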
Abstract:
The article discusses the development of WEBDATANET, established in 2011, which aims to create a multidisciplinary network of web-based data collection experts in Europe. Topics include the network's 190 experts in 30 European countries and abroad, the establishment of web-based teaching and discussion platforms, and its working groups and task forces. The scope of the research carried out by WEBDATANET is also discussed. In light of the growing importance of web-based data in the social and behavioral sciences, WEBDATANET was established in 2011 as a COST Action (IS 1004) to create a multidisciplinary network of web-based data collection experts: (web) survey methodologists, psychologists, sociologists, linguists, economists, Internet scientists, and media and public opinion researchers. The aim was to accumulate and synthesize knowledge regarding methodological issues of web-based data collection (surveys, experiments, tests, non-reactive data, and mobile Internet research) and to foster its scientific use in a broader community.
Abstract:
Despite the development of novel typing methods based on whole-genome sequencing, most laboratories still rely on classical molecular methods for outbreak investigation or surveillance. Reference methods for Clostridium difficile include ribotyping and pulsed-field gel electrophoresis, band-comparing methods that are often difficult to establish and require reference strain collections. Here, we present the double locus sequence typing (DLST) scheme as a tool to analyse C. difficile isolates. Using a collection of clinical C. difficile isolates recovered during a 1-year period, we evaluated the performance of DLST and compared the results to multilocus sequence typing (MLST), a sequence-based method that has been used to study the structure of bacterial populations and highlight major clones. DLST had a higher discriminatory power than MLST (Simpson's index of diversity of 0.979 versus 0.965) and successfully identified all isolates of the study (100% typeability). Previous studies showed that the discriminatory power of ribotyping was comparable to that of MLST; thus, DLST might be more discriminatory than ribotyping. DLST is easy to establish and provides several advantages, including no need for DNA extraction (the polymerase chain reaction (PCR) is performed directly on colonies), no specific instrumentation, low cost and an unambiguous definition of types. Moreover, implementing a DLST typing scheme on an Internet database, as previously done for Staphylococcus aureus and Pseudomonas aeruginosa ( http://www.dlst.org ), will allow users to obtain the DLST type simply by submitting sequencing files directly, and will avoid the problems associated with multiple databases.
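For reference, the Simpson's index of diversity quoted above is D = 1 - sum over types of n_j(n_j - 1) / (N(N - 1)), where n_j is the number of isolates of type j and N the total number of isolates. A minimal sketch of the calculation, with made-up type counts rather than the study's data:

```python
# Illustrative sketch: Simpson's index of diversity, the measure used to
# compare the discriminatory power of typing schemes (0.979 for DLST vs
# 0.965 for MLST in this study). The isolate types below are made up.
from collections import Counter

def simpson_diversity(type_assignments):
    """D = 1 - sum(n_j * (n_j - 1)) / (N * (N - 1)), where n_j is the
    number of isolates of type j and N the total number of isolates."""
    counts = Counter(type_assignments)
    n = sum(counts.values())
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Hypothetical DLST types for 10 isolates
types = ["1-1", "1-1", "2-3", "4-1", "4-1", "4-1", "5-2", "6-1", "7-7", "8-2"]
print(f"Simpson's index of diversity: {simpson_diversity(types):.3f}")
```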
Abstract:
In recent years, many protocols aimed at reproducibly sequencing reduced-genome subsets in non-model organisms have been published. Among them, RAD-sequencing is one of the most widely used. It relies on digesting DNA with specific restriction enzymes and performing size selection on the resulting fragments. Despite its acknowledged utility, this method is of limited use with degraded DNA samples, such as those isolated from museum specimens, as these samples are less likely to harbor fragments long enough to comprise two restriction sites, which is required for ligating the adapter sequences (in the case of double-digest RAD) or for size selection of the resulting fragments (in the case of single-digest RAD). Here, we address these limitations by presenting a novel method called hybridization RAD (hyRAD). In this approach, biotinylated RAD fragments covering a random fraction of the genome are used as baits for capturing homologous fragments from genomic shotgun sequencing libraries. This simple and cost-effective approach allows sequencing of orthologous loci even from highly degraded DNA samples, opening new avenues of research in the field of museum genomics. Because it does not rely on the presence of restriction sites, it also improves among-sample locus coverage. In a trial study, hyRAD allowed us to obtain a large set of orthologous loci from fresh and museum samples of a non-model butterfly species, with a high proportion of single nucleotide polymorphisms present in all eight analyzed specimens, including 58-year-old museum samples. The utility of the method was further validated using 49 museum and fresh samples of a Palearctic grasshopper species for which the spatial genetic structure had previously been assessed using mtDNA amplicons. Finally, the application of the method is discussed in a wider context. Since hyRAD is insensitive to among-sample polymorphisms in the restriction sites, which usually cause locus dropout, it should be applicable to analyses at broader evolutionary scales.
Abstract:
This study aimed at comparing the efficiency of various sampling materials for the collection and subsequent analysis of organic gunshot residues (OGSR). To the best of our knowledge, this is the first time that sampling devices have been investigated in detail for subsequent quantitation of OGSR by LC-MS. Seven sampling materials, namely two "swab"-type and five "stub"-type collection materials, were tested. The investigation started with the development of a simple and robust LC-MS method able to separate and quantify molecules typically found in gunpowders, such as diphenylamine or ethylcentralite. The sampling materials were then systematically evaluated by first analysing blank extracts of the materials to check for potential interferences and by determining matrix effects. Based on these results, the best four materials, namely cotton buds, polyester swabs, a tape from 3M and PTFE, were compared in terms of collection efficiency in shooting experiments using a set of 9 mm Luger ammunition. The tape was found to recover the highest amounts of OGSR. As tape-lifting is the technique currently used routinely for inorganic GSR (IGSR), OGSR analysis might be implemented without modifying the IGSR sampling and analysis procedure.
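Matrix effects of the kind determined here are commonly quantified by comparing the analyte response in a spiked blank extract of the sampling material with the response in pure solvent. A minimal sketch of that standard post-extraction spike calculation follows; the peak areas and material names are made up for illustration, not the study's data:

```python
# Illustrative sketch: the common post-extraction spike calculation of
# LC-MS matrix effects, as used when screening candidate GSR collection
# materials for interferences. Peak areas below are hypothetical.

def matrix_effect_percent(area_in_matrix: float, area_in_solvent: float) -> float:
    """ME% = (area in spiked blank extract / area in pure solvent - 1) * 100.
    Negative values indicate ion suppression, positive values enhancement."""
    return (area_in_matrix / area_in_solvent - 1.0) * 100.0

# Hypothetical peak areas for diphenylamine spiked at equal concentration
areas = {
    "cotton bud":     (8.1e5, 1.0e6),
    "polyester swab": (9.6e5, 1.0e6),
    "3M tape":        (9.9e5, 1.0e6),
    "PTFE":           (1.02e6, 1.0e6),
}
for material, (in_matrix, in_solvent) in areas.items():
    print(f"{material:15s} ME = {matrix_effect_percent(in_matrix, in_solvent):+.1f}%")
```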