147 results for Efficient dominating set
Abstract:
In order to study the various health-influencing parameters related to engineered nanoparticles as well as to soot emitted by diesel engines, there is an urgent need for appropriate sampling devices and methods for cell exposure studies that simulate the respiratory system and facilitate associated biological and toxicological tests. The objective of the present work was the further advancement of a Multiculture Exposure Chamber (MEC) into a dose-controlled system for efficient delivery of nanoparticles to cells. It was validated with various types of nanoparticles (diesel engine soot aggregates, engineered nanoparticles for various applications) and with state-of-the-art nanoparticle measurement instrumentation to assess the local deposition of nanoparticles on the cell cultures. The dose of nanoparticles to which the cell cultures are exposed during normal operation of the in vitro exposure chamber was evaluated from measurements of the size-specific nanoparticle collection efficiency of a cell-free device. The average efficiency in delivering nanoparticles in the MEC was approximately 82%. Nanoparticle deposition was demonstrated by Transmission Electron Microscopy (TEM). The analysis and design of the MEC employ Computational Fluid Dynamics (CFD) and true-to-geometry representations of nanoparticles with the aim of assessing the uniformity of nanoparticle deposition among the culture wells. Final testing of the dose-controlled cell exposure system was performed by exposing A549 lung cell cultures to fluorescently labeled nanoparticles. Delivery of aerosolized nanoparticles was demonstrated by visualization of the nanoparticle fluorescence in the cell cultures following exposure. The potential of the aerosolized nanoparticles to generate reactive oxygen species (ROS) (e.g. free radicals and peroxides), an expression of the oxidative stress that can cause extensive cellular damage or damage to DNA, was also monitored.
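The reported delivery figure follows directly from the size-specific collection-efficiency measurements described above. As a hedged illustration (not the authors' code), the short Python sketch below derives a size-resolved efficiency from paired upstream/downstream number concentrations of a cell-free device and a concentration-weighted average; all particle sizes and concentration values are invented placeholders.

```python
# Minimal sketch: size-resolved delivery efficiency of an exposure chamber from
# paired upstream/downstream concentration measurements (illustrative data only).
import numpy as np

diameters_nm = np.array([20, 50, 100, 200, 400])               # hypothetical size bins
c_upstream   = np.array([8.1e5, 6.4e5, 3.9e5, 1.2e5, 2.0e4])   # particles/cm^3 at chamber inlet
c_downstream = np.array([1.3e5, 1.1e5, 0.8e5, 0.2e5, 0.4e4])   # particles/cm^3 at chamber outlet

# Size-specific collection (delivery) efficiency: fraction of particles retained in the chamber.
efficiency = 1.0 - c_downstream / c_upstream

# Concentration-weighted average efficiency over all size bins.
avg_efficiency = np.average(efficiency, weights=c_upstream)

for d, e in zip(diameters_nm, efficiency):
    print(f"{d:>4} nm: {e:.1%}")
print(f"average delivery efficiency: {avg_efficiency:.1%}")
```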
Abstract:
The aim of this article is to present an overview of salient issues of exposure, characterisation and hazard assessment of nanomaterials as they emerged from the consensus-building of experts undertaken within the four-year European Commission coordination project NanoImpactNet. The approach adopted is to consolidate and condense the findings and problem identification so as to identify knowledge gaps and generate a set of interim recommendations of use to industry, regulators, research bodies and funders. The categories of recommendation arising from the consensual view address: significant gaps in vital factual knowledge of exposure, characterisation and hazards; the development, dissemination and standardisation of appropriate laboratory protocols; a wide range of technical issues in establishing an adequate risk assessment platform; the more efficient and coordinated gathering of basic data; greater inter-organisational cooperation; regulatory harmonisation; the wider use of life-cycle approaches; and the wider involvement of all stakeholders in the discussion and solution-finding efforts for nanosafety.
Abstract:
Allergen-specific immunotherapy is the only immunomodulatory and etiological therapy for allergy and asthma. Conventional specific immunotherapy (SIT) with whole-allergen extract is antigen-specific, effective in multiple organs, effective against asthma under defined conditions, provides long-lasting protection and is cost-effective. Moreover, SIT can prevent the progression of rhinitis to asthma. SIT has its drawbacks: the long duration of treatment, the unsatisfactory standardization of allergen extracts and a questionable safety level. Novel approaches aim at drastically reducing adverse anaphylactic events, shortening the duration of therapy and improving its efficacy. Promising novel approaches base their formulations on a limited set of recombinant allergens or chimeric molecules as well as on hypoallergenic allergen fragments or peptides. The simultaneous use of adjuvants with immunomodulatory properties may contribute to improving both the safety and the efficacy of allergen-SIT for allergy and asthma.
Abstract:
The motivation for this research originated from the abrupt rise and fall of minicomputers, which were initially used both for industrial automation and for business applications because of their significantly lower cost than their predecessors, the mainframes. Industrial automation later developed its own vertically integrated hardware and software to address the application needs of uninterrupted operation, real-time control and resilience to harsh environmental conditions. This led to the creation of an independent industry, namely industrial automation as used in PLC, DCS, SCADA and robot control systems. This industry today employs over 200,000 people in a profitable, slow-clockspeed context, in contrast to the two mainstream computing industries of information technology (IT), focused on business applications, and telecommunications, focused on communication networks and hand-held devices. Already in the 1990s it was foreseen that IT and communications would merge into a single information and communication technology (ICT) industry. The fundamental question of the thesis is: could industrial automation leverage a common technology platform with the newly formed ICT industry? Computer systems dominated by complex instruction set computers (CISC) were challenged during the 1990s by higher-performance reduced instruction set computers (RISC). RISC evolved in parallel with the constant advancement of Moore's law. These developments created the high-performance, low-energy-consumption system-on-chip (SoC) architecture. Unlike the CISC world, RISC processor architecture design is an industry separate from RISC chip manufacturing. It also has several hardware-independent software platforms, each consisting of an integrated operating system, development environment, user interface and application market, which give customers more choice through hardware-independent, real-time-capable software applications. An architecture disruption emerged, and the smartphone and tablet markets were formed with new rules and new key players in the ICT industry. Today there are more RISC computer systems running Linux (or other Unix variants) than any other computer system. The astonishing rise of SoC-based technologies and related software platforms in smartphones created, in unit terms, the largest installed base ever seen in the history of computers and is now being further extended by tablets. An additional underlying element of this transition is the increasing role of open-source technologies in both software and hardware. This has driven the microprocessor-based personal computer industry, with its few dominant closed operating-system platforms, into a steep decline. A significant factor in this process has been the separation of processor architecture from processor chip production, and the merger of operating systems and application development platforms into integrated software platforms with proprietary application markets. Furthermore, pay-by-click marketing has changed the way application development is compensated: freeware, ad-based or licensed, all at a lower price and used by a wider customer base than ever before. Moreover, the concept of a software maintenance contract is very remote in the app world. However, as a slow-clockspeed industry, industrial automation has remained intact during the disruptions based on SoC and related software platforms in the ICT industries.
Industrial automation incumbents continue to supply vertically integrated systems consisting of proprietary software and proprietary, mainly microprocessor-based, hardware. They enjoy admirable profitability levels on a very narrow customer base thanks to strong technology-enabled customer lock-in and customers' high risk exposure, as their production depends on fault-free operation of the industrial automation systems. When will this balance of power be disrupted? The thesis suggests how industrial automation could join the mainstream ICT industry and create an information, communication and automation (ICAT) industry. Lately the Internet of Things (IoT) and Weightless networks, a new standard leveraging frequency channels previously occupied by TV broadcasting, have gradually started to change the rigid world of machine-to-machine (M2M) interaction. It is foreseeable that enough momentum will be created that the industrial automation market will in due course face an architecture disruption empowered by these new trends. This thesis examines the current state of industrial automation, subject to the competition between the incumbents, first through research on cost-competitiveness efforts in captive outsourcing of engineering, research and development, and second through research on process re-engineering in the case of complex-system global software support. Third, we investigate the views of the industry actors, namely customers, incumbents and newcomers, on the future direction of industrial automation, and we conclude with our assessment of the possible routes industrial automation could take, bearing in mind the looming rise of the Internet of Things (IoT) and Weightless networks. Industrial automation is an industry dominated by a handful of global players, each of them focused on maintaining its own proprietary solutions. The rise of de facto standards such as the IBM PC, Unix, Linux and SoC, leveraged by IBM, Compaq, Dell, HP, ARM, Apple, Google, Samsung and others, has created new markets for personal computers, smartphones and tablets and will eventually also affect industrial automation through game-changing commoditization and related control-point and business-model changes. This trend will inevitably continue, but the transition to a commoditized industrial automation will not happen in the near future.
Abstract:
We propose a compressive sensing algorithm that exploits geometric properties of images to recover high-quality images from few measurements. The image reconstruction is done by iterating the following two steps: 1) estimation of the normal vectors of the image level curves, and 2) reconstruction of an image fitting the normal vectors, the compressed sensing measurements, and the sparsity constraint. The proposed technique extends naturally to nonlocal operators and graphs, exploiting the repetitive nature of textured images to recover fine detail structures. In both cases, the problem is reduced to a series of convex minimization problems that can be efficiently solved with a combination of variable splitting and augmented Lagrangian methods, leading to fast and easy-to-code algorithms. Extensive experiments show a clear improvement over related state-of-the-art algorithms in the quality of the reconstructed images and in the robustness of the proposed method to noise, different kinds of images, and reduced measurements.
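The abstract notes that each outer iteration reduces to convex sub-problems solved with variable splitting and augmented Lagrangian methods. As a hedged, much-simplified illustration of that machinery only (not the paper's normal-vector reconstruction), the Python sketch below runs a generic ADMM solver for a sparsity-constrained least-squares problem of the kind such reconstructions reduce to; the measurement matrix, sizes and parameters are illustrative assumptions.

```python
# Generic ADMM (variable splitting + augmented Lagrangian) sketch for
# minimize 0.5*||A x - y||^2 + lam*||z||_1  subject to  x = z.
import numpy as np

rng = np.random.default_rng(1)
m, n = 64, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)        # hypothetical compressed-sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)  # 8-sparse toy signal
y = A @ x_true                                      # noiseless measurements

lam, rho = 0.01, 1.0
x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
M = A.T @ A + rho * np.eye(n)                       # system matrix of the quadratic x-update
Aty = A.T @ y

def soft(v, t):                                     # soft-thresholding: prox of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(300):
    x = np.linalg.solve(M, Aty + rho * (z - u))     # data-fidelity (quadratic) step
    z = soft(x + u, lam / rho)                      # sparsity step on the splitting variable z
    u = u + x - z                                   # scaled dual (augmented Lagrangian) update

print("recovered support:", np.nonzero(np.abs(z) > 1e-3)[0])
print("relative error:", np.linalg.norm(z - x_true) / np.linalg.norm(x_true))
```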
Abstract:
Despite major progress in T lymphocyte analysis in melanoma patients, TCR repertoire selection and kinetics in response to tumor Ags remain largely unexplored. In this study, using a novel ex vivo molecular-based approach at the single-cell level, we identified a single, naturally primed T cell clone that dominated the human CD8(+) T cell response to the Melan-A/MART-1 Ag. The dominant clone expressed a high-avidity TCR to cognate tumor Ag, efficiently killed tumor cells, and prevailed in the differentiated effector-memory T lymphocyte compartment. TCR sequencing also revealed that this particular clone arose at least 1 year before vaccination, displayed long-term persistence, and homed efficiently to metastases. Remarkably, during concomitant vaccination over 3.5 years, the frequency of the pre-existing clone progressively increased, reaching up to 2.5% of the circulating CD8 pool, while its effector functions were enhanced. In parallel, the disease stabilized, but it subsequently progressed with loss of Melan-A expression by the melanoma cells. Collectively, combined ex vivo analysis of T cell differentiation and clonality revealed for the first time a strong expansion of a tumor Ag-specific human T cell clone, comparable to protective virus-specific T cells. The observed successful boosting by peptide vaccination supports further development of immunotherapy, including strategies to overcome immune escape.
Abstract:
We investigated a new procedure for gene transfer into the stroma of the pig cornea for the delivery of therapeutic factors. A delimited space was created at a depth of 110 μm in pig corneas with an LDV femtosecond laser, and an HIV-1-derived lentiviral vector expressing green fluorescent protein (GFP) (LV-CMV-GFP) was injected into the pocket. Corneas were subsequently dissected and kept in culture as explants. After 5 days, histological analysis of the explants revealed that the corneal pockets had closed and that the gene transfer procedure was efficient over the whole pocket area. Almost all the keratocytes in this area were transduced. Vector diffusion at right angles to the pocket's plane encompassed four (endothelium side) to ten (epithelium side) layers of keratocytes. After 21 days, the level of transduction was similar to the results obtained after 5 days. The femtosecond laser technique allows reliable injection and diffusion of lentiviral vectors to efficiently transduce stromal cells in a delimited area. Demonstrating the efficacy of this procedure in vivo would represent an important step toward treatment or prevention of recurrent angiogenesis of the corneal stroma.
Abstract:
In order to induce a therapeutic T lymphocyte response, recombinant viral vaccines are designed to target professional antigen-presenting cells (APC) such as dendritic cells (DC). A key requirement for their use in humans is safe and efficient gene delivery. The present study assesses third-generation lentivectors with respect to their ability to transduce human and mouse DC and to induce antigen-specific CD8+ T-cell responses. We demonstrate that third-generation lentivectors transduce DC with superior efficiency compared to adenovectors. Transfer of DC transduced with a recombinant lentivector encoding an antigenic epitope resulted in a strong specific CD8+ T-cell response in mice. The lower proportion of nonspecifically activated CD8+ cells suggests lower antivector immunity for the lentivector than for the adenovector. Thus, lentivectors, in addition to their promise for gene therapy of brain disorders, might also be suitable for immunotherapy.
Abstract:
Context: Ovarian tumor (OT) typing is a competency expected of pathologists, with significant clinical implications. OT, however, come in numerous different types, some rather rare, so some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in the typing of less common OT. Method and Results: Representative slides of 20 less common OT were scanned (NanoZoomer Digital, Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235), including: recognition of morphological pattern(s); shortlisting of differential diagnoses; and proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on the evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as being of utmost importance for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
Abstract:
The present study evaluates the potential of third-generation lentivirus vectors for use as in vivo-administered T cell vaccines. We demonstrate that lentivector injection into the footpad of mice transduces DCs that appear in the draining lymph node and in the spleen. In addition, a lentivector vaccine bearing a T cell antigen induced very strong systemic antigen-specific cytotoxic T lymphocyte (CTL) responses in mice. Comparative vaccination performed in two different antigen models demonstrated that in vivo administration of lentivector was superior to transfer of transduced DCs or peptide/adjuvant vaccination in terms of both the amplitude and the longevity of the CTL response. Our data suggest that a decisive factor for efficient T cell priming by lentivector might be the targeting of DCs in situ and their subsequent migration to secondary lymphoid organs. The combination of performance, ease of application, and absence of pre-existing immunity in humans makes lentivector-based vaccines an attractive candidate for cancer immunotherapy.
Abstract:
Recombinant human TNF (rhTNF) has a selective effect on endothelial cells in tumour angiogenic vessels. Its clinical use has been limited by its propensity to induce vascular collapse. TNF administration through isolated limb perfusion (ILP) for regionally advanced melanomas and soft tissue sarcomas of the limbs was shown to be safe and effective. When combined with the alkylating agent melphalan, a single ILP produces a very high objective response rate. ILP with TNF and melphalan provided the proof of concept that a vasculotoxic strategy combined with chemotherapy may produce a strong anti-tumour effect. The registered indication of TNF-based ILP is a regional therapy for regionally spread tumours: in soft tissue sarcomas, a limb-sparing neoadjuvant treatment and, in melanoma in-transit metastases, a curative treatment. Despite its demonstrated regional efficacy, TNF-based ILP is unlikely to have any impact on survival. High TNF dosages induce endothelial cell apoptosis, leading to vascular destruction. Lower TNF dosages, however, produce a very strong effect, namely increased drug penetration into the tumour, presumably by decreasing intratumoural hypertension and thereby improving tumour uptake. TNF-ILP allowed the identification of alphaVbeta3 integrin deactivation as an important mechanism of antiangiogenesis. Several recent studies have shown that TNF targeting is possible, paving the way to a new opportunity to administer TNF systemically to improve cancer drug penetration. TNF was the first agent registered for the treatment of cancer that improves drug penetration in tumours and selectively destroys angiogenic vessels.
Abstract:
Within the ENCODE Consortium, GENCODE aimed to accurately annotate all protein-coding genes, pseudogenes, and noncoding transcribed loci in the human genome through manual curation and computational methods. Annotated transcript structures were assessed, and less well-supported loci were systematically and experimentally validated. Predicted exon-exon junctions were evaluated by RT-PCR amplification followed by a highly multiplexed sequencing readout, a method we call RT-PCR-seq. Seventy-nine percent of all assessed junctions were confirmed by this evaluation procedure, demonstrating the high quality of the GENCODE gene set. RT-PCR-seq was also efficient for screening gene models predicted using the Human Body Map (HBM) RNA-seq data. We validated 73% of these predictions, thus confirming 1168 novel genes, mostly noncoding, which will further complement the GENCODE annotation. Our novel experimental validation pipeline is extremely sensitive, far more so than unbiased transcriptome profiling through RNA sequencing, which is becoming the norm. For example, exon-exon junctions unique to GENCODE-annotated transcripts are five times more likely to be corroborated with our targeted approach than with extensive large-scale human transcriptome profiling. Data sets such as the HBM and ENCODE RNA-seq data fail to sample low-expressed transcripts. Our targeted RT-PCR-seq approach also has the advantage of identifying novel exons of known genes, as we discovered unannotated exons in ~11% of the assessed introns. We thus estimate that at least 18% of known loci have yet-unannotated exons. Our work demonstrates that cataloging all of the genic elements encoded in the human genome will require a coordinated effort between unbiased and targeted approaches, such as RNA-seq and RT-PCR-seq.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling, prediction and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest to the geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach of experimental variography and with machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary, user-friendly tools for exploratory data analysis and visualisation were also developed.
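As a hedged illustration of the GRNN proposed here for automatic mapping: a GRNN is essentially a Nadaraya-Watson kernel regression over the training samples, with the kernel width as its single tuning parameter. The Python sketch below interpolates hypothetical point measurements onto a regular grid; the coordinates, values and kernel width are invented placeholders, not data from the thesis or the SIC 2004 exercise.

```python
# Minimal GRNN (Nadaraya-Watson kernel regression) sketch for spatial interpolation.
import numpy as np

rng = np.random.default_rng(2)
train_xy = rng.uniform(0, 100, size=(200, 2))                              # hypothetical monitoring locations
train_z = np.sin(train_xy[:, 0] / 15) + 0.1 * rng.standard_normal(200)    # hypothetical measured values

def grnn_predict(query_xy, train_xy, train_z, sigma):
    """Predict at query points as a Gaussian-kernel weighted average of training values."""
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)  # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                                    # kernel weights
    return (w @ train_z) / w.sum(axis=1)

# Interpolate onto a 50 x 50 grid; sigma is the only tuning parameter and is
# typically chosen by cross-validation.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_map = grnn_predict(grid, train_xy, train_z, sigma=5.0).reshape(gx.shape)
print(z_map.shape, float(z_map.min()), float(z_map.max()))
```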