186 results for EFFICIENT ESTIMATION
Abstract:
Despite major progress in T lymphocyte analysis in melanoma patients, TCR repertoire selection and kinetics in response to tumor Ags remain largely unexplored. In this study, using a novel ex vivo molecular-based approach at the single-cell level, we identified a single, naturally primed T cell clone that dominated the human CD8(+) T cell response to the Melan-A/MART-1 Ag. The dominant clone expressed a high-avidity TCR to cognate tumor Ag, efficiently killed tumor cells, and prevailed in the differentiated effector-memory T lymphocyte compartment. TCR sequencing also revealed that this particular clone arose at least 1 year before vaccination, displayed long-term persistence, and homed efficiently to metastases. Remarkably, during concomitant vaccination over 3.5 years, the frequency of the pre-existing clone progressively increased, reaching up to 2.5% of the circulating CD8 pool, while its effector functions were enhanced. In parallel, the disease stabilized but subsequently progressed with loss of Melan-A expression by melanoma cells. Collectively, combined ex vivo analysis of T cell differentiation and clonality revealed for the first time a strong expansion of a tumor Ag-specific human T cell clone, comparable to protective virus-specific T cells. The observed successful boosting by peptide vaccination supports further development of immunotherapy, including strategies to overcome immune escape.
Abstract:
PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map indoor radon concentrations (IRC) in Switzerland, taking into account architectural factors, spatial relationships between the measurements, and geological information. METHODS: We analyzed about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature, and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation yielded a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns already reported in earlier studies. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps: maps corresponding to detached houses with concrete foundations indicate systematically lower IRC than maps corresponding to farms with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC at large scales as well as at the local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relations between IRC measurements.
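As a purely illustrative sketch of the kind of model described above (not the study's implementation; the variable handling and the exceedance estimator are simplified assumptions), a Nadaraya-Watson kernel regression with one bandwidth per predictor and a kernel-weighted exceedance probability could look like this:

```python
import numpy as np

def kernel_weights(x_query, X, bandwidths):
    """Product of 1-D Gaussian kernels, one bandwidth per predictor column."""
    u = (X - x_query) / bandwidths
    return np.exp(-0.5 * np.sum(u ** 2, axis=1))

def kernel_regression(x_query, X, y, bandwidths):
    """Nadaraya-Watson estimate of the IRC at a query point.

    x_query    : (p,) query predictors (e.g. coordinates, altitude, temperature)
    X          : (n, p) predictors of the measured houses
    y          : (n,) measured IRC values in Bq/m^3
    bandwidths : (p,) one bandwidth per predictor; a large bandwidth means the
                 predictor has little influence on the estimate
    """
    w = kernel_weights(x_query, X, bandwidths)
    return np.sum(w * y) / np.sum(w)

def exceedance_probability(x_query, X, y, bandwidths, threshold=300.0):
    """Kernel-weighted local probability that the IRC exceeds `threshold` Bq/m^3."""
    w = kernel_weights(x_query, X, bandwidths)
    return np.sum(w * (y > threshold)) / np.sum(w)
```

Evaluating these estimators on a regular grid of query points yields a predictive map and a map of the probability of exceeding 300 Bq/m³; categorical predictors such as building or foundation type would in practice need a dedicated kernel rather than the Gaussian one used here.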
Abstract:
We investigated a new procedure for gene transfer into the stroma of pig cornea for the delivery of therapeutic factors. A delimited space was created at 110 µm depth with an LDV femtosecond laser in pig corneas, and an HIV-1-derived lentiviral vector expressing green fluorescent protein (GFP) (LV-CMV-GFP) was injected into the pocket. Corneas were subsequently dissected and kept in culture as explants. After 5 days, histological analysis of the explants revealed that the corneal pockets had closed and that the gene transfer procedure was efficient over the whole pocket area. Almost all the keratocytes were transduced in this area. Vector diffusion at right angles to the pocket's plane encompassed four (endothelium side) to ten (epithelium side) layers of keratocytes. After 21 days, the level of transduction was similar to the results obtained after 5 days. The femtosecond laser technique allows reliable injection and diffusion of lentiviral vectors to efficiently transduce stromal cells in a delimited area. Demonstrating the efficacy of this procedure in vivo could represent an important step toward treatment or prevention of recurrent angiogenesis of the corneal stroma.
Abstract:
In order to induce a therapeutic T lymphocyte response, recombinant viral vaccines are designed to target professional antigen-presenting cells (APC) such as dendritic cells (DC). A key requirement for their use in humans is safe and efficient gene delivery. The present study assesses third-generation lentivectors with respect to their ability to transduce human and mouse DC and to induce antigen-specific CD8+ T-cell responses. We demonstrate that third-generation lentivectors transduce DC with superior efficiency compared to adenovectors. The transfer of DC transduced with a recombinant lentivector encoding an antigenic epitope resulted in a strong specific CD8+ T-cell response in mice. The lower proportion of nonspecifically activated CD8+ cells suggests a lower antivector immunity for lentivectors compared to adenovectors. Thus, lentivectors, in addition to their promise for gene therapy of brain disorders, might also be suitable for immunotherapy.
Abstract:
This paper presents a new and original variational framework for atlas-based segmentation. The proposed framework integrates both the active contour framework and the dense deformation fields of the optical flow framework. It is quite general and encompasses many state-of-the-art atlas-based segmentation methods. It also allows the registration of atlas and target images to be driven by selected structures of interest only. The versatility and potential of the proposed framework are demonstrated through three diverse applications. In the first application, we show how the framework can be used to simulate the growth of inconsistent structures, such as a tumor, in an atlas. In the second application, we estimate the position of nonvisible brain structures based on the surrounding structures and validate the results by comparison with other methods. In the final application, we present the segmentation of lymph nodes in head and neck CT images and demonstrate how multiple registration forces can be used in this framework in a hierarchical manner.
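For orientation only (a generic sketch under assumed notation, not the authors' exact functional), a variational coupling of an active-contour term with a dense, optical-flow-like deformation field can be written as a joint minimization:

```latex
\min_{\Gamma,\, u}\; E(\Gamma, u) =
  E_{\mathrm{contour}}\!\left(\Gamma,\, I_{\mathrm{target}}\right)
  + \lambda_1 \int_{\Omega} \left( I_{\mathrm{atlas}}\!\left(x + u(x)\right) - I_{\mathrm{target}}(x) \right)^{2} dx
  + \lambda_2\, \mathcal{R}(u)
```

Here Γ denotes the contours propagated from the atlas labels, u the dense deformation field registering the atlas to the target, R(u) a smoothness regularizer, and λ1, λ2 weighting parameters; restricting the data terms to selected structures of interest yields a registration driven by those structures only, as mentioned above.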
Abstract:
The present study evaluates the potential of third-generation lentivirus vectors with respect to their use as in vivo-administered T cell vaccines. We demonstrate that lentivector injection into the footpad of mice transduces DCs that appear in the draining lymph node and in the spleen. In addition, a lentivector vaccine bearing a T cell antigen induced very strong systemic antigen-specific cytotoxic T lymphocyte (CTL) responses in mice. Comparative vaccination performed in two different antigen models demonstrated that in vivo administration of lentivector was superior to transfer of transduced DCs or to peptide/adjuvant vaccination in terms of both amplitude and longevity of the CTL response. Our data suggest that a decisive factor for efficient T cell priming by lentivector might be the targeting of DCs in situ and their subsequent migration to secondary lymphoid organs. The combination of performance, ease of application, and absence of pre-existing immunity in humans makes lentivector-based vaccines an attractive candidate for cancer immunotherapy.
Abstract:
Recombinant human TNF (rhTNF) has a selective effect on endothelial cells in tumour angiogenic vessels. Its clinical use has been limited by its propensity to induce vascular collapse. TNF administration through isolated limb perfusion (ILP) for regionally advanced melanomas and soft tissue sarcomas of the limbs was shown to be safe and effective. When combined with the alkylating agent melphalan, a single ILP produces a very high objective response rate. ILP with TNF and melphalan provided the proof of concept that a vasculotoxic strategy combined with chemotherapy can produce a strong anti-tumour effect. The registered indication of TNF-based ILP is regional therapy for regionally spread tumours: in soft tissue sarcomas, it is a limb-sparing neoadjuvant treatment and, in melanoma in-transit metastases, a curative treatment. Despite its demonstrated regional efficacy, TNF-based ILP is unlikely to have any impact on survival. High TNF dosages induce endothelial cell apoptosis, leading to vascular destruction. Lower TNF dosages, however, produce a different but very strong effect: they increase drug penetration into the tumour, presumably by decreasing intratumoural hypertension and thereby improving tumour uptake. TNF-ILP allowed the identification of alphaVbeta3 integrin deactivation as an important mechanism of antiangiogenesis. Several recent studies have shown that TNF targeting is possible, paving the way for systemic TNF administration to improve cancer drug penetration. TNF was the first agent registered for the treatment of cancer that improves drug penetration into tumours and selectively destroys angiogenic vessels.
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. In a broad sense, machine learning can be regarded as a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modeling, prediction and automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organizing maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both through the traditional geostatistical approach, using experimental variography, and according to machine learning principles. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest-neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the 2004 Spatial Interpolation Comparison (SIC) data, for which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This software collection has been developed over the last 15 years and has been used in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly and easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies
Abstract:
The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision-support systems for environmental data mining, including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to reveal the presence of spatial patterns described, at least, by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a currently hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural-hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
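To make the GRNN-based automatic mapping described above concrete, here is a minimal, illustrative sketch (assuming an isotropic Gaussian kernel and leave-one-out bandwidth tuning; not the thesis code) of GRNN prediction for 2-D spatial interpolation:

```python
import numpy as np

def grnn_predict(query_points, train_xy, train_values, sigma):
    """General Regression Neural Network prediction (Nadaraya-Watson form).

    query_points : (m, 2) coordinates where a prediction is wanted
    train_xy     : (n, 2) coordinates of the measurements
    train_values : (n,)   measured values (e.g. contamination levels)
    sigma        : isotropic kernel bandwidth (the only free parameter)
    """
    # Squared Euclidean distances between every query and training point
    d2 = ((query_points[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian kernel weights
    return (w @ train_values) / w.sum(axis=1)   # weighted average per query

def choose_sigma(train_xy, train_values, candidates):
    """Pick the bandwidth minimizing the leave-one-out cross-validation error."""
    best, best_err = None, np.inf
    d2 = ((train_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
    for s in candidates:
        w = np.exp(-d2 / (2.0 * s ** 2))
        np.fill_diagonal(w, 0.0)                # leave each point out of its own estimate
        pred = (w @ train_values) / w.sum(axis=1)
        err = np.mean((pred - train_values) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best
```

Evaluating grnn_predict on a regular grid of coordinates produces a map, and the leave-one-out tuning leaves no parameter to set by hand, which is what makes this type of model attractive for automatic mapping, including under emergency conditions.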
Abstract:
Rapid amplification of cDNA ends (RACE) is a widely used approach for transcript identification. Random clone selection from the RACE mixture, however, is an ineffective sampling strategy if the dynamic range of transcript abundances is large. To improve the sampling efficiency of human transcripts, we hybridized the products of the RACE reaction to tiling arrays and used the detected exons to delineate a series of reverse-transcriptase (RT)-PCRs, through which the original RACE transcript population was segregated into simpler transcript populations. We independently cloned the products and sequenced randomly selected clones. This approach, RACEarray, is superior to direct cloning and sequencing of RACE products because it specifically targets new transcripts and often results in overall normalization of transcript abundance. We show theoretically and experimentally that this strategy indeed leads to efficient sampling of new transcripts, and we investigated multiplexing the strategy by pooling RACE reactions from multiple interrogated loci before hybridization.
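As a purely illustrative aside (not an analysis from the paper; the abundance distribution and transcript count are invented for the example), a small simulation shows why random clone picking is inefficient when transcript abundances span several orders of magnitude, and how an abundance-normalized population improves sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def clones_needed_to_see_all(abundances, max_clones=200_000):
    """Count how many randomly picked clones are needed before every
    distinct transcript has been observed at least once."""
    probs = abundances / abundances.sum()
    n_transcripts = len(abundances)
    seen = set()
    for n in range(1, max_clones + 1):
        seen.add(int(rng.choice(n_transcripts, p=probs)))
        if len(seen) == n_transcripts:
            return n
    return max_clones  # not all transcripts seen within the budget

n_transcripts = 50
skewed = 10.0 ** rng.uniform(0, 4, size=n_transcripts)  # ~4 orders of magnitude dynamic range
uniform = np.ones(n_transcripts)                        # idealized normalized pool

print("clones needed (skewed abundances) :", clones_needed_to_see_all(skewed))
print("clones needed (uniform abundances):", clones_needed_to_see_all(uniform))
```

With a skewed distribution, most picked clones come from a few dominant transcripts, so rare transcripts require many more clones to be observed; segregating the RACE products into simpler, more even populations shifts the situation toward the uniform case.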
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement, namely the sampling distances b_x,y,z, the VOI lengths L_x,y,z, the number of VOIs N_VOI and the structured noise, was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more along the r-direction (phantom radius) than along the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (L_x,y,z = 64, b_x,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs showed a directional dependency, in contrast to NPS obtained from large VOIs located at the center of the volume or from small VOIs located on a concentric circle. This showed that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
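As background, a 3D NPS of the kind described above is typically estimated as the ensemble average, over the VOIs, of the squared magnitude of the Fourier transform of each detrended VOI, normalized by the sampling distances and VOI lengths. The sketch below is illustrative only (array shapes, units and the detrending implementation are assumptions, not the study's exact protocol):

```python
import numpy as np

def detrend_first_order(voi):
    """Subtract a least-squares first-order (linear) trend to reduce structured noise."""
    lz, ly, lx = voi.shape
    z, y, x = np.meshgrid(np.arange(lz), np.arange(ly), np.arange(lx), indexing="ij")
    A = np.column_stack([np.ones(voi.size), x.ravel(), y.ravel(), z.ravel()])
    coeffs, *_ = np.linalg.lstsq(A, voi.ravel(), rcond=None)
    return voi - (A @ coeffs).reshape(voi.shape)

def nps_3d(vois, sampling):
    """Estimate the 3D noise power spectrum from an ensemble of noise-only VOIs.

    vois     : (N_VOI, L_z, L_y, L_x) array of noise-only sub-volumes (e.g. in HU)
    sampling : (b_z, b_y, b_x) sampling distances in mm
    Returns the NPS (e.g. in HU^2 mm^3) with zero frequency at the array center.
    """
    n_voi, lz, ly, lx = vois.shape
    bz, by, bx = sampling
    acc = np.zeros((lz, ly, lx))
    for voi in vois:
        acc += np.abs(np.fft.fftn(detrend_first_order(voi))) ** 2
    nps = acc / n_voi * (bx * by * bz) / (lx * ly * lz)
    return np.fft.fftshift(nps)
```

In the non-stationary case highlighted by the study, such an estimate should be computed from VOIs taken at comparable radial positions (e.g. on a concentric circle), since mixing central and off-center VOIs biases the directional content of the NPS.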