54 results for Data fusion applications
Abstract:
Over the past decade, CMRA has emerged as a unique clinical imaging tool with applications in selected populations. Patients with suspected coronary artery anomalies and patients with Kawasaki disease and coronary aneurysms are among those for whom CMRA has demonstrated clinical usefulness. For assessment of patients with atherosclerotic CAD, CMRA is useful for detection of patency of bypass grafts. At centers with appropriate expertise and resources, CMRA also appears to be of value for exclusion of severe proximal multivessel CAD in selected patients. Data from multicenter trials will continue to define the clinical role of CMRA, particularly as it relates to assessment of CAD. Future developments and enhancements of CMRA promise better lumen and coronary artery wall imaging. This may become the new target in noninvasive evaluation of CAD.
Abstract:
Intracellular membrane fusion proceeds via distinct stages of membrane docking, hemifusion and fusion pore opening and depends on interacting families of Rab, SNARE and SM proteins. Trans-SNARE complexes dock the membranes in close apposition. Efficient fusion requires further SNARE-associated proteins. They might increase the number of trans-SNARE complexes or the fusogenic potential of a single SNARE complex. We investigated the contributions of the SM protein Vps33 to hemifusion and pore opening between yeast vacuoles. Mutations in Vps33 that weaken its interactions with the SNARE complex allowed normal trans-SNARE pairing and lipid mixing but retarded content mixing. Deleting the Habc domain of the vacuolar t-SNARE Vam3, which interacts with Vps33, had the same effect. This suggests that SM proteins promote fusion pore opening by enhancing the fusogenic activity of a SNARE complex. They should thus be considered integral parts of the fusion machinery.
Abstract:
This paper focuses on the switching behaviour of enrolees in the Swiss basic health insurance system. Even though the new Federal Law on Social Health Insurance (LAMal) was implemented in 1996 to promote competition among health insurers in basic insurance, there is limited evidence of premium convergence within cantons. This indicates that competition has not been effective so far, and reveals some inertia among consumers, who seem reluctant to switch to less expensive funds. We investigate one possible barrier to switching behaviour, namely the influence of supplementary insurance. We use survey data on health plan choice (a sample of 1943 individuals whose switching behaviours were observed between 1997 and 2000) as well as administrative data on all insurance companies that operated in the 26 Swiss cantons between 1996 and 2005. The decision to switch and the decision to subscribe to a supplementary contract are jointly estimated. Our findings show that holding a supplementary insurance contract substantially decreases the propensity to switch. However, there is no negative impact of supplementary insurance on switching when the individual assesses his/her health as 'very good'. Our results give empirical support to one possible mechanism through which supplementary insurance might influence switching decisions: given that subscribing to basic and supplementary contracts with two different insurers may induce some administrative costs for the subscriber, holding supplementary insurance acts as a barrier to switching if customers who consider themselves 'bad risks' also believe that insurers reject applications for supplementary insurance on these grounds. In comparison with previous research, our main contribution is to offer a possible explanation for consumer inertia. Our analysis illustrates how the choice of a basic health plan interacts with the decision to subscribe to supplementary insurance.
A filtering method to correct time-lapse 3D ERT data and improve imaging of natural aquifer dynamics
Abstract:
We have developed a processing methodology that allows crosshole ERT (electrical resistivity tomography) monitoring data to be used to derive temporal fluctuations of groundwater electrical resistivity and thereby characterize the dynamics of groundwater in a gravel aquifer as it is infiltrated by river water. Temporal variations of the raw ERT apparent-resistivity data were mainly sensitive to the resistivity (salinity), temperature and height of the groundwater, with the relative contributions of these effects depending on the time and the electrode configuration. To resolve the changes in groundwater resistivity, we first expressed fluctuations of temperature-detrended apparent-resistivity data as linear superpositions of (i) time series of river-water-resistivity variations convolved with suitable filter functions and (ii) linear and quadratic representations of river-water-height variations multiplied by appropriate sensitivity factors; river-water height was determined to be a reliable proxy for groundwater height. Individual filter functions and sensitivity factors were obtained for each electrode configuration via deconvolution using a one-month calibration period, and the predicted contributions related to changes in water height were then removed prior to inversion of the temperature-detrended apparent-resistivity data. Applying the filter functions and sensitivity factors accurately predicted the apparent-resistivity variations (the correlation coefficient was 0.98). Furthermore, the filtered ERT monitoring data and resultant time-lapse resistivity models correlated closely with independently measured groundwater electrical resistivity monitoring data and only weakly with the groundwater-height fluctuations. The inversion results based on the filtered ERT data also showed significantly fewer inversion artefacts than the raw-data inversions. We observed resistivity increases of up to 10%, and the arrival-time peaks in the time-lapse resistivity models matched those in the groundwater resistivity monitoring data.
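The superposition model described above can be illustrated with a short sketch. The Python code below (illustrative only; the variable names, the number of filter taps and the plain least-squares solver are assumptions, not the authors' implementation) estimates, for one electrode configuration, a filter acting on the river-water resistivity series together with linear and quadratic river-height sensitivities over a calibration period, and then removes the predicted height contribution before inversion.

import numpy as np

def calibrate_config(drho, r, w, n_lags=48):
    # Least-squares estimate, for one electrode configuration, of a filter h
    # (n_lags taps, hypothetical length) acting on the river-water resistivity
    # series r, plus linear and quadratic sensitivities (a, b) to river height w,
    # that together explain the temperature-detrended apparent-resistivity series drho.
    drho, r, w = (np.asarray(x, float) for x in (drho, r, w))
    T = len(drho)
    lagged = [np.concatenate([np.zeros(k), r[:T - k]]) for k in range(n_lags)]
    X = np.column_stack(lagged + [w, w ** 2])
    coef, *_ = np.linalg.lstsq(X, drho, rcond=None)
    return coef[:n_lags], coef[n_lags], coef[n_lags + 1]   # h, a, b

def remove_height_effect(drho, w, a, b):
    # Subtract the predicted river-height contribution prior to inversion.
    w = np.asarray(w, float)
    return np.asarray(drho, float) - a * w - b * w ** 2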
Abstract:
Prediction of species' distributions is central to diverse applications in ecology, evolution and conservation science. There is increasing electronic access to vast sets of occurrence records in museums and herbaria, yet little effective guidance on how best to use this information in the context of numerous approaches for modelling distributions. To meet this need, we compared 16 modelling methods over 226 species from 6 regions of the world, creating the most comprehensive set of model comparisons to date. We used presence-only data to fit models, and independent presence-absence data to evaluate the predictions. Along with well-established modelling methods such as generalised additive models and GARP and BIOCLIM, we explored methods that either have been developed recently or have rarely been applied to modelling species' distributions. These include machine-learning methods and community models, both of which have features that may make them particularly well suited to noisy or sparse information, as is typical of species' occurrence data. Presence-only data were effective for modelling species' distributions for many species and regions. The novel methods consistently outperformed more established methods. The results of our analysis are promising for the use of data from museums and herbaria, especially as methods suited to the noise inherent in such data improve.
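The fit-on-presence-only, evaluate-on-presence-absence workflow can be sketched as follows. This is a hypothetical minimal example in Python: a logistic regression on presence records versus random background points stands in for a model, and is not one of the 16 methods compared in the study.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_presence_only(presence_env, background_env):
    # Presence-vs-background logistic regression as a stand-in presence-only model;
    # inputs are arrays of environmental covariates (rows = sites, columns = variables).
    X = np.vstack([presence_env, background_env])
    y = np.r_[np.ones(len(presence_env)), np.zeros(len(background_env))]
    return LogisticRegression(max_iter=1000).fit(X, y)

def evaluate_on_presence_absence(model, pa_env, pa_labels):
    # Score predictions against independent presence-absence records (AUC).
    return roc_auc_score(pa_labels, model.predict_proba(pa_env)[:, 1])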
Abstract:
In many practical applications the state of field soils is monitored by recording the evolution of temperature and soil moisture at discrete depths. We theoretically investigate the systematic errors that arise when mass and energy balances are computed directly from these measurements. We show that, even with no measurement or model errors, large residuals might result when finite-difference approximations are used to compute the fluxes and the storage term. To calculate the limits set by the use of spatially discrete measurements on the accuracy of balance closure, we derive an analytical solution to estimate the residual on the basis of two key parameters: the penetration depth and the distance between the measurements. When the thickness of the control layer for which the balance is computed is comparable to the penetration depth of the forcing (which depends on the thermal diffusivity and on the forcing period), large residuals arise. The residual is also very sensitive to the distance between the measurements, which requires accurately controlling the position of the sensors in field experiments. We also demonstrate that, for the same experimental setup, mass residuals are substantially larger than the energy residuals due to the nonlinearity of the moisture transport equation. Our analysis suggests that a careful assessment of the systematic mass error introduced by the use of spatially discrete data is required before using fluxes and residuals computed directly from field measurements.
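For reference, the penetration depth mentioned above follows from the standard solution for periodic heat conduction in a homogeneous medium; the LaTeX sketch below recalls that solution and the resulting scale of the finite-difference truncation error, without reproducing the paper's exact residual expressions.

% Standard solution for periodic heat conduction in a homogeneous soil of
% thermal diffusivity \kappa driven at period P:
T(z,t) = \bar{T} + A\, e^{-z/d} \sin\!\left(\omega t - \frac{z}{d}\right),
\qquad \omega = \frac{2\pi}{P},
\qquad d = \sqrt{\frac{2\kappa}{\omega}} .
% A centred finite difference over a sensor spacing \Delta z approximates the
% gradients of such a profile with a relative truncation error of order
% (\Delta z / d)^2, so balance residuals grow rapidly once the sensor spacing
% or the control-layer thickness becomes comparable to d.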
Abstract:
The biological properties of wild-type A75/17 and cell culture-adapted Onderstepoort canine distemper virus differ markedly. To learn more about the molecular basis for these differences, we have isolated and sequenced the protein-coding regions of the attachment and fusion proteins of wild-type canine distemper virus strain A75/17. In the attachment protein, a total of 57 amino acid differences were observed between the Onderstepoort strain and strain A75/17, and these were distributed evenly over the entire protein. Interestingly, the attachment protein of strain A75/17 contained an extension of three amino acids at the C terminus. Expression studies showed that the attachment protein of strain A75/17 had a higher apparent molecular mass than the attachment protein of the Onderstepoort strain, in both the presence and absence of tunicamycin. In the fusion protein, 60 amino acid differences were observed between the two strains, of which 44 were clustered in the much smaller F2 portion of the molecule. Significantly, the AUG that has been proposed as a translation initiation codon in the Onderstepoort strain is an AUA codon in strain A75/17. Detailed mutation analyses showed that both the first and second AUGs of strain A75/17 are the major translation initiation sites of the fusion protein. Similar analyses demonstrated that, also in the Onderstepoort strain, the first two AUGs are the translation initiation codons which contribute most to the generation of precursor molecules yielding the mature form of the fusion protein.
Abstract:
A recurring task in the analysis of mass genome annotation data from high-throughput technologies is the identification of peaks or clusters in a noisy signal profile. Examples of such applications are the definition of promoters on the basis of transcription start site profiles, the mapping of transcription factor binding sites based on ChIP-chip data and the identification of quantitative trait loci (QTL) from whole genome SNP profiles. Input to such an analysis is a set of genome coordinates associated with counts or intensities. The output consists of a discrete number of peaks with respective volumes, extensions and center positions. For this purpose we have developed a flexible one-dimensional clustering tool, called MADAP, which we make available as a web server and as a standalone program. A set of parameters enables the user to customize the procedure to a specific problem. The web server, which returns results in textual and graphical form, is useful for small to medium-scale applications, as well as for evaluation and parameter tuning in view of large-scale applications, which require a local installation. The program, written in C++, can be freely downloaded from ftp://ftp.epd.unil.ch/pub/software/unix/madap. The MADAP web server can be accessed at http://www.isrec.isb-sib.ch/madap/.
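As an illustration of the kind of input and output described (genome coordinates with counts in, peaks with centers, volumes and extensions out), here is a hypothetical Python sketch; it uses a simple weighted one-dimensional k-means and is not the algorithm implemented in MADAP.

import numpy as np

def cluster_1d(positions, counts, k, n_iter=100, seed=0):
    # Weighted one-dimensional k-means over genome coordinates with counts.
    # Returns, per cluster: center position, volume (total count) and extension (span).
    positions = np.asarray(positions, float)
    counts = np.asarray(counts, float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(positions, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(positions[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            m = labels == j
            if m.any():
                centers[j] = np.average(positions[m], weights=counts[m])
    peaks = []
    for j in range(k):
        m = labels == j
        if m.any():
            peaks.append({"center": centers[j],
                          "volume": counts[m].sum(),
                          "extension": (positions[m].min(), positions[m].max())})
    return peaks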
Abstract:
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study non-invasively the morphology of tissues and to provide biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data, which in practice demands the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to re-formulate this class of techniques as convenient linear systems which can then be solved efficiently using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; the AMICO framework is, however, general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to drastically accelerate the fitting of existing techniques (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
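The core linearization idea, fitting a non-negative combination of precomputed response atoms instead of running a non-linear optimization per voxel, can be sketched as follows. This is an illustrative Python example with assumed variable names; it is not the AMICO code and omits the framework's model-specific dictionaries and regularization choices.

import numpy as np
from scipy.optimize import nnls

def fit_voxel(signal, dictionary, lam=0.0):
    # Fit one voxel's diffusion MR signal as a non-negative combination of
    # precomputed response atoms (columns of 'dictionary'), optionally with
    # Tikhonov damping lam; microstructure indices can then be computed as
    # weighted averages of the atoms' generative parameters.
    A = np.asarray(dictionary, float)
    b = np.asarray(signal, float)
    n_atoms = A.shape[1]
    if lam > 0.0:
        # appending damping rows solves min ||Ax - b||^2 + lam*||x||^2, x >= 0
        A = np.vstack([A, np.sqrt(lam) * np.eye(n_atoms)])
        b = np.concatenate([b, np.zeros(n_atoms)])
    weights, _ = nnls(A, b)
    return weights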
Abstract:
Abstract: The human body is composed of a huge number of cells acting together in a concerted manner. The current understanding is that proteins perform most of the activities necessary to keep a cell alive. The DNA, on the other hand, stores in the genome the information on how to produce the different proteins. Regulating gene transcription is the first important step that can thus affect the life of a cell, modify its functions and its responses to the environment. Regulation is a complex operation that involves specialized proteins, the transcription factors. Transcription factors (TFs) can bind to DNA and activate the processes leading to the expression of genes into new proteins. Errors in this process may lead to diseases. In particular, some transcription factors have been associated with a lethal pathological state, commonly known as cancer, which is characterized by uncontrolled cellular proliferation, invasiveness of healthy tissues and abnormal responses to stimuli. Understanding cancer-related regulatory programs is a difficult task, often involving several TFs interacting together and influencing each other's activity. This thesis presents new computational methodologies to study gene regulation. In addition, we present applications of our methods to the understanding of cancer-related regulatory programs. The understanding of transcriptional regulation is a major challenge. We address this difficult question by combining computational approaches with large collections of heterogeneous experimental data. In detail, we design signal-processing tools to recover transcription factor binding sites on the DNA from genome-wide surveys such as chromatin immunoprecipitation assays on tiling arrays (ChIP-chip). We then use the localization of TF binding to explain expression levels of regulated genes. In this way we identify a regulatory synergy between two TFs, the oncogene C-MYC and SP1. C-MYC and SP1 bind preferentially at promoters, and when SP1 binds next to C-MYC on the DNA, the nearby gene is strongly expressed. The association between the two TFs at promoters is reflected by the conservation of the binding sites across mammals and by the permissive underlying chromatin states; it represents an important control mechanism involved in cellular proliferation, and thereby in cancer. Secondly, we identify the characteristics of the target genes of the TF estrogen receptor alpha (hERa) and we study the influence of hERa in regulating transcription. hERa, upon estrogen signaling, binds to DNA to regulate transcription of its targets in concert with its co-factors. To overcome the scarcity of experimental data about the binding sites of other TFs that may interact with hERa, we conduct an in silico analysis of the sequences underlying the ChIP sites using the collection of position weight matrices (PWMs) of hERa partners, the TFs FOXA1 and SP1. We combine ChIP-chip and ChIP-paired-end-diTag (ChIP-pet) data about hERa binding on DNA with the sequence information to explain gene expression levels in a large collection of cancer tissue samples, as well as in studies of the response of cells to estrogen. We confirm that hERa binding sites are distributed throughout the genome. However, we distinguish between binding sites near promoters and binding sites along the transcripts. The first group shows weak binding of hERa and a high occurrence of SP1 motifs, in particular near estrogen-responsive genes. The second group shows strong binding of hERa and a significant correlation between the number of binding sites along a gene and the strength of gene induction in the presence of estrogen. Some binding sites of the second group also show the presence of FOXA1, but the role of this TF still needs to be investigated. Different mechanisms have been proposed to explain hERa-mediated induction of gene expression. Our work supports the model of hERa activating gene expression from distal binding sites by interacting with promoter-bound TFs, like SP1. hERa has been associated with survival rates of breast cancer patients, though explanatory models are still incomplete: this result is important to better understand how hERa can control gene expression. Thirdly, we address the difficult question of regulatory network inference. We tackle this problem by analyzing time series of biological measurements such as quantifications of mRNA levels or protein concentrations. Our approach uses well-established penalized linear regression models, in which we impose sparseness on the connectivity of the regulatory network. We extend this method by enforcing the coherence of the regulatory dependencies: a TF must behave coherently, as either an activator or a repressor, on all its targets. This requirement is implemented as constraints on the signs of the regressed coefficients in the penalized linear regression model. Our approach is better at reconstructing meaningful biological networks than previous methods based on penalized regression. The method is tested on the DREAM2 challenge of reconstructing a five-gene/TF regulatory network, obtaining the best performance in the "undirected signed excitatory" category. Thus, these bioinformatics methods, which are reliable, interpretable and fast enough to cover large biological datasets, have enabled us to better understand gene regulation in humans.
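The sign-coherence constraint described for the network-inference part can be sketched as follows. This is an illustrative Python example, not the thesis implementation: with each regulator's sign fixed, the penalized regression becomes a non-negative problem solvable by a simple projected proximal-gradient loop.

import numpy as np

def sign_constrained_lasso(X, y, signs, lam=0.1, n_iter=500):
    # Sparse regression of a target expression profile y on regulator profiles X
    # (n samples x p regulators), with each regulator's sign fixed (+1 activator,
    # -1 repressor) so that it acts coherently on all targets. Substituting
    # w_j = signs_j * u_j with u_j >= 0 gives
    #     min_u ||X diag(signs) u - y||^2 + lam * sum(u),
    # solved here by a projected proximal-gradient (ISTA) loop.
    X = np.asarray(X, float); y = np.asarray(y, float); signs = np.asarray(signs, float)
    A = X * signs                                   # scale each column by its sign
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # inverse Lipschitz constant
    u = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ u - y)
        u = np.maximum(u - step * (grad + lam), 0.0)
    return signs * u                                # signed regression coefficients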
Abstract:
The bacterial insertion sequence IS21 contains two genes, istA and istB, which are organized as an operon. IS21 spontaneously forms tandem repeats designated (IS21)2. Plasmids carrying (IS21)2 react efficiently with other replicons, producing cointegrates via a cut-and-paste mechanism. Here we show that transposition of a single IS21 element (simple insertion) and cointegrate formation involving (IS21)2 result from two distinct non-replicative pathways, which are essentially due to two differentiated IstA proteins, transposase and cointegrase. In Escherichia coli, transposase was characterized as the full-length, 46 kDa product of the istA gene, whereas the 45 kDa cointegrase was expressed, in-frame, from a natural internal translation start of istA. The istB gene, which could be experimentally disconnected from istA, provided a helper protein that strongly stimulated the transposase and cointegrase-driven reactions. Site-directed mutagenesis was used to express either cointegrase or transposase from the istA gene. Cointegrase promoted replicon fusion at high frequencies by acting on IS21 ends which were linked by 2, 3, or 4 bp junction sequences in (IS21)2. By contrast, cointegrase poorly catalyzed simple insertion of IS21 elements. Transposase had intermediate, uniform activity in both pathways. The ability of transposase to synapse two widely spaced IS21 ends may reside in the eight N-terminal amino acid residues which are absent from cointegrase. Given the 2 or 3 bp spacing in naturally occurring IS21 tandems and the specialization of cointegrase, the fulminant spread of IS21 via cointegration can now be understood.
Abstract:
1. Few examples of habitat-modelling studies of rare and endangered species exist in the literature, although from a conservation perspective predicting their distribution would prove particularly useful. Paucity of data and lack of valid absences are the probable reasons for this shortcoming. Analytic solutions to accommodate the lack of absence include the ecological niche factor analysis (ENFA) and the use of generalized linear models (GLM) with simulated pseudo-absences. 2. In this study we tested a new approach to generating pseudo-absences, based on a preliminary ENFA habitat suitability (HS) map, for the endangered species Eryngium alpinum. This method of generating pseudo-absences was compared with two others: (i) use of a GLM with pseudo-absences generated totally at random, and (ii) use of an ENFA only. 3. The influence of two different spatial resolutions (i.e. grain) was also assessed for tackling the dilemma of quality (grain) vs. quantity (number of occurrences). Each combination of the three above-mentioned methods with the two grains generated a distinct HS map. 4. Four evaluation measures were used for comparing these HS maps: total deviance explained, best kappa, Gini coefficient and minimal predicted area (MPA). The last is a new evaluation criterion proposed in this study. 5. Results showed that (i) GLM models using ENFA-weighted pseudo-absence provide better results, except for the MPA value, and that (ii) quality (spatial resolution and locational accuracy) of the data appears to be more important than quantity (number of occurrences). Furthermore, the proposed MPA value is suggested as a useful measure of model evaluation when used to complement classical statistical measures. 6. Synthesis and applications. We suggest that the use of ENFA-weighted pseudo-absence is a possible way to enhance the quality of GLM-based potential distribution maps and that data quality (i.e. spatial resolution) prevails over quantity (i.e. number of data). Increased accuracy of potential distribution maps could help to define better suitable areas for species protection and reintroduction.
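A minimal sketch of the ENFA-weighted pseudo-absence idea is given below (Python; the (1 - suitability) sampling weight and the variable names are illustrative assumptions, not the exact scheme used in the study): pseudo-absences are drawn preferentially from cells the preliminary habitat-suitability map scores as unsuitable, and a binomial GLM is then fitted to presences and pseudo-absences.

import numpy as np
from sklearn.linear_model import LogisticRegression

def glm_with_weighted_pseudo_absences(presence_env, candidate_env, suitability, n_pa, seed=0):
    # Draw pseudo-absences with probability proportional to (1 - suitability),
    # i.e. preferentially from cells the ENFA habitat-suitability map deems
    # unsuitable, then fit a logistic-regression GLM on presences vs. pseudo-absences.
    rng = np.random.default_rng(seed)
    candidate_env = np.asarray(candidate_env, float)
    w = 1.0 - np.asarray(suitability, float)
    idx = rng.choice(len(candidate_env), size=n_pa, replace=False, p=w / w.sum())
    X = np.vstack([presence_env, candidate_env[idx]])
    y = np.r_[np.ones(len(presence_env)), np.zeros(n_pa)]
    return LogisticRegression(max_iter=1000).fit(X, y)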
Abstract:
This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can be considered, in a broad sense, as a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to the geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and most popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organised maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated both according to the traditional geostatistical approach, with experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This software collection has been developed over the past 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a user-friendly and easy-to-use interface.
Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence. It is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modelling tools. They can find solutions for classification, regression and probability density modelling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modelling and prediction as well as automatic data mapping. They are competitive with geostatistical models in low-dimensional geographical spaces but indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a topical subject: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
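The GRNN proposed above for automatic mapping is equivalent to Gaussian-kernel (Nadaraya-Watson) regression; a minimal Python sketch, not the Machine Learning Office implementation, is shown below. For spatial interpolation the inputs are simply the coordinates, optionally augmented with geo-features.

import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    # General Regression Neural Network = Nadaraya-Watson kernel regression:
    # each prediction is a Gaussian-kernel weighted average of the training
    # targets; sigma is the single smoothing parameter, typically tuned by
    # cross-validation.
    x_train, y_train, x_query = (np.asarray(a, float) for a in (x_train, y_train, x_query))
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)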
Abstract:
Data characteristics and species traits are expected to influence the accuracy with which species' distributions can be modeled and predicted. We compare 10 modeling techniques in terms of predictive power and sensitivity to location error, change in map resolution, and sample size, and assess whether some species traits can explain variation in model performance. We focused on 30 native tree species in Switzerland and used presence-only data to model current distribution, which we evaluated against independent presence-absence data. While there are important differences between the predictive performance of modeling methods, the variance in model performance is greater among species than among techniques. Within the range of data perturbations in this study, some extrinsic parameters of data affect model performance more than others: location error and sample size reduced performance of many techniques, whereas grain had little effect on most techniques. No technique can rescue species that are difficult to predict. The predictive power of species-distribution models can partly be predicted from a series of species characteristics and traits based on growth rate, elevational distribution range, and maximum elevation. Slow-growing species or species with narrow and specialized niches tend to be better modeled. The Swiss presence-only tree data produce models that are reliable enough to be useful in planning and management applications.
Abstract:
IMPORTANCE OF THE FIELD: Promising immunotherapeutic agents targeting co-stimulatory pathways are currently being tested in clinical trials. One player in this array of regulatory pathways is the LAG-3/MHC class II axis. The lymphocyte activation gene-3 (LAG-3) is a negative co-stimulatory receptor that modulates T cell homeostasis, proliferation and activation. A recombinant soluble dimeric form of LAG-3 (sLAG-3-Ig, IMP321) shows adjuvant properties and enhances immunogenicity of tumor vaccines. Recent clinical trials produced encouraging results, especially when the human dimeric soluble form of LAG-3 (hLAG-3-Ig) was used in combination with chemotherapy. AREAS COVERED IN THIS REVIEW: The biological relevance of LAG-3 in vivo. Pre-clinical data demonstrating adjuvant properties, as well as the improvement of tumor immunity by sLAG-3-Ig. Recent advances in the clinical development of the therapeutic reagent IMP321, hLAG-3-Ig, for cancer treatment. WHAT THE READER WILL GAIN: This review summarizes preclinical and clinical data on the biological functions of LAG-3. TAKE HOME MESSAGE: The LAG-3 inhibitory pathway is attracting attention, in the light of recent studies demonstrating its role in T cell unresponsiveness, and Treg function after chronic antigen stimulation. As a soluble recombinant dimer, the sLAG-3-Ig protein acts as an adjuvant for therapeutic induction of T cell responses, and may be beneficial to cancer patients when used in combination therapies.