836 results for Data fusion applications


Relevance:

30.00%

Publisher:

Abstract:

Data characteristics and species traits are expected to influence the accuracy with which species' distributions can be modeled and predicted. We compare 10 modeling techniques in terms of predictive power and sensitivity to location error, change in map resolution, and sample size, and assess whether some species traits can explain variation in model performance. We focused on 30 native tree species in Switzerland and used presence-only data to model current distribution, which we evaluated against independent presence-absence data. While there are important differences between the predictive performance of modeling methods, the variance in model performance is greater among species than among techniques. Within the range of data perturbations in this study, some extrinsic parameters of data affect model performance more than others: location error and sample size reduced performance of many techniques, whereas grain had little effect on most techniques. No technique can rescue species that are difficult to predict. The predictive power of species-distribution models can partly be predicted from a series of species characteristics and traits based on growth rate, elevational distribution range, and maximum elevation. Slow-growing species or species with narrow and specialized niches tend to be better modeled. The Swiss presence-only tree data produce models that are reliable enough to be useful in planning and management applications.
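As an illustration of the evaluation step this abstract describes, the sketch below fits a simple presence/background model and scores it against an independent presence-absence set using AUC; the logistic model, the synthetic predictors and the AUC metric are assumptions made for the example, not the ten techniques or the Swiss data of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic environmental predictors (e.g., elevation, temperature) -- illustrative only.
X_presence = rng.normal(loc=1.0, size=(200, 2))      # presence-only records
X_background = rng.normal(loc=0.0, size=(1000, 2))   # random background (pseudo-absence) points

# Presence/background model: label presences 1, background 0.
X_train = np.vstack([X_presence, X_background])
y_train = np.concatenate([np.ones(len(X_presence)), np.zeros(len(X_background))])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Independent presence-absence evaluation set (as in the study design).
X_eval = rng.normal(loc=0.5, size=(300, 2))
y_eval = (X_eval[:, 0] + 0.5 * X_eval[:, 1] + rng.normal(scale=0.5, size=300) > 0.5).astype(int)

auc = roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])
print(f"AUC against independent presence-absence data: {auc:.2f}")
```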

Relevance:

30.00%

Publisher:

Abstract:

IMPORTANCE OF THE FIELD: Promising immunotherapeutic agents targeting co-stimulatory pathways are currently being tested in clinical trials. One player in this array of regulatory pathways is the LAG-3/MHC class II axis. The lymphocyte activation gene-3 (LAG-3) is a negative co-stimulatory receptor that modulates T cell homeostasis, proliferation and activation. A recombinant soluble dimeric form of LAG-3 (sLAG-3-Ig, IMP321) shows adjuvant properties and enhances immunogenicity of tumor vaccines. Recent clinical trials produced encouraging results, especially when the human dimeric soluble form of LAG-3 (hLAG-3-Ig) was used in combination with chemotherapy. AREAS COVERED IN THIS REVIEW: The biological relevance of LAG-3 in vivo. Pre-clinical data demonstrating adjuvant properties, as well as the improvement of tumor immunity by sLAG-3-Ig. Recent advances in the clinical development of the therapeutic reagent IMP321, hLAG-3-Ig, for cancer treatment. WHAT THE READER WILL GAIN: This review summarizes preclinical and clinical data on the biological functions of LAG-3. TAKE HOME MESSAGE: The LAG-3 inhibitory pathway is attracting attention, in the light of recent studies demonstrating its role in T cell unresponsiveness, and Treg function after chronic antigen stimulation. As a soluble recombinant dimer, the sLAG-3-Ig protein acts as an adjuvant for therapeutic induction of T cell responses, and may be beneficial to cancer patients when used in combination therapies.

Relevance:

30.00%

Publisher:

Abstract:

The present paper advocates for the creation of a federated, hybrid database in the cloud, integrating law data from all available public sources into one single open-access system, adding, in the process, relevant metadata to the indexed documents, including the identification of social and semantic entities and the relationships between them, using linked open data techniques and standards such as RDF. Examples of potential benefits and applications of this approach are also provided, including, among others, experiences from our previous research, in which data integration, graph databases, and social and semantic network analysis were used to identify power relations, litigation dynamics and cross-reference patterns both intra- and inter-institutionally, covering most of the world's international economic courts.
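A minimal sketch of the linked-open-data idea mentioned above, using rdflib to describe a decision, its judge and a cross-reference as RDF triples; the law: namespace, the identifiers and the property names are hypothetical, not an existing legal vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical vocabulary and identifiers -- for illustration only.
LAW = Namespace("http://example.org/law/")

g = Graph()
g.bind("law", LAW)

case_a = URIRef(LAW["case/WTO-DS123"])
case_b = URIRef(LAW["case/WTO-DS456"])
judge = URIRef(LAW["person/judge-42"])

# A decision, the judge who signed it, and a cross-reference to an earlier decision.
g.add((case_a, RDF.type, LAW.Decision))
g.add((case_a, RDFS.label, Literal("Illustrative panel report")))
g.add((case_a, LAW.decidedBy, judge))
g.add((case_a, LAW.cites, case_b))

# Serialize to Turtle, ready to be loaded into a triple store or graph database.
print(g.serialize(format="turtle"))
```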

Relevance:

30.00%

Publisher:

Abstract:

The paper deals with the development and application of a methodology for the automatic mapping of pollution/contamination data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve this problem. The automatic tuning of isotropic and anisotropic GRNN models using a cross-validation procedure is presented. Results are compared with a k-nearest-neighbours interpolation algorithm using an independent validation data set. The quality of the mapping is controlled by analysing the raw data and the residuals using variography. Maps of probabilities of exceeding a given decision level and 'thick' isoline visualization of the uncertainties are presented as examples of decision-oriented mapping. The real case study is based on mapping radioactively contaminated territories.
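A minimal sketch of the approach described above, assuming an isotropic Gaussian-kernel GRNN (the Nadaraya-Watson form) with leave-one-out cross-validation of the kernel width; the coordinates and contamination values are synthetic, not the case-study data.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """Isotropic GRNN: Gaussian-kernel weighted average of the training values."""
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

def tune_sigma(x, y, sigmas):
    """Leave-one-out cross-validation of the kernel width (the automatic tuning step)."""
    best, best_err = None, np.inf
    n = len(x)
    for s in sigmas:
        err = 0.0
        for i in range(n):
            mask = np.arange(n) != i
            pred = grnn_predict(x[mask], y[mask], x[i:i + 1], s)[0]
            err += (pred - y[i]) ** 2
        if err < best_err:
            best, best_err = s, err
    return best

# Hypothetical contamination samples: (x, y) coordinates and a measured value.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(150, 2))
values = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=150)

sigma = tune_sigma(coords, values, sigmas=np.linspace(0.1, 2.0, 20))
grid = np.array([[x, y] for x in np.linspace(0, 10, 50) for y in np.linspace(0, 10, 50)])
surface = grnn_predict(coords, values, grid, sigma)   # the automatically tuned map
print(f"Tuned kernel width: {sigma:.2f}")
```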

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work was to generate drift curves from pesticide applications on coffee plants and to compare them with two European drift-prediction models. The methodology used is based on the ISO 22866 standard. The experimental design was a randomized complete block with ten replicates in a 2x20 split-plot arrangement. The evaluated factors were: two types of nozzles (hollow cone with and without air induction) and 20 distances parallel to the crop line outside the target area, spaced at 2.5 m. Blotting papers were used as targets and placed at each of the evaluated distances. The spray solution was composed of water + rhodamine B fluorescent tracer at a concentration of 100 mg L-1, for detection by fluorimetry. A spray volume of 400 L ha-1 was applied using a hydropneumatic sprayer. The air-induction nozzle reduces drift up to 20 m from the treated area. The application with the hollow cone nozzle results in a maximum drift of 6.68% at the collector nearest to the treated area. The German and Dutch models overestimate the drift at distances closest to the crop, although the Dutch model more closely approximates the drift curves generated by both spray nozzles.
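To illustrate how a drift curve of this kind can be summarised, the sketch below fits a simple exponential decay of deposit with distance to hypothetical collector data; the functional form and the numbers are assumptions made for the example, not the fitted Brazilian, German or Dutch models.

```python
import numpy as np
from scipy.optimize import curve_fit

def drift_curve(distance, a, b):
    """Simple exponential decay of drift deposit with distance (illustrative form only)."""
    return a * np.exp(-b * distance)

# Hypothetical deposits (% of applied rate) on 20 collectors spaced 2.5 m apart.
rng = np.random.default_rng(2)
distance = np.arange(2.5, 52.5, 2.5)
deposit = 6.68 * np.exp(-0.15 * distance) + 0.05 * rng.normal(size=20)

params, _ = curve_fit(drift_curve, distance, deposit, p0=(6.0, 0.1))
a, b = params
print(f"Fitted drift at 2.5 m: {drift_curve(2.5, a, b):.2f}% of the applied rate")
print(f"Distance where drift falls below 0.1%: {np.log(a / 0.1) / b:.1f} m")
```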

Relevance:

30.00%

Publisher:

Abstract:

The implementation of new imaging techniques in the daily practice of the radiation oncologist has been a major advance over the last 10 years. It allows the optimization of therapeutic intervals and of locoregional control of the disease while limiting side effects. Among these techniques, positron emission tomography (PET) offers the clinician an opportunity to obtain data on tumoral biological mechanisms while benefiting from the morphological images of the computed tomography (CT) scan. Hybrid PET/CT has recently been developed, and numerous studies have aimed at optimizing its use in treatment planning, the evaluation of treatment response and prognostic assessment. The choice of radiotracer (according to the type of cancer and the biological mechanism studied) and the various methods of tumor delineation require regular updates to optimize practice. Throughout this article, we propose an exhaustive review of the research published (and in press) up to December 2011, as a user guide to PET/CT in all aspects of modern radiotherapy (from diagnosis to follow-up): biopsy guidance, optimization of treatment planning and dosimetry, evaluation of tumor response and prognostic value, follow-up and early detection of recurrence versus tumoral necrosis. For didactic purposes, each of these aspects is approached by primary tumor location and illustrated with representative iconographic examples. The current contribution of PET/CT and its development perspectives are described to offer the radiation oncologist a clear and up-to-date overview of this expanding domain.
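As one concrete example of the tumor delineation methods mentioned above, the sketch below applies fixed-percentage-of-SUVmax thresholding to a synthetic PET sub-volume; the 40% fraction and the data are illustrative assumptions, not a clinical recommendation.

```python
import numpy as np

def delineate_fixed_threshold(suv_volume, fraction=0.40):
    """Segment a lesion as all voxels above a fixed fraction of SUVmax.

    This is only one of the delineation methods discussed in the literature;
    the 40% default is a commonly quoted value, not a universal recommendation.
    """
    threshold = fraction * suv_volume.max()
    return suv_volume >= threshold

# Hypothetical PET sub-volume (SUV values) around a lesion.
rng = np.random.default_rng(3)
suv = rng.gamma(shape=2.0, scale=0.5, size=(32, 32, 16))
suv[12:20, 12:20, 6:10] += 8.0          # synthetic "hot" lesion

mask = delineate_fixed_threshold(suv, fraction=0.40)
print(f"Delineated volume: {mask.sum()} voxels")
```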

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To improve the risk stratification of patients with rhabdomyosarcoma (RMS) through the use of clinical and molecular biologic data. PATIENTS AND METHODS: Two independent data sets of gene-expression profiling for 124 and 101 patients with RMS were used to derive prognostic gene signatures by using a meta-analysis. These and a previously published metagene signature were evaluated by using cross validation analyses. A combined clinical and molecular risk-stratification scheme that incorporated the PAX3/FOXO1 fusion gene status was derived from 287 patients with RMS and evaluated. RESULTS: We showed that our prognostic gene-expression signature and the one previously published performed well with reproducible and significant effects. However, their effect was reduced when cross validated or tested in independent data and did not add new prognostic information over the fusion gene status, which is simpler to assay. Among nonmetastatic patients, patients who were PAX3/FOXO1 positive had a significantly poorer outcome compared with both alveolar-negative and PAX7/FOXO1-positive patients. Furthermore, a new clinicomolecular risk score that incorporated fusion gene status (negative and PAX3/FOXO1 and PAX7/FOXO1 positive), Intergroup Rhabdomyosarcoma Study TNM stage, and age showed a significant increase in performance over the current risk-stratification scheme. CONCLUSION: Gene signatures can improve current stratification of patients with RMS but will require complex assays to be developed and extensive validation before clinical application. A significant majority of their prognostic value was encapsulated by the fusion gene status. A continuous risk score derived from the combination of clinical parameters with the presence or absence of PAX3/FOXO1 represents a robust approach to improving current risk-adapted therapy for RMS.
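To make the idea of a continuous clinicomolecular risk score concrete, the sketch below combines fusion-gene status, stage and age in a Cox-style linear predictor; the weights are placeholders invented for the example, not the coefficients of the published score.

```python
# Minimal sketch of a continuous clinicomolecular risk score combining
# fusion-gene status, Intergroup Rhabdomyosarcoma Study stage and age.
# The coefficients below are placeholders, NOT the published model weights.

FUSION_WEIGHTS = {"negative": 0.0, "PAX7/FOXO1": 0.4, "PAX3/FOXO1": 1.2}   # hypothetical
STAGE_WEIGHT = 0.5                                                          # hypothetical
AGE_WEIGHT = 0.03                                                           # hypothetical

def risk_score(fusion_status: str, stage: int, age_years: float) -> float:
    """Linear predictor of a Cox-style model: a higher score means a higher estimated risk."""
    return (FUSION_WEIGHTS[fusion_status]
            + STAGE_WEIGHT * (stage - 1)
            + AGE_WEIGHT * age_years)

if __name__ == "__main__":
    print(risk_score("PAX3/FOXO1", stage=3, age_years=12))   # high-risk example
    print(risk_score("negative", stage=1, age_years=4))      # low-risk example
```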

Relevance:

30.00%

Publisher:

Abstract:

Many biological specimens do not arrange themselves in ordered assemblies (tubular or flat 2D crystals) suitable for electron crystallography, nor in perfectly ordered 3D crystals for X-ray diffraction; many others are simply too large to be approached by NMR spectroscopy. Therefore, single-particle analysis has become a progressively more important technique for the structural determination of large isolated macromolecules by cryo-electron microscopy. Nevertheless, the low signal-to-noise ratio (SNR) and the high electron-beam sensitivity of biological samples remain two of the main resolution-limiting factors when the specimens are observed in their native state. Cryo-negative staining is a recently developed technique that allows the study of biological samples with the electron microscope. The samples are observed at low temperature, in the vitrified state, but in the presence of a stain (ammonium molybdate). In the present work, the advantages of this novel technique are investigated: it is shown that cryo-negative staining can generally overcome most of the problems encountered with cryo-electron microscopy of vitrified native suspensions of biological particles. The specimens are faithfully represented, with a 10-times higher SNR than in the case of unstained samples. Beam damage is found to be considerably reduced, based on the comparison of multiple-exposure series of both stained and unstained samples. The present report also demonstrates that cryo-negative staining is capable of high-resolution analysis of biological macromolecules. The vitrified stain solution surrounding the sample does not forbid access to the internal features (i.e. the secondary structure) of a protein. This finding is of direct interest for the structural biologist trying to combine electron microscopy and X-ray data. Finally, several application examples demonstrate the advantages of this newly developed electron microscopy technique.
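A rough illustration of how an SNR comparison between stained and unstained images can be quantified; the variance-ratio estimator and the synthetic micrograph below are assumptions made for the example, not the measurement protocol of the study.

```python
import numpy as np

def estimate_snr(image, particle_mask, background_mask):
    """Crude SNR estimate: signal variance in the particle region divided by
    noise variance in a particle-free background region (illustrative only)."""
    signal_var = image[particle_mask].var()
    noise_var = image[background_mask].var()
    return signal_var / noise_var

# Synthetic micrograph: a "particle" embedded in a noisy background.
rng = np.random.default_rng(4)
img = rng.normal(scale=1.0, size=(256, 256))
img[100:156, 100:156] += 3.0 * rng.normal(scale=1.5, size=(56, 56))   # particle region

particle = np.zeros_like(img, dtype=bool); particle[100:156, 100:156] = True
background = np.zeros_like(img, dtype=bool); background[:64, :64] = True

print(f"Estimated SNR: {estimate_snr(img, particle, background):.1f}")
```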

Relevance:

30.00%

Publisher:

Abstract:

Automatic genome sequencing and annotation, as well as large-scale gene expression measurement methods, generate a massive amount of data for model organisms such as human and mouse. Searching for gene-specific or organism-specific information throughout all the different databases has become a very difficult task, and often results in fragmented and unrelated answers. A database that federates and integrates genomic and transcriptomic data will greatly improve search speed as well as the quality of the results by allowing a direct comparison of expression results obtained by different techniques. The main goal of this project, called the CleanEx database, is thus to provide access to public gene expression data via unique gene names and to represent heterogeneous expression data produced by different technologies in a way that facilitates joint analysis and cross-dataset comparisons. A consistent and up-to-date gene nomenclature is achieved by associating each single gene expression experiment with a permanent target identifier consisting of a physical description of the targeted RNA population or the hybridization reagent used. These targets are then mapped at regular intervals to the growing and evolving catalogues of genes from model organisms. The completely automatic mapping procedure relies partly on external genome information resources such as UniGene and RefSeq. The central part of CleanEx is a weekly-built gene index containing cross-references to all public expression data already incorporated into the system. In addition, the expression target database of CleanEx provides gene mapping and quality-control information for various types of experimental resources, such as cDNA clones or Affymetrix probe sets. The Affymetrix mapping files are accessible as text files, for further use in external applications, and as individual entries via the web-based interfaces. The CleanEx web-based query interfaces offer access to individual entries via text-string searches or quantitative expression criteria, as well as cross-dataset analysis tools and cross-chip gene comparison. These tools have proven to be very efficient in expression data comparison and even, to a certain extent, in the detection of differentially expressed splice variants. The CleanEx flat files and tools are available online at http://www.cleanex.isb-sib.ch/.
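A conceptual sketch of the gene-index idea described above: permanent target identifiers are mapped to current gene symbols and all expression values are cross-referenced under those symbols; the identifiers, values and mapping table are invented for illustration, and the real CleanEx pipeline is far more elaborate.

```python
from collections import defaultdict

# Hypothetical permanent target identifiers (cDNA clones, Affymetrix probe sets)
# mapped to current gene symbols; in CleanEx this mapping is refreshed at regular
# intervals against external resources such as UniGene and RefSeq.
target_to_gene = {
    "AFFX:1007_s_at": "DDR1",
    "AFFX:1053_at": "RFC2",
    "CLONE:IMAGE-123456": "DDR1",
}

# Expression measurements keyed by target identifier (values are illustrative).
experiments = {
    "exp_microarray_01": {"AFFX:1007_s_at": 8.2, "AFFX:1053_at": 5.1},
    "exp_est_library_02": {"CLONE:IMAGE-123456": 12.0},
}

def build_gene_index(experiments, target_to_gene):
    """Weekly rebuild: cross-reference every expression value under its gene symbol."""
    index = defaultdict(list)
    for exp_id, measurements in experiments.items():
        for target_id, value in measurements.items():
            gene = target_to_gene.get(target_id)
            if gene is not None:                     # unmapped targets are skipped
                index[gene].append((exp_id, target_id, value))
    return dict(index)

index = build_gene_index(experiments, target_to_gene)
print(index["DDR1"])   # all data for one gene, across heterogeneous technologies
```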

Relevance:

30.00%

Publisher:

Abstract:

The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each of these environments, including hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best environment for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by utilizing the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of the environments, the networks connecting the processors in these environments were investigated with respect to their performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the work units are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications. Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm incorporates the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize the overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm. The scheduling algorithm is executed by the communication thread, so the scheduling does not affect the execution of the parallel application. Performance results achieved with MPIT show considerable improvements over conventional MPI applications.
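A minimal sketch of the idea behind MPIT, overlapping computation with communication by delegating sends to a dedicated thread; it uses mpi4py, assumes an MPI build with thread support and at least two ranks, and is an illustrative reconstruction rather than the thesis's actual MPIT implementation.

```python
# Run with, e.g.: mpiexec -n 2 python overlap_sketch.py
import queue
import threading

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

outbox = queue.Queue()

def communication_thread():
    """Dedicated thread: drains the outbox and ships results to the next rank."""
    while True:
        item = outbox.get()
        if item is None:                      # sentinel: no more work units
            break
        comm.send(item, dest=(rank + 1) % size, tag=0)

t = threading.Thread(target=communication_thread, daemon=True)
t.start()

# Main thread keeps computing while results are sent in the background.
for i in range(10):
    result = sum(x * x for x in range(1000 + rank))   # stand-in for real computation
    outbox.put((rank, i, result))

outbox.put(None)
t.join()

# Each rank also receives what its predecessor produced.
received = [comm.recv(source=(rank - 1) % size, tag=0) for _ in range(10)]
if rank == 0:
    print(f"rank 0 received {len(received)} work units")
```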

Relevance:

30.00%

Publisher:

Abstract:

Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This statement implies that some experimental knowledge on the system to be simulated should be available prior to any modelling attempt, and raises a tremendous limitation to the practical application of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, and in particular the issue of the models' a priori parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were however noted in all sites, and could be ascribed to various model routines. In decreasing importance, these were: water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as field-capacity water content or the size of soil microbial biomass. As a result, model predictions fell within the measurement errors in all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
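The sketch below illustrates the two ingredients named above, an a priori pedo-transfer function and an RMSE test against measurements; the linear coefficients of the pedo-transfer function are placeholders, not the functions actually used with the CERES models.

```python
import numpy as np

def field_capacity(clay_pct, sand_pct, organic_carbon_pct):
    """Hypothetical linear pedo-transfer function: converts routinely available
    soil properties into a functional parameter (volumetric water content at
    field capacity). The coefficients are placeholders, not a published PTF."""
    return 0.25 + 0.004 * clay_pct - 0.001 * sand_pct + 0.01 * organic_carbon_pct

def rmse(observed, simulated):
    """Root mean squared error used to judge model performance against measurements."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return np.sqrt(np.mean((observed - simulated) ** 2))

# Hypothetical site: derive the parameter a priori, then compare simulated and
# observed soil water contents over the season.
fc = field_capacity(clay_pct=28, sand_pct=35, organic_carbon_pct=1.2)
observed_swc = [0.31, 0.29, 0.24, 0.22, 0.27]
simulated_swc = [0.30, 0.27, 0.25, 0.20, 0.29]
print(f"Field capacity (a priori): {fc:.3f} m3 m-3")
print(f"RMSE: {rmse(observed_swc, simulated_swc):.3f} m3 m-3")
```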

Relevance:

30.00%

Publisher:

Abstract:

In recent years, Semantic Web (SW) research has produced significant outcomes. Various industries have adopted SW technologies, while the ‘deep web’ is still approaching the critical transformation point at which the majority of data found on the deep web will be exploited through SW value layers. In this article we analyse SW applications from a ‘market’ perspective. We set out the key requirements for SW-enabled real-world information systems and discuss the major difficulties that have delayed SW uptake. This article contributes to the SW and knowledge management literature by providing a context for discourse towards best practices for SW-based information systems.

Relevance:

30.00%

Publisher:

Abstract:

The evaluation of investments in advanced technology is one of the most important decision-making tasks. The importance is even more pronounced considering the huge budgets involved and the strategic, economic and analytic justification required to shorten design and development time. Choosing the most appropriate technology requires an accurate and reliable system that can guide the decision makers through such a complicated task. Currently, several Information and Communication Technology (ICT) manufacturers that design global products are seeking local firms to act as their sales and services representatives (called distributors) to the end user. At the same time, the end user or customer is also searching for the best possible deal for their investment in ICT projects. Therefore, the objective of this research is to present a holistic decision support system to assist the decision maker in Small and Medium Enterprises (SMEs), working either as individual decision makers or in a group, in the evaluation of the investment to become an ICT distributor or an ICT end user. The model is composed of the Delphi/MAH (Maximising Agreement Heuristic) analysis, a well-known quantitative method in Group Support Systems (GSS), which is applied to gather the average ranking data from amongst the Decision Makers (DMs). After that, the Analytic Network Process (ANP) analysis is brought in to analyse the problem holistically: it performs quantitative and qualitative analysis simultaneously. The illustrative data are obtained from industrial entrepreneurs by using the Group Support System (GSS) laboratory facilities at Lappeenranta University of Technology, Finland, and in Thailand. The result of the research, which is currently implemented in Thailand, can provide benefits to the industry in the evaluation of becoming an ICT distributor or an ICT end user, particularly in the assessment of Enterprise Resource Planning (ERP) programmes. After the model is put to the test through in-depth collaboration with industrial entrepreneurs in Finland and Thailand, a sensitivity analysis is also performed to validate the robustness of the model. The contribution of this research lies in developing a new approach and the Delphi/MAH software to obtain an analysis of the value of becoming an ERP distributor or end user that is flexible and applicable to entrepreneurs who are looking for the most appropriate such investment. The main advantage of this research over others is that the model can deliver the value of becoming an ERP distributor or end user as a single number, which makes it easier for DMs to choose the most appropriate ERP vendor. An associated advantage is that the model can include qualitative data as well as quantitative data, as results from using quantitative data alone can be misleading and inadequate. There is a need to utilise quantitative and qualitative analysis together, as can be seen from the case studies.
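As a simplified stand-in for the Delphi/MAH step, the sketch below aggregates the rankings supplied by several decision makers into an average group ranking; it shows only the averaging idea, not the full Maximising Agreement Heuristic, and the alternatives and rankings are invented.

```python
# Simplified stand-in for the Delphi step: aggregate the rankings supplied by
# several decision makers into an average ranking. This is NOT the full
# Maximising Agreement Heuristic, just the averaging idea it builds on.
from statistics import mean

# Rank of each alternative (1 = best) per decision maker -- illustrative data.
rankings = {
    "DM1": {"become_distributor": 1, "become_end_user": 2, "do_nothing": 3},
    "DM2": {"become_distributor": 2, "become_end_user": 1, "do_nothing": 3},
    "DM3": {"become_distributor": 1, "become_end_user": 3, "do_nothing": 2},
}

alternatives = sorted({a for r in rankings.values() for a in r})
average_rank = {a: mean(r[a] for r in rankings.values()) for a in alternatives}

# Group ranking, best (lowest average rank) first; this feeds the subsequent ANP analysis.
for alt, avg in sorted(average_rank.items(), key=lambda kv: kv[1]):
    print(f"{alt}: average rank {avg:.2f}")
```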

Relevance:

30.00%

Publisher:

Abstract:

Within Data Envelopment Analysis, several alternative models allow for an environmental adjustment, but the majority of them deliver divergent results. Decision makers face the difficult task of selecting the most suitable model. This study is performed to overcome this difficulty and, by doing so, fills a research gap. First, a two-step web-based survey is conducted. It aims (1) to identify the selection criteria, (2) to prioritize and weight the selection criteria with respect to the goal of selecting the most suitable model, and (3) to collect the preferences about which model is preferable to fulfil each selection criterion. Second, the Analytic Hierarchy Process is used to quantify the preferences expressed in the survey. Results show that the understandability, the applicability and the acceptability of the alternative models are valid selection criteria. The selection of the most suitable model depends on the preferences of the decision makers with regard to these criteria.
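The AHP quantification step can be illustrated with the standard principal-eigenvector computation on a pairwise comparison matrix of the three criteria identified above; the comparison values below are hypothetical, not the survey results.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for the three selection criteria
# (understandability, applicability, acceptability) on Saaty's 1-9 scale.
criteria = ["understandability", "applicability", "acceptability"]
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

# Standard AHP weighting: normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

for name, w in zip(criteria, weights):
    print(f"{name}: weight {w:.3f}")

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI = 0.58 for n = 3).
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
print(f"Consistency ratio: {ci / 0.58:.3f}")
```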