88 results for sistemi integrati, CAT tools, machine translation


Relevance: 100.00%

Abstract:

This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted for application to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. They are also well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP), the best-known algorithm in artificial intelligence; general regression neural networks (GRNN); probabilistic neural networks (PNN); self-organising maps (SOM); Gaussian mixture models (GMM); radial basis function networks (RBF); and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both with the traditional geostatistical approach of experimental variography and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool of the geostatistical analysis of anisotropic spatial correlations and detects the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work is a software collection, Machine Learning Office. This collection has been developed over the last 15 years and has been used for teaching many courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to provide a user-friendly, easy-to-use interface.

Machine Learning for geospatial data: algorithms, software tools and case studies

Abstract: The thesis is devoted to the analysis, modelling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision-support systems for environmental data mining, including pattern recognition, modelling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it helps to reveal the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest-neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a currently hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
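As an illustration of the kind of model highlighted above, the sketch below implements a GRNN-style spatial interpolator, i.e. Nadaraya-Watson kernel regression with a Gaussian kernel over coordinates. It is a minimal sketch on synthetic data with a hand-picked bandwidth `sigma` (which in practice would be tuned by cross-validation); it is not the Machine Learning Office implementation.

```python
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma=1.0):
    """Nadaraya-Watson (GRNN-style) prediction at query_xy from observed points."""
    # Squared Euclidean distances between every query point and every training point
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))        # Gaussian kernel weights
    return (w @ train_z) / w.sum(axis=1)        # kernel-weighted average of observations

# Toy usage: interpolate a noisy synthetic field sampled at 200 random locations
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(200, 2))
z = np.sin(xy[:, 0]) + 0.1 * rng.standard_normal(200)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 50),
                            np.linspace(0, 10, 50)), axis=-1).reshape(-1, 2)
z_map = grnn_predict(xy, z, grid, sigma=0.5)    # values on the prediction grid
```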

Relevance: 30.00%

Abstract:

This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used when the modelled environmental phenomena are hidden, non-linear, noisy and highly variable in space and time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geospatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, that is, when the dimension of the space exceeds five. Such features are typically generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns taking real-space constraints such as geomorphology, networks and other natural structures into account. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different non-linear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 illustrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
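As a concrete, hedged example of one task mentioned above (digital soil mapping with SVM), the sketch below trains an RBF-kernel SVM classifier on a hypothetical geo-feature space. The feature names, soil classes and data are placeholders, not the study's actual inputs.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Hypothetical geo-feature matrix: columns could be x, y, elevation, slope, aspect, NDVI
X = rng.standard_normal((500, 6))
y = rng.integers(0, 4, size=500)          # four hypothetical soil classes

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X, y)
predicted_classes = model.predict(X)      # in practice, applied to a dense prediction grid
```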

Relevance: 30.00%

Abstract:

Building a personalized model that describes the drug concentration inside the human body for each patient is highly important for clinical practice and demanding for modelling tools. Instead of using traditional explicit methods, in this paper we propose a machine learning approach to describe the relation between drug concentration and patient features. Machine learning has been widely applied to data analysis in various domains, but it is still new to personalized medicine, especially dose individualization. We focus mainly on the prediction of drug concentrations and on the analysis of the influence of different features. Models are built with Support Vector Machines, and the prediction results are compared with traditional analytical models.
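A minimal sketch of the approach described, under assumed, purely illustrative patient covariates: Support Vector Regression mapping features to a measured concentration, evaluated by cross-validation. It is not the paper's actual model, feature set or dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Hypothetical patient features: dose, bodyweight, age, time since last dose, creatinine clearance
X = rng.standard_normal((300, 5))
y = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * rng.standard_normal(300)   # measured concentration

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(svr, X, y, cv=5, scoring="neg_mean_absolute_error")
print("cross-validated mean absolute error:", -scores.mean())
```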

Relevance: 30.00%

Abstract:

Automatic environmental monitoring networks supported by wireless communication technologies now provide large and ever-increasing volumes of data. The use of this information in natural hazard research is an important issue. Spatial maps of hazard-related parameters, produced from point observations and available auxiliary information, are particularly useful for risk assessment and decision making. The purpose of this article is to present and explore appropriate tools for processing large amounts of available data and producing predictions at fine spatial scales. These are machine learning algorithms, aimed at the non-parametric, robust modelling of non-linear dependencies from empirical data. The computational efficiency of the data-driven methods allows prediction maps to be produced in real time, which makes them superior to physical models for operational use in risk assessment and mitigation. This situation is encountered in particular in the spatial prediction of climatic variables (topo-climatic mapping). In the complex topographies of mountainous regions, meteorological processes are strongly influenced by the relief. The article shows how these relations, possibly regionalized and non-linear, can be modelled from data using information from digital elevation models. The methodology is illustrated by the mapping of temperature (including Föhn and temperature-inversion situations) from measurements taken by the Swiss meteorological monitoring network. The methods used in the study include data-driven feature selection, support vector algorithms and artificial neural networks.
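To make the workflow concrete, the sketch below shows one plausible form of such a data-driven topo-climatic model: a small multilayer perceptron trained on station measurements described by coordinates and DEM-derived terrain features, then applied to a prediction grid. The feature names and data are hypothetical, and the feature-selection step is omitted.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical station descriptors: x, y, elevation, slope, curvature (DEM-derived)
X_stations = rng.standard_normal((120, 5))
t_obs = 15.0 - 6.5 * X_stations[:, 2] + 0.5 * rng.standard_normal(120)   # observed temperature

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0))
model.fit(X_stations, t_obs)

X_grid = rng.standard_normal((10_000, 5))        # stands in for a DEM-based prediction grid
temperature_map = model.predict(X_grid)
```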

Relevance: 30.00%

Abstract:

OBJECTIVE: The purpose of this study was to adapt and improve a minimally invasive two-step postmortem angiographic technique for use on human cadavers. Detailed mapping of the entire vascular system is almost impossible with conventional autopsy tools. The technique described should be valuable in the diagnosis of vascular abnormalities. MATERIALS AND METHODS: Postmortem perfusion with an oily liquid is established with a circulation machine. An oily contrast agent is introduced as a bolus injection, and radiographic imaging is performed. In this pilot study, the upper or lower extremities of four human cadavers were perfused. In two cases, the vascular system of a lower extremity was visualized with anterograde perfusion of the arteries. In the other two cases, in which the suspected cause of death was drug intoxication, the veins of an upper extremity were visualized with retrograde perfusion of the venous system. RESULTS: In each case, the vascular system was visualized up to the level of the small supplying and draining vessels. In three of the four cases, vascular abnormalities were found. In one instance, a venous injection mark engendered by the self-administration of drugs was rendered visible by exudation of the contrast agent. In the other two cases, occlusion of the arteries and veins was apparent. CONCLUSION: The method described is readily applicable to human cadavers. After establishment of postmortem perfusion with paraffin oil and injection of the oily contrast agent, the vascular system can be investigated in detail and vascular abnormalities rendered visible.

Relevance: 30.00%

Abstract:

AIM: People suffering from mental illness are exposed to stigma. However, only a few tools are available to assess stigmatization as perceived from the patient's perspective. The aim of this study was to adapt and validate a French version of the Stigma Scale (King et al., 2007 [8]). This self-report questionnaire has a three-factor structure: discrimination, disclosure and positive aspects of mental illness. The discrimination subscale refers to perceived negative reactions of others. The disclosure subscale refers mainly to managing disclosure in order to avoid discrimination, and the positive aspects subscale taps into how patients become more accepting of and more understanding toward their illness. METHOD: In a first step, the internal consistency, convergent validity and test-retest reliability of the French adaptation of the 28-item scale were assessed in a sample of 183 patients. Results of confirmatory factor analyses (CFA) did not confirm the hypothesized structure. In the light of the failed attempts to validate the original version, an alternative 9-item short-form version of the Stigma Scale, maintaining the integrity of the original model, was developed based on exploratory factor analyses in the first sample and cross-validated in a new sample of 234 patients. RESULTS: The CFA did not confirm that the data fitted the three-factor model of the 28-item Stigma Scale well (χ²/df=2.02, GFI=0.77, AGFI=0.73, RMSEA=0.07, CFI=0.77 and NNFI=0.75). Cronbach's α was excellent for the discrimination (0.84) and disclosure (0.83) subscales but poor for potential positive aspects (0.46). External validity was satisfactory. The overall Stigma Scale total score was negatively correlated with the Rosenberg Self-Esteem Scale score (r=-0.49), and each subscale was significantly correlated with a visual analogue scale referring to the corresponding aspect of stigma (0.43≤|r|≤0.60). Intraclass correlation coefficients between 0.68 and 0.89 indicated good test-retest reliability. The CFA showed that the items chosen for the short version of the Stigma Scale have the expected fit properties (χ²/df=1.02, GFI=0.98, AGFI=0.98, RMSEA=0.01, CFI=1.0 and NNFI=1.0). Considering the small number (three) of items in each subscale of the short version, the α coefficients for the discrimination (0.57), disclosure (0.80) and potential positive aspects (0.62) subscales are considered good. CONCLUSION: Our results suggest that the 9-item French short version of the Stigma Scale is a useful, reliable and valid self-report questionnaire for assessing perceived stigmatization in people suffering from mental illness. Completion time is very short, and the questions are well understood and accepted by patients.
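For reference, the sketch below computes one of the reliability statistics reported above, Cronbach's alpha for a single subscale, from a respondents-by-items matrix. The responses are simulated placeholders, not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the subscale total score
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Simulated placeholder responses for a 3-item subscale answered by 183 respondents
rng = np.random.default_rng(4)
latent = rng.normal(size=(183, 1))                 # underlying attitude
responses = latent + rng.normal(scale=1.0, size=(183, 3))
print(round(cronbach_alpha(responses), 2))
```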

Relevance: 30.00%

Abstract:

The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concern decision-oriented problems of environmental spatial data mining and modelling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residual sequential simulations (MLRSS). The model is based on multilayer perceptron and support vector regression ML algorithms used for modelling long-range spatial trends, combined with sequential simulation of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analysing the quality and quantity of the spatially structured information extracted from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
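A minimal sketch of the decomposition at the heart of the hybrid MLRSS idea, on synthetic data: an ML regressor (here an MLP over coordinates) models the long-range trend, and its residuals are what would subsequently be analysed by variography and fed into sequential simulation. That simulation step is only indicated in a comment; this is not the paper's implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
xy = rng.uniform(0.0, 100.0, size=(400, 2))                 # measurement coordinates
z = 0.05 * xy[:, 0] + np.sin(xy[:, 1] / 10.0) + 0.2 * rng.standard_normal(400)

trend_model = MLPRegressor(hidden_layer_sizes=(30,), max_iter=3000, random_state=0)
trend_model.fit(xy, z)
residuals = z - trend_model.predict(xy)                     # spatially correlated remainder

# The residuals would next be analysed by variography and reproduced with
# sequential (e.g. Gaussian) simulation to assess uncertainty and variability.
print("residual standard deviation:", round(float(residuals.std()), 3))
```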

Relevance: 20.00%

Abstract:

BACKGROUND: The WOSI (Western Ontario Shoulder Instability Index) is a self-administered quality-of-life questionnaire designed to be used as a primary outcome measure in clinical trials on shoulder instability, as well as to measure the effect of an intervention on any particular patient. It is validated, reliable and sensitive. As it is designed to measure subjective outcome, it is important that translation be methodologically rigorous, as it is subject to both linguistic and cultural interpretation. OBJECTIVE: To produce a French-language version of the WOSI that is culturally adapted to both European and North American French-speaking populations. MATERIALS AND METHODS: A validated protocol was used to create a French-language WOSI questionnaire (WOSI-Fr) that would be culturally acceptable for both European and North American French-speaking populations. Reliability and responsiveness analyses were carried out, and the WOSI-Fr was compared to the F-QuickDASH-D/S (Disability of the Arm, Shoulder and Hand, French translation) and Walch-Duplay scores. RESULTS: A French-language version of the WOSI (WOSI-Fr) was accepted by a multinational committee. The WOSI-Fr was then validated using a total of 144 native French-speaking subjects from Canada and Switzerland. Comparison of results on two WOSI-Fr questionnaires completed at a mean interval of 16 days showed that the WOSI-Fr had strong reliability, with a Pearson correlation of r=0.85 (P=0.01) and an intraclass correlation of ICC=0.84 [95% CI=0.78-0.88]. Responsiveness, at a mean of 378.9 days after surgical intervention, showed strong correlation with that of the F-QuickDASH-D/S, with r=0.67 (P<0.01). Moreover, a standardized response mean analysis to calculate effect size for both the WOSI-Fr and the F-QuickDASH-D/S showed that the WOSI-Fr had a significantly greater ability to detect change (SRM 1.55 versus 0.87 for the WOSI-Fr and F-QuickDASH-D/S respectively, P<0.01). The WOSI-Fr showed fair correlation with the Walch-Duplay score. DISCUSSION: A French-language translation of the WOSI questionnaire was created and validated for use in both Canadian and Swiss French-speaking populations. This questionnaire will facilitate outcome assessment in French-speaking settings, collaboration in multinational studies and comparison between studies performed in different countries. TYPE OF STUDY: Multicenter cohort study. LEVEL OF EVIDENCE: II.
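As a hedged illustration of the responsiveness statistic used in the comparison above, the sketch below computes a standardized response mean (SRM), i.e. the mean pre-to-post score change divided by the standard deviation of that change. The scores are simulated placeholders, not study data.

```python
import numpy as np

def standardized_response_mean(pre, post):
    """SRM = mean pre-to-post change divided by the standard deviation of the change."""
    change = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return change.mean() / change.std(ddof=1)

# Simulated placeholder scores before and after an intervention (not study data)
rng = np.random.default_rng(6)
pre = rng.uniform(20.0, 80.0, size=50)
post = pre + rng.normal(15.0, 10.0, size=50)
print(round(standardized_response_mean(pre, post), 2))
```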

Relevance: 20.00%

Abstract:

To what extent do, and could, e-tools contribute to a democracy like Switzerland's? This paper puts forward experiences and visions concerning the application of e-tools to the most traditional democratic processes: elections and, of special importance in Switzerland, direct-democratic votes. Having in mind the particular voting behaviour of the Swiss electorate (low voter turnout, especially among the youngest age group, low political knowledge, etc.), we believe that e-tools which provide information in the run-up to elections or direct-democratic votes offer an enormous service to the voter. As soon as e-voting becomes possible in Switzerland (as planned by the government), these e-tools for gathering information online will become indispensable and will gain enormously in importance. Political scientists should therefore focus not only on the potential effects of e-voting itself but also on the combination of (connected) e-tools of the pre-voting and voting spheres. In the case of Switzerland, we argue in this paper, the offer of VAAs such as smartvote for elections and direct-democratic votes can provide the voter with more balanced and higher-quality information and thereby make a valuable contribution to Swiss democracy.

Relevance: 20.00%

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold-standard TDM approach but requires computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user-friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to adjust dosage from a measured blood concentration (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses a non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user-friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be considered with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast for routine activities, including for non-experienced users. Computer-assisted TDM is gaining growing interest and should further improve, especially in terms of information-system interfacing, user-friendliness, data storage capability and report generation.
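To illustrate the kind of a posteriori Bayesian adjustment these tools perform, the sketch below computes a MAP estimate of clearance for a one-compartment steady-state model (Css = infusion rate / CL) with a log-normal prior and a single measured concentration, then rescales the infusion rate toward a target concentration. All parameter values are hypothetical and the model is deliberately simplistic; it is not the algorithm of any of the programs named above.

```python
import numpy as np

# Hypothetical one-compartment steady-state model: Css = infusion_rate / CL
pop_cl, omega = 5.0, 0.30        # population clearance (L/h) and log-scale SD of the prior
sigma = 0.15                     # residual (proportional) error, log scale
infusion_rate = 50.0             # current infusion rate (mg/h)
c_obs = 14.0                     # measured steady-state concentration (mg/L)
c_target = 10.0                  # desired target concentration (mg/L)

cl_grid = np.linspace(1.0, 15.0, 2000)
c_pred = infusion_rate / cl_grid
# Negative log posterior: prior term on log(CL) plus likelihood term on log(C)
neg_log_post = (
    (np.log(cl_grid) - np.log(pop_cl)) ** 2 / (2 * omega ** 2)
    + (np.log(c_obs) - np.log(c_pred)) ** 2 / (2 * sigma ** 2)
)
cl_map = cl_grid[np.argmin(neg_log_post)]            # MAP clearance for this patient

new_rate = c_target * cl_map                          # rate achieving the target at MAP clearance
print(f"MAP clearance ~ {cl_map:.2f} L/h; suggested infusion rate ~ {new_rate:.1f} mg/h")
```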

Relevance: 20.00%

Abstract:

Bone defects in revision knee arthroplasty are often located in load-bearing regions. The goal of this study was to determine whether a physiologic load could be used as an in situ osteogenic signal to the scaffolds filling the bone defects. In order to answer this question, we proposed a novel translation procedure having four steps: (1) determining the mechanical stimulus using finite element method, (2) designing an animal study to measure bone formation spatially and temporally using micro-CT imaging in the scaffold subjected to the estimated mechanical stimulus, (3) identifying bone formation parameters for the loaded and non-loaded cases appearing in a recently developed mathematical model for bone formation in the scaffold and (4) estimating the stiffness and the bone formation in the bone-scaffold construct. With this procedure, we estimated that after 3 years mechanical stimulation increases the bone volume fraction and the stiffness of scaffold by 1.5- and 2.7-fold, respectively, compared to a non-loaded situation.