156 results for Initial efficiency
Abstract:
Purpose: Cervical foraminal injection performed with a direct approach to the foramen may induce serious neurologic complications. Cervical facet joint (CFJ) injections are easier to perform and safer, and the injectate may diffuse into the epidural and foraminal spaces. We analyzed the efficiency and tolerance of CT-guided CFJ slow-acting corticosteroid injection in patients with radiculopathy related to disc herniation. Methods and materials: This pilot study included 17 patients presenting typical cervical radiculopathy related to disc herniation without pain relief after one month of medical treatment. CFJ puncture was performed under CT guidance with a lateral approach. CT control of CFJ opacification was performed after injection of contrast agent (1 ml), followed by a slow-acting corticosteroid (25 mg). The main outcome criterion was pain relief one month later (change in visual analog scale [VAS] score, 0 to 100 mm). Diffusion of iodinated contrast agent in the foramen was assessed by two radiologists in consensus. Results: Pain relief was significant at one month (delta VAS 22 ± 23 mm, p = 0.001) and 41% (7/17) of patients had pain relief of more than 50%. Pain relief of more than 50% occurred in 5 patients (50%) with foraminal diffusion and in only 2 patients (29%) without foraminal diffusion. No complication occurred. Conclusion: CT-guided CFJ slow-acting corticosteroid injection is safe and provided good results at one-month follow-up. It may be considered an interesting percutaneous treatment in patients suffering from cervical radicular pain related to disc herniation.
Abstract:
Early reperfusion with prompt re-establishment of coronary blood flow improves survival in patients suffering from acute ST-elevation myocardial infarction (STEMI). Abandoning systemic thrombolysis in favor of primary percutaneous coronary intervention (PCI) is justified by clinical results favoring PCI. Nevertheless, primary PCI necessitates additional transfer time and requires an efficient territorial network. The present article summarizes the up-to-date management of patients with acute STEMI and/or overt cardiogenic shock.
Abstract:
Superantigens (SAgs) encoded by infectious mouse mammary tumor viruses (MMTVs) play a crucial role in the viral life cycle. Their expression by infected B cells induces a proliferative immune response by SAg-reactive T cells which amplifies MMTV infection. This response most likely ensures stable MMTV infection and transmission to the mammary gland. Since T cell reactivity to SAgs from endogenous Mtv loci depends on MHC class II molecules expressed by B cells, we have determined the ability of MMTV to infect various MHC congenic mice. We show that MHC class II I-E+ compared with I-E- mouse strains show higher levels of MMTV infection, most likely due to their ability to induce a vigorous SAg-dependent immune response following MMTV encounter. Inefficient infection is observed in MHC class II I-E- mice, which have been shown to present endogenous SAgs poorly. Therefore, during MMTV infection the differential ability of MHC class II molecules to form a functional complex with SAg determines the magnitude of the proliferative response of SAg-reactive T cells. This in turn influences the degree of T cell help provided to infected B cells and therefore the efficiency of amplification of MMTV infection.
Abstract:
BACKGROUND: The risk of catheter-related infection or bacteremia, with initial and extended use of femoral versus nonfemoral sites for double-lumen vascular catheters (DLVCs) during continuous renal replacement therapy (CRRT), is unclear. STUDY DESIGN: Retrospective observational cohort study. SETTING & PARTICIPANTS: Critically ill patients on CRRT in a combined intensive care unit of a tertiary institution. FACTOR: Femoral versus nonfemoral venous DLVC placement. OUTCOMES: Catheter-related colonization (CRCOL) and bloodstream infection (CRBSI). MEASUREMENTS: CRCOL/CRBSI rates expressed per 1,000 catheter-days. RESULTS: We studied 458 patients (median age, 65 years; 60% males) and 647 DLVCs. Of 405 single-site-only DLVC users, 82% received exclusively femoral DLVCs (419 catheters) and 18% received exclusively jugular or subclavian DLVCs (82 catheters). The corresponding DLVC indwelling duration was 6±4 versus 7±5 days (P=0.03). Corresponding CRCOL and CRBSI rates (per 1,000 catheter-days) were 9.7 versus 8.8 events (P=0.8) and 1.2 versus 3.5 events (P=0.3), respectively. Overall, 96 patients with extended CRRT received femoral-site insertion first with subsequent site change, including 53 femoral guidewire exchanges, 53 new femoral venipunctures, and 47 new jugular/subclavian sites. CRCOL and CRBSI rates were similar for all such approaches (P=0.7 and P=0.9, respectively). On multivariate analysis, CRCOL risk was higher in patients older than 65 years and weighing >90kg (ORs of 2.1 and 2.2, respectively; P<0.05). This association between higher weight and greater CRCOL risk was significant for femoral DLVCs, but not for nonfemoral sites. Other covariates, including initial or specific DLVC site, guidewire exchange versus new venipuncture, and primary versus secondary DLVC placement, did not significantly affect CRCOL rates. LIMITATIONS: Nonrandomized retrospective design and single-center evaluation.
CONCLUSIONS: CRCOL and CRBSI rates in patients on CRRT are low and not influenced significantly by initial or serial femoral catheterizations with guidewire exchange or new venipuncture. CRCOL risk is higher in older and heavier patients, the latter especially so with femoral sites.
Abstract:
A 7-year-old right-handed girl developed complex partial seizures with a left-sided onset. A brief period of post-ictal aphasia of the conduction type was documented before seizure control and complete normalization of oral language were obtained. We also found that she had a history of previous unexplained difficulty with written language acquisition, which had occurred prior to the clinically recognized epilepsy, and a subsequent loss of this ability. This rapidly improved with control of the epilepsy. The evolution of her written language has been followed for 3 years, and continued improvement has occurred, with fluctuations related to her epilepsy. This observation adds support to the growing body of data indicating that specific cognitive disturbances can be due to epilepsy in young children. It shows the vulnerability of skills that are in a period of active development, and the possibility that oral and written language can be differentially affected by cerebral dysfunction in the young child.
Abstract:
Schizophrenia is postulated to be the prototypical dysconnection disorder, in which hallucinations are the core symptom. Due to high heterogeneity in methodology across studies and in the clinical phenotype, it remains unclear whether the structural brain dysconnection is global or focal, and whether clinical symptoms result from this dysconnection. In the present work, we attempt to clarify this issue by studying a population considered a homogeneous genetic subtype of schizophrenia, namely the 22q11.2 deletion syndrome (22q11.2DS). Cerebral MRIs were acquired for 46 patients and 48 age- and gender-matched controls (aged 6-26 years; mean age = 15.20 ± 4.53 and 15.28 ± 4.35 years, respectively). Using the Connectome Mapper pipeline (connectomics.org), which combines structural and diffusion MRI, we created a whole-brain network for each individual. Graph theory was used to quantify the global and local properties of the brain network organization for each participant. A global degree loss of 6% was found in patients' networks, along with an increased characteristic path length. After identifying and comparing hubs, a significant loss of degree in patients' hubs was found in 58% of the hubs. Based on Allen's brain network model for hallucinations, we explored the association between local efficiency and symptom severity. Negative correlations were found in Broca's area (p < 0.004) and Wernicke's area (p < 0.023), and a positive correlation was found in the dorsolateral prefrontal cortex (DLPFC) (p < 0.014). In line with the dysconnection findings in schizophrenia, our results provide preliminary evidence for a targeted alteration in the organization of brain network hubs in individuals with a genetic risk for schizophrenia. The study of specific disorganization in language, speech and thought regulation networks sharing similar network properties may help to understand their role in the hallucination mechanism.
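The network measures used above (mean degree, characteristic path length, hub identification by degree, local efficiency) are standard graph-theory quantities; a minimal sketch with networkx on a toy small-world graph. The graph, its size, and the mean + 1 SD hub cutoff are illustrative assumptions, not the study's connectomes:

```python
import statistics

import networkx as nx

# Toy stand-in for a brain network: nodes = regions, edges = connections.
G = nx.connected_watts_strogatz_graph(n=20, k=4, p=0.1, seed=42)

# Mean node degree across the network.
mean_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

# Characteristic path length: average shortest-path length over node pairs.
cpl = nx.average_shortest_path_length(G)

# Hubs: here, nodes whose degree exceeds mean + 1 SD (illustrative cutoff).
degrees = [d for _, d in G.degree()]
cutoff = statistics.mean(degrees) + statistics.stdev(degrees)
hubs = [n for n, d in G.degree() if d > cutoff]

# Local efficiency: average efficiency of each node's neighbourhood subgraph.
local_eff = nx.local_efficiency(G)
```

Group comparisons such as the 6% degree loss reported above would then be differences in these quantities between patient and control networks.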
Abstract:
PURPOSE OF THE STUDY: This prospective study reports our preliminary results with local anaesthesia (LA) for carotid endarterectomy (CEA). MATERIAL AND METHODS: Twenty CEAs in nineteen patients were performed using a three-stage local infiltration technique. CEAs were performed through a short Duplex-assisted skin incision (median length: 55 mm) using a retro-jugular approach and polyurethane patch closure (median length: 35 mm). RESULTS: There were 13 men and 6 women with a mean age of 71.2 years. The indications for CEA were asymptomatic lesions in 11 cases, stroke in 7 cases and transient ischaemic attack in 2 cases. The median degree of internal carotid artery stenosis was 90%. One patient (5%) required an intraluminal shunt. There were no peri-operative deaths, strokes or conversions to general anaesthesia (GA). The median length of stay was 3 days. CONCLUSIONS: LA is a good alternative to GA. It can be used after a feasibility study and a short teaching procedure. In our centre, it is a safe and effective procedure associated with low morbidity, high acceptance by patients and a short hospital stay.
Abstract:
In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on queue formation. In a first case, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or highly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that customer behaviour and provider decisions exhibit strong path dependence. Furthermore, we show that the providers' decisions cause the weighted average waiting time to converge towards the market's benchmark waiting time. Finally, a laboratory experiment in which subjects played the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect the subjects' performance and decisions.
In particular, the provider's decisions are influenced by his order backlog, his currently available service capacity, and the capacity adjustment decisions he has taken but not yet implemented. - Queuing is a fact of life that we witness daily. We all have had the experience of waiting in line for some reason and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. The individual behaviour of the agents involved in queueing systems and their decision-making process have received little attention. Although this work has been useful to improve the efficiency of many queueing systems, or to design new processes in social and physical systems, it has only provided us with a limited ability to explain the behaviour observed in many real queues. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects in queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take into account uncertainty. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically achieve higher sojourn times; in particular, they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile which is determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for their previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked-in" into a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, take decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who currently do not patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled: gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently, we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects of the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical literature related to queueing systems, which focuses on optimising performance measures and the analysis of equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is a large potential for further work spanning several research topics. Interesting extensions to this work include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information to take their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
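The adaptive-expectations mechanism described above, where customers blend their memory with new information and each period join the facility with the lowest expected sojourn time, can be sketched as follows. The number of facilities, the weight theta, and the uniformly distributed sojourn times are illustrative assumptions, not the thesis's calibrated model:

```python
import random

def update_expectation(memory, new_info, theta):
    """Adaptive expectations: convex blend of memory and the latest
    observation. Larger theta = 'reactive' customer, smaller = 'conservative'."""
    return theta * new_info + (1 - theta) * memory

random.seed(1)
expectations = [10.0, 10.0, 10.0]  # expected sojourn time at each facility
theta = 0.3  # fairly conservative customers (illustrative value)

for period in range(50):
    # Join the facility with the lowest expected sojourn time.
    choice = min(range(len(expectations)), key=lambda i: expectations[i])
    # Hypothetical sojourn time actually experienced at that facility.
    observed = random.uniform(5.0, 15.0)
    expectations[choice] = update_expectation(expectations[choice],
                                              observed, theta)
```

In the thesis's cellular-automata setting, the observation would also incorporate the experience of the customer's best performing neighbour rather than a random draw.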
Abstract:
Because data on rare species are usually sparse, it is important to have efficient ways to sample additional data. Traditional sampling approaches are of limited value for rare species because a very large proportion of randomly chosen sampling sites are unlikely to shelter the species. For these species, spatial predictions from niche-based distribution models can be used to stratify the sampling and increase sampling efficiency. The newly sampled data are then used to improve the initial model. Applying this approach repeatedly is an adaptive process that may increase the number of new occurrences found. We illustrate the approach with a case study of a rare and endangered plant species in Switzerland and a simulation experiment. Our field survey confirmed that the method helps in the discovery of new populations of the target species in remote areas where the predicted habitat suitability is high. In our simulations, the model-based approach provided a significant improvement (by a factor of 1.8 to 4, depending on the measure) over simple random sampling. In terms of cost, this approach may save up to 70% of the time spent in the field.
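The gain from model-based stratified sampling over simple random sampling can be illustrated with a toy simulation; the landscape, the suitability-presence relationship, and all numbers below are invented for illustration and are not the study's data or its reported 1.8-4x improvement:

```python
import random

random.seed(0)

# Hypothetical landscape: each site has a model-predicted habitat
# suitability in [0, 1] and a hidden true presence of the rare species,
# assumed here to be more likely in highly suitable habitat.
sites = []
for _ in range(10_000):
    s = random.random()
    sites.append((s, random.random() < 0.05 * s ** 2))

def random_sampling(n):
    """Simple random sampling: visit n randomly chosen sites."""
    return sum(present for _, present in random.sample(sites, n))

def model_based_sampling(n):
    """Stratified sampling: visit the n sites the niche model ranks highest."""
    ranked = sorted(sites, key=lambda site: site[0], reverse=True)
    return sum(present for _, present in ranked[:n])

budget = 500  # number of sites we can afford to visit
hits_random = random_sampling(budget)
hits_model = model_based_sampling(budget)
```

In the adaptive version described above, the occurrences found would be fed back into the model before the next sampling round.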
Abstract:
A HeLa cell nuclear transcription extract that is approximately 20 times more efficient than standard HeLa cell transcription extracts was developed. Transcription of the strong adenovirus II major late promoter by this extract results in the synthesis of 1.5-4 molecules of product RNA per molecule of template, indicating that the extract is capable of multiple rounds of initiation. Standard HeLa cell nuclear extracts transcribe closed circular and linear adenovirus major late promoter templates with equal efficiency. In contrast, the new extract transcribes a closed circular major late promoter template approximately twice as efficiently as a linear one.
Abstract:
PURPOSE: To visualize coronary blood flow in the right and left coronary systems of volunteers and patients by means of a modified inversion-prepared bright-blood coronary magnetic resonance angiography (cMRA) sequence. MATERIALS AND METHODS: cMRA was performed in 14 healthy volunteers and 19 patients on a 1.5 Tesla MR system using a free-breathing 3D balanced turbo field echo (b-TFE) sequence with radial k-space sampling. For magnetization preparation, a slab-selective and a 2D selective inversion pulse were used for the right and left coronary systems, respectively. cMRA images were evaluated for clinically relevant stenoses (> 50 %) and compared to conventional catheter angiography. Signal was measured in the coronary arteries (coro), the aorta (ao) and the epicardial fat (fat) to determine SNR and CNR. In addition, maximal visible vessel length and vessel border definition were analyzed. RESULTS: The use of a selective inversion pre-pulse allowed direct visualization of the coronary blood flow in the right and left coronary systems. The measured SNR and CNR, vessel length, and vessel sharpness in volunteers (SNR coro: 28.3 +/- 5.0; SNR ao: 37.6 +/- 8.4; CNR coro-fat: 25.3 +/- 4.5; LAD: 128.0 mm +/- 8.8; RCA: 74.6 mm +/- 12.4; sharpness: 66.6 % +/- 4.8) were slightly higher than those in patients (SNR coro: 24.1 +/- 3.8; SNR ao: 33.8 +/- 11.4; CNR coro-fat: 19.9 +/- 3.3; LAD: 112.5 mm +/- 13.8; RCA: 69.6 mm +/- 16.6; sharpness: 58.9 % +/- 7.9; n.s.). In the patient study, the assessment of 42 coronary segments led to the correct identification of 10 clinically relevant stenoses. CONCLUSION: The modification of a previously published inversion-prepared cMRA sequence allowed direct visualization of the coronary blood flow in the right as well as the left coronary system. In addition, this sequence proved to be highly sensitive in the assessment of clinically relevant stenotic lesions.
Abstract:
Purpose: To examine the efficacy and safety of repeat deep sclerectomy (DS) versus Baerveldt shunt (BS) implantation as second-line surgery following failed primary DS. Methods: Fifty-one patients were prospectively recruited to undergo BS implantation following failed DS, and 51 patients underwent repeat DS, for whom data were collected retrospectively. All eyes had at least one failed DS. Surgical success was defined as IOP ≤ 21 mmHg and a 20% reduction in IOP from baseline. Success rates, number of glaucoma medications (GMs), IOP, and complication rates were compared between the two groups at year 1 postoperatively. Results: Mean age, sex distribution and the proportion of glaucoma subtypes were similar between groups. Preoperatively, IOP was significantly lower in the DS group than in the BS group (18.8 mmHg vs 23.8 mmHg, p < 0.01, two-sample t-test). Postoperatively, IOP was significantly higher in the DS group than in the BS group (14.6 mmHg vs 12.0 mmHg, p < 0.01, two-sample t-test). In the DS group, 47% of eyes did not achieve a 20% reduction in IOP from baseline; as a result, success rates were significantly lower in eyes with DS (51%) than in eyes with BS (88%) (p = 0.02, log-rank test). Preoperatively, the numbers of GMs used in the DS and BS groups were similar (2.2 vs 2.7, p = 0.02, two-sample t-test). Postoperatively, there remained no significant difference in GMs between groups (0.9 vs 1.1, p = 0.58, two-sample t-test). Complication rates were similar between the two groups (12% vs 10%). Conclusions: Baerveldt tube implantation was more effective in lowering IOP than repeat deep sclerectomy in eyes with failed primary DS at year one. Complications were minor and infrequent in both groups.
Abstract:
Photons participate in many atomic and molecular interactions and changes. Recent biophysical research has demonstrated the induction of ultraweak photon emission in biological tissue. It is now established that plant, animal and human cells emit a very weak radiation which can be readily detected with an appropriate photomultiplier system. Although the emission is extremely low in mammalian cells, it can be efficiently induced by ultraviolet light. In our studies, we used the differentiation system of human skin fibroblasts from a patient with Xeroderma Pigmentosum of complementation group A in order to test the growth stimulation efficiency of various bone growth factors at concentrations as low as 5 ng/ml of cell culture medium. In additional experiments, the cells were irradiated with a moderate fluence of ultraviolet A. The different batches of growth factors produced varying degrees of proliferation of skin fibroblasts in culture, which could be correlated with the ultraweak photon emission. The growth factors reduced the acceleration of fibroblast differentiation induced by mitomycin C by 10-30%. Given that fibroblasts play an essential role in skin aging and wound healing, the fibroblast differentiation system is a very useful tool for elucidating the efficacy of growth factors.
Abstract:
This thesis is devoted to the analysis, modelling and visualisation of spatially referenced environmental data using machine learning algorithms. Machine learning can be broadly considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, non-linear, robust and efficient modelling tools. They can solve classification, regression and probability density modelling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to geographical coordinates. Moreover, they are well suited to implementation as decision-support tools for environmental questions ranging from pattern recognition to modelling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data including geo-features. The most important and popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences.
The main algorithms described are the MultiLayer Perceptron (MLP), the best-known algorithm in artificial intelligence; General Regression Neural Networks (GRNN); Probabilistic Neural Networks (PNN); Self-Organising Maps (SOM); Gaussian Mixture Models (GMM); Radial Basis Function Networks (RBF); and Mixture Density Networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory Data Analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of Exploratory Spatial Data Analysis (ESDA) are treated both in the traditional geostatistical approach, with experimental variography, and according to machine learning principles. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations; it detects the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbours method, which is very simple and has excellent interpretation and visualisation properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The General Regression Neural Network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools, and worked examples. An important part of the work is a collection of software tools: Machine Learning Office. This software collection has been developed over the last 15 years and has been used in teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals; classification of soil types and hydrogeological units; uncertainty mapping for decision support; and natural hazard (landslide, avalanche) assessment. Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to create a user-friendly, easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence. It mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. They are competitive in efficiency with geostatistical models in low-dimensional geographical spaces, but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, which helps to detect the presence of spatial patterns, at least those described by two-point statistics.
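The experimental variography mentioned above reduces to averaging squared value differences over point pairs grouped by separation distance; a minimal sketch of an omnidirectional experimental semivariogram on an invented toy field (the grid, values and lag width are illustrative assumptions):

```python
import math
from collections import defaultdict

def experimental_variogram(points, values, lag_width=1.0):
    """Omnidirectional experimental semivariogram: for each distance bin,
    gamma(h) = mean of 0.5 * (z_i - z_j)**2 over pairs roughly h apart."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            lag = int(math.dist(points[i], points[j]) / lag_width)
            sums[lag] += 0.5 * (values[i] - values[j]) ** 2
            counts[lag] += 1
    # Return bin-centre distance -> mean semivariance.
    return {(lag + 0.5) * lag_width: sums[lag] / counts[lag]
            for lag in sorted(sums)}

# Toy field with a spatial trend: semivariance should grow with distance.
pts = [(x, y) for x in range(5) for y in range(5)]
vals = [x + 0.1 * y for x, y in pts]
gamma = experimental_variogram(pts, vals, lag_width=1.0)
```

For the anisotropy analysis described above, pairs would additionally be binned by direction, not only by distance.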
A machine learning approach to ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a highly topical subject, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals; classification of soil types and hydro-geological units; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
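A GRNN prediction is, in essence, a Nadaraya-Watson kernel-weighted average of the training targets, with the kernel width sigma as its single tuning parameter; a minimal sketch on invented toy spatial data (not the SIC 2004 data, and independent of the Machine Learning Office implementation):

```python
import math

def grnn_predict(x_query, X_train, y_train, sigma=1.0):
    """GRNN / Nadaraya-Watson kernel regression: the prediction is a
    Gaussian-weighted average of the training targets, weighted by each
    training point's squared distance to the query."""
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(x_query, x))
                 / (2.0 * sigma ** 2))
        for x in X_train
    ]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Toy spatial data: (x, y) coordinates with a measured value at each point.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y = [1.0, 2.0, 3.0, 4.0]

# Interpolate at the centre of the unit square; all four training points
# are equidistant, so the estimate is the plain mean of the targets.
estimate = grnn_predict((0.5, 0.5), X, y, sigma=0.5)
```

A small sigma makes the prediction collapse onto the nearest training point; a large sigma smooths towards the global mean, which is why sigma is typically tuned by cross-validation.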