19 results for Software Design Pattern
at Université de Lausanne, Switzerland
Abstract:
In recent years, Business Model Canvas design has evolved from a paper-based activity to one that uses dedicated computer-aided business model design tools. We propose a set of guidelines to help design more coherent business models. When combined with the functionalities offered by CAD tools, these guidelines show great potential to improve business model design as an ongoing activity. However, before creating more complex solutions, it is necessary to compare how basic business model design tasks are performed with a CAD system versus its paper-based counterpart. To this end, we carried out an experiment to measure user perceptions of both solutions. Performance was evaluated by applying our guidelines to both solutions and then comparing the resulting business model designs. Although CAD did not outperform paper-based design, the results are very encouraging for the future of computer-aided business model design.
Abstract:
The prediction of binding modes (BMs) occurring between a small molecule and a target protein of biological interest has become of great importance for drug development. The overwhelming diversity of needs leaves room for docking approaches that address specific problems. Nowadays, the universe of docking software ranges from fast and user-friendly programs to algorithmically flexible and accurate approaches. EADock2 is an example of the latter. Its multiobjective scoring function was designed around the CHARMM22 force field and the FACTS solvation model. However, the major drawback of such a software design lies in its computational cost. EADock dihedral space sampling (DSS) is built on the most efficient features of EADock2, namely its hybrid sampling engine and multiobjective scoring function. Its performance is equivalent to that of EADock2 for drug-like ligands, while the required CPU time has been reduced by several orders of magnitude. This huge improvement was achieved through a combination of several innovative features, including an automatic bias of the sampling toward putative binding sites and a very efficient tree-based DSS algorithm. When the top-scoring prediction is considered, 57% of the BMs in a test set of 251 complexes were reproduced within 2 Å RMSD of the crystal structure. Up to 70% were reproduced when the five top-scoring predictions were considered. The success rate is lower in cross-docking assays but remains comparable with that of the latest version of AutoDock, which accounts for protein flexibility.
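The 2 Å RMSD threshold above is the standard success criterion in docking benchmarks. As a hedged illustration (this is background on the metric, not EADock code), the following sketch computes the RMSD between a predicted and a crystallographic ligand pose, assuming the two coordinate arrays list the same atoms in the same order:

```python
import numpy as np

def pose_rmsd(predicted: np.ndarray, crystal: np.ndarray) -> float:
    """RMSD (in Angstroms) between two atom-matched ligand poses.

    predicted, crystal: (N, 3) Cartesian coordinate arrays for the same
    N atoms in the same order. No superposition is applied, since docking
    poses are compared in the fixed receptor frame.
    """
    diff = predicted - crystal
    return float(np.sqrt((diff ** 2).sum() / len(predicted)))

# A binding mode is typically counted as reproduced when pose_rmsd <= 2.0.
```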
Abstract:
We have devised a program that allows computation of the power of the F-test, and hence determination of appropriate sample and subsample sizes, in the context of the one-way hierarchical analysis of variance with fixed effects. The power at a fixed alternative is an increasing function of the sample size and of the subsample size. The program makes it easy to obtain the power of the F-test for a range of sample and subsample sizes, and therefore the appropriate sizes for a desired power. The program can be used for the 'ordinary' case of the one-way analysis of variance, as well as for hierarchical analysis of variance with two stages of sampling. Examples are given of the practical use of the program.
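For the 'ordinary' one-way fixed-effects case, the computation such a program performs can be sketched with the noncentral F distribution. This is a hedged illustration (the group count k, per-group size n, and the Cohen's-f effect-size convention are assumptions, not details from the paper):

```python
from scipy import stats

def anova_power(k: int, n: int, effect_f: float, alpha: float = 0.05) -> float:
    """Power of the one-way fixed-effects ANOVA F-test.

    k: number of groups; n: observations per group;
    effect_f: Cohen's f, giving noncentrality lambda = f**2 * k * n.
    """
    df1, df2 = k - 1, k * (n - 1)
    nc = effect_f ** 2 * k * n                  # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)   # rejection threshold under H0
    return float(1 - stats.ncf.cdf(f_crit, df1, df2, nc))

# Power increases with n (and k), as stated above; scanning n until the
# desired power is reached yields the appropriate sample size.
print(anova_power(k=4, n=10, effect_f=0.4))
```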
Abstract:
BACKGROUND: The pattern of substrate utilization with diets containing a high or a low proportion of unavailable and slowly digestible carbohydrates may constitute an important factor in the control, time course, and onset of hunger in humans. OBJECTIVE: We tested the hypothesis that isoenergetic diets differing only in their content of unavailable carbohydrates would result in different time courses of total, endogenous, and exogenous carbohydrate oxidation rates. DESIGN: Two diets with either a high (H diet) or a low (L diet) content of unavailable carbohydrates were fed to 14 healthy subjects studied during two 24-h periods in a metabolic chamber. Substrate utilization was assessed by whole-body indirect calorimetry. In a subgroup of 8 subjects, endogenous and exogenous carbohydrate oxidation were assessed by prelabeling the body glycogen stores with [(13)C]carbohydrate. Subjective feelings of hunger were estimated with use of visual analogue scales. RESULTS: Total energy expenditure and substrate oxidation did not differ significantly between the 2 diets. However, there was a significant effect of diet (P = 0.03) on the carbohydrate oxidation pattern: the H diet elicited a lower and delayed rise of postprandial carbohydrate oxidation and was associated with lower hunger feelings than was the L diet. The differences in hunger scores between the 2 diets were significantly associated with the differences in the pattern of carbohydrate oxidation among diets (r = -0.67, P = 0.006). Exogenous and endogenous carbohydrate oxidation were not significantly influenced by diet. CONCLUSIONS: The pattern of carbohydrate utilization is involved in the modulation of hunger feelings. The greater suppression of hunger after the H diet than after the L diet may be helpful, at least over the short term, in individuals attempting to better control their food intake.
Abstract:
Introduction: Population ageing is a worldwide phenomenon that forces us to make radical changes on multiple levels of society. So far, studies have concluded that the health, both physical and mental, of prisoners in general and older prisoners in particular is worse than that of the general population. Prisoners are reported to age faster than adults in the community. However, to date, very little is known about the actual healthcare conditions of older prisoners, and almost no substantial knowledge is available concerning their patterns of healthcare use. Method: A quantitative study was conducted in four prisons for male prisoners in Switzerland, including two open and two closed prisons situated in different cantons. Medical records of older prisoners (50+) were obtained from the respective authority upon consent, and total anonymity was ensured. Data gathered from all available medical records included basic demographic information, education and prison sentencing. The healthcare data obtained were extensive, encompassing illness types, the number of visits to different healthcare providers and hospitals, and the corresponding reasons for and outcomes of these visits. All data were analysed using the statistical software SPSS 20.0. Results: Data were extracted for a total of 50 older prisoners living in Switzerland. The chosen prisons are located in German-speaking cantons. Preliminary results show that the average age was 56 years. For more than half, this was their first imprisonment. Nevertheless, a third of them were sentenced to measures (Art. 64 Swiss Criminal Code), which means that the length of detention is indefinite and, while release is possible, it is in most cases not very likely. This entails that these prisoners will grow old in prison and some will even spend their remaining years there. Concerning their health, a third of the sample reported respiratory and cardiovascular illnesses and half reported suffering from some form of musculoskeletal pain. Older prisoners were prescribed on average only 3.5 medications, significantly fewer than the number prescribed to younger prisoners, whose data were also sampled. Conclusion: Access to healthcare is a right granted to all prisoners through the principle of equivalence, which is generally exercised in Switzerland. Prisoners growing old in prison will represent a challenge for prison healthcare services.
Abstract:
Protein-protein interactions encode the wiring diagram of cellular signaling pathways, and their deregulation underlies a variety of diseases, such as cancer. Inhibiting protein-protein interactions with peptide derivatives is a promising way to develop new biological and therapeutic tools. Here, we develop a general framework to computationally handle hundreds of non-natural amino acid sidechains and predict the effect of inserting them into peptides or proteins. We first generate all structural files (pdb and mol2), as well as parameters and topologies for standard molecular mechanics software (CHARMM and Gromacs). Accurate predictions of rotamer probabilities are provided using a novel combined knowledge- and physics-based strategy. Non-natural sidechains are useful for increasing peptide ligand binding affinity. Our results on non-natural mutants of a BCL9 peptide targeting beta-catenin show very good correlation between predicted and experimental binding free energies, indicating that such predictions can be used to design new inhibitors. Data generated in this work, as well as PyMOL and UCSF Chimera plug-ins for user-friendly visualization of non-natural sidechains, are all available at http://www.swisssidechain.ch. Our results enable researchers to rapidly and efficiently work with hundreds of non-natural sidechains.
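The abstract does not spell out the combined knowledge- and physics-based rotamer strategy; as a loosely hedged sketch of the general idea only (the energies, library frequencies, and mixing weight below are illustrative assumptions, not the authors' method), rotamer probabilities can be formed by blending Boltzmann-weighted conformer energies with rotamer-library frequencies:

```python
import numpy as np

KT = 0.593  # kcal/mol at roughly 298 K

def rotamer_probabilities(energies, library_freqs, w_physics=0.5):
    """Blend physics-based Boltzmann weights with knowledge-based
    rotamer-library frequencies (all inputs are illustrative)."""
    energies = np.asarray(energies, dtype=float)
    boltzmann = np.exp(-(energies - energies.min()) / KT)
    boltzmann /= boltzmann.sum()
    library = np.asarray(library_freqs, dtype=float)
    library /= library.sum()
    mixed = w_physics * boltzmann + (1.0 - w_physics) * library
    return mixed / mixed.sum()

# Example: three rotamers with mock energies (kcal/mol) and library counts.
print(rotamer_probabilities([0.0, 0.8, 2.5], [60, 30, 10]))
```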
Abstract:
To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its genetic determination is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) is predicted under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between Q(ST) and F(ST) for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates Q(ST) relative to F(ST). Under inbreeding, the effect of dominance vanishes, and we show that for selfing species, a better estimate of Q(ST) is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene action.
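For reference, the standard definition of Q(ST) used in such comparisons, for a diploid outcrossing population (textbook background, not a formula quoted from the paper), is:

```latex
% \sigma^2_{GB}: between-population additive genetic variance
% \sigma^2_{GW}: within-population additive genetic variance
Q_{ST} = \frac{\sigma^{2}_{GB}}{\sigma^{2}_{GB} + 2\,\sigma^{2}_{GW}}
% Under neutrality and purely additive gene action, E[Q_{ST}] \approx F_{ST};
% dominance alters the variance components and, as the simulations above
% show, generally deflates Q_{ST} relative to F_{ST}.
```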
Abstract:
Modeling the concentration-response function has become extremely popular in ecotoxicology during the last decade. Indeed, modeling allows one to determine the total response pattern of a given substance. However, reliable modeling is demanding in terms of data, which contradicts the current trend in ecotoxicology of reducing, for cost and ethical reasons, the amount of data produced during an experiment. It is therefore crucial to determine experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs to estimate the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and often related to their meaning, i.e. the concentrations are located close to the values implied by the parameters. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in the case of long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of on preliminary experiments.
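As a hedged sketch of the locally D-optimal approach (the two-parameter log-logistic model, nominal values, and exhaustive search below are illustrative assumptions, not the paper's exact setup), one can choose the concentration set that maximizes the determinant of the Fisher information matrix evaluated at nominal parameter values:

```python
import numpy as np
from itertools import combinations

def gradient(c, ec50, slope):
    """Gradient of the two-parameter log-logistic response
    f(c) = 1 / (1 + (c/ec50)**slope) with respect to (ec50, slope)."""
    r = (c / ec50) ** slope
    f = 1.0 / (1.0 + r)
    return np.array([f**2 * r * slope / ec50,          # d f / d ec50
                     -f**2 * r * np.log(c / ec50)])    # d f / d slope

def d_criterion(concs, ec50, slope):
    """Determinant of the information matrix, one observation per concentration."""
    M = sum(np.outer(g, g) for g in (gradient(c, ec50, slope) for c in concs))
    return np.linalg.det(M)

ec50, slope = 1.0, 2.0                # nominal values (assumed known a priori)
candidates = np.logspace(-2, 2, 50)   # candidate test concentrations
# As noted above, the optimal design often needs only as many support
# points as there are parameters; here we search all two-point designs.
best = max(combinations(candidates, 2),
           key=lambda cs: d_criterion(cs, ec50, slope))
print(best)
```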
Abstract:
OBJECTIVE: Glycodelin (PP14) is produced by the epithelium of the endometrium, and its determination in the serum is used for the functional evaluation of this tissue. Given the complex regulation and the combined contraceptive and immunosuppressive roles of glycodelin, the current lack of normal values for its serum concentration in the physiological menstrual cycle, derived from a large number of samples, is a problem. We have therefore established reference values from over 600 sera. DESIGN: Retrospective study using banked serum samples. SETTING: University hospital. METHODS: Blood samples were measured daily or every second day during one full cycle. MAIN OUTCOME MEASURES: Serum concentrations of glycodelin were measured, and normal values for every such one- or two-day interval were calculated. Late luteal phase glycodelin levels were compared with ovarian hormones. Follicular phase levels were compared with stimulated cycles from patients undergoing in vitro fertilization. RESULTS: Glycodelin concentrations were low around ovulation. The highest levels were observed at the end of the luteal phase; the glycodelin serum peak was reached 6-8 days after that of progesterone. Late luteal glycodelin levels correlated negatively with body mass index and positively with the progesterone level earlier in the secretory (mid-luteal) phase of the same woman. No associations with other ovarian hormones were observed. Follicular phase glycodelin levels were higher in the spontaneous than in the in vitro fertilization cycles. CONCLUSIONS: Normal values taken at one- or two-day intervals demonstrate the very late appearance of high serum glycodelin levels during the physiological menstrual cycle and their correlation with progesterone levels earlier in the cycle.
Abstract:
At the beginning of this century, the new social organization of work poses a series of questions and challenges to researchers who intend to help people design their working lives. In the era of globalization, which extends to career counseling as well, we decided to address these issues and to provide innovative answers through the creation of an international discussion forum. We proceeded in this way to avoid the difficulties generally encountered when models and methods are developed in one country and then exported to other cultures with the intention of adapting them. This article presents the first results of that collaboration - a model and methods for career counseling. The intervention model within the Life Design approach is characterized by five presuppositions about how people and their working lives are viewed: contextual possibilities, dynamic processes, non-linear progress, multiple perspectives, and personal patterns. Starting from these five presuppositions, we developed a model based on the epistemology of social constructionism, which holds that an individual's knowledge and identity are the products of social interaction and that meaning is constructed through discourse. This approach also draws on Guichard's (2005) theory of self-construction and Savickas's (2005) theory of career construction, which describe the actions useful for facilitating the design of one's own future. Particular attention is given to interventions that are lifelong, holistic, contextual, and preventive.
Abstract:
Background: The Valais cancer registry (RVsT) of the Observatoire valaisan de la santé (OVS) and the department of oncology of the Valais Hospital conducted a study on the epidemiology and patterns of care of colorectal cancer in Valais. Colorectal cancer is the third leading cause of cancer death in Switzerland, with about 1600 deaths per year. It is the third most frequent cancer in males and the second most frequent in females in Valais. The number of new colorectal cancer cases (average per year) increased between 1989 and 2009 for both males and females in Valais. The number of colorectal cancer deaths (average per year) increased slightly between 1989 and 2009 for both males and females in Valais. Age-standardized incidence rates were stable for males and females in Valais and in Switzerland between 1989 and 2009, while age-standardized mortality rates decreased for males and females in Valais and Switzerland. Results: 774 cases were recorded (59% males). Median age at diagnosis was 70 years. Most cancers were invasive (79%) and the main localization was the colon (71%). The most frequent mode of detection was a consultation for non-emergency symptoms (75%), but almost 10% of patients presented as emergencies. 82% of patients were treated within 30 days of diagnosis. 90% of the patients were treated by surgery alone or as part of a combined treatment. The first treatment was surgery, including endoscopic resection, in 86% of the cases. The treatment differed according to the localization and the stage of the cancer. The survival rate was 95% at 30 days and 79% at one year. Survival depended on the stage and the age at diagnosis. The Cox model shows an association between mortality and age (better survival for younger patients) and between mortality and stage (better survival for lower stages). Methods: The RVsT has collected information on all cancer cases since 1989 for people registered in the communes of Valais. The RVsT has an authorization to collect non-anonymized data. All new incident cancers are coded according to the International Classification of Diseases for Oncology (ICD-O-3) and the stages are coded according to the TNM classification. We studied all cases of in situ and invasive colorectal cancer diagnosed between 2006 and 2009 and registered routinely at the RVsT. We checked for data completeness and, if necessary, sent questionnaires to avoid missing data. A distance of 15 cm was chosen to delimit colon (sigmoid) from rectal cancers. We carried out an active follow-up of vital status to obtain a valid survival analysis. We analyzed the characteristics of the tumors according to age, sex, localization and stage with Stata 9 software. Kaplan-Meier curves were generated and Cox models were fitted to analyze survival. Conclusion: The characteristics of patients and tumors and the one-year survival were similar to those observed in Switzerland and some European countries. Patterns of care were close to those recommended in guidelines. Routine data recorded in a cancer registry can be used not only to provide general statistics, but also to help clinicians assess local practices.
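The survival analyses were run in Stata 9; as a hedged, illustrative equivalent in Python (the lifelines library, the data frame, and its column names are assumptions, not the registry's actual pipeline), the Kaplan-Meier and Cox steps look like this:

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical extract of registry data: one row per patient.
df = pd.DataFrame({
    "months": [3, 14, 27, 6, 48, 33],   # follow-up time since diagnosis
    "died":   [1, 0, 1, 1, 0, 0],       # 1 = death observed, 0 = censored
    "age":    [72, 58, 81, 66, 60, 70],
    "stage":  [3, 1, 4, 2, 1, 2],       # TNM-derived stage group
})

# Kaplan-Meier estimate of overall survival.
km = KaplanMeierFitter()
km.fit(durations=df["months"], event_observed=df["died"])
print(km.survival_function_)

# Cox proportional-hazards model: mortality versus age and stage,
# mirroring the associations reported above.
cox = CoxPHFitter()
cox.fit(df, duration_col="months", event_col="died")
cox.print_summary()
```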
Abstract:
There is a lack of dedicated tools for business model design at a strategic level. However, in today's economic world the ability to quickly reinvent a company's business model is essential to stay competitive. This research focused on identifying the functionalities that are necessary in a computer-aided design (CAD) tool for the design of business models in a strategic context. Using the design science research methodology, a series of techniques and prototypes were designed and evaluated to offer solutions to the problem. The work is a collection of articles which can be grouped into three parts. The first part establishes the context of how the Business Model Canvas (BMC) is used to design business models and explores the way in which CAD can contribute to the design activity. The second part extends this by proposing new techniques and tools which support the elicitation, evaluation (assessment) and evolution of business model designs with CAD. This includes features such as multi-color tagging to easily connect elements, rules to validate the coherence of business models (illustrated in the sketch below), and features adapted to the business model proficiency level of the users. A new way to describe and visualize multiple versions of a business model, and thereby help address the business model as a dynamic object, was also researched. The third part explores extensions to the Business Model Canvas, such as an intermediary model which helps IT alignment by connecting the business model and the enterprise architecture, and a business model pattern for privacy in a mobile environment that uses privacy as a key value proposition. The prototyped techniques and propositions for using CAD tools in business model design will allow commercial CAD developers to create tools that are better suited to the needs of practitioners.
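As a hedged illustration of what such coherence rules might look like (the canvas representation and both rules are invented for illustration, not the thesis's actual implementation):

```python
# Minimal sketch: a Business Model Canvas as a dict of blocks, plus
# rules that flag incoherent designs.
canvas = {
    "customer_segments": ["commuters"],
    "value_propositions": ["mobile ticketing"],
    "channels": [],                       # deliberately left empty
    "revenue_streams": ["per-ride fee"],
}

def check_coherence(canvas):
    """Return a list of violated coherence rules (illustrative rules only)."""
    issues = []
    if canvas["customer_segments"] and not canvas["channels"]:
        issues.append("Each customer segment should be reached by a channel.")
    if canvas["value_propositions"] and not canvas["revenue_streams"]:
        issues.append("A value proposition should feed a revenue stream.")
    return issues

for issue in check_coherence(canvas):
    print("Incoherent:", issue)
```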
Abstract:
OBJECTIVE: Ghrelin stimulates GH release and causes weight gain through increased food intake and reduced fat utilization. Ghrelin levels were shown to rise in the preprandial period and decrease shortly after meal consumption, suggesting a role as a possible meal initiator. However, ghrelin secretion in fasting subjects has not yet been studied in detail. DESIGN: 24-h ghrelin profiles were studied in six healthy volunteers (three females; 25.5 years; body mass index 22.8 kg/m2) and compared with GH, insulin and glucose levels. METHODS: Blood samples were taken every 20 min during a 24-h fasting period, and total ghrelin levels were measured by RIA using a polyclonal rabbit antibody. The circadian pattern of ghrelin secretion and pulsatility (cluster analysis) were evaluated. RESULTS: An increase and spontaneous decrease in ghrelin were seen at the timepoints of customary meals. Ghrelin was secreted in a pulsatile manner with approximately 8 peaks/24 h. An overall decrease in ghrelin levels was observed during the study period. There was no correlation of ghrelin with GH, insulin or blood glucose levels. CONCLUSIONS: This pilot study indicates that fasting ghrelin profiles display a circadian pattern similar to that described in people eating three times per day. In a fasting condition, GH, insulin and glucose do not appear to be involved in ghrelin regulation. In addition, we found that ghrelin is secreted in a pulsatile pattern. The variation in ghrelin independently of meals in fasting subjects supports previous observations that it is the brain that is primarily involved in the regulation of meal initiation.
Abstract:
Context: Ovarian tumor (OT) typing is a competency expected from pathologists, with significant clinical implications. OTs, however, come in numerous different types, some rather rare, with the consequence that some departments have few opportunities for practice. Aim: Our aim was to design a tool for pathologists to train in the typing of less common OTs. Method and Results: Representative slides of 20 less common OTs were scanned (NanoZoomer Digital, Hamamatsu®) and the diagnostic algorithm proposed by Young and Scully was applied to each case (Young RH and Scully RE, Seminars in Diagnostic Pathology 2001, 18: 161-235) to include: recognition of morphological pattern(s); shortlisting of the differential diagnosis; and proposition of relevant immunohistochemical markers. The next steps of this project will be: evaluation of the tool in several post-graduate training centers in Europe and Québec; improvement of its design based on evaluation results; and diffusion to a larger public. Discussion: In clinical medicine, solving many cases is recognized as of utmost importance for a novice to become an expert. This project relies on virtual slide technology to provide pathologists with a learning tool aimed at increasing their skills in OT typing. After due evaluation, this model might be extended to other uncommon tumors.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with that of geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools - Machine Learning Office. The Machine Learning Office tools were developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
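A GRNN is, in essence, Nadaraya-Watson kernel regression with a Gaussian kernel, whose only free parameter is the bandwidth sigma. As a hedged sketch (the toy data and bandwidth are assumptions; this is not the Machine Learning Office implementation):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General Regression Neural Network prediction, i.e. Nadaraya-Watson
    kernel regression with a Gaussian kernel.

    X_train: (N, d) training coordinates and/or geo-features
    y_train: (N,)  observed values
    X_query: (M, d) locations to predict
    sigma:   kernel bandwidth, typically tuned by cross-validation
    """
    preds = np.empty(len(X_query))
    for i, x in enumerate(X_query):
        d2 = ((X_train - x) ** 2).sum(axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian kernel weights
        preds[i] = (w @ y_train) / w.sum()      # weighted average of targets
    return preds

# Toy spatial example: interpolate a value at the center of four samples.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])
print(grnn_predict(X, y, np.array([[0.5, 0.5]]), sigma=0.5))
```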