771 results for Fuzzy-Wavelet Neural Network


Relevance: 100.00%

Abstract:

In the first part of this research, three stages were defined for a programme to increase the information extracted from ink evidence and maximise its usefulness to the criminal and civil justice system. These stages are (a) develop a standard methodology for analysing ink samples by high-performance thin-layer chromatography (HPTLC) in a reproducible way, even when ink samples are analysed at different times, in different locations, and by different examiners; (b) compare ink samples automatically and objectively; and (c) define and evaluate a theoretical framework for the use of ink evidence in a forensic context. This report focuses on the second of the three stages. Using the calibration and acquisition process described in the previous report, mathematical algorithms are proposed to compare ink samples automatically and objectively. The performance of these algorithms is systematically studied under various chemical and forensic conditions using standard performance tests common in biometrics studies. The results show that different algorithms are best suited to different tasks. Finally, this report demonstrates how modern analytical and computer technology can be used in the field of ink examination, and how tools developed and successfully applied in other fields of forensic science can help maximise its impact within the field of questioned documents.
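
The report's comparison algorithms are not reproduced here; as an illustration of the biometric-style evaluation it mentions, the following sketch computes an equal error rate from hypothetical same-ink and different-ink similarity scores (the score distributions are invented for the example):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Estimate the equal error rate (EER) from two sets of similarity
    scores: pairs of samples from the same ink (genuine) and from
    different inks (impostor)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    frr = np.array([(genuine < t).mean() for t in thresholds])   # false rejections
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false acceptances
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2.0

# Toy example: correlation-like scores between HPTLC profiles.
rng = np.random.default_rng(0)
genuine = rng.normal(0.9, 0.05, 200)    # same-ink comparisons score high
impostor = rng.normal(0.6, 0.15, 2000)  # different-ink comparisons score lower
print(f"EER ~ {equal_error_rate(genuine, impostor):.3f}")
```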

Relevance: 100.00%

Abstract:

In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. MKL has recently gained great attention in the machine learning community owing to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm named SimpleMKL, which wraps a Support Vector Regression model to optimize the MKL weights, is used for the analysis. In this sense, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested on simulated linear and nonlinear time series (autoregressive, Hénon, and Lorenz series).
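
As a rough sketch of the idea (not the SimpleMKL implementation itself, which uses a reduced gradient with a line search), the following builds one RBF basis kernel per lagged input, wraps scikit-learn's SVR around the weighted kernel combination, and nudges the weights with a crude projected-gradient step; near-zero weights then flag irrelevant lags. The toy series, step size, and iteration count are assumptions:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

# Toy series in which only lags 1 and 3 matter.
rng = np.random.default_rng(1)
y = rng.normal(size=505)
for t in range(3, 505):
    y[t] = 0.6 * y[t - 1] - 0.4 * y[t - 3] + 0.1 * rng.normal()

p = 5                                   # candidate lagged inputs
T = len(y)
X = np.stack([y[p - 1 - k:T - 1 - k] for k in range(p)], axis=1)
target = y[p:]

Ks = [rbf_kernel(X[:, [k]]) for k in range(p)]   # one basis kernel per lag
d = np.full(p, 1.0 / p)                          # kernel weights on the simplex

for _ in range(30):
    K = sum(w * Kk for w, Kk in zip(d, Ks))
    svr = SVR(kernel="precomputed", C=10.0).fit(K, target)
    beta = np.zeros(len(target))
    beta[svr.support_] = svr.dual_coef_.ravel()
    # Gradient of the dual objective with respect to each kernel weight.
    grad = np.array([-0.5 * beta @ Kk @ beta for Kk in Ks])
    d = np.clip(d - 0.05 * (grad - grad.mean()), 0.0, None)
    d /= d.sum()                                 # stay on the simplex

print("kernel weights per lag:", np.round(d, 3))  # lags 1 and 3 tend to dominate
```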

Relevance: 100.00%

Abstract:

The amygdala is part of a neural network that contributes to the regulation of emotional behaviors. Rodents, especially rats, are used extensively as model organisms to decipher the functions of specific amygdala nuclei, in particular in relation to fear and emotional learning. Analysis of the role of the nonhuman primate amygdala in these functions has lagged behind work in rodents but provides evidence for the conservation of basic functions across species. Here we provide quantitative information regarding the morphological characteristics of the main amygdala nuclei in rats and monkeys, including neuron and glial cell numbers, neuronal soma size, and individual nucleus volumes. The volumes of the lateral, basal, and accessory basal nuclei were, respectively, 32, 39, and 39 times larger in monkeys than in rats. In contrast, the central and medial nuclei were only 8 and 4 times larger in monkeys than in rats. The numbers of neurons in the lateral, basal, and accessory basal nuclei were 14, 11, and 16 times greater in monkeys than in rats, whereas the numbers of neurons in the central and medial nuclei were only 2.3 and 1.5 times greater in monkeys than in rats. Neuron density was between 2.4 and 3.7 times lower in monkeys than in rats, whereas glial density was only between 1.1 and 1.7 times lower in monkeys than in rats. We compare our data in rats and monkeys with those previously published in humans and discuss the theoretical and functional implications that derive from our quantitative structural findings.

Relevance: 100.00%

Abstract:

The sustainability of marine resources and their ecosystem requires responsible fisheries management. Knowing the spatial distribution of fishing effort, and in particular of fishing operations, is essential for improving fisheries monitoring and analysing the vulnerability of species to fishing. In the Peruvian anchovy fishery, effort and catch information is currently collected through an on-board observer programme, but this represents a sample of only 2% of all fishing trips. On the other hand, the vessel monitoring system (VMS) provides the position of each vessel in the fleet roughly once per hour, although it indicates neither when nor where the fishing sets occurred. Artificial neural networks (ANN) could provide a statistical method capable of inferring that information: trained on the sample for which the positions of the fishing sets are known (the 2% mentioned above), they can establish analytical relationships between the sets and certain geometric features of the trajectories observed through the VMS, and thus identify the positions of fishing operations from the trajectories alone. Applying the neural network requires a prior analysis that examines the sensitivity of the network to variations in its parameters and training data, and that allows criteria to be developed for defining the network structure and interpreting its results appropriately. The first chapter details this problem as applied specifically to the anchovy (Engraulis ringens), while the second gives a theoretical review of neural networks. The construction and pre-processing of the database and the definition of the network structure prior to the sensitivity analysis are then described. Next, the results of the analysis are presented: we obtain an estimated detection of 100% of the fishing sets, of which approximately 80% are correctly located and 20% carry a location error. Finally, the strengths and weaknesses of the technique employed, potential alternative methods, and the perspectives opened by this work are discussed.
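
The thesis' network and feature set are not reproduced here; the sketch below illustrates the general scheme with a small scikit-learn multilayer perceptron trained on invented trajectory features (speed, heading change, emission interval) to flag fishing records:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-record features derived from VMS tracks: speed (knots),
# change of heading (degrees), and time since the previous emission (hours).
rng = np.random.default_rng(0)
n = 5000
fishing = rng.random(n) < 0.3
speed = np.where(fishing, rng.normal(2, 0.8, n), rng.normal(10, 2, n))
turn = np.where(fishing, rng.normal(60, 30, n), rng.normal(10, 10, n))
dt = rng.normal(1.0, 0.2, n)
X = np.column_stack([speed, turn, dt])

X_tr, X_te, y_tr, y_te = train_test_split(X, fishing, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```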

Relevance: 100.00%

Abstract:

The specific CD8+ T cell immune response against tumors relies on the recognition by the T cell receptor (TCR) on cytotoxic T lymphocytes (CTL) of antigenic peptides bound to the class I major histocompatibility complex (MHC) molecule. Such tumor-associated antigenic peptides are the focus of tumor immunotherapy with peptide vaccines. The strategy for obtaining an improved immune response often involves the design of modified tumor-associated antigenic peptides. Such modifications aim at creating higher-affinity and/or degradation-resistant peptides and require precise structures of the peptide-MHC class I complex. In addition, the modified peptide must be cross-recognized by CTLs specific for the parental peptide, i.e. it must preserve the structure of the epitope. Detailed structural information on the modified peptide in complex with MHC is necessary for such predictions. In this thesis, the main focus is the development of theoretical in silico methods for the prediction of both structure and cross-reactivity of peptide-MHC class I complexes. Applications of these methods in the context of immunotherapy are also presented. First, a theoretical method for structure prediction of peptide-MHC class I complexes is developed and validated. The approach is based on a molecular dynamics protocol to sample the conformational space of the peptide in its MHC environment. The sampled conformers are evaluated using conformational free energy calculations. The method, which is evaluated on its ability to reproduce 41 X-ray crystallographic structures of different peptide-MHC class I complexes, shows an overall prediction success of 83%. Importantly, in the clinically highly relevant subset of peptide-HLA-A*0201 complexes, the prediction success is 100%. Second, based on these structure predictions, a theoretical approach for the prediction of cross-reactivity is developed and validated. This method involves the generation of quantitative structure-activity relationships using three-dimensional molecular descriptors and a genetic neural network. The generated relationships are highly predictive, as shown by high cross-validated correlation coefficients (0.78-0.79). Together, the theoretical methods developed here open the door to the efficient rational design of improved peptides for use in immunotherapy.
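
The genetic neural network itself is not reproduced here; the following sketch conveys the flavour of the cross-reactivity step, scoring descriptor subsets by a cross-validated squared correlation (the figure of merit quoted above) with a small neural network. The data, the random subset search standing in for the genetic algorithm, and the network size are all invented:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

# Invented data: rows = modified peptides, columns = 3D molecular descriptors,
# y = measured cross-recognition of each analogue by CTL clones.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = X[:, 3] - 0.5 * X[:, 7] + 0.1 * rng.normal(size=60)   # toy relationship

def cv_q2(X_sub, y):
    """Cross-validated squared correlation between predicted and
    measured activity."""
    net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
    pred = cross_val_predict(net, X_sub, y, cv=5)
    return np.corrcoef(pred, y)[0, 1] ** 2

# Crude random search standing in for the genetic algorithm over descriptors.
best_mask, best_q2 = None, -np.inf
for _ in range(20):
    mask = rng.random(X.shape[1]) < 0.3
    if not mask.any():
        continue
    score = cv_q2(X[:, mask], y)
    if score > best_q2:
        best_mask, best_q2 = mask, score
print(f"best subset: {best_mask.nonzero()[0]}, cross-validated q2 = {best_q2:.2f}")
```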

Relevance: 100.00%

Abstract:

The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest-neighbour algorithm is considered. PNN is a neural network reformulation of the well-known nonparametric principles of probability density modelling, using a kernel density estimator and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNN is that it can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs were successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
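
A minimal PNN can be written directly from that definition: one Gaussian kernel density estimate per class combined with a maximum a posteriori decision. The sketch below (toy data, assumed kernel width) is not the paper's implementation:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.3, priors=None):
    """Minimal Probabilistic Neural Network: a Gaussian kernel density
    estimate per class combined with the Bayesian MAP decision rule.
    The common normalisation constant is dropped, as it does not
    affect the argmax."""
    classes = np.unique(y_train)
    if priors is None:  # default to class frequencies
        priors = {c: (y_train == c).mean() for c in classes}
    scores = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores[:, j] = priors[c] * np.exp(-d2 / (2 * sigma ** 2)).mean(1)
    return classes[scores.argmax(1)]

# Toy two-class spatial data.
rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.5, (100, 2))
X1 = rng.normal([1.5, 1.5], 0.5, (100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)
grid = rng.uniform(-1, 3, (5, 2))
print(pnn_predict(X, y, grid, sigma=0.4))
```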

Relevance: 100.00%

Abstract:

In this study, different environmental variables were evaluated for digital soil mapping in a region in the north of the State of Minas Gerais, Brazil, using artificial neural networks (ANN). The terrain attributes slope and compound topographic index (CTI), derived from a digital elevation model, three bands of the Quickbird sensor, and a lithology map were combined, and the importance of each variable for discriminating the mapping units was evaluated. The neural network simulator used was the Java Neural Network Simulator, with the backpropagation learning algorithm. For each set tested, one ANN was selected to predict the mapping units; the maps generated from these sets were compared with a soil map produced by the conventional method to determine the agreement between the classifications. This comparison showed that the map produced using all the environmental variables (slope, CTI, Quickbird bands 1, 2 and 3, and lithology) performed better (67.4% agreement) than the maps produced by the other variable sets. Of the variables used, slope contributed the greatest weight: when it was removed from the analysis, agreement was lowest (33.7%). The results demonstrate that this approach can help overcome some of the problems of soil mapping in Brazil, especially at scales larger than 1:25,000, making it faster and cheaper, particularly if high-spatial-resolution remote sensing data become available at lower cost and terrain attributes can be readily obtained in geographic information systems (GIS).
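
The agreement figures quoted above are simple per-pixel concordance rates between a predicted map and the conventional reference map; for illustration, a minimal computation on invented maps:

```python
import numpy as np

def overall_agreement(map_a, map_b):
    """Fraction of pixels on which two categorical maps agree
    (the 67.4% / 33.7% figures above are of this kind)."""
    a, b = np.asarray(map_a).ravel(), np.asarray(map_b).ravel()
    return (a == b).mean()

# Toy 10x10 maps with 4 mapping units; the prediction agrees with the
# reference about 70% of the time by construction.
rng = np.random.default_rng(0)
ref = rng.integers(0, 4, (10, 10))
pred = np.where(rng.random((10, 10)) < 0.7, ref, rng.integers(0, 4, (10, 10)))
print(f"agreement: {overall_agreement(pred, ref):.1%}")
```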

Relevance: 100.00%

Abstract:

The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems like air pollution, land waste or noise, as well as health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatiotemporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming the geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scale laws are used for characterising urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
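
As an illustration of the self-organising map technique mentioned above (not the thesis code), here is a bare-bones SOM in NumPy applied to an invented "communes x indicators" matrix; the grid size, learning schedule, and data are assumptions:

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organising map for exploring high-dimensional
    socio-economic indicators (a sketch, not the thesis implementation)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h * w, data.shape[1]))          # prototype vectors
    yy, xx = np.divmod(np.arange(h * w), w)
    coords = np.column_stack([yy, xx]).astype(float)     # unit positions on grid
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(1))           # best-matching unit
        frac = t / iters
        lr = lr0 * (1 - frac)                            # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5                # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        nbh = np.exp(-d2 / (2 * sigma ** 2))             # neighbourhood kernel
        W += lr * nbh[:, None] * (x - W)
    return W.reshape(h, w, -1)

# Toy "communes x indicators" matrix.
rng = np.random.default_rng(1)
data = rng.normal(size=(300, 12))
som = train_som(data)
print(som.shape)  # (8, 8, 12): one prototype profile per map unit
```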

Relevance: 100.00%

Abstract:

A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. In this study we focus on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning on the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies, such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated (although not extreme) level of clustering compared to a random graph and can be markedly non-local.
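
As an illustration of the core quantity (not the paper's generalized, state-conditioned estimator), here is a bare histogram estimate of Transfer Entropy with a one-sample history, applied to an invented pair of signals in which one drives the other with a one-step delay:

```python
import numpy as np

def transfer_entropy(x, y, bins=3):
    """Histogram estimate of TE(x -> y) with one sample of history:
    TE = sum p(yf, yp, xp) * log2[ p(yf | yp, xp) / p(yf | yp) ]."""
    xp, yp, yf = x[:-1], y[:-1], y[1:]

    def disc(s):  # discretise into equally populated bins
        edges = np.quantile(s, np.linspace(0, 1, bins + 1))[1:-1]
        return np.searchsorted(edges, s)

    xp, yp, yf = disc(xp), disc(yp), disc(yf)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p_abc = ((yf == a) & (yp == b) & (xp == c)).mean()
                if p_abc == 0:
                    continue
                p_bc = ((yp == b) & (xp == c)).mean()
                p_ab = ((yf == a) & (yp == b)).mean()
                p_b = (yp == b).mean()
                te += p_abc * np.log2(p_abc * p_b / (p_ab * p_bc))
    return te

# Toy pair: y follows x with a one-step delay, so TE(x->y) >> TE(y->x).
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)
print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits, "
      f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")
```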

Relevance: 100.00%

Abstract:

The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. The real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
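
A GRNN is Nadaraya-Watson kernel regression, so the automatic tuning described above reduces to selecting the kernel width by cross-validation. A minimal isotropic sketch on an invented spatial field follows (the anisotropic case and the PNN tuning are analogous but not shown):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma):
    """GRNN = Nadaraya-Watson kernel regression with a Gaussian kernel."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)   # guard against underflow

def select_sigma(X, y, sigmas, k=5, seed=0):
    """Automatic (isotropic) tuning of the kernel width by k-fold CV."""
    folds = np.random.default_rng(seed).integers(0, k, len(X))

    def cv_err(s):
        return np.mean([((grnn_predict(X[folds != f], y[folds != f],
                                       X[folds == f], s)
                          - y[folds == f]) ** 2).mean() for f in range(k)])

    return min(sigmas, key=cv_err)

# Invented spatial field sampled at 300 random locations.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (300, 2))
y = np.sin(4 * X[:, 0]) * np.cos(4 * X[:, 1]) + 0.1 * rng.normal(size=300)
sigma = select_sigma(X, y, np.geomspace(0.02, 0.5, 12))
print(f"selected kernel width: {sigma:.3f}")
```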

Relevance: 100.00%

Abstract:

The present research deals with an application of artificial neural networks to multitask learning from spatial environmental data. The real case study (sediment contamination of Lake Geneva) consists of 8 pollutants. There are different relationships between these variables, from linear correlations to strong nonlinear dependencies. The main idea is to construct subsets of pollutants which can be efficiently modelled together within the multitask framework. The proposed two-step approach is based on: 1) a criterion of nonlinear predictability of each variable k, obtained by analysing all possible models composed from the rest of the variables, using a General Regression Neural Network (GRNN) as the model; 2) multitask learning of the best model using a multilayer perceptron, followed by spatial predictions. The results of the study are analysed using both machine learning and geostatistical tools.
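
Step 1 can be sketched directly: for each variable, try every subset of the remaining variables as GRNN inputs and keep the subset with the lowest validation error. The code below uses an invented 5-variable data set and a fixed kernel width, so it illustrates the criterion rather than the study's implementation:

```python
import numpy as np
from itertools import combinations

def grnn(Xtr, ytr, Xte, sigma=0.4):
    """Gaussian kernel regression standing in for the GRNN."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2)) + 1e-12
    return (w @ ytr) / w.sum(1)

# Invented data: 5 "pollutants" at 200 sites; variables 1 and 2 depend
# nonlinearly on variable 0, variables 3 and 4 are independent noise.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 5))
Z[:, 1] = np.tanh(Z[:, 0]) + 0.1 * rng.normal(size=200)
Z[:, 2] = Z[:, 0] ** 2 + 0.1 * rng.normal(size=200)

half = 100                    # simple split: first half train, rest validate
for k in range(Z.shape[1]):   # nonlinear predictability of variable k
    others = [j for j in range(Z.shape[1]) if j != k]
    scored = []
    for r in range(1, len(others) + 1):
        for sub in combinations(others, r):
            idx = list(sub)
            pred = grnn(Z[:half, idx], Z[:half, k], Z[half:, idx])
            scored.append((np.mean((pred - Z[half:, k]) ** 2), sub))
    err, sub = min(scored)
    print(f"variable {k}: best predictors {sub}, validation MSE {err:.3f}")
```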

Relevance: 100.00%

Abstract:

This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and is applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as on very large problems.
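
The thesis' joint feature-extraction/clustering network is not reproduced here; as a minimal stand-in for SGD-trained, out-of-core clustering, the sketch below implements online (MacQueen-style) k-means, which streams through samples one at a time and never forms the pairwise matrices that limit spectral methods:

```python
import numpy as np

def online_kmeans(data, k=3, epochs=2, seed=0):
    """Online (stochastic) k-means: each sample moves its nearest centroid
    by a per-centroid step size, so the method can stream through huge
    databases and assign any new sample (no out-of-sample problem)."""
    rng = np.random.default_rng(seed)
    C = data[rng.choice(len(data), k, replace=False)].copy()
    counts = np.ones(k)
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            j = np.argmin(((C - data[i]) ** 2).sum(1))
            counts[j] += 1
            C[j] += (data[i] - C[j]) / counts[j]   # decaying step size
    return C

# Toy "pixels x spectral bands" matrix with 3 spectral classes.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.3, (400, 8)) for m in (0.0, 1.0, 2.0)])
C = online_kmeans(data, k=3)
labels = np.argmin(((data[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
print(np.bincount(labels))  # roughly 400 pixels per cluster
```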

Relevance: 100.00%

Abstract:

The present research project was designed to identify the typical Iowa material input values that are required by the Mechanistic-Empirical Pavement Design Guide (MEPDG) for Level 3 concrete pavement design. It was also designed to investigate the existing equations that might be used to predict Iowa pavement concrete properties for Level 2 pavement design. In this project, over 20,000 data points were collected from the Iowa Department of Transportation (DOT) and other sources. These data, most of which were concrete compressive strength, slump, air content, and unit weight data, were synthesized and their statistical parameters (such as mean values and standard deviations) were analyzed. Based on the analyses, the typical input values of Iowa pavement concrete, such as 28-day compressive strength (f'c), splitting tensile strength (fsp), elastic modulus (Ec), and modulus of rupture (MOR), were evaluated. The study indicates that the 28-day MOR of Iowa concrete is 646 ± 51 psi, very close to the MEPDG default value (650 psi). The 28-day Ec of Iowa concrete (based on only two available data points from the Iowa Curling and Warping project) is 4.82 ± 0.28 × 10^6 psi, which is quite different from the MEPDG default value (3.93 × 10^6 psi); therefore, the researchers recommend re-evaluating it after more Iowa test data become available. The drying shrinkage (εc) of a typical Iowa concrete (C-3WR-C20 mix) was tested at the Concrete Technology Laboratory (CTL). The test results show that the ultimate shrinkage of the concrete is about 454 microstrain and that the concrete reaches 50% of its ultimate shrinkage at 32 days; both of these values are very close to the MEPDG defaults. The comparison of the Iowa test data with the MEPDG default values, as well as the recommendations on the input values to be used in the MEPDG for Iowa PCC pavement design, are summarized in Table 20 of this report. The available equations for predicting the above-mentioned concrete properties were also assembled, and their validity for Iowa concrete materials was examined. Multiple-parameter nonlinear regression analyses, along with the artificial neural network (ANN) method, were employed to investigate the relationships among Iowa concrete material properties and to modify the existing equations to suit Iowa concrete materials. However, due to the lack of necessary data sets, the relationships between Iowa concrete properties were established based only on the limited data from CP Tech Center projects and ISU classes. The researchers suggest that the resulting relationships be used by Iowa pavement design engineers as references only. The present study furthermore indicates that appropriate documentation of concrete properties, including flexural strength, elastic modulus, and information on concrete mix design, is essential for updating the typical Iowa material input values and providing rational prediction equations for concrete pavement design in the future.
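
The Iowa-specific regression equations are given in the report itself; as context, the widely used ACI-type square-root correlations that such local equations refine can be evaluated in a few lines (the f'c value below is an invented example):

```python
import math

def aci_estimates(fc_psi):
    """Standard ACI-type correlations often used as a starting point when
    local data are scarce: modulus of rupture MOR = 7.5 * sqrt(f'c) and
    elastic modulus Ec = 57000 * sqrt(f'c), both with f'c in psi."""
    root = math.sqrt(fc_psi)
    return 7.5 * root, 57000 * root

fc = 6000.0  # a plausible 28-day compressive strength, psi
mor, ec = aci_estimates(fc)
print(f"f'c = {fc:.0f} psi -> MOR ~ {mor:.0f} psi, Ec ~ {ec / 1e6:.2f}e6 psi")
```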

Relevance: 100.00%

Abstract:

Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure-function relationships. However, the mechanistic origin of the resting state is obscured by the complexity of large-scale models, and a close structure-function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications.
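
The moments reduction can be illustrated in miniature: linearise the dynamics around the stable low-activity state and obtain the stationary covariance (hence model FC) from a Lyapunov equation. The structural matrix, coupling, and noise level below are invented placeholders, not the study's parameters:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Linearised fluctuations dx = A x dt + noise around the fixed point;
# the stationary covariance P solves A P + P A^T + Q = 0, and its
# correlations give the model FC.
rng = np.random.default_rng(0)
n = 10
SC = rng.random((n, n)) * (rng.random((n, n)) < 0.3)   # toy structural matrix
np.fill_diagonal(SC, 0.0)
g = 0.5                                                 # global coupling
A = -np.eye(n) + g * SC / SC.sum(1, keepdims=True).clip(1e-9)
assert np.all(np.linalg.eigvals(A).real < 0), "fixed point must be stable"
Q = 0.01 * np.eye(n)                                    # noise covariance
P = solve_continuous_lyapunov(A, -Q)
FC = P / np.sqrt(np.outer(np.diag(P), np.diag(P)))      # correlation matrix
print(np.round(FC[:3, :3], 2))
```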

Relevance: 100.00%

Abstract:

Ski resorts are deploying more and more artificial snow systems. This equipment is needed to secure an economic activity that is vital to the high alpine valleys. However, artificial snow raises important environmental issues, which can be reduced by optimizing its production. This paper presents a software prototype based on artificial intelligence to help ski resorts manage their snowpack better. It combines, on the one hand, a General Neural Network for analysing the snow cover and making spatial predictions with, on the other hand, a multi-agent simulation of skiers for analysing the spatial impact of skiing. The prototype has been tested on the ski resort of Verbier (Switzerland).
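
The paper's prototype is not reproduced here; as an illustration of the multi-agent component, the sketch below sends toy skier agents down a slope grid and accumulates a passage-count map of the kind one could overlay on the snowpack predictions (the grid, drift rule, and agent count are all invented):

```python
import numpy as np

# Toy multi-agent run: each skier starts at the top of a 50 x 20 slope
# grid, moves one row downhill per step with a random sideways drift,
# and increments a passage counter; the counter map is the spatial
# impact of skiing on the snowpack.
rng = np.random.default_rng(0)
rows, cols, n_skiers = 50, 20, 300
impact = np.zeros((rows, cols))

for _ in range(n_skiers):
    c = rng.integers(cols)                  # start anywhere along the top
    for r in range(rows):                   # one step downhill per row
        impact[r, c] += 1
        c = np.clip(c + rng.integers(-1, 2), 0, cols - 1)  # sideways drift

# Heavily used cells are where the snow cover wears fastest.
hot = np.unravel_index(impact.argmax(), impact.shape)
print(f"most-used cell: {hot}, {impact[hot]:.0f} passages")
```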