957 results for Self-Organizing Maps


Relevance:

100.00%

Publisher:

Abstract:

This Thesis describes the application of automatic learning methods for a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Possible applications include the monitoring of reaction processes, the evaluation of the stability of chemicals, and the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases correctly classified 75% of an independent test set in terms of the EC number subclass; Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures in a test set; Counter-Propagation Neural Networks (CPNNs) gave similar results.
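The SOM-based classification described above can be sketched in a few lines. The code below is a toy illustration with synthetic difference spectra (the thesis used 1H NMR data simulated by SPINUS); the binning, grid size, and training schedule are all made-up assumptions, not the thesis's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "difference spectra": product spectrum minus reactant spectrum,
# binned into 32 intervals. Two reaction classes with distinct peak patterns.
def make_sample(cls):
    x = rng.normal(0.0, 0.05, 32)
    x[5 if cls == 0 else 20] += 1.0   # class-specific peak that appears
    x[12 if cls == 0 else 27] -= 1.0  # class-specific peak that disappears
    return x

X = np.array([make_sample(c) for c in range(2) for _ in range(50)])
y = np.array([c for c in range(2) for _ in range(50)])

# Minimal 4x4 Kohonen SOM trained with a shrinking Gaussian neighbourhood.
grid = np.array([(i, j) for i in range(4) for j in range(4)], float)
W = rng.normal(0.0, 0.1, (16, 32))
for t in range(2000):
    v = X[rng.integers(len(X))]
    bmu = np.argmin(((W - v) ** 2).sum(axis=1))       # best-matching unit
    sigma = 2.0 * (1 - t / 2000) + 0.3
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
    W += 0.5 * (1 - t / 2000) * h[:, None] * (v - W)

# Label each neuron by the majority class of the training samples it wins.
votes = np.zeros((16, 2))
for v, c in zip(X, y):
    votes[np.argmin(((W - v) ** 2).sum(axis=1)), c] += 1
labels = votes.argmax(axis=1)

def classify(v):
    return labels[np.argmin(((W - v) ** 2).sum(axis=1))]

test = [make_sample(c) for c in range(2) for _ in range(10)]
truth = [c for c in range(2) for _ in range(10)]
acc = np.mean([classify(v) == t for v, t in zip(test, truth)])
print(f"test accuracy: {acc:.2f}")
```

A test spectrum is assigned the label of its best-matching neuron, which is how a trained SOM acts as a classifier.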
The use of supervised learning techniques improved the results: correct assignments rose to 77% when an ensemble of ten FFNNs was used, and to 80% with Random Forests. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data were combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for the automatic classification of reactions and mixtures of reactions. Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms. Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Methods should therefore be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified.
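The MOLMAP construction itself is easy to sketch: a molecule's descriptor is the activation pattern its bonds produce on a bond-typing SOM, and a reaction's descriptor is the difference between product and reactant patterns. The code below is a minimal illustration in which the trained bond SOM is replaced by random prototype vectors and the bond features are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Codebook of a (hypothetical, pre-trained) 3x3 bond SOM: each neuron is a
# prototype of a bond type in a 4-dimensional physico-chemical feature space.
bond_som = rng.normal(0.0, 1.0, (9, 4))

def molmap(bonds):
    """MOLMAP of a molecule: each bond activates its best-matching neuron;
    the molecule descriptor is the summed activation pattern."""
    m = np.zeros(9)
    for b in bonds:
        m[np.argmin(((bond_som - b) ** 2).sum(axis=1))] += 1
    return m

def reaction_molmap(reactants, products):
    """Reaction MOLMAP: difference between product and reactant MOLMAPs.
    Positive entries = bond types made, negative = bond types broken."""
    return sum(molmap(m) for m in products) - sum(molmap(m) for m in reactants)

# Toy reaction A + B -> C, where each molecule is just a list of bond
# feature vectors: C keeps most bonds but loses one and gains one.
A = rng.normal(0, 1, (3, 4))
B = rng.normal(0, 1, (2, 4))
C = np.vstack([A[:2], B[1:], rng.normal(0, 1, (1, 4))])
r = reaction_molmap([A, B], [C])
print("reaction MOLMAP:", r, "net bond change:", int(r.sum()))
```

Because every bond contributes exactly one count, the descriptor's sum equals the net change in bond count across the reaction.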
In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs in order to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies of up to 92%, 80%, and 70%, respectively, for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of reactions mapped onto the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation: EC numbers were correctly assigned in 95%, 90%, 85%, and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass, and full EC number levels, respectively. Experiments on the classification of reactions from the main reactants and products were also performed with RFs: EC numbers were assigned at the class, subclass, and sub-subclass levels with accuracies of 78%, 74%, and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways.
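Assigning EC levels from reaction descriptors is, in machine-learning terms, a multi-class classification task. A hedged sketch follows, with synthetic stand-ins for MOLMAP descriptors and scikit-learn's RandomForestClassifier; the thesis's actual features, class balance, and tuning are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic stand-ins for reaction MOLMAP descriptors: three EC classes,
# each with a characteristic bond-change pattern plus noise.
centers = rng.normal(0.0, 2.0, (3, 49))
X = np.vstack([c + rng.normal(0, 0.5, (60, 49)) for c in centers])
y = np.repeat([1, 2, 3], 60)  # top-level EC class labels

idx = rng.permutation(len(X))
train, test = idx[:150], idx[150:]

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X[train], y[train])
acc = rf.score(X[test], y[test])
print(f"EC class accuracy on a held-out set: {acc:.2f}")
```

The same setup extends to the deeper EC levels simply by swapping in subclass or sub-subclass labels as the target.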
Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway, i.e. a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers. Mapping of PES by neural networks (NNs). In a first series of experiments, Ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-based PES and from the LJ function. The results indicated that for LJ-type potentials, NNs can be trained to generate PES accurate enough for use in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs, and the NN models showed a remarkable ability to interpolate between distant curves and to accurately reproduce potentials for use in molecular simulations. The purpose of this first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems, the development of suitable analytical functions that fit quantum-mechanical interaction energies is a non-trivial or even impossible task.
The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to adequately cover the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform as well as, or even better than, analytical functions. They can therefore be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolation.
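The LJ reference potential used in the first series of experiments has a simple closed form, so generating the NN training targets from it is straightforward. A sketch in reduced units follows; the grid range and density are arbitrary illustrative choices, not the thesis's sampling scheme:

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """LJ 12-6 potential: 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Training set for the NN: energies on a grid of interatomic distances,
# so the network learns the mapping r -> E(r).
r = np.linspace(0.9, 3.0, 200)
E = lennard_jones(r)

r_min = r[np.argmin(E)]
print(f"well minimum near r = {r_min:.3f} "
      f"(analytic: 2^(1/6) ~ 1.122), depth E = {E.min():.3f} (analytic: -1)")
```

Checking that the sampled minimum sits at r = 2^(1/6) sigma with depth -epsilon is a quick sanity test that the grid covers the well region the NN must reproduce.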


Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems




Currently, one of the main challenges affecting public health in Brazil is the growing number of cases and epidemics caused by the dengue virus. There are not enough studies to elucidate which factors contribute to the evolution of dengue epidemics. Factors such as sanitary conditions, geographic location, financial investment in infrastructure, and quality of life may be related to the incidence of dengue. Another question that deserves greater attention is identifying the degree of impact of the determinants of dengue and whether there is a pattern correlated with the incidence rate. The main objective of this work is therefore to correlate the dengue incidence rate in the population of each Brazilian municipality with data on social, economic, demographic, and environmental aspects. Another relevant contribution of this work is the analysis of the spatial distribution patterns of the dengue incidence rate and their relationship with the patterns found using socioeconomic and environmental variables, in particular by analysing the temporal evolution over the period from 2008 to 2012. For these analyses, a Geographic Information System (GIS) was combined with data mining through a neural network methodology, more specifically the Kohonen self-organizing map (SOM). This methodology was employed to identify clustering patterns in these variables and their relationship with the dengue incidence classes in Brazil (High, Medium, and Low). This project thus contributes significantly to a better understanding of the factors associated with the occurrence of dengue and of how the disease is correlated with factors such as the environment, infrastructure, and location in geographic space.


Electoral Geography, defined as the analysis of the interaction between space, place, and electoral processes, fundamentally comprises three domains: voting patterns, geographical influences on elections, and the geography of representation. Electoral Geography has a long history, to the point of once having had a status of its own within the discipline. After appearing with some vigour in the Portuguese context in the 1970s and 1980s, this approach to electoral phenomena has since been relatively neglected. In this work, combining the more traditional spatial-analytical methodologies with a set of new technologies, such as Geographic Information Systems (GIS) and Self-Organizing Maps (SOM), we aim to give new emphasis to national Electoral Geography, highlighting its explanatory character and opening the door to multidisciplinary approaches to electoral data. Based on the results of the Portuguese legislative elections held between 1991 and 2011, we analyse the following topics: the spatial distribution of the results, jointly and individually, of the five parties with parliamentary representation; the distribution of the results of this set of parties by region, considering one of the proposals for administrative division put to referendum in 1998 and analysing the Estremadura e Ribatejo region as a case study; the patterns generated by the distribution of the bloc formed by the two main parties (PS and PPD/PSD); the spatial behaviour of the Right/Centre-Right and Left/Centre-Left blocs; electoral abstention, comparing the values recorded in each parish (freguesia) with the national result; the comparison between different types of elections; the distribution of the results by party in the two main districts (Lisboa and Porto), which together account for more than 40% of the Portuguese population; the behaviour of the Bloco de Esquerda, the youngest of the parties considered; and the mapping of "social-democratic" and "socialist" parishes. The results of this work show, in general terms, that georeferencing national electoral data generates cartography that confirms what other analyses have shown about the electoral behaviour of the Portuguese. However, specific aspects of the spatial distribution of this electoral behaviour deepen our knowledge of the interaction between space and electoral processes.


The interest in using information to improve the quality of living in large urban areas and the efficiency of their governance has been around for decades. Nevertheless, improvements in Information and Communications Technology have sparked a new dynamic in academic research, usually under the umbrella term of Smart Cities. This concept of Smart City can probably be translated, in a simplified version, into cities that are lived in, managed, and developed in an information-saturated environment. While it makes perfect sense, and we can easily foresee the benefits of such a concept, there are still several significant challenges that need to be tackled before we can materialize this vision. In this work we aim to provide a small contribution in this direction, one that maximizes the relevance of the available information resources. One of the most detailed and geographically relevant information resources available for the study of cities is the census, more specifically the data available at block level (Subsecção Estatística). In this work, we use Self-Organizing Maps (SOM) and the Geo-SOM variant to explore the block-level data from the Portuguese census of the city of Lisbon for the years 2001 and 2011. We focus on gauging change, proposing ways to compare the two time periods, which have two different underlying geographical bases. We proceed with the analysis of the data using different SOM variants, aiming to produce a two-fold portrait: one of the evolution of Lisbon during the first decade of the 21st century, and another of how the census dataset and SOMs can be used to produce an informational framework for the study of cities.


Several analysis methods perform a global clustering of series of microarray samples, such as Self-Organizing Maps, while others perform local clusterings that consider only a subset of co-expressed genes, such as Biclustering, among others. In this project a web application was developed: PCOPSamplecl, a tool belonging to the local clustering methods that does not search for subsets of co-expressed genes (analysis of linear relationships), but rather for pairs of genes whose expression relationship fluctuates in the face of phenotypic changes. The results of PCOPSamplecl are the different final cluster distributions and the gene pairs involved in these phenotypic changes. These gene pairs can then be studied to find the cause and effect of the phenotypic change. In addition, the tool facilitates the study of the dependencies between the different cluster distributions provided by the application, making it possible to study the intersection between clusters or the appearance of subclusters (two clusters from the same cluster distribution may be subclusters of other clusters from different distributions). The tool is available at: http://revolutionresearch.uab.es/
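The kind of gene pair the tool reports, one whose expression relationship fluctuates with the phenotype, can be illustrated on synthetic data. The detection criterion below (a large shift in per-cluster Pearson correlation) is a simplified stand-in for the tool's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two sample clusters (phenotypes). Genes g1/g2 are correlated in cluster A
# and anti-correlated in cluster B; genes g3/g4 keep a stable relationship.
n = 40
g1a = rng.normal(0, 1, n); g2a = g1a + rng.normal(0, 0.2, n)
g1b = rng.normal(0, 1, n); g2b = -g1b + rng.normal(0, 0.2, n)
g3 = rng.normal(0, 1, 2 * n); g4 = g3 + rng.normal(0, 0.2, 2 * n)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# Fluctuation score: absolute change in correlation between the clusters.
flux_12 = abs(corr(g1a, g2a) - corr(g1b, g2b))
flux_34 = abs(corr(g3[:n], g4[:n]) - corr(g3[n:], g4[n:]))
print(f"pair (g1,g2) fluctuation: {flux_12:.2f}, pair (g3,g4): {flux_34:.2f}")
```

A high score flags pairs like (g1, g2), whose relationship flips with the phenotype, while stable pairs like (g3, g4) score near zero.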


The coverage and volume of geo-referenced datasets are extensive and incessantly growing. The systematic capture of geo-referenced information generates large volumes of spatio-temporal data to be analyzed. Clustering and visualization play a key role in exploratory data analysis and in the extraction of knowledge embedded in these data. However, the special characteristics of this data pose new challenges for visualization and clustering: for instance, its complex structures, the large quantity of samples, variables involved in a temporal context, high dimensionality, and the large variability in cluster shapes.

The central aim of my thesis is to propose new algorithms and methodologies for clustering and visualization, in order to assist knowledge extraction from spatio-temporal geo-referenced data, thus improving decision-making processes. I present two original algorithms, one for clustering, the Fuzzy Growing Hierarchical Self-Organizing Networks (FGHSON), and one for exploratory visual data analysis, the Tree-structured Self-Organizing Maps Component Planes. In addition, I present methodologies that, combined with FGHSON and the Tree-structured SOM Component Planes, allow the seamless and simultaneous integration of space and time in order to extract knowledge embedded in a temporal context.

The originality of the FGHSON lies in its capability to reflect the underlying structure of a dataset in a hierarchical fuzzy way. A hierarchical fuzzy representation of clusters is crucial when data include complex structures with large variability of cluster shapes, variances, densities, and numbers of clusters. The most important characteristics of the FGHSON are: (1) it does not require an a priori setup of the number of clusters; (2) the algorithm executes several self-organizing processes in parallel, so when dealing with large datasets the processes can be distributed, reducing the computational cost; (3) only three parameters are necessary to set up the algorithm.

In the case of the Tree-structured SOM Component Planes, the novelty of this algorithm lies in its ability to create a structure that allows the visual exploratory analysis of large high-dimensional datasets. This algorithm creates a hierarchical structure of Self-Organizing Map Component Planes, arranging similar variables' projections in the same branches of the tree. Hence, similarities in variables' behavior can be easily detected (e.g. local correlations, maximal and minimal values, and outliers). Both FGHSON and the Tree-structured SOM Component Planes were applied to several agroecological problems, proving to be very efficient in the exploratory analysis and clustering of spatio-temporal datasets.

In this thesis I also tested three soft competitive learning algorithms: two well-known unsupervised soft competitive algorithms, namely the Self-Organizing Maps (SOMs) and the Growing Hierarchical Self-Organizing Maps (GHSOMs), and our original contribution, the FGHSON. Although the algorithms presented here have been used in several areas, to my knowledge there is no work applying and comparing the performance of these techniques on spatio-temporal geospatial data, as presented in this thesis. I propose original methodologies to explore spatio-temporal geo-referenced datasets through time. Our approach uses time windows to capture temporal similarities and variations by using the FGHSON clustering algorithm. The developed methodologies are used in two case studies: in the first, the objective was to find similar agroecozones through time, and in the second, to find similar environmental patterns shifted in time.

Several results presented in this thesis have led to new contributions to agroecological knowledge, for instance in sugar cane and blackberry production. Finally, in the framework of this thesis we developed several software tools: (1) a Matlab toolbox that implements the FGHSON algorithm, and (2) a program called BIS (Bio-inspired Identification of Similar agroecozones), an interactive graphical user interface tool that integrates the FGHSON algorithm with Google Earth in order to show zones with similar agroecological characteristics.
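The grouping principle behind the Tree-structured SOM Component Planes, placing similar variables' projections in the same branch, can be illustrated by measuring similarity between planes. The sketch below uses absolute Pearson correlation on synthetic planes, which is only one possible choice of similarity measure:

```python
import numpy as np

rng = np.random.default_rng(4)

# Component planes of a trained SOM: one value per neuron per variable.
# Here: 25 neurons, 4 variables; variables 0 and 1 are near-duplicates.
planes = rng.normal(0, 1, (4, 25))
planes[1] = planes[0] + rng.normal(0, 0.1, 25)

# Pairwise similarity between planes (absolute Pearson correlation);
# the most similar pair would share a branch of the tree.
sim = np.abs(np.corrcoef(planes))
np.fill_diagonal(sim, 0.0)
i, j = np.unravel_index(sim.argmax(), sim.shape)
print(f"most similar component planes: variables {min(i, j)} and {max(i, j)}")
```

Repeatedly merging the most similar planes in this way yields the kind of hierarchical arrangement the tree structure encodes.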


The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources.
The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. 
Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
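The matching step can be sketched as a greedy 1:1 nearest-neighbour match on the propensity score with a caliper. The scores below are synthetic and the caliper value is an illustrative assumption; in the study, scores would be estimated from respondents' age, gender, and education within each neighborhood type:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic propensity scores for posttest (treated) and pretest (control)
# respondents within one neighborhood type.
treated = rng.uniform(0.3, 0.9, 30)
control = rng.uniform(0.1, 0.8, 60)

def greedy_match(treated, control, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on the propensity score:
    each treated unit takes the closest still-unmatched control, provided
    the score gap stays within the caliper."""
    free = list(range(len(control)))
    pairs = []
    for t_idx, p in enumerate(treated):
        if not free:
            break
        best = min(free, key=lambda c: abs(control[c] - p))
        if abs(control[best] - p) <= caliper:
            pairs.append((t_idx, best))
            free.remove(best)
    return pairs

pairs = greedy_match(treated, control)
gaps = [abs(treated[t] - control[c]) for t, c in pairs]
print(f"matched {len(pairs)}/30 treated units, max score gap {max(gaps):.3f}")
```

The outcome test is then run only on the matched pairs, which is what balances the pretest and posttest samples on the observed covariates.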


We have investigated the phenomenon of deprivation in contemporary Switzerland through the adoption of a multidimensional, dynamic approach. By applying Self-Organizing Maps (SOM) to a set of 33 non-monetary indicators from the 2009 wave of the Swiss Household Panel (SHP), we identified 13 prototypical forms (or clusters) of well-being, financial vulnerability, psycho-physiological fragility, and deprivation within a topological dimensional space. New data from the previous waves (2003 to 2008) were then classified by the SOM model, making it possible to estimate the weight of the different clusters over time and to reconstruct the dynamics of stability and mobility of individuals within the map. Looking at the transition probabilities between year t and year t+1, we observed that the paths of mobility which catalyze the largest number of observations are those connecting clusters that are adjacent in the topological space.
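The transition probabilities between year t and year t+1 can be sketched as a row-normalised count matrix over cluster assignments. The memberships below are toy values, not SHP data:

```python
import numpy as np

# Cluster membership of the same individuals in year t and year t+1
# (toy data with 3 clusters instead of the study's 13).
year_t  = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 1])
year_t1 = np.array([0, 1, 0, 1, 2, 2, 2, 1, 2, 1])

k = 3
counts = np.zeros((k, k))
for a, b in zip(year_t, year_t1):
    counts[a, b] += 1

# Row-normalise: P[i, j] = probability of moving from cluster i to cluster j.
P = counts / counts.sum(axis=1, keepdims=True)
print(np.round(P, 2))
```

Each row of P sums to one; large off-diagonal entries between topologically adjacent clusters are the mobility paths the abstract describes.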


CRM is an enterprise information system that supports the management and development of customer relationships. Many large companies have so many customers that it is impossible for them to recognize their customers as individuals. Yet customers increasingly value personal service and contacts. Companies' customer databases accumulate large amounts of data on customers and their purchasing behaviour. Because of this large volume of information, advanced information technology is needed for customer typing and the identification of customer needs. In this Master's thesis, a neuroCRM theory is developed for the management of large and complex volumes of customer information. The theory is based on the use of self-organizing neural networks to analyse customer information in a CRM system. Customers are segmented and personalized by performing iterative SOM analyses. Based on the results, marketing methods that take customers' individuality into account are developed, and new channels, for example mobile communication technology, are used to reach customers. To improve customer profitability, strategic choices and decisions can be made for the targeting of marketing.

Relevância:

100.00%

Publicador:

Resumo:

A topic that has attracted interest in both industry and research is Customer Relationship Management (CRM), i.e., a customer-oriented business strategy in which companies shift from being product-oriented to becoming more customer-centric. Nowadays customer behaviour and activities can easily be recorded and stored with integrated Enterprise Resource Planning (ERP) systems and Data Warehousing (DW). Customers with different preferences and buying behaviour create their own "signature", in particular through the use of loyalty cards, which enables versatile modelling of customer buying behaviour. To obtain an overview of customers' buying behaviour and their profitability, customer segmentation is often used as a method for dividing customers into groups based on their similarities. The most commonly used methods for customer segmentation are analytical models constructed for a specific time period. These models do not take into account that customer behaviour may change over time. This thesis creates a holistic overview of customers' characteristics and buying behaviour that, in addition to the conventional segmentation models, also considers the dynamics of buying behaviour. The dynamics of a customer segmentation model comprise changes in the structure and content of the segments, as well as changes in individual customers' membership of a segment (so-called migration analyses). The first dynamic is approached through temporal customer segmentation, which visualizes changes in segment structures and profiles over time; the second through segment migration analysis, in which customers who move between segments in similar ways are identified visually. Each type of change is modelled, analysed and exemplified with visual data-mining techniques, primarily Self-Organizing Maps (SOM) and Self-Organizing Time Maps (SOTM), an extension of the SOM. The visualization is expected to facilitate the interpretation of the identified patterns and to smooth the process of knowledge transfer between the analyst and the decision-maker.
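The segment migration analysis described above amounts to tallying, for customers present in two consecutive periods, the flows between their old and new segments. A small sketch in plain Python, with hypothetical customer ids and segment names:

```python
from collections import Counter

def migrations(seg_t, seg_t1):
    """Tally customer flows between segments in two consecutive periods.

    seg_t, seg_t1: dicts mapping customer id -> segment label.
    Returns a Counter of (from_segment, to_segment) flows for
    customers present in both periods.
    """
    return Counter(
        (seg_t[c], seg_t1[c]) for c in seg_t if c in seg_t1
    )

# Toy example: customer 103 migrates from "occasional" to "loyal".
period1 = {101: "loyal", 102: "loyal", 103: "occasional", 104: "occasional"}
period2 = {101: "loyal", 102: "loyal", 103: "loyal", 104: "occasional"}
flows = migrations(period1, period2)
movers = {f: n for f, n in flows.items() if f[0] != f[1]}
print(movers)  # {('occasional', 'loyal'): 1}
```

The off-diagonal entries of such a flow table are exactly the migrating customers that the thesis visualizes on the map.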

Relevância:

100.00%

Publicador:

Resumo:

Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the planet's surface. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within the data. Identifying minerals, rocks, vegetation and other materials in such images is one application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two techniques, classification and clustering, on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data for analysis; its purpose is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada, and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of data compared to an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal. The thesis concludes that using PCA increases learning accuracy, especially in classification: SVM classifies the Cuprite data with high precision, while the SOM rivals SVM on datasets with a high level of noise (such as Baffin Island).
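The PCA step can be illustrated in closed form for the two-band case: the first principal component is the leading eigenvector of the 2x2 covariance matrix, and its explained-variance ratio shows how much of the data it captures. This is a pure-Python sketch on invented two-band "pixels"; real hyperspectral cubes have hundreds of bands and would be reduced with a library routine instead:

```python
import math

def pca_2d(points):
    """First principal component of 2-D points, in closed form.

    Returns (unit_direction, explained_variance_ratio).
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n          # var(band 1)
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n # cov(band 1, band 2)
    c = sum((p[1] - my) ** 2 for p in points) / n          # var(band 2)
    # Eigenvalues of the symmetric 2x2 covariance matrix.
    disc = math.sqrt((a - c) ** 2 + 4 * b ** 2)
    lam1, lam2 = (a + c + disc) / 2, (a + c - disc) / 2
    # Eigenvector for lam1 (b == 0 means the axes already diagonalize).
    v = (b, lam1 - a) if b else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm), lam1 / (lam1 + lam2)

# Pixels whose two band values are perfectly correlated (band2 = 2 * band1):
direction, ratio = pca_2d([(1, 2), (2, 4), (3, 6), (-1, -2)])
print(round(ratio, 6))  # 1.0: one component carries all the variance
```

Keeping only components with a high explained-variance ratio is what reduces the data volume before the SVM or SOM stage.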

Relevância:

100.00%

Publicador:

Resumo:

Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine. Its treatment comprises observation, bracing to limit progression, or surgery to correct the skeletal deformity and halt its progression. Surgical treatment remains controversial, both in its indications and in the choice of procedure. Despite the existence of classifications to guide the treatment of AIS, intra- and inter-observer variability in operative strategy has been reported in the literature. This variability is further accentuated by the evolution of surgical techniques and of the available instrumentation. Advances in technology and its integration into the medical field have led to the use of artificial-intelligence algorithms to support the classification and three-dimensional assessment of scoliosis. Some algorithms have been shown to be effective in reducing variability in the classification of scoliosis and in guiding treatment. The general objective of this thesis is to develop an application that uses artificial-intelligence tools to integrate a new patient's data with the evidence available in the literature, in order to guide the surgical treatment of AIS. To this end, a literature review of existing applications for the assessment of AIS was undertaken to gather the elements that would allow an application to be both effective and accepted in the clinical setting. This review made clear that the presence of a "black box" in the applications developed to date limits their clinical integration, where evidence-based justification is essential.
In a first study, we developed a decision tree for the classification of idiopathic scoliosis based on the Lenke classification, which is the most commonly used today but has been criticized for its complexity and its inter- and intra-observer variability. This decision tree was shown to increase classification accuracy in proportion to the time spent classifying, independently of the user's level of knowledge of AIS. In a second study, a surgical-strategy algorithm based on rules extracted from the literature was developed to guide surgeons in selecting the approach and the fusion levels for AIS. When applied to a large database of 1556 AIS cases, this algorithm was able to propose an operative strategy similar to that of an expert surgeon in nearly 70% of cases. This study confirmed that valid operative strategies can be derived with a decision tree built from rules extracted from the literature. In a third study, the classification of 1776 AIS patients with a Kohonen map, a type of neural network, showed that there are typical scolioses (single-curve or double thoracic) for which the surgical treatment varies little from the recommendations of the Lenke classification, whereas scolioses with multiple curves, or bordering on two typical curve groups, showed the greatest variation in operative strategy. Finally, a software platform was developed that integrates each of the studies above. This software interface allows the entry of a scoliotic patient's radiological data, classifies the AIS with the classification decision tree, and suggests a surgical approach based on the operative-strategy decision tree.
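Structurally, a rule-based strategy algorithm of the kind described in the second study can be sketched as an ordered list of (predicate, suggestion) rules checked in sequence. The rules and thresholds below are illustrative placeholders only; they are not the rules extracted in the thesis and must not be read as clinical guidance:

```python
def suggest_strategy(lenke_type, thoracic_cobb, lumbar_modifier):
    """Toy rule-based planner: returns the suggestion of the first
    matching rule. All rules here are invented for illustration."""
    rules = [
        # (predicate, suggestion) pairs, checked in order.
        (lambda t, cobb, mod: t == 1 and cobb >= 40 and mod == "A",
         "posterior fusion, main thoracic curve only"),
        (lambda t, cobb, mod: t == 1 and mod == "C",
         "posterior fusion, consider sparing the lumbar spine"),
        (lambda t, cobb, mod: t == 5,
         "anterior or posterior fusion of the thoracolumbar/lumbar curve"),
    ]
    for predicate, suggestion in rules:
        if predicate(lenke_type, thoracic_cobb, lumbar_modifier):
            return suggestion
    return "no rule matched: refer to surgeon judgment"

print(suggest_strategy(1, 55, "A"))
```

Because every rule can be traced back to a line in the rule table, a planner of this shape avoids the "black box" problem raised in the literature review: each suggestion comes with the rule that produced it.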
An analysis of the post-operative correction obtained shows a trend, although not statistically significant, toward better balance in patients operated on according to the strategy recommended by the software platform than in those who received a different treatment. The studies presented in this thesis show that artificial-intelligence algorithms for classifying AIS and elaborating operative strategies can be integrated into a software platform and could assist surgeons in their pre-operative planning.