951 results for Self-organizing networks


Relevance: 80.00%

Abstract:

This thesis presents the most widely used production philosophies. Production philosophy is a very broad concept, and therefore some of the methods presented are quite far apart from one another. The work consists of a theory section, in which each production philosophy is introduced, and a concluding section that discusses how the methods relate to each other. The thesis covers JIT/JOT production, Lean production, Monozukuri, modularization, standardization, strategy work, Six Sigma, TQM, TPM, QFD, MFD, simulation, digital manufacturing, DFX, and the so-called new production philosophies. Source material on the different methods is available in abundance, which is why only the main features of each method could be presented. Production philosophies can be used to achieve many different things. Some of the methods were created to make production more efficient and simpler, others raise the quality level of production or of the whole company, and others ease product design work. Many of the philosophies presented do not fit exclusively into one of these categories but cover wider areas, in some cases encompassing all three of the results mentioned. In addition, the thesis briefly presents new production philosophies, which are somewhat separate entities compared with the other philosophies presented. The purpose of the thesis is to help the reader grasp the large whole that the production philosophies form. It is important to understand the interdependence of the philosophies and the fact that adopting one production philosophy may also mean taking many other matters into account. This point of view is clarified in the conclusions.

Relevance: 80.00%

Abstract:

Visual data mining (VDM) tools employ information visualization techniques in order to represent large amounts of high-dimensional data graphically and to involve the user in exploring data at different levels of detail. The users are looking for outliers, patterns and models – in the form of clusters, classes, trends, and relationships – in different categories of data, e.g., financial or business information. The focus of this thesis is the evaluation of multidimensional visualization techniques, especially from the business user's perspective. We address three research problems. The first problem is the evaluation of projection-based visualizations with respect to their effectiveness in preserving the original distances between data points and the clustering structure of the data. In this respect, we propose the use of existing clustering validity measures. We illustrate their usefulness in evaluating five visualization techniques: Principal Components Analysis (PCA), Sammon's Mapping, the Self-Organizing Map (SOM), Radial Coordinate Visualization and Star Coordinates. The second problem concerns evaluating different visualization techniques as to their effectiveness in visual data mining of business data. For this purpose, we propose an inquiry evaluation technique and conduct the evaluation of nine visualization techniques. The visualizations under evaluation are Multiple Line Graphs, Permutation Matrix, Survey Plot, Scatter Plot Matrix, Parallel Coordinates, Treemap, PCA, Sammon's Mapping and the SOM. The third problem is the evaluation of the quality of use of VDM tools. We provide a conceptual framework for evaluating the quality of use of VDM tools and apply it to the evaluation of the SOM. In the evaluation, we use an inquiry technique for which we developed a questionnaire based on the proposed framework. The contributions of the thesis consist of three new evaluation techniques and the results obtained by applying them. The thesis provides a systematic approach to the evaluation of various visualization techniques. First, we performed and described the evaluations in a systematic way, highlighting the evaluation activities and their inputs and outputs. Second, we integrated the evaluation studies into the broad framework of usability evaluation. The results of the evaluations are intended to help developers and researchers of visualization systems select appropriate visualization techniques in specific situations. The results also contribute to the understanding of the strengths and limitations of the evaluated visualization techniques and, further, to their improvement.
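As an illustration of the first evaluation idea (using existing clustering validity measures to judge how well a projection preserves cluster structure), the sketch below compares a PCA projection and a metric MDS projection (a rough stand-in for Sammon's Mapping) against the original space using the silhouette score. The data, the choice of silhouette as the validity measure, and the scikit-learn tooling are illustrative assumptions, not the thesis' actual setup.

```python
# Minimal sketch: score a 2-D projection by how well it preserves the
# clustering structure of the original data, using a standard clustering
# validity measure (silhouette). Library and measure choices are illustrative.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.metrics import silhouette_score

# Synthetic high-dimensional data with a known cluster structure.
X, labels = make_blobs(n_samples=300, n_features=10, centers=4, random_state=0)

projections = {
    "PCA": PCA(n_components=2).fit_transform(X),
    "MDS (metric)": MDS(n_components=2, random_state=0).fit_transform(X),
}

# Silhouette in the original space serves as the reference value.
print(f"original space: {silhouette_score(X, labels):.3f}")
for name, X2d in projections.items():
    # A projection that preserves the clusters keeps the silhouette high.
    print(f"{name:12s}: {silhouette_score(X2d, labels):.3f}")
```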

Relevance: 80.00%

Abstract:

The large and growing number of digital images is making manual image search laborious. Only a fraction of the images contain metadata that can be used to search for a particular type of image. Thus, the main research question of this thesis is whether it is possible to learn visual object categories directly from images. Computers process images as long lists of pixels that have no clear connection to the high-level semantics that could be used in image search. Various methods have been introduced in the literature to extract low-level image features, as well as approaches to connect these low-level features with high-level semantics. One of these approaches, studied in this thesis, is called Bag-of-Features. In the Bag-of-Features approach, images are described using a visual codebook. The codebook is built by clustering the descriptions of image patches. An image is then described by matching the descriptions of its patches against the visual codebook and counting the number of matches for each code. In this thesis, unsupervised visual object categorisation using the Bag-of-Features approach is studied. The goal is to find groups of similar images, e.g., images that contain an object from the same category. The standard Bag-of-Features approach is improved by using spatial information and visual saliency. It was found that the performance of visual object categorisation can be improved by using the spatial information of local features to verify the matches. However, this process is computationally heavy, and thus the number of images considered in the spatial matching must be limited, for example by first using the Bag-of-Features method as in this study. Different approaches for saliency detection are studied and a new method based on the Hessian-Affine local feature detector is proposed. The new method achieves results comparable with the current state of the art. The visual object categorisation performance was further improved by using foreground segmentation based on saliency information, especially when the background could be considered clutter.
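The codebook-and-histogram pipeline described above can be summarised in a few lines. The sketch below builds a visual codebook with k-means and encodes each image as a normalised histogram of code matches; the random descriptors, the scikit-learn k-means, and all sizes are placeholders for the real local features (e.g. SIFT descriptors) used in such a system.

```python
# Minimal Bag-of-Features sketch: build a visual codebook from local patch
# descriptors with k-means, then describe each image as a histogram of code
# assignments. Descriptors are random stand-ins for real ones (e.g. SIFT).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_images, patches_per_image, descr_dim, n_codes = 20, 50, 64, 8

# One descriptor matrix per image (in practice: output of a local feature detector).
image_descriptors = [rng.normal(size=(patches_per_image, descr_dim)) for _ in range(n_images)]

# 1) Codebook: cluster all descriptors; the cluster centres are the visual words.
codebook = KMeans(n_clusters=n_codes, n_init=10, random_state=0)
codebook.fit(np.vstack(image_descriptors))

# 2) Encoding: assign each patch to its nearest visual word and count matches.
def bof_histogram(descriptors):
    codes = codebook.predict(descriptors)
    hist = np.bincount(codes, minlength=n_codes).astype(float)
    return hist / hist.sum()          # normalise so image size does not matter

bof_vectors = np.array([bof_histogram(d) for d in image_descriptors])
print(bof_vectors.shape)              # (n_images, n_codes), ready for clustering
```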

Relevance: 80.00%

Abstract:

The ongoing global financial crisis has demonstrated the importance of a systemwide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that eventually may lead to a systemic financial crisis. Thriving tools are crucial as they allow early policy actions to decrease or prevent further build-up of risks or to otherwise enhance the shock absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: i) build-up of widespread imbalances, ii) exogenous aggregate shocks, and iii) contagion. Accordingly, the systemic risks are matched by three categories of analytical methods for decision support: i) early-warning, ii) macro stress-testing, and iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus concerns a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: i) to function as a display for individual data concerning entities and their time series, and ii) to use the display as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods. The following five questions comprise subsequent steps addressed in the process of this thesis: 1. What are the needs for macroprudential oversight? 2. What form do macroprudential data take? 3. Which data and dimension reduction methods hold most promise for the task? 4. How should the methods be extended and enhanced for the task? 5. How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. This thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: i) fuzzifications, ii) transition probabilities, and iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
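For readers unfamiliar with the SOM that the SOFSM builds on, the following toy NumPy implementation shows the core training loop and how an entity's time series becomes a trajectory of best-matching units on the map. It is a from-scratch sketch with invented data and parameters, not the SOFSM or its extensions.

```python
# Toy Self-Organizing Map in plain NumPy, sketching the core of a SOM-based
# stability map: train a small grid on indicator vectors, then trace an
# entity's time series as a trajectory of best-matching units (BMUs).
# Data, grid size and learning schedule are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 6))                 # e.g. 6 macro-financial indicators

rows, cols, dim, n_iter = 8, 8, data.shape[1], 5000
weights = rng.normal(size=(rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

for t in range(n_iter):
    lr = 0.5 * (1 - t / n_iter)                  # decaying learning rate
    sigma = 3.0 * (1 - t / n_iter) + 0.5         # decaying neighbourhood radius
    x = data[rng.integers(len(data))]
    # Best-matching unit: node whose weight vector is closest to the sample.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Pull the BMU and its grid neighbours towards the sample.
    grid_dist2 = ((grid - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

def bmu_of(x):
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# A country's quarterly indicator vectors map to a path across the grid.
trajectory = [bmu_of(x) for x in data[:8]]
print(trajectory)
```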

Relevance: 80.00%

Abstract:

A topic that has attracted interest in both industry and research is customer relationship management (CRM), i.e., a customer-oriented business strategy in which companies move from being product-oriented to becoming more customer-centric. Nowadays, customer behavior and activities can easily be recorded and stored with the help of integrated enterprise resource planning (ERP) systems and data warehousing (DW). Customers with different preferences and purchasing behavior create their own "signature", in particular through the use of loyalty cards, which enables versatile modeling of customers' purchasing behavior. To obtain an overview of customers' purchasing behavior and their profitability, customer segmentation is often used as a method for dividing customers into groups based on their similarities. The most commonly used methods for customer segmentation are analytical models constructed for a given time period. These models do not take into account that customer behavior may change over time. This thesis creates a holistic overview of customers' characteristics and purchasing behavior that, in addition to the conventional segmentation models, also considers the dynamics of purchasing behavior. The dynamics of a customer segmentation model comprise changes in the structure and content of the segments, as well as changes in individual customers' membership of a segment (so-called migration analyses). The first type of dynamics is approached through temporal customer segmentation, in which changes in segment structures and profiles over time are visualized; the second through segment migration analysis, in which customers who switch between segments in similar ways are identified visually. Each type of change is modeled, analyzed, and exemplified with visual data mining techniques, primarily with Self-Organizing Maps (SOM) and Self-Organizing Time Maps (SOTM), a further development of the SOM. The visualization is expected to facilitate the interpretation of the identified patterns and to make the process of knowledge transfer between the analyst and the decision maker smoother. Visual data mining methods, such as self-organizing maps and their extensions, are thus used to demonstrate the usefulness of these methods both in customer relationship management in general and in customer segmentation in particular.
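The SOTM mentioned above can be sketched compactly: one small one-dimensional SOM is trained per time slice, initialised from the previous slice's prototypes, so that the resulting grid has time on one axis and cross-sectional structure on the other. The toy data, sizes and schedules below are illustrative assumptions only.

```python
# Rough sketch of the Self-Organizing Time Map (SOTM) idea: train a small
# one-dimensional SOM on each time slice of the data, initialising it with the
# previous slice's prototypes, so that one axis of the resulting grid is time
# and the other shows the cross-sectional cluster structure.
import numpy as np

rng = np.random.default_rng(1)
n_periods, customers_per_period, dim, units = 12, 200, 5, 6

# Customer feature vectors per period, with a slow drift over time.
slices = [rng.normal(loc=t * 0.1, size=(customers_per_period, dim)) for t in range(n_periods)]

def train_1d_som(data, init, n_iter=2000, lr0=0.3, sigma0=1.5):
    w = init.copy()
    for t in range(n_iter):
        lr, sigma = lr0 * (1 - t / n_iter), sigma0 * (1 - t / n_iter) + 0.3
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))
        h = np.exp(-((np.arange(units) - bmu) ** 2) / (2 * sigma ** 2))[:, None]
        w += lr * h * (x - w)
    return w

sotm = []
prototypes = rng.normal(size=(units, dim))              # initialisation for the first slice
for data_t in slices:
    prototypes = train_1d_som(data_t, init=prototypes)  # short-term memory across time
    sotm.append(prototypes.copy())

sotm = np.stack(sotm)                                   # shape: (time, units, features)
print(sotm.shape)
```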

Relevance: 80.00%

Abstract:

This thesis studies the predictability of market switching and delisting events on the OMX First North Nordic multilateral stock exchange using financial statement information and market information from 2007 to 2012. The study was conducted as a three-stage process. In the first stage, the relevant theoretical framework and an initial variable pool were constructed. Then, an explanatory analysis of the initial variable pool was carried out in order to further limit and identify relevant variables. The explanatory analysis was conducted using the self-organizing map methodology. In the third stage, the predictive modeling was carried out with random forests and support vector machine methodologies. It was found that the explanatory analysis was able to identify relevant variables. The results indicate that market switching and delisting events can be predicted to some extent. The empirical results also support the usability of financial statement and market information in the prediction of market switching and delisting events.
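A minimal version of the third-stage predictive modelling might look like the following: random forest and support vector machine classifiers evaluated with cross-validation on a synthetic, imbalanced binary-event dataset. The feature set, class balance and scikit-learn configuration are assumptions for illustration, not the models fitted in the thesis.

```python
# Minimal sketch of the predictive-modelling stage: random forest and SVM
# classifiers predicting a binary event (e.g. delisting) from financial-
# statement style features. Synthetic data stands in for the First North sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, weights=[0.85, 0.15], random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:14s} mean ROC AUC: {auc.mean():.3f}")
```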

Relevance: 80.00%

Abstract:

Companies require information in order to gain an improved understanding of their customers. Data concerning customers, their interests and their behavior are collected through different loyalty programs. The amount of data stored in company databases has increased exponentially over the years and has become difficult to handle. This research area is the subject of much current interest, not only in academia but also in practice, as is shown by several magazines and blogs covering topics such as how to get to know your customers, Big Data, information visualization, and data warehousing. In this Ph.D. thesis, the Self-Organizing Map and two extensions of it – the Weighted Self-Organizing Map (WSOM) and the Self-Organizing Time Map (SOTM) – are used as data mining methods for extracting information from large amounts of customer data. The thesis focuses on how data mining methods can be used to model and analyze customer data in order to gain an overview of the customer base, as well as to analyze niche markets. The thesis uses real-world customer data to create models for customer profiling. Evaluation of the built models is performed by CRM experts from the retailing industry. The experts considered the information gained with the help of the models to be valuable and useful for decision making and for strategic planning for the future.
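The "weighted" part of the WSOM can be illustrated very briefly: the best-matching unit is found with a feature-weighted distance so that variables considered more important for profiling dominate the map. The prototypes and weights below are invented placeholders, not the thesis' models.

```python
# Sketch of the feature-weighting idea behind a Weighted SOM: the best-matching
# unit is chosen with a weighted distance, so that variables judged more
# important for customer profiling dominate the map.
import numpy as np

rng = np.random.default_rng(2)
prototypes = rng.normal(size=(10, 10, 4))          # a trained 10x10 SOM, 4 features
feature_weights = np.array([2.0, 1.0, 1.0, 0.5])   # e.g. emphasise spend, de-emphasise age

def weighted_bmu(x):
    d2 = (feature_weights * (prototypes - x) ** 2).sum(axis=-1)
    return np.unravel_index(np.argmin(d2), d2.shape)

print(weighted_bmu(rng.normal(size=4)))
```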

Relevance: 80.00%

Abstract:

Modern food systems face complex global challenges such as climate change, resource scarcities, population growth, concentration and globalization. It is not possible to forecast how all these challenges will affect food systems, but futures research methods make it possible to better understand possible futures and thereby increase futures awareness. In this thesis, a two-round online Delphi method was used to study experts' opinions about the present and future resilience of the Finnish food system up to 2050. The first-round questionnaire was constructed based on resilience indicators developed for agroecosystems. The sub-systems in the study were primary production (the main focus), food industry, retail and consumption. Based on the results from the first round, future images were constructed for the primary production and food industry sub-sections. The second round asked for experts' opinions about the probability and desirability of the future images. In addition, panarchy scenarios were constructed using the adaptive cycle and panarchy frameworks. Furthermore, a new approach to general resilience indicators was developed, combining the "categories" of social-ecological systems (structure, behaviors and governance) with general resilience parameters (tightness of feedbacks, modularity, diversity, the amount of change a system can withstand, capacity for learning, and self-organizing behavior). The results indicate that there are strengths in the Finnish food system for building resilience. According to the experts, organic farms and larger farms are perceived as socially self-organized, which can promote innovations and new experimentation for adaptation to changing circumstances. In addition, organic farms are currently seen as the most ecologically self-regulated farms. There are also weaknesses in the Finnish food system that restrict resilience building. It is important to reach optimal redundancy, in which efficiency and resilience are in balance. Within the whole food system, the retail sector will probably face the most dramatic changes in the future, especially when the panarchy scenarios and the future images are considered together. The profitability of farms is, and will remain, a critical cornerstone of the overall resilience of primary production. All in all, the food system experts have very positive views concerning the development of the resilience of the Finnish food system in the future. Sometimes small and local is beautiful; sometimes large and international is more resilient. However, when the probability and desirability of the future images were compared, there were significant deviations. It appears that experts do not always believe that desirable futures will materialize.

Relevance: 80.00%

Abstract:

Euclidean distance matrix analysis (EDMA) methods are used to determine whether a significant difference exists between conformational samples of antibody complementarity determining region (CDR) loops, the isolated L1 loop and L1 in a three-loop assembly (L1, L3 and H3), obtained from Monte Carlo simulation. Once a significant difference is detected, the specific inter-Cα distance that contributes to the difference is identified using EDMA. The estimated and improved mean forms of the conformational samples of the isolated L1 loop and the L1 loop in the three-loop assembly, CDR loops of the antibody binding site, are described using EDMA and distance geometry (DGEOM). To the best of our knowledge, this is the first time the EDMA methods have been used to analyze conformational samples of molecules obtained from Monte Carlo simulations. Therefore, validations of the EDMA methods using both positive and negative control tests for the conformational samples of the isolated L1 loop and L1 in the three-loop assembly must be done. The EDMA-I bootstrap null hypothesis tests showed false positive results for the comparison of six samples of the isolated L1 loop, and true positive results for the comparison of conformational samples of the isolated L1 loop and L1 in the three-loop assembly. The bootstrap confidence interval tests revealed true negative results for comparisons of six samples of the isolated L1 loop, and false negative results for the conformational comparisons between the isolated L1 loop and L1 in the three-loop assembly. Different conformational sample sizes were further explored, either by combining the samples of the isolated L1 loop to increase the sample size, or by clustering the samples using a self-organizing map (SOM) to narrow the conformational distribution of the samples being compared. However, neither change improved the results of the bootstrap null hypothesis or confidence interval tests. These results show that more work is required before EDMA methods can be used reliably for the comparison of samples obtained by Monte Carlo simulations.
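To make the EDMA quantities concrete, the sketch below computes the mean form matrix (average inter-Cα distance matrix) for two conformational samples, their element-wise ratios (the form difference matrix), an EDMA-I style max/min ratio statistic, and the distance pair contributing most to the difference. Random coordinates stand in for the Monte Carlo samples, and the bootstrap testing itself is omitted.

```python
# Sketch of the core EDMA quantities for two conformational samples: the mean
# form matrix of each sample and their element-wise ratio (the form difference
# matrix), with an EDMA-I style max/min ratio statistic.
import numpy as np

rng = np.random.default_rng(3)
n_conf, n_atoms = 100, 12                           # conformations x Calpha atoms

def mean_form_matrix(sample):
    # sample: (n_conf, n_atoms, 3) -> mean pairwise Euclidean distance matrix
    diffs = sample[:, :, None, :] - sample[:, None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean(axis=0)

sample_a = rng.normal(size=(n_conf, n_atoms, 3))            # e.g. isolated L1 loop
sample_b = rng.normal(size=(n_conf, n_atoms, 3)) * 1.05     # e.g. L1 in the assembly

fm_a, fm_b = mean_form_matrix(sample_a), mean_form_matrix(sample_b)
iu = np.triu_indices(n_atoms, k=1)                  # unique inter-atom pairs
fdm = fm_a[iu] / fm_b[iu]                           # form difference matrix entries

T = fdm.max() / fdm.min()                           # EDMA-I style test statistic
dev = np.abs(np.log(fdm))                           # deviation of each ratio from 1
i, j = iu[0][np.argmax(dev)], iu[1][np.argmax(dev)]
print(f"T = {T:.3f}; pair contributing most to the difference: ({i}, {j})")
```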

Relevance: 80.00%

Abstract:

Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the surface of the planet. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within the data. Investigating images in order to identify minerals, rocks, vegetation and other materials is an application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data for analysis; its purpose is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada, and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of data compared with an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal. This thesis concludes that using PCA increases the learning accuracy, especially in classification: SVM classifies the Cuprite data with high precision, while the SOM challenges SVM on datasets with a high level of noise (such as Baffin Island).
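A compressed version of the supervised branch (PCA for dimensionality reduction followed by SVM classification from a small labelled subset) is sketched below on synthetic spectra; the band count, mineral classes, and pipeline settings are illustrative assumptions rather than the thesis' actual configuration.

```python
# Minimal sketch of the supervised branch: reduce pixel spectra with PCA, then
# classify minerals with an SVM trained on a small labelled subset.
# Synthetic spectra stand in for the Cuprite / Baffin Island data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_pixels, n_bands, n_minerals = 3000, 200, 5
centres = rng.normal(size=(n_minerals, n_bands))
labels = rng.integers(n_minerals, size=n_pixels)
spectra = centres[labels] + rng.normal(scale=0.8, size=(n_pixels, n_bands))  # noisy pixels

# Train on a small labelled fraction, as in a small-training-set setting.
X_train, X_test, y_train, y_test = train_test_split(spectra, labels, train_size=0.05, random_state=0)

clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```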

Relevance: 80.00%

Abstract:

The general subject of this thesis is the study of the covalent functionalization of carbon nanotubes (CNT) and its application in electronics. First, an introduction to the subject is presented. It discusses the properties of CNTs, the different kinds of covalent functionalization, and the main characterization techniques used throughout the thesis. Second, the repercussions of covalent functionalization on the properties of single-walled carbon nanotubes (SWNT) are studied. Two types of functionalization are examined: the grafting of phenyl groups and the grafting of dichloromethylene groups. A decrease in the optical absorption of the SWNTs in the visible-near-infrared range is observed, as well as a modification of their Raman spectrum. Moreover, for the phenyl derivatives, a large decrease in the conductance of the nanotubes is recorded. Third, the reversibility of these two functionalizations is examined. It is shown that annealing makes it possible to reverse the structural modifications and to recover, for the most part, the original properties of the SWNTs. The defunctionalization temperature varies with the type of grafted group but does not appear to be affected by the diameter of the nanotubes (diameters examined: phenyl derivatives, mean Ø = 0.81 nm, 0.93 nm and 1.3 nm; dichloromethylene derivatives, mean Ø = 0.81 nm and 0.93 nm). Fourth, the versatility and reversibility of covalent functionalization with phenyl units are exploited to develop a method for assembling SWNT networks. This method, based on establishing electrostatic forces between the grafted groups on the SWNTs and the substrate, is both efficient and selective with respect to the placement of the SWNTs on the substrate. Its application to the fabrication of electronic devices is carried out. Finally, covalent functionalization with phenyl groups is applied to double-walled carbon nanotubes (DWNT). A spectroscopic study shows that it occurs exclusively on the outer wall. Moreover, it is shown that the electrical signature of the DWNTs before and after functionalization with phenyl groups is characteristic of the inner-nanotube@outer-nanotube arrangement.

Relevance: 80.00%

Abstract:

Co-supervision: Dr. Gonzalo Lizarralde

Relevance: 80.00%

Abstract:

Adolescent idiopathic scoliosis (AIS) is a three-dimensional deformity of the spine. Its treatment includes observation, the use of braces to limit its progression, or surgery to correct the skeletal deformity and stop its progression. Surgical treatment remains controversial, both as to its indications and as to the surgery to be performed. Despite the existence of classifications to guide the treatment of AIS, intra- and inter-observer variability in the operative strategy has been described in the literature. This variability is further accentuated by the evolution of surgical techniques and of the available instrumentation. Advances in technology and its integration into the medical field have led to the use of computer-based artificial intelligence algorithms to assist the classification and the three-dimensional assessment of scoliosis. Some algorithms have proved effective in reducing variability in the classification of scoliosis and in guiding treatment. The overall objective of this thesis is to develop an application that uses artificial intelligence tools to integrate a new patient's data with the evidence available in the literature in order to guide the surgical treatment of AIS. To this end, a literature review of existing applications for the assessment of AIS was carried out to gather the elements that would allow an application to be effective and accepted in the clinical setting. This review made us realize that the presence of a "black box" in the applications developed so far is a limitation for clinical integration, where evidence-based justification is essential. In a first study, we developed a decision tree for the classification of idiopathic scoliosis based on the Lenke classification, which is the most commonly used today but has been criticized for its complexity and its inter- and intra-observer variability. This decision tree was shown to increase classification accuracy in proportion to the time spent classifying, independently of the level of knowledge of AIS. In a second study, a surgical-strategy algorithm based on rules extracted from the literature was developed to guide surgeons in selecting the approach and the fusion levels for AIS. When applied to a large database of 1556 AIS cases, this algorithm was able to propose an operative strategy similar to that of an expert surgeon in nearly 70% of cases. This study confirmed the possibility of extracting valid operative strategies with a decision tree using rules drawn from the literature. In a third study, the classification of 1776 patients with AIS using a Kohonen map, a type of neural network, showed that there are typical scolioses (single-curve or double thoracic scoliosis) for which the variability in surgical treatment differs little from the recommendations of the Lenke classification, whereas scolioses with multiple curves, or tangential to two typical curve groups, were those with the greatest variation in operative strategy. Finally, a software platform was developed integrating each of the studies above.
This software interface allows radiological data for a scoliotic patient to be entered, classifies the AIS using the classification decision tree, and suggests a surgical approach based on the decision tree of operative strategies. An analysis of the post-operative correction obtained shows a trend, although not statistically significant, towards better balance in patients operated on according to the strategy recommended by the software platform than in those who received a different treatment. The studies presented in this thesis highlight that artificial intelligence algorithms for the classification of AIS and the development of operative strategies can be integrated into a software platform and could assist surgeons in their preoperative planning.
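Purely as an illustration of how literature-derived rules can be made explicit rather than hidden in a "black box", the sketch below encodes a few hypothetical rules as a reviewable decision function. The measurements, thresholds and suggested strategies are invented placeholders with no clinical validity; they are not the rules developed in the thesis.

```python
# Illustrative sketch only: literature-derived rules written as an explicit,
# inspectable decision function over radiographic measurements. All names,
# thresholds and outputs are hypothetical placeholders, not clinical guidance.
from dataclasses import dataclass

@dataclass
class CurveMeasures:
    main_thoracic_cobb: float         # degrees (hypothetical inputs)
    thoracolumbar_cobb: float
    main_thoracic_flexibility: float  # fraction of correction on bending films

def suggest_strategy(m: CurveMeasures) -> str:
    # Each branch corresponds to one explicit, reviewable rule.
    if m.main_thoracic_cobb < 40:
        return "non-operative follow-up (hypothetical rule)"
    if m.thoracolumbar_cobb < 30 and m.main_thoracic_flexibility > 0.5:
        return "selective thoracic fusion (hypothetical rule)"
    return "fusion of both curves (hypothetical rule)"

print(suggest_strategy(CurveMeasures(55, 25, 0.6)))
```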

Relevance: 80.00%

Abstract:

Many examples of emergent behaviors can be observed in self-organizing physical and biological systems, which prove to be robust, stable, and adaptable. Such behaviors are often based on very simple mechanisms and rules, but creating them artificially is a challenging task that is not well supported by traditional software engineering. In this article, we propose a hybrid approach combining strategies from Genetic Programming and agent software engineering, and demonstrate that this approach effectively yields an emergent design for given problems.
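A minimal, mutation-only Genetic Programming loop conveys the flavour of the GP half of the proposed hybrid approach: candidate programs are random expression trees, and selection plus subtree mutation drives them towards a target behaviour. The toy symbolic-regression fitness below stands in for an emergent agent behaviour, and the whole sketch is illustrative rather than the article's method.

```python
# Minimal Genetic Programming loop in plain Python: evolve small expression
# trees towards a target behaviour (here, symbolic regression of x*x + x).
# Mutation-only for brevity; the agent-engineering side is not shown.
import random
import operator

random.seed(0)
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TERMINALS = ["x", 1.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Lower is better: squared error against the target behaviour x*x + x.
    xs = [i / 10 for i in range(-20, 21)]
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in xs)

def mutate(tree):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    survivors = population[:50]                       # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print(best, fitness(best))
```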