940 results for: Finite element methods, Radial basis function, Interpolation, Virtual leaf, Clough-Tocher method


Relevance:

100.00%

Publisher:

Abstract:

Résumé: This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence concerned with the development of techniques and algorithms that allow a machine to learn from data. In this thesis, machine learning algorithms are adapted to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to the geographical coordinates. Moreover, they are well suited to being implemented as decision-support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data that include geo-features. The most important and most popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the MultiLayer Perceptron (MLP), the best-known algorithm in artificial intelligence, General Regression Neural Networks (GRNN), Probabilistic Neural Networks (PNN), Self-Organized Maps (SOM), Gaussian Mixture Models (GMM), Radial Basis Function Networks (RBF) and Mixture Density Networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory Data Analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of Exploratory Spatial Data Analysis (ESDA) are treated both with the traditional geostatistical approach based on experimental variography and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool of the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect the presence of spatial patterns describable by a two-point statistic. The machine learning approach to ESDA is presented through the application of the k-nearest neighbors method, which is very simple and has excellent interpretation and visualization properties. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The General Regression Neural Network is proposed to solve this task efficiently. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, on which the GRNN significantly outperforms all other methods, particularly in emergency situations. The thesis consists of four chapters: theory, applications, software tools and guided examples. An important part of the work is a collection of software tools, Machine Learning Office. This software collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a wide spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, uncertainty mapping for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to provide a user-friendly and easy-to-use interface.

Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision-support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models relevant to the geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations that helps to reveal the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
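
A minimal sketch of the GRNN estimator (Nadaraya-Watson kernel regression) that underlies the automatic mapping results described above, assuming a single isotropic Gaussian kernel width tuned elsewhere (for example by cross-validation); the function name and the toy data are illustrative only:

```python
import numpy as np

def grnn_predict(xy_train, z_train, xy_query, sigma=1.0):
    """GRNN / Nadaraya-Watson prediction.

    xy_train : (n, d) training coordinates (and optional geo-features)
    z_train  : (n,) observed values
    xy_query : (m, d) prediction locations
    sigma    : isotropic Gaussian kernel width
    """
    # Squared Euclidean distances between every query and training point
    d2 = ((xy_query[:, None, :] - xy_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
    return (w @ z_train) / w.sum(axis=1)      # kernel-weighted average

# Toy example: interpolate three scattered observations onto two query points
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 3.0])
print(grnn_predict(xy, z, np.array([[0.5, 0.5], [0.0, 0.2]]), sigma=0.5))
```

In this simple form the kernel width is the only free parameter, which is one reason such a model lends itself to automatic mapping.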

Relevance:

100.00%

Publisher:

Abstract:

This work describes a simulation tool being developed at UPC to predict the microwave nonlinear behavior of planar superconducting structures with very few restrictions on the geometry of the planar layout. The software is intended to be applicable to most structures used in planar HTS circuits, including line, patch, and quasi-lumped microstrip resonators. The tool combines Method of Moments (MoM) algorithms for general electromagnetic simulation with Harmonic Balance algorithms that account for the nonlinearities in the HTS material. The Method of Moments code is based on discretization of the Electric Field Integral Equation with Rao-Wilton-Glisson (RWG) basis functions. The multilayer dyadic Green's function is used in a Sommerfeld integral formulation. The Harmonic Balance algorithm has been adapted to this application, where the nonlinearity is distributed and where compatibility with the MoM algorithm is required. Tests of the algorithm on TM010 disk resonators agree with closed-form equations for both the fundamental and third-order intermodulation currents. Simulations of hairpin resonators show good qualitative agreement with previously published results, but it is found that a finer mesh would be necessary to obtain correct quantitative results. Possible improvements are suggested.
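
Schematically, the Method of Moments step described above expands the surface current in the RWG basis and reduces the Electric Field Integral Equation to a linear system; the generic Galerkin form is given here for orientation only and is not quoted from the paper:

\[ \mathbf{J}(\mathbf{r}) \approx \sum_{n=1}^{N} I_n\,\mathbf{f}_n(\mathbf{r}), \qquad \sum_{n=1}^{N} Z_{mn} I_n = V_m, \qquad Z_{mn} = \bigl\langle \mathbf{f}_m,\, \mathcal{L}\{\mathbf{f}_n\} \bigr\rangle, \quad V_m = \bigl\langle \mathbf{f}_m,\, \mathbf{E}^{\mathrm{exc}} \bigr\rangle, \]

where the \(\mathbf{f}_n\) are the RWG basis functions, \(\mathcal{L}\) is the integral operator built from the multilayer dyadic Green's function, and \(\mathbf{E}^{\mathrm{exc}}\) is the excitation field. Roughly speaking, the Harmonic Balance loop then iterates between frequency-domain solutions of such systems and the distributed nonlinearity of the HTS material.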

Relevance:

100.00%

Publisher:

Abstract:

This master's thesis continues the development of KCI Konecranes' bridge crane calculation program. The most important development targets for the program were identified with a user questionnaire, and from these the most requested ones, and those best suited to the structural-mechanics scope of the thesis, were selected. The two topics chosen are working out the strength calculation of a box-girder profile with a two-part web, and designing the finite element model of the eight-wheel end carriage of a bridge crane. The thesis establishes the theory related to these development targets, but the actual programming is left outside the scope of the work. In a box profile with a two-part web, the upper part of the web under the trolley rail is made thicker so that the web can withstand the local stress caused by the trolley wheel load, the so-called crushing stress. Determining the crushing stress in the web plates is the most important task of the strength calculation of the two-part web. The most suitable methods for determining the membrane stress caused by crushing, and the associated stress concentrations in different constructions, were sought from the literature and from standards. The membrane stress can be determined reliably using either the 45-degree rule or the method given in the standard, and the magnitude of the stress concentrations is obtained by multiplying the membrane stress by stress concentration factors. The validity of the methods was verified by creating dozens of finite element models of the web with different dimensions and boundary conditions and by comparing the finite element results with hand calculations. The hand-calculated stresses were brought into close agreement with the finite element results. The buckling and fatigue calculation of the two-part web was studied at a preliminary level. Eight-wheel end carriages are used in large bridge cranes to reduce the wheel loads and the crushing stresses of the runway. Finite element models were designed for the eight-wheel end carriage for both constructions in use: the articulated and the rigid-frame model. Existing models were utilized in building the finite element models, which speeds up their addition to the program code and ensures compatibility with the other calculation modules. The boundary conditions of the vibration analysis of the finite element models were examined. Based on the study, no changes are needed in the boundary conditions of the vibration analysis, but the boundary conditions of the static analysis still require further investigation.
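
A generic version of the 45-degree dispersion estimate for the local wheel-load (crushing) stress is sketched below; the symbols are illustrative, and the thesis and the applicable standard define the effective length in more detail:

\[ \sigma_{\mathrm{loc}} \approx \frac{F}{\ell_{\mathrm{eff}}\, t_w}, \qquad \ell_{\mathrm{eff}} \approx c + 2\,(h_r + t_f), \]

where \(F\) is the trolley wheel load, \(t_w\) the web thickness, \(c\) the wheel contact length, \(h_r\) the rail height and \(t_f\) the top-flange thickness, i.e. the load is assumed to spread at 45 degrees through the rail and flange down to the top of the web. Peak stresses are then obtained by multiplying \(\sigma_{\mathrm{loc}}\) by stress concentration factors, as described above.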

Relevance:

100.00%

Publisher:

Abstract:

We propose a finite element approximation of a system of partial differential equations describing the coupling between the propagation of the electrical potential and large deformations of the cardiac tissue. The underlying mathematical model is based on the active strain assumption, in which a multiplicative decomposition of the deformation tensor into a passive part and an active part is assumed to hold; the active part carries the information on the propagation of the electrical potential and on the anisotropy of the cardiac tissue into the equations of either incompressible or compressible nonlinear elasticity, which govern the mechanical response of the biological material. In addition, when changing from an Eulerian to a Lagrangian configuration, the bidomain or monodomain equations modeling the evolution of the electrical propagation exhibit a nonlinear diffusion term. Piecewise quadratic finite elements are employed to approximate the displacement field, whereas the pressure, electrical potentials and ionic variables are approximated by piecewise linear elements. Various numerical tests performed with a parallel finite element code illustrate that the proposed model can capture some important features of the electromechanical coupling, and show that our numerical scheme is efficient and accurate.
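
In symbols, the active strain assumption amounts to a multiplicative split of the deformation gradient; a common parametrization in the active strain literature is given below, although the exact form of the active factor used in the paper may differ:

\[ \mathbf{F} = \mathbf{F}_E\,\mathbf{F}_A, \qquad \mathbf{F}_A = \mathbf{I} + \gamma_f\,\mathbf{f}_0 \otimes \mathbf{f}_0, \]

where \(\mathbf{F}_E\) is the passive (elastic) part, \(\mathbf{F}_A\) the active part, \(\mathbf{f}_0\) the reference fiber direction, and \(\gamma_f\) an activation function driven by the electrophysiological variables.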

Relevance:

100.00%

Publisher:

Abstract:

Neural networks are a set of mathematical methods and computer programs designed to simulate the information processing and knowledge acquisition of the human brain. In recent years their application in chemistry has increased significantly, owing to their particular suitability for modeling complex systems. The basic principles of two types of neural networks, multi-layer perceptrons and radial basis function networks, are introduced, as well as a pruning approach to architecture optimization. Two analytical applications based on near-infrared spectroscopy are presented: the first for the determination of nitrogen content in wheat leaves using multi-layer perceptron networks, and the second for the determination of Brix in sugar cane juices using radial basis function networks.
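
A minimal sketch of a Gaussian radial basis function network of the kind described, with centers picked from the training samples and output weights fitted by linear least squares; center selection, width tuning and the pruning step discussed above are omitted, and all names and data are illustrative:

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian basis activations for each (sample, center) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy calibration: spectra-like inputs X (samples x wavelengths), property y
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=50)

centers = X[:8]                                # crude choice: first 8 samples
Phi = rbf_design(X, centers, width=2.0)        # hidden-layer design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear output weights
y_hat = Phi @ w                                # fitted property values
print(np.corrcoef(y, y_hat)[0, 1])
```

A pruning approach of the kind mentioned above would typically remove hidden units that contribute little to the fit before the final calibration.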

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work is to study the analytical calculation procedures found in the literature for computing the eddy-current losses in surface-mounted permanent magnets in PMSM applications. The most promising algorithms are implemented in MATLAB using the dimensional data of the LUT prototype machine. In addition, finite element analysis, carried out with the Flux 2D software from Cedrat Ltd, is applied to calculate the eddy-current losses in the permanent magnets. The results obtained from the analytical methods are compared with the numerical results.
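
For orientation only (this expression is not taken from the thesis), the classical resistance-limited thin-slab estimate of eddy-current loss density under a sinusoidal flux density of amplitude \(B_m\) and angular frequency \(\omega = 2\pi f\) is often used as a first approximation, which the more detailed analytical procedures refine:

\[ p_{\mathrm{eddy}} \approx \frac{\sigma\,\omega^{2} B_m^{2} d^{2}}{24} = \frac{\pi^{2}\sigma f^{2} B_m^{2} d^{2}}{6}, \]

where \(\sigma\) is the magnet conductivity and \(d\) the relevant magnet or segment thickness transverse to the flux.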

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a methodology to determine the parameters used in the simulation of delamination in composite materials using decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is proposed. The procedure ensures that the energy dissipated by the fracture process is computed correctly. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the finer-meshed models normally used in the simulation of fracture processes.
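
The closed-form stiffness expression itself is not reproduced in the abstract; a widely cited choice of this type in the cohesive zone literature relates the penalty stiffness to the through-thickness modulus of the adjacent sublaminate:

\[ K = \frac{\alpha\,E_3}{t}, \]

where \(E_3\) is the transverse (through-thickness) Young's modulus, \(t\) the sublaminate thickness, and \(\alpha \gg 1\) a parameter (values of the order of 50 are common) chosen so that the interface adds negligible compliance to the laminate.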

Relevance:

100.00%

Publisher:

Abstract:

A high-speed, high-voltage solid-rotor induction machine provides beneficial features for natural gas compressor technology. The mechanical robustness of the machine enables its use in an integrated motor-compressor. The technology uses a centrifugal compressor, which is mounted on the same shaft as the high-speed electrical machine driving it. No gearbox is needed, as the speed is determined by the frequency converter. Cooling is provided by the process gas, which flows through the motor and is capable of carrying the heat away from it. The technology has been used in compressors in the natural gas supply chain in central Europe. New areas of application include natural gas compressors working at the wellheads of subsea gas reservoirs. A key challenge for the design of such a motor is the resistance of the stator insulation to the raw natural gas from the well. The gas contains water and heavy hydrocarbon compounds and is far harsher than the sales gas in the natural gas supply network. The objective of this doctoral thesis is to discuss the resistance of the insulation to the raw natural gas and the phenomena degrading the insulation. The presence of partial discharges is analyzed in this doctoral dissertation. The breakdown voltage of the gas is measured as a function of pressure and gap distance. The partial discharge activity is measured on small samples representing the windings of the machine. The electric field behavior is also modeled by finite element methods. Based on the measurements, it is concluded that the discharges are expected to disappear at gas pressures above 4-5 bar. The disappearance of the discharges is caused by the breakdown strength of the gas, which increases as the pressure increases. Based on the finite element analysis, the physical length of a discharge seen in the PD measurements at atmospheric pressure was estimated to be 40-120 µm. The chemical aging of the insulation when exposed to raw natural gas is discussed on the basis of a vast set of experimental tests with a gas mixture representing the real gas mixture at the wellhead. The mixture was created by mixing dry hydrocarbon gas, heavy hydrocarbon compounds, monoethylene glycol, and water. The mixture was made more aggressive by increasing the amount of liquid substances. Furthermore, the temperature and pressure were increased, which resulted in accelerated test conditions and reduced the time required to detect severe degradation. The test program included a comparison of materials, an analysis of the effects of the different compounds in the gas mixture, namely water and heavy hydrocarbons, on the aging, an analysis of the effects of temperature and exposure duration, and an analysis of the effect of sudden pressure changes on the degradation of the insulating materials. It was found in the tests that an insulation consisting of mica, glass, and epoxy resin can tolerate the raw natural gas, but it experiences some degradation. The key material in the composite insulation is the resin, which largely defines the performance of the insulation system. The degradation of the insulation is mostly determined by the amount of gas mixture diffused into it. The diffusion was seen to follow Fick's second law, but the coefficients were not accurately determined. The diffusion was not sensitive to temperature, but it was dependent upon the thermodynamic state of the gas mixture, in other words, the amounts of liquid components in the gas. The weight increase observed was mostly related to the heavy hydrocarbon compounds, which act as plasticizers in the epoxy resin. The diffusion of these compounds is determined by the crosslink density of the resin. Water causes slight changes in the chemical structure, but these changes do not contribute significantly to the aging phenomena. Sudden changes in pressure can lead to severe damage in the insulation, because the motion of the diffused gas is able to create internal cracks. Therefore, the diffusion itself only reduces the mechanical strength of the insulation, but the ultimate breakdown can potentially be caused by a sudden drop in the pressure of the process gas.
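
For completeness, the one-dimensional form of Fick's second law referred to above is

\[ \frac{\partial C}{\partial t} = D\,\frac{\partial^2 C}{\partial x^2}, \]

where \(C\) is the local concentration of the absorbed gas-mixture components in the insulation and \(D\) the effective diffusion coefficient (which, as noted, was not accurately determined in the tests).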

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this work was to study the applicability of methods presented in the literature for preventing the precipitation of calcium carbonate in the PCC process, in the stock feed pipe of a paper machine line. The weaknesses and workability of the methods were assessed on the basis of the source material, and conclusions were drawn about their functioning in the process and their applicability to the formation of calcium carbonate scale. The literature survey shows that scale can be prevented chemically and electrochemically, as well as by modifying the material surface. In addition, its formation can be prevented with ultrasonic waves and by inducing a magnetic or electric field at the potential precipitation site. Guided by the literature, the experimental part focused on studying the precipitation of calcium carbonate with commercial methods. The experiments used different coating materials, ultrasonic waves, a magnetic field and solution electrochemistry. In addition, the development of scale on the pipe surface was studied as a function of time. Based on the experimental results, none of the methods presented in the literature is directly suitable as such for preventing scale formation in the studied process. Optimizing the test parameters by adapting the known parameters to the process conditions improved the solubility of calcium carbonate. As a result of this study, a method description was found by which the formation of scale on pipe surfaces can be prevented.
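
For background (not taken from the thesis), the tendency of calcium carbonate to precipitate is commonly expressed with a saturation index comparing the ion activity product to the solubility product:

\[ SI = \log_{10}\frac{\{\mathrm{Ca}^{2+}\}\,\{\mathrm{CO}_3^{2-}\}}{K_{sp}}, \]

where braces denote ion activities; \(SI > 0\) indicates a supersaturated, scaling-prone solution and \(SI < 0\) an undersaturated one.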

Relevance:

100.00%

Publisher:

Abstract:

Numerical simulation of machining processes can be traced back to the early seventies, when finite element models for continuous chip formation were proposed. The advent of fast computers and the development of new techniques to model large plastic deformations have favoured the growth of machining simulation. Relevant aspects of the finite element simulation of machining processes are discussed in this paper, such as solution methods, material models, thermo-mechanical coupling, friction models, chip separation and breakage strategies, and meshing/re-meshing strategies.
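
As a concrete illustration of the material models commonly used in machining simulation, one widely used flow stress law is the Johnson-Cook model; it is given here as a representative example rather than as the specific model adopted in the paper:

\[ \sigma = \left(A + B\,\varepsilon^{\,n}\right)\left(1 + C \ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)\left(1 - \left(\frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}\right)^{m}\right), \]

where \(A\), \(B\), \(n\), \(C\) and \(m\) are material constants and \(\dot{\varepsilon}_0\) is a reference strain rate; the thermal term couples the flow stress to the temperature field, which is one reason thermo-mechanical coupling matters in these simulations.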

Relevance:

100.00%

Publisher:

Abstract:

This work presents a formulation of contact with friction between elastic bodies. This is a nonlinear problem due to the unilateral constraints (non-penetration of the bodies) and to friction. The solution of this problem can be found using optimization concepts, modelling the problem as a constrained minimization problem. The Finite Element Method is used to construct the approximation spaces. The minimization problem has the total potential energy of the elastic bodies as the objective function; the non-penetration conditions are represented by inequality constraints, and equality constraints are used to deal with friction. Because there are two friction conditions (stick and slip), specific equality constraints are present or absent according to the current condition. Since the Coulomb friction condition depends on the normal and tangential contact stresses related to the constraints of the problem, a condition-dependent constrained minimization problem is devised. An Augmented Lagrangian Method for constrained minimization is employed to solve this problem. When applied to a contact problem, this method yields Lagrange multipliers that have the physical meaning of contact forces, which makes it possible to check the friction condition at each iteration. These concepts lead to a computational scheme that gives good numerical results.
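
To make the role of the Lagrange multipliers concrete, a common Uzawa-type augmented Lagrangian update for the normal contact constraint is sketched below; sign conventions vary between references and this is not necessarily the exact scheme of the paper. Here \(g_N(\mathbf{u}) \ge 0\) is the normal gap, \(r > 0\) the augmentation parameter, and \(\lambda_N\) the multiplier interpreted as the contact pressure:

\[ \lambda_N^{\,k+1} = \max\bigl(0,\; \lambda_N^{\,k} - r\, g_N(\mathbf{u}^{k})\bigr). \]

The Coulomb condition can then be checked at each iteration by comparing the tangential multiplier magnitude \(\lVert \boldsymbol{\lambda}_T \rVert\) with \(\mu\,\lambda_N\): stick if it is smaller, slip otherwise.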

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, a strength analysis is carried out for a boat trailer. The studied trailer structure is manufactured from Ruukki's structural steel S420. The main focus of the work is on the trailer's frame. The investigation consists of two main stages: strain gage measurements and finite element analysis. The strain gage measurements were performed on the current boat trailer in February 2015. The static strength and fatigue life of the trailer are analyzed with finite element analysis for two different materials: the current trailer material, Ruukki's structural steel S420, and a new candidate material, the high-strength precision tube Form 800. The main goal of using high-strength steel in the trailer is weight reduction. The applied fatigue analysis methods are the effective notch stress and structural hot-spot stress approaches. The aim of these strength analyses is to determine whether it is reasonable to change the trailer material to high-strength steel. The static strengths of the S420 and Form 800 trailers are sufficient. The fatigue strength of the Form 800 trailer is, however, considerably lower than that of the S420 trailer. For future research, the effect of hot-dip galvanization on the high-strength steel has to be investigated. The effect of hot-dip galvanization on the trailer will be investigated by laboratory tests that are not included in this thesis.
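
For reference, a typical surface extrapolation rule for the structural hot-spot stress and the usual effective notch stress convention are given below; these are standard IIW-type choices, and the abstract does not state which exact variant the thesis applies:

\[ \sigma_{\mathrm{hs}} = 1.67\,\sigma_{0.4t} - 0.67\,\sigma_{1.0t}, \]

where \(\sigma_{0.4t}\) and \(\sigma_{1.0t}\) are surface stresses read at distances of \(0.4t\) and \(1.0t\) from the weld toe (\(t\) being the plate thickness), while the effective notch stress is commonly evaluated with a fictitious notch radius of 1 mm at the weld toe or root.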

Relevance:

100.00%

Publisher:

Abstract:

The goal of this thesis is to define and validate a software engineering approach for the development of a distributed system for the modeling of composite materials, based on an analysis of various existing software development methods. We review the main features of: (1) software engineering methodologies; (2) the characteristics of distributed systems and their effect on software development; and (3) composite materials modeling activities and the requirements they impose on software development. Using design science as the research methodology, a distributed system for creating models of composite materials is built and evaluated. The empirical experiments we conducted showed good agreement between the modeled and real processes. Throughout the study, particular attention was paid to the complexity and importance of the distributed system and to a deep understanding of modern software engineering methods and tools.

Relevance:

100.00%

Publisher:

Abstract:

We study the application of matrix decomposition algorithms, such as Non-negative Matrix Factorization (NMF), to frequency-domain representations of musical audio signals. These algorithms, driven by a reconstruction error function, learn a set of basis functions and a corresponding set of coefficients that approximate the input signal. We compare the use of three reconstruction error functions when NMF is applied to monophonic and harmonized scales: least squares, the Kullback-Leibler divergence, and a recently introduced phase-dependent divergence measure. New methods for interpreting the resulting decompositions are presented and compared with previously used methods that require knowledge of the acoustic domain. Finally, we analyze the generalization ability of the learned basis functions with respect to three musical parameters: amplitude, duration and instrument type. To do so, we introduce two algorithms for labelling the basis functions that outperform the previous approach in most of our tests, the instrument task on monophonic audio being the only notable exception.
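
A minimal sketch of the least-squares NMF variant mentioned above, using the classic multiplicative updates; the phase-dependent divergence studied in the thesis is not reproduced here, and function and variable names are illustrative:

```python
import numpy as np

def nmf_ls(V, k, n_iter=200, eps=1e-9, seed=0):
    """Least-squares NMF via multiplicative updates.

    V : nonnegative matrix (e.g., magnitude spectrogram, freq x time)
    k : number of basis functions (spectral templates)
    Returns W (freq x k) and H (k x time) with V approximately W @ H.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis spectra
    return W, H

# Toy usage: factor a random nonnegative "spectrogram"
V = np.abs(np.random.default_rng(1).normal(size=(64, 100)))
W, H = nmf_ls(V, k=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

Each column of W can be read as a spectral template and each row of H as its time-varying activation, which is the kind of decomposition the interpretation and labelling methods discussed above operate on.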

Relevance:

100.00%

Publisher:

Abstract:

The philosophical implications of the 1979 Prospect Theory, in particular those concerning the introduction of a value function over outcomes and a weighting coefficient over probabilities, have to date never been explored. The aim of this work is to construct a philosophical theory of the will from the results of Prospect Theory. To understand how this theory could be developed, one must study Expected Utility Theory, of which it is the major critical outcome, that is, the axiomatizations of decision by Ramsey (1926), von Neumann and Morgenstern (1947), and finally Savage (1954), which constitute the foundations of classical decision theory. It was, among other things, the criticism by economics and cognitive psychology of the independence principle and of the ordering and transitivity axioms that allowed the subjective representational elements to emerge from which Prospect Theory could be built. These criticisms were put forward by Allais (1953), Edwards (1954), Ellsberg (1961), and finally Slovic and Lichtenstein (1968); studying these articles makes it possible to understand how the transition from Expected Utility Theory to Prospect Theory took place. Following these analyses and that of Prospect Theory itself, the notion of a Decisional Reference System is introduced, which is the natural generalization of the concepts of value function and weighting coefficient stemming from Prospect Theory. This system, whose operation is sometimes heuristic, serves to model decision-making within the element of representation; it is organized around three phases: aiming, editing and evaluation. From this structure, a new typology of decisions is proposed, together with a novel explanation of the phenomena of akrasia and procrastination based on the concepts of risk aversion and overvaluation of the present, both drawn from Prospect Theory.
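
To make the value function and the probability weighting coefficient concrete, a commonly used parametric form (taken from Tversky and Kahneman's later cumulative version and given here only as an illustration) is:

\[ v(x) = \begin{cases} x^{\alpha}, & x \ge 0,\\ -\lambda\,(-x)^{\beta}, & x < 0, \end{cases} \qquad w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}, \]

where \(\lambda > 1\) captures loss aversion (losses loom larger than gains) and \(\gamma < 1\) yields overweighting of small probabilities and underweighting of moderate and large ones.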