973 results for multi-layer functionals
Abstract:
The purpose of this paper is to propose a Neural-Q_learning approach designed for the online learning of simple and reactive robot behaviors. In this approach, the Q_function is generalized by a multi-layer neural network, allowing the use of continuous states and actions. The algorithm uses a database of the most recent learning samples to accelerate and guarantee convergence. Each Neural-Q_learning function represents an independent, reactive and adaptive behavior that maps sensory states to robot control actions. A group of these behaviors constitutes a reactive control scheme designed to fulfill simple missions. The paper centers on the description of the Neural-Q_learning-based behaviors, showing their performance with an underwater robot in a target-following task. Real experiments demonstrate the convergence and stability of the learning system, pointing out its suitability for online robot learning. Advantages and limitations are discussed.
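A minimal sketch of this kind of learner, for illustration only: the network size, learning rate, candidate-action maximisation and replay-buffer details below are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralQ:
    """Tiny multi-layer Q-function over continuous (state, action) pairs."""

    def __init__(self, state_dim, action_dim, hidden=32, lr=1e-2):
        in_dim = state_dim + action_dim
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def q(self, s, a):
        x = np.concatenate([s, a])
        h = np.tanh(x @ self.W1 + self.b1)       # hidden layer
        return float(h @ self.W2 + self.b2), (x, h)

    def update(self, s, a, target):
        qsa, (x, h) = self.q(s, a)
        err = qsa - target                        # gradient of squared loss
        dh = (self.W2[:, 0] * err) * (1.0 - h ** 2)
        self.W2 -= self.lr * np.outer(h, err)
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(x, dh)
        self.b1 -= self.lr * dh

def replay_step(qf, buffer, gamma=0.95, n_candidates=16, batch=8):
    """One pass over a batch drawn from the buffer of recent samples."""
    for i in rng.choice(len(buffer), size=min(batch, len(buffer)), replace=False):
        s, a, r, s_next = buffer[i]
        # Continuous actions: approximate max_a' Q(s', a') over sampled candidates.
        candidates = rng.uniform(-1.0, 1.0, (n_candidates, len(a)))
        q_next = max(qf.q(s_next, c)[0] for c in candidates)
        qf.update(s, a, r + gamma * q_next)

# Toy usage on random transitions (reward and dynamics are placeholders).
qf = NeuralQ(state_dim=3, action_dim=2)
buffer = [(rng.normal(size=3), rng.uniform(-1, 1, 2),
           float(rng.normal()), rng.normal(size=3)) for _ in range(100)]
for _ in range(50):
    replay_step(qf, buffer)
```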
Abstract:
I use a multi-layer feedforward perceptron, with backpropagation learning implemented via stochastic gradient descent, to extrapolate the volatility smile of Euribor derivatives over low strikes by training the network on parametric prices.
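A hedged sketch of this setup using scikit-learn: the quadratic parametric smile, the 2% forward and the network size are placeholders, and the target here is implied volatility directly rather than the parametric prices used in the actual work.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical parametric smile: implied vol as a quadratic in log-moneyness.
def parametric_vol(strike, forward=0.02):
    m = np.log(strike / forward)
    return 0.20 + 0.05 * m + 0.10 * m ** 2

strikes = np.linspace(0.015, 0.05, 200).reshape(-1, 1)  # quoted-strike region
vols = parametric_vol(strikes.ravel())

# Multi-layer feedforward perceptron, backpropagation via stochastic gradient descent.
net = MLPRegressor(hidden_layer_sizes=(16, 16), solver="sgd",
                   learning_rate_init=0.01, max_iter=5000, random_state=0)
net.fit(strikes, vols)

# Query the network below the training region to extrapolate over low strikes.
low_strikes = np.linspace(0.005, 0.015, 20).reshape(-1, 1)
print(net.predict(low_strikes))
```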
Abstract:
Many complex systems may be described not by one but by a number of complex networks mapped onto each other in a multi-layer structure. Because of the interactions and dependencies between these layers, the state of a single layer does not necessarily reflect well the state of the entire system. In this paper we study the robustness of five examples of two-layer complex systems: three real-life data sets in the fields of communication (the Internet), transportation (the European railway system) and biology (the human brain), and two models based on random graphs. In order to cover the whole range of features specific to these systems, we focus on two extreme policies of the system's response to failures: no rerouting and full rerouting. Our main finding is that multi-layer systems are much more vulnerable to errors and intentional attacks than they appear to be from a single-layer perspective.
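The no-rerouting extreme can be illustrated with a toy mutual-giant-component computation on two coupled random-graph layers. Everything below (sizes, densities, the 25% random-error removal) is an assumption for illustration, not the paper's data.

```python
import random
import networkx as nx

random.seed(0)

# Two random-graph layers over the same node set; sizes and densities are
# placeholders, not the paper's data sets.
n = 300
layer_a = nx.erdos_renyi_graph(n, 0.02, seed=1)
layer_b = nx.erdos_renyi_graph(n, 0.02, seed=2)

def mutual_giant(ga, gb, alive):
    """Iteratively keep only nodes in the giant component of both layers."""
    while True:
        sa = max(nx.connected_components(ga.subgraph(alive)), key=len, default=set())
        sb = max(nx.connected_components(gb.subgraph(alive)), key=len, default=set())
        new_alive = set(sa) & set(sb)
        if new_alive == alive:
            return alive
        alive = new_alive

# Random errors: remove 25% of nodes (degree-ordered removal would model attacks).
survivors = set(random.sample(range(n), int(0.75 * n)))

single = max(nx.connected_components(layer_a.subgraph(survivors)), key=len)
coupled = mutual_giant(layer_a, layer_b, survivors)
print(f"giant component, single-layer view: {len(single) / n:.2f}")
print(f"giant component, two-layer view:    {len(coupled) / n:.2f}")
```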
Abstract:
In this paper, aggregate migration patterns during fluid concrete castings are studied through experiments, a dimensionless approach and numerical modeling. The experimental results obtained on two beams show that gravity-induced migration primarily affects the coarsest aggregates, resulting in a decrease of the coarse-aggregate volume fraction with horizontal distance from the pouring point and in a puzzling vertical multi-layer structure. The origin of this multi-layer structure is discussed and analyzed with the help of numerical simulations of free-surface flow. Our results suggest that it originates in the non-Newtonian nature of fresh concrete and that increasing the casting rate should decrease the magnitude of gravity-induced particle migration.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it mainly concerns the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to serve as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling, prediction and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, and they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from theoretical description of the concepts to software implementation. The main algorithms and models considered are: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and a very important part of any data analysis.
In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations, helping to detect the presence of spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and worked examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly and easy-to-use interface.
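Since a GRNN is essentially a Nadaraya-Watson kernel average with a single smoothing width, a compact sketch of the automatic-mapping idea might look as follows; the coordinates, values and width are synthetic placeholders, not the SIC 2004 data.

```python
import numpy as np

def grnn_predict(train_xy, train_z, query_xy, sigma):
    """GRNN prediction: a Nadaraya-Watson kernel average over training
    points, with a single Gaussian smoothing width sigma."""
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ train_z) / w.sum(axis=1)

# Synthetic spatial data standing in for, e.g., measured pollution values.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, (100, 2))                 # scattered 2-D coordinates
z = np.sin(xy[:, 0]) + 0.1 * rng.normal(size=100)

grid = np.stack(np.meshgrid(np.linspace(0, 10, 5),
                            np.linspace(0, 10, 5)), axis=-1).reshape(-1, 2)
print(grnn_predict(xy, z, grid, sigma=0.8))
```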
Abstract:
The markets of biomass for energy are developing rapidly and becoming more international. A remarkable increase in the use of biomass for energy requires parallel and positive development in several areas, and there will be plenty of challenges to overcome. The main objective of the study was to clarify alternative future scenarios for the international biomass market up to the year 2020 and, based on the scenario process, to identify the underlying steps needed towards a vital, working and sustainable biomass market for energy purposes. Two scenario processes were conducted for this study: the first with a group of Finnish experts and the second with an international group. A heuristic, semi-structured approach, including the use of preliminary questionnaires as well as manual and computerised group support systems (GSS), was applied in the scenario processes. The scenario processes reinforced the picture of the future of the international biomass and bioenergy markets as a complex, multi-layer subject. The scenarios estimated that the biomass market will develop, grow rapidly and diversify in the future. The results of the scenario process also opened up new discussion and provided new information and collective expert views for policy makers. An overall view resulting from this scenario analysis is the enormous opportunity related to the utilisation of biomass as a resource for global energy use in the coming decades. The scenario analysis highlights the key issues in the field: global economic growth, including the growing need for energy; environmental forces in global evolution; the potential of technological development to solve global problems; the capability of the international community to find solutions for global issues; and the complex interdependencies of all these driving forces. The results of the scenario processes provide a starting point for further research analysing the technological and commercial aspects related to the scenarios and foreseeing the scales and directions of biomass streams.
Abstract:
Design aspects of the Transversally Laminated Anisotropic (TLA) Synchronous Reluctance Motor (SynRM) are studied, and the machine's performance is compared with that of the Induction Motor (IM). The SynRM rotor structure is designed and manufactured for the stator of a 30 kW, four-pole, three-phase squirrel-cage induction motor. Both the IM and the SynRM were supplied by a sensorless Direct Torque Controlled (DTC) variable-speed drive. Attention is also paid to estimating the power range in which the SynRM may compete successfully with a same-size induction motor. A technical loss-reduction comparison between the IM and the SynRM in variable-speed drives is carried out. The Finite Element Method (FEM) is used to analyse the number, location and width of the flux barriers used in a multiple-segment rotor, seeking a high saliency ratio and a high motor torque. Different FEM calculations are compared to analyse SynRM performance, and the possibility of taking iron losses into account with FEM is studied. Comparison between calculated and measured values shows that the design methods are reliable. A new application of the IEEE 112 measurement method is developed and used, especially for the determination of stray-load losses in laboratory measurements. The study shows that, with some special measures, the efficiency of the TLA SynRM is equivalent to that of a high-efficiency IM. The power factor of the SynRM at rated load is smaller than that of the IM; however, this difference decreases at lower partial loads, where the SynRM probably achieves a better power factor than the IM. The large rotor inductance ratio of the SynRM allows a good estimate of the rotor position, which is very advantageous for designing a rotor-position-sensorless motor drive. Using the FEM-designed multi-layer transversally laminated rotor with damper windings, it is possible to design a directly network-driven motor without degrading the motor efficiency or power factor compared with the performance of the IM.
Abstract:
Finland's airspace is monitored in real time, mainly with air surveillance radars. In addition to aircraft, the airspace contains many other objects that the radar detects. The radar forwards these detections to the air surveillance system, which processes the data and sends it on to the display system. In the display system, the data is presented as synthetic symbols, followed objects called tracks. Within the limits of this information, and based on their professional skill, people make decisions. The purpose of this work is to study radar detections at the track initialisation point so that a typical structure can be defined for what constitutes a correct track and what a false or poor one. In addition, it should be predicted which tracks are not caused by aircraft. The results can ease the work of interpreting detections: not every flock of birds is a candidate for a track. Classification of detections can be done either with neural computation or with a decision tree. Neural computation is performed with neural networks, which consist of neurons. Decision tree classifiers are learning data structures, like neural networks; the most common decision tree is the binary tree. The goal of this work is to train a decision tree classifier on detections so that it can separate false detections from correct ones. The possibilities of neural computation are treated only theoretically in this work. As a result of the work, it can be stated that decision tree classifiers are very capable of separating correct detections from false ones. Although the results were encouraging, more research is needed to determine more reliably the factors that best perform the classification.
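A minimal sketch of such a binary decision tree classifier; the detection features and labels below are synthetic placeholders, since the real feature set is not specified in the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic detection features at the track initialisation point (e.g. echo
# strength, speed, local detection density) -- placeholders only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = aircraft, 0 = clutter

# A binary decision tree, the most common form mentioned above.
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```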
Abstract:
The aim of this work is to determine whether a neural network can be used to model and predict the effect of fuel on the emissions of a modern car. This would reduce the need for time-consuming and expensive test drives. The work was carried out in a joint project between Lappeenranta University of Technology and Fortum Oy. Three different models were built. First, a car-specific model was made to predict the behaviour of individual cars. Second, a model with the car model as one of the inputs was tried. Third, an attempt was made to circumvent certain problems in the data by using "fuzzified" fuel compositions. An MLP neural network trained with the backpropagation algorithm was used. It was found that, with the available data and the models used, the effect of fuel on emissions could not be modelled with sufficient accuracy. Problems with the data included large measurement variances, the small amount of data, and the unsuitability of the data for neural network modelling.
Abstract:
The aim of this work was to develop a fractionation process for a multi-layer headbox for printing paper grades. The purpose was to determine, through trial runs, the suitability of screen and hydrocyclone fractionation for paper layering. Fractionation methods and their combinations were compared with each other, and a process concept was developed from different process connections and solutions by means of simulation. The literature part reviews, on the basis of literature references, pulp fractionation, paper layering and the characterisation of a fractionation-layered web. To reach the objectives, a pilot trial run with fine paper pulp was first carried out as a preliminary experiment, studying mainly the fractionation result. A second trial run was carried out with LWC paper, where the pilot machine concept was more optimal for layering and the fractionation result could be linked to paper quality properties. In the LWC trial, laboratory-scale multi-layer handsheet mould experiments were performed with the fractionated pulp; their results were used to confirm the trial run results and the potential of fractionation. To develop the process concept, seven simulation models of different connections were built and compared with each other on the basis of their ability to fractionate filler and fibre fractions. The trial runs were used to determine the process variables optimal for fractionation. Fractionation layering improved the z-directional strength of the paper, and hydrocyclone fractionation in particular improved surface smoothness. Fractionation layering also improved the filler distribution of the paper. The experiments showed that each paper grade requires a different fractionation arrangement, depending on the pulp and filler used.
Abstract:
In this article we present a project [1] developed to demonstrate the capability of Multi-Layer Perceptrons (MLP) to approximate non-linear functions [2]. The simulation has been implemented in Java so that it can be used on any computer over the Internet [3], with simple operation and a pleasant interface. The power of the simulation lies in the user's ability to watch the evolution of the approximations, the contribution of each neuron, the control of the different parameters, etc. In addition, online help has been implemented to guide the user during the simulation.
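A console analogue of the applet's core demonstration, sketched in Python rather than the project's Java; the target function and network size are arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# An MLP approximating a non-linear function (placeholder target).
x = np.linspace(-3, 3, 300).reshape(-1, 1)
y = np.sin(x).ravel() + 0.3 * x.ravel() ** 2

mlp = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   max_iter=5000, random_state=0)
mlp.fit(x, y)
print("max abs error:", np.abs(mlp.predict(x) - y).max())
```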
Abstract:
During the last decade, high-speed motor technology has been applied increasingly often in the medium and large power range. In particular, applications involving gas movement and compression seem to be the most important area in which high-speed machines are used. By manufacturing the induction motor rotor core from one single piece of steel, it is possible to achieve an extremely rigid rotor construction for the high-speed motor. In a mechanical sense, the solid rotor may be the best possible rotor construction. Unfortunately, the electromagnetic properties of a solid rotor are poorer than those of the traditional laminated rotor of an induction motor. This thesis analyses methods for improving the electromagnetic properties of a solid-rotor induction machine. The slip of the solid rotor is reduced notably if the rotor is axially slitted. The slitting patterns of the solid rotor are examined, and it is shown how the slitting parameters affect the produced torque. Methods for decreasing the harmonic eddy currents on the surface of the rotor are also examined; the motivation is to improve the efficiency of the motor so that it reaches the efficiency standard of a laminated-rotor induction motor. Finite element analysis is used to carry out these research tasks. An analytical calculation of solid rotors based on the multi-layer transfer-matrix method is developed, especially for the calculation of axially slitted solid rotors equipped with well-conducting end rings. The calculation results are verified by finite element analysis and laboratory measurements. Prototype motors of 250-300 kW at 140 Hz were tested to verify the results. Utilization factor data are given for several other prototypes, the largest of which delivers 1000 kW at 12000 min-1.
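The multi-layer transfer-matrix idea can be sketched generically: each homogeneous layer contributes a 2x2 matrix relating the field quantities on its two faces, and layers are combined by matrix multiplication. The propagation constants and impedances below are arbitrary placeholders, not the thesis's rotor model.

```python
import numpy as np

def layer_matrix(gamma, thickness, impedance):
    """Generic 2x2 transfer matrix of one homogeneous layer, relating the
    tangential field quantities on its two faces (hyperbolic form)."""
    g = gamma * thickness
    return np.array([[np.cosh(g), impedance * np.sinh(g)],
                     [np.sinh(g) / impedance, np.cosh(g)]])

# Cascade through the layers: the total matrix is the ordered product.
layers = [layer_matrix(50 + 50j, 2e-3, 1.0 + 1.0j),
          layer_matrix(20 + 20j, 5e-3, 2.0 + 0.5j)]
total = layers[0]
for m in layers[1:]:
    total = total @ m
print(total)
```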
Abstract:
Neural networks are a set of mathematical methods and computer programs designed to simulate the information processing and knowledge acquisition of the human brain. In recent years their application in chemistry has increased significantly, owing to their special suitability for modeling complex systems. The basic principles of two types of neural networks, multi-layer perceptrons and radial basis functions, are introduced, as well as a pruning approach to architecture optimization. Two analytical applications based on near-infrared spectroscopy are presented: the first for the determination of nitrogen content in wheat leaves using multi-layer perceptron networks, and the second for the determination of Brix in sugar cane juices using radial basis function networks.
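A hedged sketch of a radial basis function network calibration of the kind described: Gaussian units with a linear least-squares readout. The spectra, centres and basis width are synthetic stand-ins for the NIR data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for NIR spectra and the property to calibrate.
spectra = rng.normal(size=(60, 100))              # 60 samples, 100 wavelengths
brix = spectra[:, :10].mean(axis=1) * 5 + 18      # hypothetical Brix values

centres = spectra[rng.choice(60, 8, replace=False)]  # Gaussian unit centres
width = 10.0                                         # common basis width

def rbf_features(X):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Linear readout fitted by least squares on the hidden-layer outputs.
w, *_ = np.linalg.lstsq(rbf_features(spectra), brix, rcond=None)
pred = rbf_features(spectra) @ w
print("training RMSE:", np.sqrt(((pred - brix) ** 2).mean()))
```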
Abstract:
Dirt counting and dirt particle characterisation of pulp samples is an important part of quality control in pulp and paper production, and the need for an automatic image analysis system for dirt particle characterisation in various pulp samples is critical. However, existing image analysis systems use a single threshold to segment the dirt particles in different pulp samples, which limits their precision. There is therefore a clear case for an automatic image analysis system that overcomes this deficiency. In this study, a further-developed Niblack thresholding method is proposed, which defines the threshold based on the number of segmented particles; Kittler thresholding is used in addition. Both of these thresholding methods determine the dirt count of different pulp samples accurately compared with visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). The minimum resolution needed for acquiring a scanner image is also defined. Considering the variation in dirt particle features, curl shows an acceptable difference for discriminating bark from fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and the Multi-layer Perceptron, are used to categorise the dirt particles. Linear Discriminant Analysis and the Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised as bark or fibre bundles.
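A sketch of Niblack local thresholding with a particle-count-based choice of the k parameter. The selection rule used here (the k where the count is most stable between neighbouring settings) is an assumption, since the abstract does not state the actual criterion, and the image is synthetic.

```python
import numpy as np
from scipy import ndimage

def niblack_mask(img, window=25, k=-0.2):
    """Classic Niblack local threshold, T = mean + k * std over a window;
    dirt is taken to be darker than the local background."""
    mean = ndimage.uniform_filter(img, window)
    sq_mean = ndimage.uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return img < mean + k * std

def particle_count(mask, min_area=5):
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, range(1, n + 1))
    return int((np.asarray(areas) >= min_area).sum())

def pick_k(img, ks=np.linspace(-0.6, 0.2, 9)):
    """Choose k where the particle count changes least between settings."""
    counts = [particle_count(niblack_mask(img, k=k)) for k in ks]
    return ks[int(np.argmin(np.abs(np.diff(counts))))]

rng = np.random.default_rng(0)
sample = rng.normal(0.8, 0.05, (200, 200))   # bright pulp background
sample[60:70, 60:70] = 0.2                   # one synthetic dirt particle
print("chosen k:", pick_k(sample))
```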
Abstract:
Mass-produced paper electronics (large-area organic printed electronics on paper-based substrates, "throw-away electronics") has the potential to introduce flexible electronic applications into everyday life. While paper manufacturing and printing have a long history, they were not developed with electronic applications in mind. Modifications to paper substrates and printing processes are required in order to obtain working electronic devices. This should be done while maintaining the high throughput of conventional printing techniques and the low cost and recyclability of paper. An understanding of the interactions between the functional materials, the printing process and the substrate is required for the successful manufacturing of advanced devices on paper. Based on this understanding, a recyclable, multilayer-coated paper-based substrate that combines adequate barrier and printability properties for printed electronics and sensor applications was developed in this work. In this multilayer structure, a thin top-coating consisting of mineral pigments is coated on top of a dispersion-coated barrier layer. The top-coating provides well-controlled sorption properties through controlled thickness and porosity, enabling the printability of functional materials to be optimized. The penetration of ink solvents and functional materials stops at the barrier layer, which not only improves the performance of the functional material but also eliminates the potential fiber swelling and de-bonding that can occur when solvents are allowed to penetrate into the base paper. The multi-layer coated paper considered in the current work consists of a pre-coating and a smoothing layer on which the barrier layer is deposited. Coated fine paper may also be used directly as base paper, ensuring a smooth base for the barrier layer. The top layer is thin and smooth, consisting of mineral pigments such as kaolin, precipitated calcium carbonate, silica or blends of these. All the materials in the coating structure have been chosen to maintain the recyclability and sustainability of the substrate. The substrate can be coated in steps, sequentially layer by layer, which requires detailed understanding and tuning of the wetting properties and topography of the barrier layer versus the surface tension of the top-coating. A cost-competitive method for industrial-scale production is the curtain coating technique, which allows extremely thin top-coatings to be applied simultaneously with a closed and sealed barrier layer. The understanding of the interactions between functional materials formulated and applied on paper as inks makes it possible to create a paper-based substrate that can be used to manufacture printed electronics-based devices and sensors on paper. The multitude of functional materials and their complex interactions make it challenging to draw general conclusions in this topic area; inevitably, the results are partially specific to the device chosen and the materials needed in its manufacturing. Based on the results, it is clear that for inks based on dissolved or small-size functional materials, a barrier layer is beneficial and ensures the functionality of the printed material in a device. The required active barrier lifetime depends on the solvents or analytes used and their volatility. High-aspect-ratio mineral pigments, which create tortuous pathways and physical barriers within the barrier layer, limit the penetration of solvents used in functional inks.
The surface pore volume and pore size can be optimized for a given printing process and ink through the choice of pigment type and coating layer thickness. However, when manufacturing multilayer functional devices, such as transistors, which consist of several printed layers, compromises have to be made. For example, while a thick and porous top-coating is preferable for printing source and drain electrodes with a silver particle ink, a thinner and less absorbing surface is required to form a functional semiconducting layer. With the multilayer coating structure concept developed in this work, it was possible to make the paper substrate suitable for printed functionality. The possibility of printing functional devices, such as transistors, sensors and pixels, in a roll-to-roll process on paper is demonstrated, which may enable the use of paper in disposable "one-time use" or "throw-away" electronics and sensors, such as lab-on-strip devices for various analyses, consumer packages equipped with product-quality sensors, or remote tracking devices.