992 results for Machine theory.
Abstract:
"Report No. UIUCDCS-R-75-692"
Abstract:
"AD735159."
Abstract:
Mode of access: Internet.
Abstract:
This paper introduces a screw-theory-based method, termed the constraint and position identification (CPI) approach, for synthesizing decoupled spatial translational compliant parallel manipulators (XYZ CPMs) with consideration of actuation isolation. The proposed approach is based on a systematic arrangement of rigid stages and compliant modules in a three-legged XYZ CPM system using the constraint spaces and position spaces of the compliant modules. The constraint spaces and position spaces are first derived from screw theory rather than from rigid-body mechanism design experience. The constraint spaces are then classified into different constraint combinations, with typical position spaces depicted via geometric entities. The systematic synthesis process based on the constraint combinations and geometric entities is demonstrated through several examples. Finally, several novel decoupled XYZ CPMs with monolithic configurations are created and verified by finite element analysis. The CPI approach enables both experts and beginners to synthesize a variety of decoupled XYZ CPMs with actuation isolation by selecting an appropriate constraint and an optimal position for each compliant module according to the specific application.
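As a hedged illustration of the reciprocity condition on which such screw-theoretic constraint-space derivations rest (a generic screw-theory fact, not code from the paper), consider the following minimal Python sketch: a constraint wrench transmits no power through a permitted twist exactly when their reciprocal product vanishes.

import numpy as np

def reciprocal_product(twist, wrench):
    # Twist (omega; v) and wrench (f; tau): reciprocal product = f.v + tau.omega.
    omega, v = twist[:3], twist[3:]
    f, tau = wrench[:3], wrench[3:]
    return np.dot(f, v) + np.dot(tau, omega)

# A pure x-translation twist (no rotation, unit velocity along x) ...
T_x = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])
# ... is reciprocal to a pure force along y, i.e. a constraint that blocks
# y-translation without resisting motion along x.
W_y = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
assert abs(reciprocal_product(T_x, W_y)) < 1e-12

In this notation, a compliant module's constraint space is a set of such wrenches, and the permitted motions are exactly the twists reciprocal to all of them.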
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are described, and real case studies based on environmental and pollution data are carried out. The book includes a CD-ROM with the Machine Learning Office software, including sample data sets, that allows both students and researchers to put the concepts into practice rapidly.
Abstract:
Mode of access: Internet.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, the simple k-nearest neighbor algorithm is considered. PNN is a neural-network reformulation of well-known nonparametric principles of probability density modeling, using a kernel density estimator and Bayes-optimal or maximum a posteriori decision rules. PNNs are well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently SVMs have been successfully applied to various environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper both simulated and real-data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the algorithms applied.
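A minimal sketch of the PNN classifier as described above: a Gaussian Parzen kernel density estimate per class combined with a maximum a posteriori decision rule. The toy data, bandwidth h and priors are illustrative assumptions, not values from the paper.

import numpy as np

def pnn_classify(X_train, y_train, X_test, h=0.5, priors=None):
    # Parzen-window class densities + MAP rule, the core of a PNN.
    classes = np.unique(y_train)
    if priors is None:  # default: empirical class frequencies
        priors = {c: np.mean(y_train == c) for c in classes}
    preds = []
    for x in X_test:
        scores = {}
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)
            density = np.mean(np.exp(-d2 / (2 * h ** 2)))  # kernel density estimate
            scores[c] = priors[c] * density                # posterior up to a constant
        preds.append(max(scores, key=scores.get))          # MAP decision
    return np.array(preds)

# Toy 2-D "spatial" example: two clusters of measurement locations
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_classify(X, y, np.array([[0.2, 0.1], [3.8, 4.2]])))  # -> [0 1]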
Abstract:
The class of Schoenberg transformations, which embed Euclidean distances into higher-dimensional Euclidean spaces, is presented and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles and curvature of the transformations are proposed and visualized on artificial data sets by classical multidimensional scaling. A distance-based discriminant algorithm and a robust multidimensional centroid estimate illustrate the theory, which is closely connected to the Gaussian kernels of Machine Learning.
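A small numerical check of the central property, under illustrative data and an illustrative rate parameter a (my sketch, not the article's code): the Gaussian-type Schoenberg transformation of squared distances, d~^2 = (1 - exp(-a d^2))/a, keeps them Euclidean, which classical multidimensional scaling makes visible as a positive semi-definite double-centred Gram matrix.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))                     # points in R^3
D2 = np.sum((X[:, None] - X[None, :]) ** 2, -1)  # squared Euclidean distances

a = 0.7                                          # illustrative rate parameter
D2_tilde = (1.0 - np.exp(-a * D2)) / a           # transformed squared distances

n = len(X)
J = np.eye(n) - np.ones((n, n)) / n              # centring matrix of classical MDS
G = -0.5 * J @ D2_tilde @ J                      # Gram matrix of the embedding
assert np.linalg.eigvalsh(G).min() > -1e-10      # PSD: distances stay Euclidean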
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest to the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography, which studies the relationships between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties.

An important part of the thesis deals with a current hot topic: the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and the assessment and susceptibility mapping of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualisation were developed as well, with attention to a user-friendly and easy-to-use interface.
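As a hedged illustration of the GRNN model proposed in the thesis for automatic mapping, here is a minimal sketch of Nadaraya-Watson kernel regression, which is the computation a GRNN performs; the sampled field, query points and bandwidth sigma are assumptions made up for the example.

import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.3):
    # Gaussian-kernel weighted average of the training observations.
    d2 = np.sum((X_query[:, None, :] - X_train[None, :, :]) ** 2, -1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy "automatic mapping": interpolate a field sampled at scattered 2-D locations
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (200, 2))                # measurement coordinates
y = np.sin(3 * X[:, 0]) + np.cos(3 * X[:, 1])  # observed field values
grid = np.array([[0.5, 0.5], [0.1, 0.9]])      # locations to be mapped
print(grnn_predict(X, y, grid))

The single bandwidth sigma is the model's only free parameter, which is part of what makes GRNN attractive for automatic mapping: it can be tuned by cross-validation without user intervention.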
Abstract:
The purpose of this Master's thesis was to install and develop a reliable, working air permeability profile measurement for paper machine PK2 at the UPM-Kymmene Rauma mills. The thesis also aimed to survey the factors affecting the air permeability profile and the possibilities for controlling it. In addition, the goal was to find a workable method for comparing the online-measured air permeability profile with the laboratory-measured profile, from the standpoint of the reliability of the online measurement. The feasibility of measuring the air permeability profile of supercalendered SC paper was also investigated. The literature section covered paper porosity and air permeability, the factors affecting the porosity level and porosity profile, the significance of air permeability for SC paper manufacture and printability, and the laboratory and online measurement of air permeability. It also covered Honeywell's new Poros air permeability sensor. The experimental section examined the reliability of the air permeability profile measurement in the CD and MD directions, and the differences between laboratory and online measurements, before the PK2 rebuild. After the rebuild, the experimental work focused on verifying reliability in the CD and MD directions and on identifying the factors that caused small level differences between the measurements. The measurements were found to be reliable, and on this basis comparisons between the measured online profiles were carried out. The air permeability profile was found to correlate positively with the basis weight and moisture profiles, and negatively with the ash profile. In connection with the PK2 rebuild, trial runs related to quality optimization in the wire and press sections were conducted. In addition to the matters studied in these trial runs, the effect of the changed parameters on the air permeability profile was examined. The effect of the steam box on the air permeability profile was studied in connection with control tuning.
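As an illustration of one way the agreement between online and laboratory profiles can be quantified (a sketch only; the thesis does not state its comparison method here, and the profile values below are invented), the Pearson correlation coefficient:

import numpy as np

# Hypothetical cross-direction (CD) air permeability profiles
online_profile = np.array([3.1, 3.4, 3.0, 2.8, 3.3, 3.6, 3.2])
lab_profile = np.array([3.0, 3.5, 3.1, 2.7, 3.2, 3.7, 3.1])

r = np.corrcoef(online_profile, lab_profile)[0, 1]
print(f"Pearson correlation, online vs. laboratory: {r:.3f}")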
Abstract:
The aim of this thesis is to utilize the technology developed at LUT and to provide an easy tool for the preliminary design of high-speed solid-rotor induction machines. The computer-aided design tool MathCAD was chosen as the environment for implementing the calculation program. Four versions of the design program were made, depending on the rotor type. The first rotor type is an axially slitted solid rotor with steel end rings. The second is an axially slitted solid rotor with copper end rings. The third is a solid rotor with deep rectangular copper bars and end rings (squirrel cage), and the last is a solid rotor with round copper bars and end rings (squirrel cage). Each rotor type has its own particularities, but the general thread of the design is common. This paper follows the structure of the calculation program and explains some of its features and formulas. Attention is concentrated on the differences between the design principles of laminated and solid-rotor machines. No deep analysis of the calculation methods is presented; references for all solution methods appearing during the design procedure are given for more detailed study. The thesis takes into account the latest innovations in solid-rotor machine theory: the analytical calculation of the rotor ends follows the latest knowledge in the field, and a correction factor for adjusting the rotor impedance is implemented. The purpose of the design program is to calculate the preliminary dimensions of the machine from the initial data. The results obtained are not intended for final machine development; further, more detailed design should be done with a finite element method application. Hence, this thesis provides a practical tool for the preliminary evaluation of high-speed machines with different solid-rotor types.
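The abstract gives no formulas, so as a hedged illustration of the kind of preliminary sizing relation such a program starts from, here are the textbook synchronous-speed and slip equations for an induction machine (standard theory, not code from the thesis):

def synchronous_speed_rpm(f_supply_hz: float, pole_pairs: int) -> float:
    # Synchronous speed n_s = 60 * f / p for supply frequency f, pole-pair number p.
    return 60.0 * f_supply_hz / pole_pairs

def slip(n_sync_rpm: float, n_rotor_rpm: float) -> float:
    # Per-unit slip s = (n_s - n) / n_s.
    return (n_sync_rpm - n_rotor_rpm) / n_sync_rpm

# A one-pole-pair machine fed at 500 Hz runs near 30000 rpm, the high-speed
# regime where solid rotors are used.
n_s = synchronous_speed_rpm(500.0, 1)
print(n_s, slip(n_s, 29400.0))  # -> 30000.0 0.02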
Abstract:
In my thesis I examine Dan Simmons's two-part science fiction work consisting of the novels Hyperion and The Fall of Hyperion. I focus on the Shrike, a monster that represents the potential conflict between humans and machines feared by humankind. Fear and conflict are central themes not only in science fiction but in monster narratives in general, and using the two in the same story creates favorable conditions for depicting the anxieties of contemporary society. Monsters and science fiction have by now been studied fairly extensively, but the Shrike has previously received little attention. The theoretical framework of my study is based on Jeffrey Cohen's Monster Theory: Reading Culture, Stephen Asma's On Monsters: An Unnatural History of Our Worst Fears, and Holly Lynn Baumgartner and Roger Davis's At the Interface: Hosting the Monster. Through the monster theory assembled from these works, I examine how the Shrike's half-organic, half-artificial body reflects the elements in the novels that make up the fear of the future and the threat of conflict between humans and machines. Because the Shrike is half organic and half artificial, it stands on the threshold of these qualities; this hybrid body unites both extremes, and thus it also symbolizes the conflict between the two sides. Beyond the conflict itself, the Shrike embodies the oppositions from which the fear of a human-machine conflict is built: self and other, attraction and repulsion, past and future, and utopia and dystopia.
Abstract:
This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role as protector. It investigates concepts related to the 'ontology of becoming' and examines what it is that information securing produces. The research is theory-driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari's concepts of 'machine', 'territory' and 'becoming', and Michel Serres's concept of the 'parasite'), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories (the insides and outsides of information systems) are ontologically similar; the only difference is in the order of the territories, not in the ontological status of the entities that inhabit them. In other words, malicious software is ontologically similar to benign software; both are users of a system. The difference is based on the ordering of the system and its users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in the terms of this study, a 'parasite') to the information system that it protects. Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of the software itself, the implementation of a copy protection system is an entirely external addition; in fact, this parasitic addition makes the software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages; it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers, and information security policies entice social engineers to use and exploit information security techniques in novel ways.
Abstract:
We introduce a procedure to infer the repeated-game strategies that generate actions in experimental choice data. We apply the technique to a set of experiments in which human subjects play a repeated Prisoner's Dilemma. The technique suggests that two types of strategies underlie the data.
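A hedged sketch of the general idea (not the authors' actual procedure): encode candidate repeated-game strategies as rules mapping the opponent's past actions to a move, then score each candidate by the fraction of observed choices it reproduces.

def tit_for_tat(opp_history):
    # Cooperate first, then copy the opponent's last move.
    return 'C' if not opp_history else opp_history[-1]

def always_defect(opp_history):
    return 'D'

def fit(strategy, own_moves, opp_moves):
    # Fraction of the player's observed moves the strategy predicts correctly.
    hits = sum(strategy(opp_moves[:t]) == own_moves[t] for t in range(len(own_moves)))
    return hits / len(own_moves)

own = ['C', 'C', 'D', 'C', 'D']   # toy observed play, invented for the example
opp = ['C', 'D', 'C', 'D', 'C']
for name, s in [('tit-for-tat', tit_for_tat), ('always-defect', always_defect)]:
    print(name, fit(s, own, opp))  # tit-for-tat fits 1.0, always-defect 0.4

Real inference would also have to handle noisy play, for example with a likelihood that allows occasional implementation errors, but the strategy-scoring structure is the same.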