814 results for probabilistic neural network
Abstract:
In this work we explore multivariate empirical mode decomposition (mEMD) combined with a neural network classifier as a technique for face recognition tasks. Images are simultaneously decomposed by means of EMD, and the distance between the modes of an image and the modes of the representative image of each class is calculated using three different distance measures. A neural network is then trained using 10-fold cross-validation to derive a classifier. Preliminary results (over 98% classification rate) are satisfactory and justify deeper investigation of how to apply mEMD to face recognition.
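The distance stage above (comparing an image's modes against each class's representative modes) can be sketched as follows. The three measures here (Euclidean, Manhattan, cosine) and the nearest-representative baseline standing in for the trained neural network are illustrative assumptions, not the paper's exact choices:

```python
import math

def mode_distances(modes_a, modes_b):
    """Three distance measures between two sets of flattened modes.
    Euclidean, Manhattan and cosine are assumed here for illustration."""
    a = [v for m in modes_a for v in m]
    b = [v for m in modes_b for v in m]
    euclid = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    manhattan = sum(abs(x - y) for x, y in zip(a, b))
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Guard against zero-norm mode vectors before the cosine measure.
    cosine = 1.0 if na == 0.0 or nb == 0.0 else 1.0 - dot / (na * nb)
    return euclid, manhattan, cosine

def nearest_class(image_modes, class_reps):
    """Nearest-representative baseline: assign the class whose
    representative modes minimise the Euclidean measure (the paper
    instead feeds the distances to a trained neural network)."""
    return min(class_reps, key=lambda c: mode_distances(image_modes, class_reps[c])[0])
```

In the paper's setup the three distances would become the classifier's input features rather than being used directly for a nearest-class decision.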
Abstract:
Intellectual disability has long been associated with deficits in socio-emotional processing. However, studies investigating the brain dynamics of maladaptive socio-emotional skills associated with intellectual disability are scarce. Here, we compared differences in brain activity between low intelligence quotient participants (I.Q.<75, N=13) and normal controls (N=15) while they evaluated their subjective emotions. Positive (P) and negative (N) valenced pictures were presented one at a time to participants of both groups, at a rate of ¾. The task required each participant to evaluate their subjective emotion and press a predefined push-button when done, alternately for P and N. Electroencephalographic (EEG) signals were continuously recorded, and the 1,000 ms time window following each picture was analyzed offline for power in the frequency domain. Low (8-10 Hz) and upper (10-13 Hz) alpha frequency bands were then compared for both groups and for both P and N emotions at 12 distributed scalp electrodes. The qualitative evaluation of emotions was similar between the two groups, with consistently longer reaction times for the low-IQ participants. The EEG comparison shows a marked power decrease in the upper alpha frequency range for N emotions in the low-intelligence group; otherwise, no significant difference was noticed between low and normal IQ. The main findings of the present study are (1) the results do not support the hypothesis that impairment in developmental intelligence is rooted in maladaptive emotional processing; (2) the strong alpha power suppression during negative-induced emotions suggests the involvement of an extended neural network and more effortful inhibition processes than for positive ones. We call for further studies with a larger sample.
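The offline frequency-domain analysis described above (power in the 8-10 Hz and 10-13 Hz alpha bands over a 1,000 ms window) can be sketched with a plain DFT. This is a minimal illustration of band-power computation, not the authors' exact pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` (sampled at fs Hz) in the [f_lo, f_hi] Hz band,
    computed from a direct DFT over the positive-frequency bins."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2 + 1):
        f = k * fs / n  # frequency of bin k
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(-2 * math.pi * k * t / n) for t, s in enumerate(signal))
            im = sum(s * math.sin(-2 * math.pi * k * t / n) for t, s in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power
```

For a 1,000 ms window sampled at, say, 100 Hz, `band_power(sig, 100, 8, 10)` gives the low-alpha power and `band_power(sig, 100, 10, 13)` the upper-alpha power.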
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling, and prediction, as well as automatic data mapping. They have competitive efficiency with geostatistical models in low-dimensional geographical spaces but are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail: from theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, such as experimental variography, and machine learning. Experimental variography is a basic tool for geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics.
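The two-point statistic behind experimental variography can be sketched as an omnidirectional semivariogram estimate; the thesis also treats the anisotropic, directional case, which this isotropic sketch omits:

```python
import math

def experimental_variogram(points, values, lags, tol):
    """Omnidirectional experimental semivariogram:
    gamma(h) = (1 / 2N(h)) * sum (z_i - z_j)^2 over pairs of points
    whose separation distance falls within `tol` of lag h."""
    gammas = []
    for h in lags:
        acc, n = 0.0, 0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                d = math.dist(points[i], points[j])
                if abs(d - h) <= tol:
                    acc += (values[i] - values[j]) ** 2
                    n += 1
        # Undefined lag classes (no pairs) are reported as NaN.
        gammas.append(acc / (2 * n) if n else float("nan"))
    return gammas
```

Rising gamma values with distance indicate spatial correlation at short range; a flat variogram suggests the pattern is indistinguishable from noise.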
A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic: automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and for carrying out fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals; soil type and hydro-geological unit classification; decision-oriented mapping with uncertainties; and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
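The GRNN at the heart of the automatic-mapping chapters is, in essence, a Gaussian-kernel weighted average of training targets (Specht's formulation). A minimal isotropic sketch, with the kernel width as the single free parameter:

```python
import math

def grnn_predict(train_xy, query, sigma):
    """GRNN estimate at `query`: a Gaussian-kernel weighted average of
    training targets. `train_xy` is a list of (coords, value) pairs and
    `sigma` is the isotropic kernel width, normally tuned by cross-validation."""
    num = den = 0.0
    for x, y in train_xy:
        d2 = sum((xi - qi) ** 2 for xi, qi in zip(x, query))
        w = math.exp(-d2 / (2 * sigma * sigma))
        num += w * y
        den += w
    return num / den
```

The anisotropic variant used in the thesis would replace the single `sigma` with one width per coordinate; this sketch keeps the isotropic case for brevity.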
Abstract:
Methods used to analyze one type of nonstationary stochastic process, the periodically correlated process, are considered. Two methods of one-step-forward prediction of periodically correlated time series are examined: an autoregression model and an artificial neural network with one hidden neuron layer and a mechanism for adapting the network parameters in a moving time window. The two were compared in terms of efficiency. The comparison showed that, for one-step-ahead prediction of time series of mean monthly water discharge, the simpler autoregression model is more efficient.
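The autoregressive benchmark can be sketched in its simplest form, an AR(1) model fitted by ordinary least squares and used for a one-step-forward prediction (the study's model may use a higher order and periodic coefficients; this is a minimal illustration):

```python
def ar1_forecast(series):
    """One-step-forward AR(1) prediction x_{t+1} = a * x_t + b,
    with (a, b) fitted by ordinary least squares on the observed series."""
    x = series[:-1]  # predictors: x_t
    y = series[1:]   # responses:  x_{t+1}
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    b = my - a * mx
    return a * series[-1] + b
```

For a periodically correlated series, the same fit would be repeated per phase of the period (e.g., one coefficient pair per calendar month for monthly discharge).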
Abstract:
The objective of this work was to evaluate the effect of sampling density on the prediction accuracy of soil orders, with high spatial resolution, in a viticultural zone of Serra Gaúcha, Southern Brazil. A digital elevation model (DEM), a cartographic base, a conventional soil map, and the Idrisi software were used. Seven predictor variables were calculated and read, along with soil classes, at randomly distributed points with sampling densities of 0.5, 1, 1.5, 2, and 4 points per hectare. The data were used to train a decision tree (Gini) and three artificial neural networks: adaptive resonance theory, fuzzy ARTMap; self-organizing map, SOM; and multi-layer perceptron, MLP. The estimated maps were compared with the conventional soil map to calculate omission and commission errors, overall accuracy, and quantity and allocation disagreement. The decision tree was the least sensitive to sampling density and had the highest accuracy and consistency. The SOM was the least sensitive and most consistent network. The MLP had a critical minimum and showed high inconsistency, whereas fuzzy ARTMap was more sensitive and less accurate. The results indicate that the sampling densities used in conventional soil surveys can serve as a reference for predicting soil orders in Serra Gaúcha.
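The "(Gini)" in the decision tree above refers to the Gini impurity split criterion, which can be sketched directly; the split-scoring function shown is the standard weighted form, not code from this study:

```python
def gini_impurity(labels):
    """Gini impurity of a set of class labels: 1 - sum_k p_k^2.
    Zero means the set is pure (a single soil order)."""
    n = len(labels)
    impurity = 1.0
    for c in set(labels):
        p = labels.count(c) / n
        impurity -= p * p
    return impurity

def split_quality(left, right):
    """Weighted Gini impurity after a candidate split (lower is better);
    a Gini decision tree grows by choosing the split minimising this."""
    n = len(left) + len(right)
    return len(left) / n * gini_impurity(left) + len(right) / n * gini_impurity(right)
```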
Abstract:
The paper deals with the development and application of a methodology for automatic mapping of pollution/contamination data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve this problem. The automatic tuning of isotropic and anisotropic GRNN models using a cross-validation procedure is presented. The results are compared with the k-nearest-neighbours interpolation algorithm using an independent validation data set. The quality of mapping is controlled by the analysis of the raw data and the residuals using variography. Maps of the probability of exceeding a given decision level and "thick" isoline visualization of the uncertainties are presented as examples of decision-oriented mapping. A real case study is based on mapping of radioactively contaminated territories.
Abstract:
The objective of this work was to develop, validate, and compare 190 artificial intelligence-based models for predicting the body mass of chicks from 2 to 21 days of age subjected to different durations and intensities of thermal challenge. The experiment was conducted inside four climate-controlled wind tunnels using 210 chicks. A database containing 840 datasets (from 2- to 21-day-old chicks), with the variables dry-bulb air temperature, duration of thermal stress (days), chick age (days), and daily body mass, was used for the training, validation, and testing of models based on artificial neural networks (ANNs) and neuro-fuzzy networks (NFNs). The ANNs were the most accurate in predicting the body mass of chicks from 2 to 21 days of age from these input variables, showing an R² of 0.9993 and a standard error of 4.62 g. The ANNs enable the simulation of different scenarios, which can assist in managerial decision-making, and they can be embedded in heating control systems.
Abstract:
The objective of this work is to study anonymized data on the users of a dating service by means of feature maps segmented through neural network training (SOM, Self-Organizing Map). These feature maps are used to find out whether clearly distinguishable SMS and e-mail user groups exist. The study is approached by first examining the company's technical service platform architecture, as well as the actual dating service from the user's point of view. The study began by encoding the data set for use with the SOM Toolbox software. The actual results were analyzed by selecting samples from the feature maps segmented during neural network training. The results obtained show that SOM technology is well suited to socio-technological research on content services, and that it can also be used in customer relationship management to profile different user groups.
Abstract:
This paper presents a novel image classification scheme for benthic coral reef images that can be applied to both single-image and composite mosaic datasets. The proposed method can be configured to the characteristics (e.g., the size of the dataset, number of classes, resolution of the samples, color information availability, class types, etc.) of individual datasets. The proposed method uses completed local binary pattern (CLBP), grey level co-occurrence matrix (GLCM), Gabor filter response, and opponent angle and hue channel color histograms as feature descriptors. For classification, either k-nearest neighbor (KNN), neural network (NN), support vector machine (SVM), or probability density weighted mean distance (PDWMD) is used. The combination of features and classifiers that attains the best results is presented, together with guidelines for selection. The accuracy and efficiency of the proposed method are compared with other state-of-the-art techniques using three benthic and three texture datasets. The proposed method achieves the highest overall classification accuracy of any of the tested methods and has moderate execution time. Finally, the proposed classification scheme is applied to a large-scale image mosaic of the Red Sea to create a completely classified thematic map of the reef benthos.
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways; an algorithm most efficient for one representation may be less efficient for others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the optimized variables including the centers, widths, and weights of the basis functions, both with the control parameters kept fixed and with them adjusted by the fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to parameter setting, and the best setting was found to be problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using all-fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
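The control parameters the thesis tunes adaptively are the crossover rate CR and the mutation factor F of the classic DE/rand/1/bin scheme, which can be sketched as follows; the fixed CR and F values here are illustrative defaults, not the thesis's settings, and the fuzzy adaptation layer is omitted:

```python
import random

def differential_evolution(f, bounds, pop_size=20, cr=0.9, fmut=0.8, gens=100, seed=1):
    """Minimal DE/rand/1/bin sketch for minimising f over a box `bounds`.
    CR (crossover rate) and F (mutation factor) are the control
    parameters a fuzzy controller could adapt during the run."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct individuals, all different from the target i.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + fmut * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the box constraint
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

For RBF network training, `f` would be the network's training error as a function of the concatenated centers, widths, and weights.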
Abstract:
Superheater corrosion causes vast annual losses for power companies. With a reliable corrosion prediction method, plants can be designed accordingly, and knowledge of fuel selection and determination of process conditions may be utilized to minimize superheater corrosion. Growing interest in using recycled fuels creates additional demands for the prediction of corrosion potential. Models depending on corrosion theories will fail if the relations between the inputs and the output are poorly known. A prediction model based on fuzzy logic and an artificial neural network is able to improve its performance as the amount of data increases. The corrosion rate of a superheater material can most reliably be detected with a test done in a test combustor or in a commercial boiler. The steel samples can be located in a special, temperature-controlled probe and exposed to the corrosive environment for a desired time. These tests give information about the average corrosion potential in that environment. Samples may also be cut from superheaters during shutdowns. The analysis of samples taken from probes or superheaters after exposure to a corrosive environment is a demanding task: if the corrosive contaminants can be reliably analyzed, the corrosion chemistry can be determined and an estimate of the material lifetime can be given. In cases where the reason for corrosion is not clear, determining the corrosion chemistry and estimating the lifetime is more demanding. In order to provide a laboratory tool for analysis and prediction, a new approach was chosen. During this study, the following tools were generated: · A model for the prediction of superheater fireside corrosion, based on fuzzy logic and an artificial neural network, built upon a corrosion database developed from fuel and bed material analyses and measured corrosion data. The developed model predicts superheater corrosion with high accuracy at the early stages of a project.
· An adaptive corrosion analysis tool based on image analysis, constructed as an expert system. This system utilizes the implementation of user-defined algorithms, which allows the development of an artificially intelligent system for the task. Based on the results of the analyses, several new rules were developed for determining the degree and type of corrosion. By combining these two tools, a user-friendly expert system for the prediction and analysis of superheater fireside corrosion was developed. This tool may also be used to minimize corrosion risks in the design of fluidized bed boilers.
Abstract:
The present study was done with two different servo systems. In the first, a servo-hydraulic system was identified and then controlled by a fuzzy gain-scheduling controller. In the second, an electro-magnetic linear motor, the suppression of mechanical vibration and the position tracking of a reference model are studied using a neural network and an adaptive backstepping controller, respectively. The research methods are described as follows. Electro-Hydraulic Servo Systems (EHSS) are commonly used in industry. These kinds of systems are nonlinear in nature, and their dynamic equations have several unknown parameters. System identification is a prerequisite to the analysis of a dynamic system. One of the most promising novel evolutionary algorithms for solving global optimization problems is Differential Evolution (DE). In this study, the DE algorithm is proposed for handling nonlinear constraint functions with boundary limits on the variables, to find the best parameters of a servo-hydraulic system with a flexible load. DE provides fast convergence and accurate solutions regardless of the initial parameter values. The control of hydraulic servo systems has been the focus of intense research over the past decades. These systems are nonlinear in nature and generally difficult to control, since changing system parameters while using the same gains will cause overshoot or even loss of system stability. The highly nonlinear behaviour of these devices makes them ideal subjects for applying different types of sophisticated controllers. The study is concerned with a second-order reference model for the positioning control of a flexible-load servo-hydraulic system using fuzzy gain scheduling. In the present research, acceleration feedback was used to compensate for the lack of damping in the hydraulic system. To compare the results, a P controller with feed-forward acceleration and different gains in extension and retraction was used.
The design procedure for the controller and the experimental results are discussed. The results suggest that the fuzzy gain-scheduling controller decreases the position reference tracking error. The second part of the research was done on a Permanent Magnet Linear Synchronous Motor (PMLSM). In this study, a recurrent neural network compensator for suppressing mechanical vibration in a PMLSM with a flexible load is studied. The linear motor is controlled by a conventional PI velocity controller, and the vibration of the flexible mechanism is suppressed using a hybrid recurrent neural network. The differential evolution strategy and the Kalman filter method are used to avoid the local minimum problem and to estimate the states of the system, respectively. The proposed control method is first designed using a nonlinear simulation model built in Matlab Simulink and then implemented on a practical test rig. The proposed method works satisfactorily and suppresses the vibration successfully. In the last part of the research, a nonlinear load control method is developed and implemented for a PMLSM with a flexible load. The purpose of the controller is to track the flexible load to the desired position reference as fast as possible and without awkward oscillation. The control method is based on an adaptive backstepping algorithm whose stability is ensured by the Lyapunov stability theorem. The states of the system needed by the controller are estimated using the Kalman filter. The proposed controller is implemented and tested in a linear motor test drive, and the responses are presented.
Abstract:
The theoretical part of this Master's thesis examined multimedia distribution channels and their characteristics in content services. The thesis presented ways to add intelligence to multimedia content production and examined the usability of content services. The work focused on neural network technology, its implementation, and software agents. The empirical part examined the operation of the AVO vocational guidance program of the Finnish Ministry of Labour. The characteristics of 280 occupations, based on the 122 questions of AVO, were defined in an Excel spreadsheet. The Ministry of Labour provided the answers of 5,115 individuals to the AVO questionnaire. This answer data, together with the occupation table compiled in the study, was used to train a neural network. Finally, the resulting SOM maps were analyzed. The purpose of the analysis was to examine the correctness of the compiled occupation table and the placement of different occupations on the SOM map. The study showed that neural network technology would be suitable as the core technology of a new career planning service.
Abstract:
Although the manufacturing process of ceramic tiles is fully automated, the last stage, quality inspection and grading, is usually done manually. Automatic quality inspection in tile manufacturing can be justified on economic and safety grounds. The purpose of this thesis is to describe a research project on the classification of ceramic tiles using various color features. An essential part of the work was studying the difference between RGB and spectral images. The theoretical part reviews earlier research on the topic and provides background on machine vision, pattern recognition, classifiers, and color theory. The material for the practical part consisted of 25 ceramic tiles from five different grades. Classification was carried out with a k-nearest-neighbor (k-NN) classifier and a self-organizing map (SOM), and the results were also compared with classification performed by humans. Neural computation was found to be an important tool in spectral analysis. The results obtained with the SOM and spectral features were promising, and only chromaticity-based RGB features performed better in classification.
Abstract:
Finnish airspace is monitored in real time, mainly with air surveillance radars. In addition to aircraft, the airspace contains many other objects that the radar detects. The radar forwards these data to the air surveillance system, which processes them and sends them on to the presentation system. In the presentation system the data are presented as synthetic symbols, trackings referred to as tracks. Based on these data and their own professional skill, people make decisions. The purpose of this work is to study radar detections at the track initialization point, so that a typical structure can be defined for what constitutes a correct track and what a false or poor one. In addition, it should be predicted which of the tracks are not caused by aircraft. The results obtained can ease the work of interpreting detections: not every flock of birds is a candidate for tracking. The classification of detections can be done either with neural computation or with a decision tree. Neural computation is done with neural networks, which consist of neurons. Decision tree classifiers are learning data structures, like neural networks; the most common decision tree is the binary tree. The goal of this work is to train a decision tree classifier on detections so that it can separate false detections from correct ones. The possibilities of neural computation are treated only theoretically in this work. As a result of the work, it can be stated that decision tree classifiers are very capable of separating correct detections from false ones. Although the results were encouraging, more research is needed to determine more reliably the factors that best perform the classification.