875 results for Machine Learning Robotics Artificial Intelligence Bayesian Networks
Abstract:
Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models, in the form of Bayesian networks, address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. It complements Part I by offering means for producing estimates usable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures receive primary attention because they allow the scientist to combine his or her prior knowledge about the problem of interest with newly acquired experimental data. The paper also considers further topics such as the sensitivity of the likelihood ratio to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR).
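As a minimal illustration of the kind of Bayesian parameter estimation discussed above (not taken from the paper), the following Python sketch performs a Beta-Binomial conjugate update for a background-presence probability and propagates the posterior into a toy likelihood ratio; the survey counts, the prior and the probability under the alternative proposition are all invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical survey: GSR-like particles found on k of n people
# with no known firearm exposure (numbers invented for illustration).
n, k = 120, 3

# Beta(1, 1) prior on the background-presence probability; the binomial
# likelihood gives a Beta(1 + k, 1 + n - k) posterior.
a0, b0 = 1.0, 1.0
posterior = stats.beta(a0 + k, b0 + n - k)

print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))

# Sensitivity of a toy likelihood ratio to this parameter: if the probability
# of finding particles given firearm discharge is p_h1, then for a simple
# presence/absence observation LR = p_h1 / p_background.
p_h1 = 0.85                      # assumed value, for illustration only
draws = posterior.rvs(10_000, random_state=0)
lr = p_h1 / draws
print("LR 5th-95th percentile:", np.percentile(lr, [5, 95]))
```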
Abstract:
Building a personalized model of the drug concentration inside the human body for each patient is highly important in clinical practice and demanding for modeling tools. Instead of using traditional explicit methods, in this paper we propose a machine learning approach to describe the relation between drug concentration and patients' features. Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. We focus mainly on the prediction of drug concentrations as well as the analysis of the influence of different features. Models are built using Support Vector Machines, and the prediction results are compared with those of traditional analytical models.
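A minimal sketch of this kind of approach, assuming scikit-learn and entirely synthetic patient data (the paper's actual features, drug and dataset are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic patient features: dose, body weight, time since dose, age
# (invented; the paper's feature set is not specified here).
X = np.column_stack([
    rng.uniform(100, 600, 500),    # dose [mg]
    rng.uniform(40, 110, 500),     # weight [kg]
    rng.uniform(0.5, 24, 500),     # time after dose [h]
    rng.uniform(18, 85, 500),      # age [years]
])
# Toy one-compartment-like concentration with noise, as a stand-in target.
y = X[:, 0] / X[:, 1] * np.exp(-0.1 * X[:, 2]) + rng.normal(0, 0.2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
print("R^2 on held-out patients:", model.score(X_te, y_te))
```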
Abstract:
Recently, kernel-based Machine Learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. This paper describes the use of kernel methods for processing large datasets from environmental monitoring networks. Several typical problems of the environmental sciences, and the solutions provided by kernel-based methods, are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (soil pollution by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are also discussed.
Abstract:
Since the beginning of computers as programmable machines, humans have tried to endow them with a certain intelligence so that they think or reason as similarly as possible to humans. One of these attempts has been to make the machine capable of thinking in such a way that it studies moves and wins games of chess. Nowadays, with current multitasking, object-oriented, memory-access systems and thanks to the powerful hardware at our disposal, we have a great variety of programs dedicated to playing chess. And there are not only small programs: there are even entire machines dedicated to calculating and studying moves in order to beat the best players in the world. The objective of my work is to carry out a study and an implementation of one of these programs, so it is divided into two parts. The theoretical part, or study, consists of a study of the artificial intelligence systems dedicated to playing chess, the study and search for a valid evaluation function, and the study of search algorithms. The practical part of the work is based on the implementation of an intelligent system capable of playing chess with a certain logic. This implementation is carried out with the help of the SDL libraries, using the minimax algorithm with alpha-beta pruning and C++ code. As a conclusion of the project, I would like to point out that the study carried out has shown me that creating a chess game was not as easy as I thought, but it has given me the satisfaction of applying everything I have learned during my degree and of discovering many other new things.
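The thesis implements the engine in C++ with the SDL libraries; the following Python sketch only illustrates the minimax-with-alpha-beta-pruning idea over a generic game interface (the moves, apply_move and evaluate callbacks are placeholders, not the thesis's code):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Minimax with alpha-beta pruning over a generic game interface.

    moves(state) yields legal moves, apply_move(state, m) returns the successor
    position, and evaluate(state) is the static evaluation function
    (positive values favour the maximizing side).
    """
    legal = list(moves(state))
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = -math.inf
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:      # beta cut-off: the opponent will avoid this branch
                break
        return best
    best = math.inf
    for m in legal:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, best)
        if beta <= alpha:          # alpha cut-off
            break
    return best
```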
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. This paper considers a data-driven approach to modelling uncertainty in spatial predictions. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic features and to describe the stochastic variability and non-uniqueness of spatial properties. It is able to capture and preserve key spatial dependencies such as connectivity, which is often difficult to achieve with two-point geostatistical models. Semi-supervised SVR is designed to integrate various kinds of conditioning data and learn dependencies from them. A stochastic semi-supervised SVR model is integrated into a Bayesian framework to quantify uncertainty with multiple models fitted to dynamic observations. The developed approach is illustrated with a reservoir case study, and the resulting probabilistic production forecasts are described by uncertainty envelopes.
Abstract:
Machine learning has been widely applied to analyze data in various domains, but it is still new to personalized medicine, especially dose individualization. In this paper, we focus on the prediction of drug concentrations using Support Vector Machines (SVM) and on the analysis of the influence of each feature on the prediction results. Our study shows that SVM-based approaches achieve prediction results similar to those of the traditional pharmacokinetic model. The two proposed example-based SVM methods demonstrate that individual features help to increase the accuracy of drug concentration predictions with a reduced library of training data.
Abstract:
The class of Schoenberg transformations, embedding Euclidean distances into higher dimensional Euclidean spaces, is presented, and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles and curvature of the transformations are proposed, and visualized on artificial data sets by classical multidimensional scaling. A distance-based discriminant algorithm and a robust multidimensional centroid estimate illustrate the theory, closely connected to the Gaussian kernels of Machine Learning.
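A minimal numpy sketch, not the paper's code, of one member of this class: the Gaussian-kernel Schoenberg transformation of squared Euclidean distances followed by classical multidimensional scaling (the artificial data and the value of sigma are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # artificial data set

# Pairwise squared Euclidean distances.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# One Schoenberg transformation: d^2 -> 2 * (1 - exp(-d^2 / (2 * sigma^2))).
# This is the squared distance induced by the Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)) in its feature space.
sigma = 1.0
D2_tilde = 2.0 * (1.0 - np.exp(-D2 / (2.0 * sigma ** 2)))

# Classical multidimensional scaling of the transformed distances:
# double-centre B = -(1/2) J D2 J and take the leading eigenvectors.
n = D2_tilde.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2_tilde @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.clip(vals[-2:], 0, None))
print("embedded coordinates shape:", coords.shape)
```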
Abstract:
Organisations in Multi-Agent Systems (MAS) have proven to be successful in regulating agent societies. Nevertheless, changes in agents' behaviour or in the dynamics of the environment may lead to poor fulfilment of the system's purposes, and so the entire organisation needs to be adapted. In this paper we focus on endowing the organisation with adaptation capabilities, instead of expecting agents to be capable of adapting the organisation by themselves. We regard this organisational adaptation as an assisting service provided by what we call the Assistance Layer. Our generic Two Level Assisted MAS Architecture (2-LAMA) incorporates such a layer. We empirically evaluate this approach by means of an agent-based simulator we have developed for the P2P sharing network domain. This simulator implements the 2-LAMA architecture and supports the comparison between different adaptation methods, as well as with the standard BitTorrent protocol. In particular, we present two alternatives for performing norm adaptation and one method for adapting agents' relationships. The results show improved performance and demonstrate that the cost of introducing an additional layer in charge of the system's adaptation is lower than its benefits.
Abstract:
We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods and to existing shallow semi-supervised techniques.
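A minimal PyTorch sketch, under assumptions not stated in the abstract, of an embedding penalty used as a regularizer at the output layer: only the neighbour-attraction part of the embedding loss is shown, and the data, graph construction and weighting are invented for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: a few labelled points and many unlabelled points with a
# neighbourhood graph W (all invented; the paper's datasets differ).
x_lab = torch.randn(20, 10)
y_lab = torch.randint(0, 3, (20,))
x_unlab = torch.randn(200, 10)

# Symmetric 0/1 adjacency between unlabelled points (rough k-NN in input space).
dists = torch.cdist(x_unlab, x_unlab)
W = (dists < dists.kthvalue(6, dim=1, keepdim=True).values).float()
W = ((W + W.t()) > 0).float().fill_diagonal_(0)

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()
lam = 0.1   # weight of the embedding regularizer (assumed value)

for step in range(200):
    opt.zero_grad()
    sup = ce(net(x_lab), y_lab)                       # supervised term
    f = net(x_unlab)                                  # output-layer embedding
    # Neighbour-attraction part of the embedding loss:
    # sum_ij W_ij * ||f(x_i) - f(x_j)||^2 (margin term for non-neighbours omitted).
    emb = (W * torch.cdist(f, f) ** 2).sum() / W.sum()
    (sup + lam * emb).backward()
    opt.step()
```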
Abstract:
We present a new framework for large-scale data clustering. The main idea is to modify functional dimensionality reduction techniques to directly optimize over discrete labels using stochastic gradient descent. Compared to methods like spectral clustering, our approach solves a single optimization problem rather than an ad-hoc two-stage optimization, does not require a matrix inversion, can easily encode prior knowledge in the set of implementable functions, and does not have an "out-of-sample" problem. Experimental results on both artificial and real-world datasets show the usefulness of our approach.
Abstract:
Our work is focused on alleviating the workload of designers of adaptive courses in the complex task of authoring adaptive learning designs adjusted to specific user characteristics and the user context. We propose an adaptation platform that consists of a set of intelligent agents, where each agent carries out an independent adaptation task. The agents apply machine learning techniques to support user modelling for the adaptation process.
Abstract:
Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
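A minimal sketch of this kind of error model, assuming scikit-learn and fully synthetic proxy/exact curves (ordinary PCA on discretized curves stands in for FPCA; nothing here reproduces the paper's reservoir data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)                 # time axis of the response curves

# Synthetic "learning set": for each realization we pretend both solvers were
# run; the exact response is the proxy response plus a smooth,
# realization-dependent error (all curves invented for illustration).
n_learn, n_new = 40, 5
a = rng.uniform(0.5, 2.0, n_learn + n_new)
proxy = a[:, None] * np.exp(-3 * t)[None, :]
exact = proxy + 0.2 * a[:, None] ** 2 * np.sin(2 * np.pi * t)[None, :]

P_learn, E_learn = proxy[:n_learn], exact[:n_learn]

# Discretized stand-in for FPCA: PCA on the two sets of curves.
pca_p = PCA(n_components=3).fit(P_learn)
pca_e = PCA(n_components=3).fit(E_learn)

# Error model: map proxy scores to exact scores on the learning set.
reg = LinearRegression().fit(pca_p.transform(P_learn), pca_e.transform(E_learn))

# Predict the exact response of new realizations from their proxy run only.
pred = pca_e.inverse_transform(reg.predict(pca_p.transform(proxy[n_learn:])))
rmse = np.sqrt(((pred - exact[n_learn:]) ** 2).mean())
print("RMSE of predicted exact curves:", rmse)
```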
Abstract:
This Master's thesis defines a technology-monitoring process by which a high-technology company can steer its operations. For high-technology companies it is essential to follow technological development, and such companies need a well-defined system for monitoring and forecasting it. The thesis argues that technology monitoring and competitive intelligence are sub-areas of business intelligence that complement and support each other. An important observation is that the business intelligence process is above all an organisational learning process; it follows that any BI process should be based on the processes through which organisations learn. The thesis also describes how business intelligence, knowledge management and organisational learning relate to each other. Technology monitoring is a vital function for a high-technology company: it is needed in many areas of strategic management, at least in technology, marketing and human-resource management. Technology monitoring is also found to be a core competence for a high-technology company, one that cannot be completely outsourced. The thesis presents a technology-monitoring process based on a generic business intelligence process and on the competitive intelligence process derived from it, and proposes how technology monitoring could be organised in a high-technology company. The proposed solution is based on the concept of a community of practice: a voluntary team whose members share an interest in a topic and a desire to learn. In the case company, a clear need for unified and coordinated technology monitoring has been identified. The thesis presents a preliminary technology-monitoring process for the case company and identifies the customers and producers of that process.
Abstract:
The main objective of this article is to show how the techniques developed in artificial intelligence (AI) are highly useful for improving software intended for the educational field. To that end, it first gives a brief summary of the aims and general objectives of AI research to date. It then describes the different applications of AI in education, aimed at students in training and instruction tasks and at teachers in the design and planning of teaching activities. The article ends with a reflection on future trends in AI applied to education.
Abstract:
The purpose of the research is to determine the practical profit that can be achieved using neural network methods as a prediction instrument. The thesis investigates the ability of neural networks to forecast future events. This capability is checked on the example of price prediction during intraday trading on the stock market. The executed experiments predict average prices over 1, 2, 5 and 10 minutes, based on one day of data, using two different types of forecasting systems: one based on recurrent neural networks and one on backpropagation neural networks. The precision of the predictions is measured by the absolute error and the error of market direction. The economic effectiveness is estimated by a special trading system. In conclusion, the best structures of neural networks are tested on data from a 31-day interval. The best average percentages of profit from one transaction (buying + selling) are 0.06668654, 0.188299453, 0.349854787 and 0.453178626, achieved for prediction periods of 1, 2, 5 and 10 minutes respectively. The investigation may be of interest to investors who have access to a fast information channel with minute-by-minute data updates.
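A minimal sketch of the sliding-window set-up described, using a scikit-learn feed-forward network on a synthetic minute-price series (the thesis's recurrent and backpropagation systems, its real market data and its trading system are not reproduced here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic one-day minute price series (a random walk); the thesis uses
# real intraday stock data, which is not reproduced here.
prices = 100 + np.cumsum(rng.normal(0, 0.05, 480))

window, horizon = 30, 5            # past minutes used, minutes averaged ahead
X, y = [], []
for i in range(window, len(prices) - horizon):
    X.append(prices[i - window:i])
    y.append(prices[i:i + horizon].mean())   # average price over next 5 minutes
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])
pred = net.predict(X[split:])

abs_err = np.abs(pred - y[split:]).mean()
# "Direction error": how often the predicted move from the last known price
# has the wrong sign compared with the realised move.
last = X[split:, -1]
dir_err = np.mean(np.sign(pred - last) != np.sign(y[split:] - last))
print(f"mean absolute error: {abs_err:.4f}, direction error: {dir_err:.2%}")
```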