875 results for Machine Learning Robotics Artificial Intelligence Bayesian Networks


Relevance:

100.00%

Publisher:

Abstract:

With the growth of the Internet and the Semantic Web, together with improvements in communication speed and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, recent years have seen growing interest in formal representation structures with suitable characteristics, such as the ability to organize data and information and to reuse their contents for the generation of new knowledge. Controlled vocabularies, and specifically ontologies, stand out as representation structures with high potential. They not only allow data to be represented, but also enable the reuse of that data for knowledge extraction, together with its subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the update and maintenance of ontologies. The relevant literature already presents first results on automatic ontology maintenance, but these are still at a very early stage; human-driven processes remain the usual way to update and maintain an ontology, which makes this a cumbersome task. The generation of new knowledge for ontology growth can be based on Data Mining techniques, an area that studies techniques for data processing, pattern discovery, and knowledge extraction in IT systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources using Data Mining techniques, namely pattern discovery, focused on improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and its results are presented, applied to the building and construction sector.
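The abstract does not specify which pattern-discovery technique the thesis uses. As one plausible, hedged illustration of mining candidate ontology relations from unstructured text, the sketch below applies classic Hearst-style lexico-syntactic patterns; the patterns, helper names, and example sentences are assumptions for illustration, not the method proposed in the work.

```python
# A hedged sketch of pattern discovery over unstructured text: Hearst-style
# lexico-syntactic patterns propose candidate (hyponym, IS-A, hypernym) pairs
# that a curator could review before adding them to an ontology. Patterns and
# example text are illustrative assumptions, not the method of this thesis.
import re

PATTERNS = [
    r"(?P<hyper>[\w-]+(?: [\w-]+)?) such as "
    r"(?P<hypos>[\w-]+(?:, [\w-]+)*(?:,? (?:and|or) [\w-]+)?)",
    r"(?P<hyper>[\w-]+(?: [\w-]+)?),? including "
    r"(?P<hypos>[\w-]+(?:, [\w-]+)*(?:,? (?:and|or) [\w-]+)?)",
]

def extract_relations(text):
    relations = []
    for pattern in PATTERNS:
        for m in re.finditer(pattern, text):
            hypos = re.split(r",? (?:and|or) |, ", m.group("hypos"))
            relations += [(h, "IS-A", m.group("hyper")) for h in hypos]
    return relations

text = ("Construction materials such as concrete, steel and timber must be "
        "certified. Load-bearing elements, including beams and columns, are "
        "inspected annually.")
for relation in extract_relations(text):
    print(relation)
```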

Relevance:

100.00%

Publisher:

Abstract:

RoboCup was created in 1996 by a group of Japanese, American, and European Artificial Intelligence and Robotics researchers with a formidable, visionary long-term challenge: "By 2050 a team of robot soccer players will beat the human World Cup champion team." At that time, in the mid 90s, when there were very few effective mobile robots and the Honda P2 humanoid robot had just been presented to an astonished public for the first time (also in 1996), the RoboCup challenge, set as an adversarial game between teams of autonomous robots, was fascinating and exciting. RoboCup enthusiastically and concretely introduced three robot soccer leagues, namely "Simulation," "Small-Size," and "Middle-Size," as we explain below, and organized its first competitions at IJCAI'97 in Nagoya with a surprising number of 100 participants [RC97]. It was the beginning of what became a continuously growing research community, and RoboCup established itself as a structured organization (the RoboCup Federation, www.RoboCup.org). RoboCup fosters annual competition events, where the scientific challenges faced by the researchers are addressed in a setting that is also attractive to the general public; RoboCup events are among the most popular and well-attended in the research fields of AI and Robotics. RoboCup further includes a technical symposium with contributions relevant to the RoboCup competitions and, beyond them, to AI and Robotics in general.

Relevance:

100.00%

Publisher:

Abstract:

Integrated master's dissertation in Information Systems Engineering and Management

Relevance:

100.00%

Publisher:

Abstract:

Integrated master's dissertation in Information Systems Engineering and Management

Relevance:

100.00%

Publisher:

Abstract:

Machine learning, inductive logic programming, search

Relevance:

100.00%

Publisher:

Abstract:

Swarm intelligence is a branch of artificial intelligence that has been gaining considerable momentum in recent times, especially in the field of robotics. In this project we study the social behaviour that emerges from the interactions among a given number of autonomous robots in the domain of cleaning large surfaces. Once a scenario and a robot that fit the project's requirements have been chosen, we run a series of simulations with different search policies, which allow us to evaluate the robots' behaviour for given initial conditions of robot distribution and areas to be cleaned. From the results obtained we are able to determine which configuration yields the best results.
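As a rough, hypothetical illustration of the kind of comparison described (not the project's actual scenario, robot, or policies), the sketch below simulates a few robots cleaning a grid under two assumed search policies, a blind random walk and a greedy move toward the nearest dirty cell, and reports how many steps each takes.

```python
# A rough sketch (assumed grid, robot count, and policies) of the comparison
# described in the abstract: several autonomous robots cleaning a surface,
# evaluated under a blind random walk versus a greedy search policy.
import random

GRID, ROBOTS, STEPS = 20, 4, 2000

def neighbours(x, y):
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < GRID and 0 <= y + dy < GRID]

def run(policy, seed=0):
    rng = random.Random(seed)
    dirty = {(x, y) for x in range(GRID) for y in range(GRID)}
    robots = [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(ROBOTS)]
    for t in range(STEPS):
        for i, (x, y) in enumerate(robots):
            dirty.discard((x, y))              # clean the current cell
            if policy == "greedy" and dirty:   # step toward nearest dirty cell
                tx, ty = min(dirty, key=lambda c: abs(c[0] - x) + abs(c[1] - y))
                robots[i] = min(neighbours(x, y),
                                key=lambda c: abs(c[0] - tx) + abs(c[1] - ty))
            else:                              # blind random walk
                robots[i] = rng.choice(neighbours(x, y))
        if not dirty:
            return t + 1                       # steps until fully clean
    return STEPS                               # did not finish in time

for policy in ("random", "greedy"):
    print(policy, run(policy))
```

On this toy setup the greedy policy typically finishes far sooner, which is the kind of policy-versus-configuration comparison the project evaluates.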

Relevance:

100.00%

Publisher:

Abstract:

Etiologic research in psychiatry relies on an objectivist epistemology positing that human cognition is specified by the "reality" of the outer world, which consists of a totality of mind-independent objects. Truth is considered as some sort of correspondence relation between words and external objects, and mind as a mirror of nature. In our view, this epistemology considerably impedes etiologic research. Objectivist epistemology has recently been confronting a growing critique from diverse scientific fields. Alternative models in the neurosciences (neuronal selection), artificial intelligence (connectionism), and developmental psychology (developmental biodynamics) converge in viewing living organisms as self-organizing systems. In this perspective, the organism is not specified by the outer world; rather, it enacts its environment by selecting relevant domains of significance that constitute its world. The distinction between mind and body, or organism and environment, is a matter of observational perspective. These models from the empirical sciences are compatible with fundamental tenets of philosophical phenomenology and hermeneutics, and they imply consequences for research in psychopathology: symptoms cannot be viewed as disconnected manifestations of discrete, localized brain dysfunctions. Psychopathology should therefore focus on how the person's self-coherence is maintained, and on the understanding and empirical investigation of the systemic laws that govern neurodevelopment and the organization of human cognition.

Relevance:

100.00%

Publisher:

Abstract:

Creation of a 2D arcade beat 'em up game, using scenes with a certain depth of movement and endowing non-player characters and other objects with Artificial Intelligence, so that their behaviour is not always linear, and exploiting this to add difficulty levels.

Relevance:

100.00%

Publisher:

Abstract:

Background: The 'database search problem', that is, the strengthening of a case (in terms of probative value) against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and sharply opposing conclusions. This represents a challenging obstacle in teaching, but it also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. The derivation of networks that capture the probabilistic arguments surrounding the database search problem is outlined in detail, and the resulting Bayesian networks offer a distinct, clearer view on the main debated issues.

Methods: As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population.

Results: This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches.

Conclusions: The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
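As a back-of-the-envelope companion (toy numbers assumed; the paper's actual models are Bayesian networks, not this closed form), the snippet below evaluates the arithmetic such a network encodes in the simplest scenario: one database match, n - 1 exclusions, and a uniform prior over N potential sources.

```python
# A minimal sketch (toy numbers, not the paper's networks) of the arithmetic a
# Bayesian network for the database search problem encodes. Scenario: exactly
# one of n database members matches the crime-stain profile, the other n - 1
# are excluded, and the profile frequency in the relevant population is gamma.
N, n, gamma = 100_000, 1_000, 1e-4     # hypothetical population/database sizes

# Uniform prior over the N potential sources; excluded database members have
# zero likelihood, and the common (1 - gamma)**(n - 1) exclusion factor cancels.
lik_matcher = 1.0                      # the matcher matches with certainty
lik_outsiders = (N - n) * gamma        # any outsider could match by chance
posterior = lik_matcher / (lik_matcher + lik_outsiders)
print(f"P(matcher is the source | one match, n-1 exclusions) = {posterior:.3f}")
```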

Relevance:

100.00%

Publisher:

Abstract:

Reinforcement learning (RL) is a very suitable technique for robot learning, as it can learn in unknown environments and with real-time computation. The main difficulties in adapting classic RL algorithms to robotic systems are the generalization problem and the correct observation of the Markovian state. This paper addresses the generalization problem by proposing the semi-online neural Q-learning algorithm (SONQL). The algorithm uses the classic Q-learning technique with two modifications. First, a neural network (NN) approximates the Q-function, allowing the use of continuous states and actions. Second, a database of the most representative learning samples accelerates and stabilizes convergence. The term semi-online refers to the fact that the algorithm uses not only the current learning sample but also past ones; nevertheless, it is able to learn in real time while the robot interacts with the environment. The paper shows simulated results on the "mountain-car" benchmark as well as real results with an underwater robot in a target-following behavior.
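A minimal sketch of the two modifications the abstract names, a neural Q-function and a database of replayed learning samples, is given below on a made-up one-dimensional task; the network size, reward, and all hyperparameters are assumptions, not the SONQL authors' settings.

```python
# A minimal sketch (not the SONQL authors' code) of the two modifications the
# abstract names: (1) a small neural network approximating Q(s, a) over a
# continuous state and (2) a database of representative samples replayed
# alongside the current one. The task, reward, and all hyperparameters below
# are assumptions made for illustration.
import random
import numpy as np

ACTIONS = [-1.0, 0.0, 1.0]            # discrete actions over a continuous state
GAMMA, EPS = 0.95, 0.2
rng = np.random.default_rng(0)

# one-hidden-layer Q-network: input (state, action) -> scalar Q-value
W1, b1 = rng.normal(0, 0.3, (16, 2)), np.zeros(16)
W2, b2 = rng.normal(0, 0.3, 16), 0.0

def q(s, a):
    h = np.tanh(W1 @ np.array([s, a]) + b1)
    return W2 @ h + b2, h

def train_step(s, a, target, lr=0.05):
    global W1, b1, W2, b2
    pred, h = q(s, a)
    err = pred - target                # gradient of 0.5 * (pred - target)**2
    W2 -= lr * err * h
    b2 -= lr * err
    grad_h = err * W2 * (1 - h ** 2)   # backprop through tanh
    W1 -= lr * np.outer(grad_h, [s, a])
    b1 -= lr * grad_h

database = []                          # bounded store of learning samples
s = rng.uniform(-1, 1)
for step in range(3000):
    if random.random() < EPS:          # epsilon-greedy exploration
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: q(s, act)[0])
    s2 = float(np.clip(s + 0.1 * a, -1, 1))
    r = -abs(s2)                       # toy reward: stay near the origin
    database.append((s, a, r, s2))
    if len(database) > 500:
        database.pop(random.randrange(len(database)))
    # "semi-online": learn from the current sample plus replayed past samples
    batch = [(s, a, r, s2)] + random.sample(database, min(8, len(database)))
    for s_, a_, r_, s2_ in batch:
        target = r_ + GAMMA * max(q(s2_, act)[0] for act in ACTIONS)
        train_step(s_, a_, target)
    s = s2
print("greedy action at s = 0.8:", max(ACTIONS, key=lambda act: q(0.8, act)[0]))
```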

Relevance:

100.00%

Publisher:

Abstract:

Development of a line-following robot, implementing various solutions from the areas of embedded systems and artificial intelligence.

Relevance:

100.00%

Publisher:

Abstract:

Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach, alternative to geostatistics, for modelling the spatial distribution of petrophysical properties in complex reservoirs. The approach is based on semi-supervised learning, which handles both 'labelled' observed data and 'unlabelled' data that have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. At the same time, it is able to capture and preserve key spatial dependencies, such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and to learn dependencies from them. The semi-supervised SVR model is able to balance signal/noise levels and to control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. The uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and the smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case, and the resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
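The sketch below illustrates only the Bayesian step of such a workflow, under heavy simplification: a Metropolis-Hastings sampler infers a single parameter theta (standing in for, say, spatial correlation size) from synthetic "production history", then forecast uncertainty envelopes are read off the posterior samples. The toy forward model is an assumption that replaces both the SVR geomodel and the reservoir simulator.

```python
# A hedged sketch of the Bayesian step only: Metropolis-Hastings inference of a
# single parameter theta (standing in for, e.g., spatial correlation size) from
# synthetic production history, followed by forecast uncertainty envelopes. The
# toy forward model below replaces both the SVR geomodel and the simulator.
import numpy as np

rng = np.random.default_rng(1)

def forward(theta, t):                 # toy proxy for the reservoir response
    return np.exp(-t / theta)

t_hist = np.arange(1.0, 6.0)           # observed history period
obs = forward(4.0, t_hist) + rng.normal(0, 0.02, t_hist.size)

def log_post(theta, sigma=0.02):
    if not 0.5 < theta < 20.0:         # flat prior on an admissible range
        return -np.inf
    return -0.5 * np.sum((forward(theta, t_hist) - obs) ** 2) / sigma ** 2

samples, theta, lp = [], 2.0, log_post(2.0)
for _ in range(5000):                  # Metropolis-Hastings random walk
    prop = theta + rng.normal(0, 0.3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

t_fore = np.arange(6.0, 11.0)          # forecast period
fore = np.array([forward(th, t_fore) for th in samples[1000:]])
lo, hi = np.percentile(fore, [5, 95], axis=0)
print("90% forecast envelope at t = 10:", lo[-1], hi[-1])
```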

Relevance:

100.00%

Publisher:

Abstract:

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) is a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, certain numbers of contributors are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single, fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic: using Bayes' theorem, it provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N (the number of contributors) and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
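The snippet below is a hedged, single-locus illustration of the probabilistic procedure: a Monte Carlo estimate of the probability of the observed allele set for each candidate N, combined with a uniform prior via Bayes' theorem. The allele frequencies and observed set are invented, and the paper's decision-theoretic scoring is not reproduced.

```python
# A hedged, single-locus illustration (invented frequencies, uniform prior) of
# inferring the number of contributors N by Bayes' theorem: Monte Carlo
# estimates of P(observed allele set | N) are normalised into a posterior.
import numpy as np

rng = np.random.default_rng(0)
alleles = np.array(["a", "b", "c", "d"])
freqs = np.array([0.40, 0.30, 0.20, 0.10])  # hypothetical rates of occurrence
observed = {"a", "b", "c"}                   # alleles seen in the mixed stain

def likelihood(n, trials=100_000):
    # each of the n contributors donates two alleles drawn from the population
    draws = rng.choice(alleles, size=(trials, 2 * n), p=freqs)
    return sum(set(row) == observed for row in draws) / trials

ns = [1, 2, 3, 4]
lik = np.array([likelihood(n) for n in ns])  # N=1 is impossible: 2 alleles < 3
posterior = lik / lik.sum()                   # uniform prior over N cancels
for n, p in zip(ns, posterior):
    print(f"P(N = {n} | observed alleles) = {p:.3f}")
```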

Relevance:

100.00%

Publisher:

Abstract:

We present a novel filtering method for multispectral satellite image classification. The proposed method learns a set of spatial filters that maximize the class separability of a binary support vector machine (SVM) through a gradient descent approach. Regularization issues are discussed in detail, and a Frobenius-norm regularization is proposed to efficiently exclude uninformative filter coefficients. Experiments carried out on multiclass one-against-all classification and on target detection show the capabilities of the learned spatial filters.
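As a hedged illustration of the general idea (learning filter coefficients by gradient descent on an SVM-style hinge loss, with a Frobenius-norm penalty), the sketch below uses synthetic patches, a single 3x3 filter, a scalar feature, and a numerical gradient; all of these are simplifications assumed for illustration, not the authors' method.

```python
# A hedged sketch (synthetic data, heavy simplification) of learning spatial
# filter coefficients by gradient descent on an SVM-style hinge loss, with a
# Frobenius-norm penalty shrinking uninformative coefficients. A single 3x3
# filter, a scalar variance feature, and a numerical gradient stand in for
# the paper's full filter bank and analytic optimisation.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

def make_patch(label, size=16):
    img = rng.normal(0, 1, (size, size))
    if label == 1:                         # class +1 carries vertical stripes
        img += np.sin(np.arange(size))[None, :]
    return img

y = np.array([-1] * 50 + [1] * 50)
X = [make_patch(label) for label in y]

F = rng.normal(0, 0.1, (3, 3))             # spatial filter to be learned
w, b = 1.0, 0.0                            # linear classifier on the feature
lam, lr = 0.01, 0.01

def feature(img, filt):
    return convolve2d(img, filt, mode="valid").var()   # filter response energy

for epoch in range(50):
    for img, yi in zip(X, y):
        feat = feature(img, F)
        if yi * (w * feat + b) < 1:        # hinge-loss subgradient step
            grad = np.zeros_like(F)        # numerical d(feat)/dF: 3x3 is cheap
            for i in range(3):
                for j in range(3):
                    Fp = F.copy()
                    Fp[i, j] += 1e-4
                    grad[i, j] = (feature(img, Fp) - feat) / 1e-4
            F += lr * yi * w * grad        # increase the violated margin
            w += lr * yi * feat
            b += lr * yi
    F -= lr * lam * F                      # Frobenius-norm regularization
print("learned filter:\n", F.round(2))
```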