849 results for planning (artificial intelligence)
Abstract:
A review of the problem of the philosophy of Artificial Intelligence in light of reflective equilibrium. The review is carried out to show how the question "can machines think?" has only ever been evaluated in human terms. Reflective equilibrium is proposed as a tool for defining concepts in such a way that experience and precepts are brought into balance, and with it a definition of thinking is constructed that is not limited exclusively to "thinking as humans do".
Abstract:
The WACC, or Weighted Average Cost of Capital, is the rate at which cash flows must be discounted to evaluate a project or company. To calculate this rate it is necessary to determine the cost of debt and the cost of the company's equity; the cost of debt is the current market rate the company is paying on its debt, whereas the cost of equity can be difficult and more complex to estimate, since there is no explicit cost. This paper presents an overview of the theories proposed throughout history for calculating the cost of equity. As a particular case, the unlevered cost of equity of six unlisted French companies belonging to the Personal Services (SAP) sector is estimated. To this end, the Analytic Hierarchy Process (AHP) and the Capital Asset Pricing Model (CAPM) are used, based on the work presented by Martha Pachón (2013) in Modelo alternativo para calcular el costo de los recursos propios.
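For reference, the two standard formulas this abstract leans on are the CAPM cost of equity, Re = Rf + β(Rm − Rf), and the weighted average itself, WACC = (E/V)Re + (D/V)Rd(1 − Tc). A minimal Python sketch follows, with illustrative figures rather than values from the paper:

```python
# Textbook CAPM and WACC formulas; all numbers below are illustrative
# assumptions, not data from the paper.

def cost_of_equity_capm(risk_free: float, beta: float, market_return: float) -> float:
    """CAPM: Re = Rf + beta * (Rm - Rf)."""
    return risk_free + beta * (market_return - risk_free)

def wacc(equity: float, debt: float, cost_equity: float,
         cost_debt: float, tax_rate: float) -> float:
    """WACC = (E/V)*Re + (D/V)*Rd*(1 - Tc), with V = E + D."""
    total = equity + debt
    return (equity / total) * cost_equity + (debt / total) * cost_debt * (1 - tax_rate)

if __name__ == "__main__":
    re = cost_of_equity_capm(risk_free=0.03, beta=1.2, market_return=0.08)
    print(f"cost of equity: {re:.2%}")                             # 9.00%
    print(f"WACC: {wacc(600_000, 400_000, re, 0.05, 0.30):.2%}")   # 6.80%
```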
Abstract:
In this paper, we employ techniques from artificial intelligence such as reinforcement learning and agent-based modeling as building blocks of a computational model for an economy based on conventions. First we model the interaction among firms in the private sector. These firms behave in an information environment based on conventions, meaning that a firm is likely to behave like its neighbors if it observes that their actions lead to a good payoff. On the other hand, we propose the use of reinforcement learning as a computational model for the role of the government in the economy, as the agent that determines the fiscal policy and whose objective is to maximize the growth of the economy. We present the implementation of a simulator of the proposed model based on SWARM, which employs the SARSA(λ) algorithm combined with a multilayer perceptron as the function approximation for the action-value function.
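For readers unfamiliar with the learning rule mentioned above, here is a minimal sketch of a SARSA(λ) update with accumulating eligibility traces, using a tabular action-value function for brevity; the paper pairs this same rule with a multilayer perceptron approximator, which is not reproduced here. All constants are illustrative assumptions:

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA, LAMBDA, EPS = 0.1, 0.95, 0.9, 0.1

w = np.zeros((N_STATES, N_ACTIONS))  # Q(s, a) = w[s, a]
z = np.zeros_like(w)                 # eligibility traces

def epsilon_greedy(state: int) -> int:
    """Pick a random action with probability EPS, else the greedy one."""
    if np.random.rand() < EPS:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(w[state]))

def sarsa_lambda_step(s, a, r, s_next, a_next, done):
    """Apply one SARSA(lambda) update after observing (s, a, r, s', a')."""
    global w, z
    target = r if done else r + GAMMA * w[s_next, a_next]
    delta = target - w[s, a]       # TD error
    z = GAMMA * LAMBDA * z         # decay all traces
    z[s, a] += 1.0                 # accumulate trace for the visited pair
    w = w + ALPHA * delta * z      # credit every recently visited pair
    if done:
        z = np.zeros_like(w)
```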
Abstract:
This text presents some concepts and theoretical frameworks useful for the analysis of work in ergonomics. The aim is to show the basic concepts for the study of work in the tradition of activity-centred ergonomics, and to analyze in general terms some of the models employed for the analysis of a work activity. The theoretical principles of ergonomics and the principles drawn from physiology, biomechanics, psychology and sociology are addressed first; the methodological approaches employed within this same perspective for the analysis of work activities are also presented. The starting point is the principle that an ergonomic study of work can be carried out from a double perspective: the analytical perspective and the comprehensive perspective.
Abstract:
Coordination and task allocation in distributed environments has been an important focus of research in recent years, and these topics are at the heart of multi-agent systems. Agents in these systems need to cooperate and to consider the other agents in their actions and decisions. Moreover, agents must coordinate among themselves to accomplish complex tasks that require more than one agent to be completed. These tasks can be so complex that agents may not know the location of the tasks or the time remaining before the tasks become obsolete. Agents may need to use communication in order to discover the tasks in the environment; otherwise, they can waste a great deal of time searching for tasks within the scenario. Similarly, the distributed decision-making process can be even more complex if the environment is dynamic, uncertain and real-time. In this dissertation, we consider constrained, cooperative multi-agent environments (dynamic, uncertain and real-time). Two approaches are proposed that allow the agents to coordinate. The first is a semi-centralized mechanism based on combinatorial auction techniques, whose main idea is to minimize the cost of the tasks allocated from the central agent to the teams of agents. This algorithm takes into account the agents' preferences over the tasks; these preferences are included in the bid sent by each agent. The second is a fully decentralized scheduling approach, which allows agents to allocate their tasks taking into account their temporal preferences over the tasks. In this case, the performance of the system depends not only on the maximization or optimization criterion, but also on the agents' capacity to adapt their allocations efficiently. Additionally, in a dynamic environment, execution failures can occur in any plan due to uncertainty and to the failure of individual actions, so an indispensable part of a planning system is the capacity to re-plan. This dissertation therefore also provides a re-planning approach whose aim is to allow agents to re-coordinate their plans when problems in the environment prevent plan execution. All these approaches were developed to allow agents to allocate and coordinate all the complex tasks efficiently in a cooperative, dynamic and uncertain multi-agent environment, and they have demonstrated their efficiency in experiments carried out in the RoboCup Rescue simulation environment.
Abstract:
In this paper we describe how we generated written explanations for indirect users of a knowledge-based system in the domain of drug prescription. We call indirect users the intended recipients of explanations, to distinguish them from the prescriber (the direct user) who interacts with the system. The Explanation Generator was designed after several studies of indirect users' information needs and physicians' explanatory attitudes in this domain. It integrates text-planning techniques with ATN-based surface generation. A double modeling component enables the information content, order and style to be adapted to the indirect user to whom the explanation is addressed. Several examples of computer-generated texts are provided, and they are contrasted with the physicians' explanations to discuss the advantages and limits of the approach adopted.
Abstract:
In this article, we provide an initial insight into the study of machine intelligence (MI) and what it means for a machine to be intelligent. We discuss how MI has progressed to date and consider future scenarios in as realistic and logical a way as possible. To do this, we unravel one of the major stumbling blocks to the study of MI: the field that has become widely known as "artificial intelligence".
Abstract:
In this paper, the practical generation of identification keys for biological taxa using a multilayer perceptron neural network is described. Unlike conventional expert systems, this method does not require an expert for key generation but is based merely on recordings of observed character states. Like a human taxonomist, its judgement is based on experience, and it is therefore capable of generalized identification of taxa. An initial study involving identification of three species of Iris with greater than 90% confidence is presented here. In addition, the horticulturally significant genus Lithops (Aizoaceae/Mesembryanthemaceae), popular with enthusiasts of succulent plants, is used as a more practical example, because of the difficulty of generating a conventional key to the species and the existence of a relatively recent monograph. It is demonstrated that such an Artificial Neural Network Key (ANNKEY) can identify more than half (52.9%) of the species in this genus, after training with representative data, even though data for one character are completely missing.
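The Iris portion of this experiment is easy to re-create with modern tooling. The sketch below uses scikit-learn; the original work predates the library, so the architecture and hyperparameters are illustrative, not the authors' exact network:

```python
# A modern re-creation of MLP-based Iris identification using scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# One hidden layer, as in a classic multilayer perceptron.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.1%}")  # typically > 90%
```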
Abstract:
Deception-detection is the crux of Turing's experiment to examine machine thinking, conveyed through a capacity to respond with sustained and satisfactory answers to unrestricted questions put by a human interrogator. However, in the 60 years to the month since the publication of "Computing Machinery and Intelligence", little agreement exists on a canonical format for Turing's textual game of imitation, deception and machine intelligence. This research raises, from the trapped mine of philosophical claims, counter-claims and rebuttals, Turing's own distinct five-minute question-answer imitation game, which he envisioned practicalised in two different ways: a) a two-participant, interrogator-witness viva voce; b) a three-participant comparison of a machine with a human, both questioned simultaneously by a human interrogator. Using Loebner's 18th Prize for Artificial Intelligence contest and Colby et al.'s 1972 transcript-analysis paradigm, this research practicalised Turing's imitation game with over 400 human participants and 13 machines across three original experiments. Results show that, at the current state of technology, a deception rate of 8.33% was achieved by machines in 60 human-machine simultaneous comparison tests. Results also show that more than 1 in 3 reviewers succumbed to hidden-interlocutor misidentification after reading transcripts from experiment 2. Deception-detection is essential to uncover the increasing number of malfeasant programmes, such as CyberLover, developed to steal identity and financially defraud users in chatrooms across the Internet. Practicalising Turing's two tests can assist in understanding natural dialogue and mitigate the risk from cybercrime.
Abstract:
Navigation is a broad topic that has been receiving considerable attention from the mobile robotics community over the years. In order to execute autonomous driving in outdoor urban environments it is necessary to identify parts of the terrain that can be traversed and parts that should be avoided. This paper describes an analysis of terrain identification based on different kinds of visual information, using an MLP artificial neural network and combining the responses of multiple classifiers. Experimental tests using a vehicle and a video camera have been conducted in real scenarios to evaluate the proposed approach.
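The combination scheme can be sketched as a simple committee: one MLP per visual feature set, merged by majority vote into a traversable / non-traversable decision. The feature sets and network sizes below are placeholder assumptions, not the paper's actual descriptors:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_committee(feature_sets, labels):
    """Train one MLP per visual cue.

    feature_sets: list of (n_samples, n_features) arrays, one per cue
    (e.g., color statistics, texture measures -- placeholders here).
    labels: binary array, 1 = traversable, 0 = obstacle.
    """
    return [MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                          random_state=0).fit(X, labels)
            for X in feature_sets]

def committee_predict(models, feature_sets):
    """Majority vote of the per-cue classifiers for each sample."""
    votes = np.stack([m.predict(X) for m, X in zip(models, feature_sets)])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```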
Abstract:
When modeling real-world decision-theoretic planning problems in the Markov Decision Process (MDP) framework, it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, natural uncertainty arises in the transition specification due to elicitation of MDP transition models from an expert or estimation from data, or non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, the Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) has been introduced to model such scenarios. Unfortunately, while various solution algorithms exist for MDP-IPs, they often require external calls to optimization routines and thus can be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient dynamic programming methods to exploit its structure. Noting that the key computational bottleneck in the solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup in comparison to traditional "flat" dynamic programming approaches, and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error of any approximation algorithm evaluated.
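For intuition, the sketch below shows robust value iteration for the special case where imprecision takes the form of interval transition bounds; there, the inner adversarial minimization admits a greedy solution rather than a general nonlinear solver. This is a generic construction under that assumption, not the authors' factored algorithm:

```python
import numpy as np

def worst_case_expectation(lo, hi, values):
    """min_p p @ values  s.t.  lo <= p <= hi, sum(p) = 1 (greedy solution).

    Assumes the interval set is non-empty: lo.sum() <= 1 <= hi.sum().
    """
    p = lo.copy()
    slack = 1.0 - lo.sum()
    for s in np.argsort(values):       # pour the remaining mass into the
        room = hi[s] - lo[s]           # lowest-value successors first
        add = min(room, slack)
        p[s] += add
        slack -= add
    return float(p @ values)

def robust_value_iteration(R, P_lo, P_hi, gamma=0.95, iters=200):
    """R: (S, A) rewards; P_lo, P_hi: (S, A, S) interval transition bounds.

    The agent maximizes over actions while "nature" adversarially picks the
    worst transition distribution consistent with the intervals.
    """
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.array([[R[s, a] + gamma * worst_case_expectation(
                           P_lo[s, a], P_hi[s, a], V)
                       for a in range(A)] for s in range(S)])
        V = Q.max(axis=1)
    return V
```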
Abstract:
Planning to reach a goal is an essential capability for rational agents. In general, a goal specifies a condition to be achieved at the end of the plan execution. In this article, we introduce nondeterministic planning for extended reachability goals (i.e., goals that also specify a condition to be preserved during the plan execution). We show that, when this kind of goal is considered, the temporal logic CTL turns out to be inadequate to formalize plan synthesis and plan validation algorithms. This is mainly due to the fact that CTL's semantics cannot discern among the various actions that produce state transitions. To overcome this limitation, we propose a new temporal logic called α-CTL. Then, based on this new logic, we implement a planner capable of synthesizing reliable plans for extended reachability goals, as a side effect of model checking.
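The synthesis idea can be illustrated with a generic backward-fixpoint construction for "reach G while preserving P" in a nondeterministic domain. This is a standard strong-planning sketch under that reading of extended reachability goals, not the authors' α-CTL model checker itself:

```python
def strong_plan(states, actions, T, goal, preserve):
    """Backward fixpoint for 'reach goal while preserving preserve'.

    T(s, a) -> set of possible successor states (nondeterministic action).
    Returns a state-action policy and the set of winning states.
    """
    won = set(goal)          # states already satisfying the goal
    policy = {}
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in won or s not in preserve:
                continue     # only preserve-satisfying states may be crossed
            for a in actions:
                succ = T(s, a)
                # An action is acceptable only if every nondeterministic
                # outcome is already winning, so all execution paths stay
                # inside preserve until the goal is reached.
                if succ and succ <= won:
                    policy[s] = a
                    won.add(s)
                    changed = True
                    break
    return policy, won
```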
Abstract:
AI planning systems tend to be disembodied and are not situated within the environment for which plans are generated, thus losing information concerning the interaction between the system and its environment. This paper argues that such information may potentially be valuable in constraining plan formulation, and presents an agent- and domain-independent architecture that extends the classical AI planning framework to take into account context, i.e., the interaction between an autonomous situated planning agent and its environment. The paper describes how context constrains the goals an agent might generate, enables those goals to be prioritised, and constrains plan selection.
Abstract:
Artificial Intelligence techniques are applied to improve the performance of a simulated oil distillation system, a debutanizer column. In this process, the feed entering the column is separated by heating: the lightest components vaporize, forming LPG (Liquefied Petroleum Gas), while the other components (C5+) remain liquid. Ideally, the LPG contains only propane and butanes, but in practice there are contaminants, for example pentanes. The objective of this work is to control the pentane content of the LPG by intelligently determining set points (SPs) for the PID controllers present in the original instrumentation (regulatory control) of the column. A fuzzy system is responsible for adjusting the SPs, driven by the comparison between the molar fraction of pentane present at the output of the plant (LPG) and the desired amount. However, the pentane molar fraction is difficult to measure on-line, due to constraints such as long measurement intervals and the need for high reliability at low cost. Therefore, an inference system based on a multilayer neural network was used to infer the pentane molar fraction from secondary variables of the column. Finally, the results show that the proposed control system was able to control the pentane molar fraction under different operational situations.
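The supervisory loop described above can be sketched as a small fuzzy rule base mapping the error between the desired and the (neural-network inferred) pentane molar fraction to a set-point correction for the PID layer. The membership shapes and output gain below are illustrative assumptions, not the paper's tuned values:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_setpoint_correction(error):
    """error = desired - inferred pentane molar fraction."""
    # Fuzzify the error into three linguistic terms.
    neg  = tri(error, -0.02, -0.01, 0.0)
    zero = tri(error, -0.01,  0.0,  0.01)
    pos  = tri(error,  0.0,   0.01, 0.02)
    # Rules: negative error -> lower the SP; positive error -> raise it.
    # Weighted-average defuzzification over singleton outputs.
    actions = {-1.0: neg, 0.0: zero, 1.0: pos}
    total = sum(actions.values())
    return sum(a * w for a, w in actions.items()) / total if total else 0.0

# Example: the inferred pentane fraction is 0.8 percentage points too high,
# so the correction lowers the SP (0.5 is an illustrative gain).
sp_delta = 0.5 * fuzzy_setpoint_correction(-0.008)
```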
Abstract:
Artificial neural networks are dynamic systems consisting of highly interconnected and parallel nonlinear processing elements. Systems based on artificial neural networks have high computational rates due to the use of a massive number of these computational elements. Neural networks with feedback connections provide a computing model capable of solving a rich class of optimization problems. In this paper, a modified Hopfield network is developed for solving problems related to operations research. The internal parameters of the network are obtained using the valid-subspace technique. Simulated examples are presented as an illustration of the proposed approach.
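A generic discrete Hopfield network of the kind described minimizes the energy E(x) = -1/2 x'Wx - b'x by asynchronous updates. The sketch below does not reproduce the paper's valid-subspace technique for deriving the internal parameters; W and b are whatever encoding of the optimization problem you supply (W should be symmetric with zero diagonal for convergence):

```python
import numpy as np

def hopfield_minimize(W, b, steps=1000, seed=0):
    """Asynchronous binary Hopfield dynamics descending E(x) = -x@W@x/2 - b@x."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = rng.integers(0, 2, size=n).astype(float)   # random binary start
    for _ in range(steps):
        i = rng.integers(n)                        # pick one neuron at a time
        x[i] = 1.0 if W[i] @ x + b[i] > 0 else 0.0 # threshold update
    return x

def energy(W, b, x):
    return -0.5 * x @ W @ x - b @ x
```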