2 results for Markov Chain

at Massachusetts Institute of Technology


Relevance:

60.00%

Publisher:

Abstract:

One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn them. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy, a mapping from observations to actions, based on feedback from the environment. The learning can be viewed as searching a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller; POMDPs require the controller to have memory. We investigate controllers with memory, including controllers with external memory, finite-state controllers, and distributed controllers for multi-agent systems. For these various controllers we work out the details of algorithms that learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, we develop a policy evaluation algorithm for the case of experience re-use. We address the question of how much experience suffices for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
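The core learning step described above is gradient ascent on expected cumulative reinforcement over the parameters of a stochastic controller. Below is a minimal REINFORCE-style sketch of that idea for a toy POMDP with a memoryless tabular softmax policy; the environment, its parameters, and the policy form are illustrative assumptions, not the dissertation's actual controllers or domains.

```python
# Minimal REINFORCE-style policy-gradient sketch for a toy POMDP.
# Illustrative only: the environment, its parameters, and the tabular
# softmax policy are assumptions, not the dissertation's setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy POMDP: 2 hidden states, 2 noisy observations, 2 actions.
# Matching the action to the hidden state yields reward 1, otherwise 0.
N_STATES, N_OBS, N_ACTIONS, HORIZON = 2, 2, 2, 10
P_CORRECT_OBS = 0.8          # probability the observation matches the hidden state

def observe(state):
    return state if rng.random() < P_CORRECT_OBS else 1 - state

def step(state, action):
    reward = 1.0 if action == state else 0.0
    next_state = rng.integers(N_STATES)   # hidden state evolves as a simple Markov chain
    return next_state, reward

def sample_action(theta, obs):
    logits = theta[obs]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(N_ACTIONS, p=probs), probs

def run_episode(theta):
    """Roll out one episode; return the score-function terms and the total reward."""
    grad = np.zeros_like(theta)
    total_reward = 0.0
    state = rng.integers(N_STATES)
    for _ in range(HORIZON):
        obs = observe(state)
        action, probs = sample_action(theta, obs)
        # d/d theta log pi(a|o) for a tabular softmax policy
        grad[obs] -= probs
        grad[obs, action] += 1.0
        state, reward = step(state, action)
        total_reward += reward
    return grad, total_reward

theta = np.zeros((N_OBS, N_ACTIONS))     # policy parameters: one logit per (obs, action)
learning_rate = 0.05
for _ in range(2000):
    grad, ret = run_episode(theta)
    theta += learning_rate * ret * grad  # ascend the gradient of expected return
```

A finite-state or external-memory controller would extend this sketch by conditioning the action distribution on an internal memory state that is updated alongside each observation.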

Relevance:

60.00%

Publisher:

Abstract:

This paper presents a model and analysis of a synchronous tandem flow line that produces different part types on unreliable machines. The machines operate according to a static priority rule: each machine works on the highest-priority part whenever possible and on lower-priority parts only when it cannot produce those with higher priority. We develop a new decomposition method that analyzes the behavior of the manufacturing system by decomposing the long production line into small, analytically tractable components. As a first step in modeling a production line with more than one part type, we restrict ourselves to the case of two part types. Detailed modeling and derivations are presented for a small two-part-type production line consisting of two processing machines and two demand machines; a generalized longer flow line is then analyzed. Furthermore, estimates of performance measures, such as average buffer levels and production rates, are presented and compared against extensive discrete-event simulation. The quantitative behavior of the two-part-type processing line under different demand scenarios is also reported.
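As a rough illustration of the simulation baseline mentioned in the abstract, the sketch below estimates production rates for a single unreliable machine serving two part types under a static priority rule in discrete time. The structure and all parameters are assumptions made for illustration; the paper's line contains additional processing and demand machines and is analyzed by decomposition, not by simulation alone.

```python
# Discrete-time simulation sketch: one unreliable machine, two part types,
# static priority rule, finite buffers, random demand. All parameters and the
# line structure are illustrative assumptions, not the paper's model.
import random

random.seed(0)

P_FAIL, P_REPAIR = 0.05, 0.3   # geometric failure / repair probabilities
BUFFER_CAP = 10                # finite buffer for each part type
P_DEMAND = (0.4, 0.4)          # per-period demand probability per part type
T = 100_000                    # number of simulated periods

up = True
buffers = [0, 0]               # finished parts awaiting demand, per type
produced = [0, 0]

for _ in range(T):
    # Machine failure / repair dynamics.
    up = (random.random() >= P_FAIL) if up else (random.random() < P_REPAIR)

    # Static priority rule: work on type 0 whenever its buffer has space,
    # otherwise work on type 1.
    if up:
        if buffers[0] < BUFFER_CAP:
            buffers[0] += 1
            produced[0] += 1
        elif buffers[1] < BUFFER_CAP:
            buffers[1] += 1
            produced[1] += 1

    # Demand arrivals consume finished parts when available.
    for k in range(2):
        if random.random() < P_DEMAND[k] and buffers[k] > 0:
            buffers[k] -= 1

print("estimated production rates:", [p / T for p in produced])
```

Average buffer levels could be estimated the same way by accumulating the buffer contents each period; the decomposition method in the paper aims to approximate such quantities analytically instead of by long simulation runs.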