4 results for Continuous-time Markov Chain
at Massachusetts Institute of Technology
Abstract:
We develop an extension to the tactical planning model (TPM) for a job shop, originally formulated by the third author. The TPM is a discrete-time model in which all transitions occur at the start of each time period. The time period must be defined appropriately for the model to be meaningful: each period must be short enough that a job is unlikely to travel through more than one station in one period, yet long enough to justify the assumptions of continuous workflow and Markovian job movements. We build an extension to the TPM that overcomes this restriction on period sizing by permitting production control over shorter time intervals. We achieve this by deriving a continuous-time linear control rule for a single station. We then determine the first two moments of the production level and queue length for the workstation.
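For intuition, here is a minimal simulation sketch of a discrete-time linear control rule of the kind the TPM uses: production each period is a fixed fraction of the queued work, and the first two moments of production and queue length are estimated empirically. The clearing fraction, arrival process, and all numerical values are assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of a discrete-time linear control rule P_t = alpha * Q_t for a single
# workstation, with Q_t the work in queue and alpha the fraction cleared per
# period.  Arrivals are i.i.d. Poisson (an assumption); the first two moments
# of production and queue length are estimated by simulation.

rng = np.random.default_rng(0)

alpha = 0.25          # fraction of queued work released each period (assumed)
mean_arrival = 10.0   # mean work arriving per period (assumed)
periods = 100_000

queue = mean_arrival / alpha   # start near the steady-state mean
production, queue_levels = [], []

for _ in range(periods):
    queue += rng.poisson(mean_arrival)   # new work arrives
    p = alpha * queue                    # linear control rule
    queue -= p
    production.append(p)
    queue_levels.append(queue)

production = np.array(production)
queue_levels = np.array(queue_levels)
print(f"production: mean={production.mean():.2f}, var={production.var():.2f}")
print(f"queue:      mean={queue_levels.mean():.2f}, var={queue_levels.var():.2f}")
```

The continuous-time rule derived in the paper replaces the per-period clearing step above with control over arbitrarily short intervals; the simulated moments here only illustrate the quantities being derived.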
Abstract:
We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single example of that object. We develop the framework in the context of a well-known example: analyzing the modes of spatial deformations of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and does so in a completely unsupervised (i.e. ignorant of the geometry of the situation) fashion. We stress that it is learning a "parameterization", not just the parameter values, of the data. We then demonstrate how we have used the same framework to derive a novel data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing non-linear color changes due to lighting.
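As a rough illustration of the general idea (not the authors' framework), the sketch below extracts dominant modes of variation from a synthetic image sequence by principal component analysis of mean-subtracted frames; when small affine warps are the source of variation, the leading components approximate affine deformation modes. The test pattern, warp model, and use of plain PCA are all assumptions.

```python
import numpy as np
from scipy.ndimage import affine_transform

# Generate a short synthetic "video": a smooth test pattern under small random
# affine warps.  PCA on the frames then recovers a handful of dominant modes
# of variation, which act as "generators" of nearby views of the pattern.

rng = np.random.default_rng(0)
H = W = 32
yy, xx = np.mgrid[0:H, 0:W] / H
base = np.sin(6 * np.pi * xx) * np.cos(4 * np.pi * yy)   # smooth test image

frames = []
for _ in range(300):
    # small random affine warp about the image centre (assumed variation model)
    A = np.eye(2) + 0.03 * rng.standard_normal((2, 2))
    shift = 0.5 * rng.standard_normal(2)
    centre = np.array([H / 2, W / 2])
    offset = centre - A @ centre + shift
    frames.append(affine_transform(base, A, offset=offset, order=1))

X = np.stack([f.ravel() for f in frames])
X -= X.mean(axis=0)                       # remove the mean image
# principal components = right singular vectors of the centred data matrix
_, s, vt = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)
print("variance explained by the first 8 modes:", np.round(explained[:8], 3))
# each row of vt[:8] is a pixel-wise mode of variation that can be added to a
# single image to synthesise nearby views of the same pattern
```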
Abstract:
One objective of artificial intelligence is to model the behavior of an intelligent agent interacting with its environment. The environment's transformations can be modeled as a Markov chain, whose state is partially observable to the agent and affected by its actions; such processes are known as partially observable Markov decision processes (POMDPs). While the environment's dynamics are assumed to obey certain rules, the agent does not know them and must learn. In this dissertation we focus on the agent's adaptation as captured by the reinforcement learning framework. This means learning a policy---a mapping of observations into actions---based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. The set of policies is constrained by the architecture of the agent's controller. POMDPs require a controller to have a memory. We investigate controllers with memory, including controllers with external memory, finite state controllers and distributed controllers for multi-agent systems. For these various controllers we work out the details of the algorithms which learn by ascending the gradient of expected cumulative reinforcement. Building on statistical learning theory and experiment design theory, a policy evaluation algorithm is developed for the case of experience re-use. We address the question of sufficient experience for uniform convergence of policy evaluation and obtain sample complexity bounds for various estimators. Finally, we demonstrate the performance of the proposed algorithms on several domains, the most complex of which is simulated adaptive packet routing in a telecommunication network.
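Below is a minimal sketch of the general technique named in the abstract, gradient ascent of expected cumulative reinforcement for a stochastic finite-state controller, applied to an assumed "load/unload"-style corridor POMDP in which observations reveal only the ends of the corridor, so internal memory is needed to act well. The environment, controller size, learning rate, and episode counts are assumptions, not the dissertation's experiments.

```python
import numpy as np

# REINFORCE-style policy-gradient learning for a finite-state controller.
# The agent walks a corridor of L cells, is loaded at the left end, and earns
# +1 for unloading at the right end; the observation only distinguishes
# "left end", "right end", "middle", so memory must carry the loaded/unloaded
# information.  All parameters below are assumptions.

rng = np.random.default_rng(0)
L, T, N_MEM, N_OBS, N_ACT = 5, 30, 2, 3, 2

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def observe(pos):
    return 0 if pos == 0 else (1 if pos == L - 1 else 2)

# controller parameters: action logits and memory-transition logits
th_act = np.zeros((N_MEM, N_OBS, N_ACT))
th_mem = np.zeros((N_MEM, N_OBS, N_MEM))
lr, baseline = 0.1, 0.0

for episode in range(3000):
    pos, loaded, mem = 0, True, 0
    grads_a, grads_m, ret = [], [], 0.0
    for _ in range(T):
        obs = observe(pos)
        pa = softmax(th_act[mem, obs])
        act = rng.choice(N_ACT, p=pa)
        pm = softmax(th_mem[mem, obs])
        new_mem = rng.choice(N_MEM, p=pm)
        # score-function (likelihood-ratio) terms for both random choices
        ga = -pa; ga[act] += 1.0
        gm = -pm; gm[new_mem] += 1.0
        grads_a.append((mem, obs, ga))
        grads_m.append((mem, obs, gm))
        # environment transition: action 0 moves left, action 1 moves right
        pos = max(0, pos - 1) if act == 0 else min(L - 1, pos + 1)
        if pos == 0:
            loaded = True
        elif pos == L - 1 and loaded:
            loaded, ret = False, ret + 1.0
        mem = new_mem
    advantage = ret - baseline           # baseline reduces gradient variance
    baseline += 0.05 * (ret - baseline)
    for (m, o, g) in grads_a:
        th_act[m, o] += lr * advantage * g
    for (m, o, g) in grads_m:
        th_mem[m, o] += lr * advantage * g

print(f"smoothed return per episode after training: {baseline:.2f}")
```

The same score-function estimator extends to the other controller architectures mentioned in the abstract (external memory, distributed controllers); only the parameterization of the sampled choices changes.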
Abstract:
This paper presents a model and analysis of a synchronous tandem flow line that produces different part types on unreliable machines. The machines operate according to a static priority rule, working on the highest-priority part whenever possible and turning to lower-priority parts only when unable to produce those with higher priorities. We develop a new decomposition method to analyze the behavior of the manufacturing system by decomposing the long production line into small, analytically tractable components. As a first step in modeling a production line with more than one part type, we restrict ourselves to the case of two part types. Detailed modeling and derivations are presented for a small two-part-type production line that consists of two processing machines and two demand machines. Then a generalized longer flow line is analyzed. Furthermore, estimates for performance measures, such as average buffer levels and production rates, are presented and compared to extensive discrete-event simulation. The quantitative behavior of the two-part-type line under different demand scenarios is also examined.
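For illustration only, here is a slotted-time simulation sketch of a two-machine, two-part-type line under a static priority rule with geometric machine failures and repairs, estimating the production rate of each type and the average intermediate-buffer levels, i.e. the kind of performance measures the decomposition method approximates analytically. All parameters (failure and repair probabilities, demand mix, buffer capacity) are assumptions rather than values from the paper.

```python
import numpy as np

# Slotted-time simulation of a two-machine tandem line making two part types
# under static priority (type 0 before type 1).  Each machine fails and is
# repaired geometrically; raw parts of each type arrive per an assumed mix.

rng = np.random.default_rng(0)
SLOTS, BUF_CAP = 500_000, 10
p_fail, p_repair = 0.01, 0.1           # assumed failure / repair probabilities
arrival_prob = [0.35, 0.45]            # assumed demand mix per slot, by type

up = [True, True]                      # machine states: [M1, M2]
in_q = [0, 0]                          # raw-part queues in front of M1
buf = [0, 0]                           # buffers between M1 and M2
produced = [0, 0]
buf_sum = [0.0, 0.0]

for _ in range(SLOTS):
    # raw-part arrivals
    for t in range(2):
        if rng.random() < arrival_prob[t]:
            in_q[t] += 1

    # machine failures and repairs at the start of the slot
    for m in range(2):
        up[m] = (rng.random() >= p_fail) if up[m] else (rng.random() < p_repair)

    # M2: take the highest-priority part available in the buffer
    if up[1]:
        for t in range(2):
            if buf[t] > 0:
                buf[t] -= 1
                produced[t] += 1
                break

    # M1: work on the highest-priority raw part whose buffer has space
    if up[0]:
        for t in range(2):
            if in_q[t] > 0 and buf[t] < BUF_CAP:
                in_q[t] -= 1
                buf[t] += 1
                break

    for t in range(2):
        buf_sum[t] += buf[t]

for t in range(2):
    print(f"type {t}: production rate {produced[t] / SLOTS:.3f}, "
          f"mean buffer level {buf_sum[t] / SLOTS:.2f}")
```

A simulation of this kind plays the role of the "extensive discrete event simulation" benchmark mentioned in the abstract; the paper's contribution is the analytical decomposition that predicts these quantities without simulating.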