9 results for Account manager

in the Cambridge University Engineering Department Publications Database


Relevance:

20.00%

Publisher:

Abstract:

Effective dialogue management is critically dependent on the information that is encoded in the dialogue state. In order to deploy reinforcement learning for policy optimization, dialogue must be modeled as a Markov Decision Process. This requires that the dialogue state encode all relevant information obtained during the dialogue prior to that state. This can be achieved by combining the user goal, the dialogue history, and the last user action to form the dialogue state. In addition, to gain robustness to input errors, dialogue must be modeled as a Partially Observable Markov Decision Process (POMDP) and hence a distribution over all possible states must be maintained at every dialogue turn. This poses a potential computational limitation since there can be a very large number of dialogue states. The Hidden Information State model provides a principled way of ensuring tractability in a POMDP-based dialogue model. The key feature of this model is the grouping of user goals into partitions that are dynamically built during the dialogue. In this article, we extend this model further to incorporate the notion of complements. This allows for a more complex user goal to be represented, and it enables an effective pruning technique to be implemented that preserves the overall system performance within a limited computational resource more effectively than existing approaches. © 2011 ACM.
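As a reading aid only (not taken from the article), the following Python sketch illustrates the kind of partition bookkeeping the Hidden Information State model describes: user-goal partitions are split lazily as understanding hypotheses arrive, complements record the "everything else" side of each split, and low-belief partitions are pruned to keep the belief state tractable. All names, data structures, and numbers are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the article's code) of HIS-style belief state
# maintenance: user goals are grouped into partitions that are split lazily
# as new evidence arrives, complements record negated constraints, and
# low-belief partitions are pruned to keep the state tractable.

from dataclasses import dataclass

@dataclass
class Partition:
    """A set of user goals sharing the constraints accumulated so far."""
    constraints: frozenset                 # e.g. {("food", "italian")}
    complements: frozenset = frozenset()   # negated constraints, e.g. {("area", "north")}
    belief: float = 1.0

def split(partition, slot, value, match_prob):
    """Split a partition on a hypothesised slot-value pair.

    Returns the matching sub-partition and its complement ("all other goals"),
    sharing the parent's belief mass according to match_prob.
    """
    matching = Partition(
        constraints=partition.constraints | {(slot, value)},
        complements=partition.complements,
        belief=partition.belief * match_prob,
    )
    complement = Partition(
        constraints=partition.constraints,
        complements=partition.complements | {(slot, value)},
        belief=partition.belief * (1.0 - match_prob),
    )
    return [matching, complement]

def prune(partitions, max_partitions):
    """Keep only the most probable partitions and renormalise their beliefs."""
    kept = sorted(partitions, key=lambda p: p.belief, reverse=True)[:max_partitions]
    total = sum(p.belief for p in kept) or 1.0
    for p in kept:
        p.belief /= total
    return kept

# Toy turn: the root partition is split on an understanding hypothesis
# "food=italian" heard with confidence 0.8, then pruned to at most 8 partitions.
root = Partition(constraints=frozenset())
partitions = prune(split(root, "food", "italian", match_prob=0.8), max_partitions=8)
```

Tracking a complement alongside each split is what lets a single partition stand for the whole remaining goal space without enumerating every possible user goal up front.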

Relevance:

20.00%

Publisher:

Abstract:

The nature of the relationship between information technology (IT) and organisations has been a long-standing debate in the Information Systems literature. Does IT shape organisations, or do people in organisations control how IT is used? To formulate the question a little differently: does agency (the capacity to make a difference) lie predominantly with machines (computer systems) or humans (organisational actors)? Many proposals for a middle way between the extremes of technological and social determinism have been advanced; in recent years researchers oriented towards social theories have focused on structuration theory and (lately) actor network theory. These two theories, however, adopt different and incompatible views of agency. Thus, structuration theory sees agency as exclusively a property of humans, whereas the principle of general symmetry in actor network theory implies that machines may also be agents. Drawing on critiques of both structuration theory and actor network theory, this paper develops a theoretical account of the interaction between human and machine agency: the double dance of agency. The account seeks to contribute to theorisation of the relationship between technology and organisation by recognising both the different character of human and machine agency and the emergent properties of their interplay.

Relevance:

20.00%

Publisher:

Abstract:

A method for VVER-1000 fuel rearrangement optimization has been established that takes into account both cladding durability and fuel burnup and is suitable for any regime of normal reactor operation. The main stages involved in solving the problem of fuel rearrangement optimization are discussed in detail. Using the proposed fuel rearrangement efficiency criterion, a simple example VVER-1000 fuel rearrangement optimization problem is solved under both deterministic and uncertain conditions. It is shown that the deterministic and robust (in the face of uncertainty) solutions of the rearrangement optimization problem are similar in principle, but the robust solution is, as might be anticipated, more conservative. © 2013 Elsevier B.V.
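Purely as an illustration of the deterministic-versus-robust comparison described in the abstract (none of the candidate arrangements, weights, or parameter ranges below come from the paper), a toy Python sketch might contrast the rearrangement that scores best under nominal parameters with the one whose worst-case score over an uncertainty set is best:

```python
# Toy illustration only: the candidate rearrangements, scoring weights, and
# uncertain-parameter model below are invented, not taken from the paper.
# It contrasts a deterministic choice (best score at nominal parameters) with
# a robust choice (best worst-case score over sampled uncertain parameters).

import random

def efficiency(arrangement, p, w_burnup=0.5, w_cladding=0.5):
    """Toy efficiency criterion combining burnup gain and cladding durability
    margin at operating parameter p."""
    return w_burnup * arrangement["burnup"](p) + w_cladding * arrangement["cladding"](p)

candidates = [
    # Arrangement A: higher nominal burnup, but cladding margin is sensitive to p.
    {"name": "A", "burnup": lambda p: 1.05 + 0.10 * p, "cladding": lambda p: 0.90 - 0.30 * p},
    # Arrangement B: slightly lower burnup, much less sensitive to p.
    {"name": "B", "burnup": lambda p: 0.95 + 0.02 * p, "cladding": lambda p: 0.95 - 0.05 * p},
]

random.seed(0)
nominal = 0.0
samples = [random.uniform(-1.0, 1.0) for _ in range(200)]  # uncertain operating parameter

deterministic = max(candidates, key=lambda a: efficiency(a, nominal))
robust = max(candidates, key=lambda a: min(efficiency(a, p) for p in samples))

print("deterministic choice:", deterministic["name"])  # "A" with these toy numbers
print("robust choice:       ", robust["name"])         # "B": more conservative
```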

Relevance:

20.00%

Publisher:

Abstract:

A partially observable Markov decision process (POMDP) has been proposed as a dialog model that enables automatic optimization of the dialog policy and provides robustness to speech understanding errors. Various approximations allow such a model to be used for building real-world dialog systems. However, they require a large number of dialogs to train the dialog policy and hence they typically rely on the availability of a user simulator. They also require significant designer effort to hand-craft the policy representation. We investigate the use of Gaussian processes (GPs) in policy modeling to overcome these problems. We show that GP policy optimization can be implemented for a real-world POMDP dialog manager, and in particular: 1) we examine different formulations of a GP policy to minimize variability in the learning process; 2) we find that the use of GPs increases the learning rate by an order of magnitude, thereby allowing learning by direct interaction with human users; and 3) we demonstrate that designer effort can be substantially reduced by basing the policy directly on the full belief space, thereby avoiding ad hoc feature space modeling. Overall, the GP approach represents an important step towards fully automatic dialog policy optimization in real-world systems. © 2013 IEEE.
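A minimal Python sketch, assuming a discrete action set and a fixed-length belief vector, of the general idea behind GP-based policy optimization: the Q-function over belief-action pairs is modeled as a Gaussian process, and actions are chosen by sampling from its posterior so that uncertainty drives exploration. The kernel choice, data structures, and toy actions are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation) of GP-based Q-value
# estimation for a POMDP dialog manager: Q(belief, action) is modeled as a
# Gaussian process over belief-action pairs, and the policy acts greedily on
# samples drawn from the posterior so that uncertainty drives exploration.

import numpy as np

def kernel(b1, a1, b2, a2, length_scale=1.0):
    """Product kernel: Gaussian kernel on belief vectors, delta kernel on actions."""
    k_belief = np.exp(-np.sum((b1 - b2) ** 2) / (2 * length_scale ** 2))
    k_action = 1.0 if a1 == a2 else 0.0
    return k_belief * k_action

class GPQFunction:
    """Gaussian-process regression from (belief, action) pairs to observed returns."""

    def __init__(self, noise=0.1):
        self.points, self.returns, self.noise = [], [], noise

    def add(self, belief, action, observed_return):
        self.points.append((belief, action))
        self.returns.append(observed_return)

    def posterior(self, belief, action):
        """Posterior mean and variance of Q(belief, action)."""
        prior_var = kernel(belief, action, belief, action)
        if not self.points:
            return 0.0, prior_var  # prior when nothing has been observed yet
        k_star = np.array([kernel(belief, action, b, a) for b, a in self.points])
        K = np.array([[kernel(b1, a1, b2, a2) for b2, a2 in self.points]
                      for b1, a1 in self.points])
        K += self.noise * np.eye(len(self.points))
        mean = k_star @ np.linalg.solve(K, np.array(self.returns))
        var = prior_var - k_star @ np.linalg.solve(K, k_star)
        return mean, max(var, 0.0)

# Toy usage: record one observed dialog return, then pick the next action by
# Thompson-style sampling from the Q posterior.
rng = np.random.default_rng(0)
q = GPQFunction()
q.add(np.array([0.6, 0.3, 0.1]), "confirm", observed_return=1.0)

belief = np.array([0.7, 0.2, 0.1])
actions = ["confirm", "request", "inform"]
samples = []
for a in actions:
    mean, var = q.posterior(belief, a)
    samples.append(rng.normal(mean, np.sqrt(var)))
best_action = actions[int(np.argmax(samples))]
print("chosen action:", best_action)
```

Working directly on the belief vector, as in this sketch, mirrors the abstract's point about basing the policy on the full belief space rather than a hand-crafted feature representation.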