6 results for Q. liaotungensis


Relevance: 20.00%


Abstract:

The aim of this paper is to propose a new solution for the roommate problem with strict preferences. We introduce the solution of maximum irreversibility and consider almost stable matchings (Abraham et al. [2]) and maximum stable matchings (Tan [30], [32]). We find that almost stable matchings are incompatible with the other two solutions. Hence, to solve the roommate problem we propose matchings that lie at the intersection of the maximum irreversible matchings and the maximum stable matchings, which we call Q-stable matchings. These matchings are core-consistent, and we offer an efficient algorithm for computing one of them. The outcome of the algorithm belongs to an absorbing set.
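Stability here can be checked mechanically: a roommate matching is stable exactly when it admits no blocking pair, i.e. no two agents who each prefer the other to their current situation. The following minimal Python sketch illustrates the check; the preference encoding and function names are illustrative and not taken from the paper.

# Minimal sketch: find blocking pairs in a roommate matching.
# Assumptions (not from the paper): strict preferences given as
# ranked lists; a matching maps each agent to a partner or None.

def prefers(prefs, a, x, y):
    """True if agent a strictly prefers partner x to partner y
    (being unmatched, y is None, is worse than any partner)."""
    ranking = prefs[a]
    if y is None:
        return x in ranking
    return ranking.index(x) < ranking.index(y)

def blocking_pairs(prefs, matching):
    """Return all pairs (a, b) who would both rather be together
    than stay as they are; a matching is stable iff this is empty."""
    agents = list(prefs)
    blocks = []
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            if prefers(prefs, a, b, matching.get(a)) and \
               prefers(prefs, b, a, matching.get(b)):
                blocks.append((a, b))
    return blocks

# Classic three-agent odd ring: no stable matching exists, so
# every matching admits at least one blocking pair.
prefs = {1: [2, 3], 2: [3, 1], 3: [1, 2]}
matching = {1: 2, 2: 1, 3: None}
print(blocking_pairs(prefs, matching))  # [(2, 3)]

Minimizing the number of such blocking pairs is the idea behind the almost stable matchings cited above.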

Relevance: 20.00%


Abstract:

We consider the quantified constraint satisfaction problem (QCSP), which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context, and also allows us to derive QCSP tractability results.
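For intuition, QCSP truth over a finite structure can in principle be decided by brute-force recursion on the quantifiers. The Python sketch below assumes a prenex sentence for simplicity (which, as noted, the paper does not); all names are illustrative.

# Minimal sketch: deciding a QCSP instance by recursive evaluation.
# Assumptions (illustrative, not the paper's setting): a quantifier
# prefix over one conjunction of constraints, and a finite domain.

def qcsp_true(prefix, constraints, domain, assignment=()):
    """prefix: list of ('A', var) / ('E', var) quantifier pairs.
    constraints: predicates over the full assignment dict.
    Branches on the leading quantifier, then checks the conjunction."""
    if not prefix:
        env = dict(assignment)
        return all(c(env) for c in constraints)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (qcsp_true(rest, constraints, domain,
                          assignment + ((var, d),)) for d in domain)
    return all(branches) if q == 'A' else any(branches)

# Example over domain {0, 1}: forall x exists y. x != y  -- true.
print(qcsp_true([('A', 'x'), ('E', 'y')],
                [lambda e: e['x'] != e['y']], [0, 1]))  # True

The paper's proof system certifies falsity of such instances, by contrast with this exhaustive evaluation of the quantifier tree.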

Relevance: 20.00%


Abstract:

This paper investigates nonnegative solutions and the stability and asymptotic properties of the solutions of fractional differential linear time-varying dynamic systems involving delayed dynamics. The dynamic systems are described using q-calculus and Caputo fractional derivatives of any order.
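For reference, the two notions of derivative named in the abstract have standard textbook definitions (the paper's exact conventions and normalizations may differ). The Caputo fractional derivative of order \alpha, with n - 1 < \alpha < n, is

\[ {}^{C}\!D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t} (t-s)^{\,n-\alpha-1}\, f^{(n)}(s)\, ds, \]

and the q-derivative of q-calculus is

\[ D_{q} f(t) = \frac{f(qt) - f(t)}{(q-1)\,t}, \qquad t \neq 0,\ 0 < q < 1, \]

which recovers the ordinary derivative as q \to 1.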

Relevance: 20.00%


Abstract:

Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out concurrently by the agents. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out a round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
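To make the mechanism concrete, the Python sketch below shows how a per-agent Q-learner with state-action vetoes might look under a round-robin schedule. Class and method names are my own, and the reward model and the message-passing coordination step are omitted or simplified relative to the paper.

# Minimal sketch of round-robin Q-learning with state-action vetoes.
# Assumptions (illustrative): agents act one at a time in a fixed
# round-robin order, each keeping its own Q-table; vetoed pairs are
# simply excluded from action selection.

import random
from collections import defaultdict

class RRQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> value
        self.vetoed = set()           # (state, action) pairs to avoid
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def allowed(self, state):
        return [a for a in self.actions if (state, a) not in self.vetoed]

    def act(self, state):
        acts = self.allowed(state) or self.actions  # fall back if all vetoed
        if random.random() < self.epsilon:
            return random.choice(acts)               # explore
        return max(acts, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, s, a, reward, s_next, undesired_terminal=False):
        if undesired_terminal:
            self.vetoed.add((s, a))   # never try this pair here again
        best_next = max((self.q[(s_next, b)] for b in self.allowed(s_next)),
                        default=0.0)
        target = reward + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

# Under a round-robin schedule, only one agent selects, executes and
# updates per time step, e.g. cycling through the agent list.

Because only one agent acts at each step, every agent's local learning problem sees a stationary environment, which is the property the convergence argument summarized above relies on.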