7 results for Tobin's Q
Abstract:
This study analyzes the relationship between board size and economic-financial performance in a sample of European firms that constitute the EUROSTOXX50 Index. Based on previous literature, resource dependence and agency theories, and considering the corporate governance regulation developed by the OECD and the European Union for each country in the sample, the authors propose hypotheses of both positive linear and quadratic relationships between the researched parameters. Using ROA as a benchmark of financial performance and the number of board members as the measure of board size, two OLS estimations are performed. To confirm the robustness of the results, the empirical study is tested with two other similar financial ratios, ROE and Tobin's Q. Due to the absence of significant results, an additional factor, firm size, is employed in order to check whether it affects firm performance. Delving further into the nature of this relationship reveals a strong negative relation between firm size and financial performance. Consequently, the generic "one size fits all" recommendation cannot be applied in this case, which accords with the Recommendations of the European Union that advise against using generic models for all countries.
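For context, Tobin's Q is commonly approximated as the ratio of a firm's market value to the book value of its assets, and the two OLS specifications described in the abstract can be sketched as follows (the coefficients and variable names are illustrative, not the authors' notation):

    Q = \frac{\text{market value of equity} + \text{book value of debt}}{\text{book value of total assets}}

    \text{ROA}_i = \beta_0 + \beta_1\,\text{BoardSize}_i + \varepsilon_i  \quad \text{(linear)}
    \text{ROA}_i = \beta_0 + \beta_1\,\text{BoardSize}_i + \beta_2\,\text{BoardSize}_i^2 + \varepsilon_i  \quad \text{(quadratic)}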
Abstract:
243 p. : ill.
Abstract:
186 p. : ill.
Abstract:
The aim of this paper is to propose a new solution for the roommate problem with strict preferences. We introduce the solution of maximum irreversibility and consider almost stable matchings (Abraham et al. [2]) and maximum stable matchings (Ta [30], [32]). We find that almost stable matchings are incompatible with the other two solutions. Hence, to solve the roommate problem we propose matchings that lie at the intersection of the maximum irreversible matchings and maximum stable matchings, which are called Q-stable matchings. These matchings are core consistent and we offer an efficient algorithm for computing one of them. The outcome of the algorithm belongs to an absorbing set.
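The "almost stable" notion cited above is driven by blocking pairs; below is a minimal sketch of counting them in a given matching (the data layout and names are illustrative, not from the paper):

    def blocking_pairs(prefs, match):
        """Return the blocking pairs of a matching in a roommate instance.

        prefs: maps each agent to a list of acceptable partners, most preferred first.
        match: maps each agent to its roommate, or None if unmatched.
        A pair blocks when both agents strictly prefer each other to their current
        situation; a stable matching has none, and an "almost stable" matching
        minimizes how many there are.
        """
        def prefers(agent, candidate, current):
            ranking = prefs[agent]
            if current is None:
                return candidate in ranking  # any acceptable partner beats being alone
            return ranking.index(candidate) < ranking.index(current)

        agents = list(prefs)
        return {(a, b)
                for i, a in enumerate(agents) for b in agents[i + 1:]
                if b in prefs[a] and a in prefs[b]
                and prefers(a, b, match.get(a)) and prefers(b, a, match.get(b))}

    # The classic 3-agent odd cycle has no stable matching: every matching
    # leaves at least one blocking pair.
    prefs = {"a": ["b", "c"], "b": ["c", "a"], "c": ["a", "b"]}
    print(blocking_pairs(prefs, {"a": "b", "b": "a", "c": None}))  # {('b', 'c')}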
Abstract:
We consider the quantified constraint satisfaction problem (QCSP), which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context, and also allows us to derive QCSP tractability results.
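The decision problem itself is easy to state operationally. Below is a naive, brute-force semantic evaluator for sentences built from conjunction and quantification over a finite structure, offered only as an illustration of what QCSP asks; it is not the paper's proof system, and all names are illustrative:

    def evaluate(sentence, structure, assignment=None):
        """Naively evaluate a quantified sentence on a finite structure.

        sentence: nested tuples of the forms
          ("forall", var, phi), ("exists", var, phi),
          ("and", phi, psi), ("rel", name, (v1, v2, ...)).
        structure: {"domain": set, "relations": {name: set of tuples}}.
        """
        assignment = assignment or {}
        tag = sentence[0]
        if tag in ("forall", "exists"):
            _, var, phi = sentence
            branch = all if tag == "forall" else any
            return branch(evaluate(phi, structure, {**assignment, var: d})
                          for d in structure["domain"])
        if tag == "and":
            _, phi, psi = sentence
            return (evaluate(phi, structure, assignment)
                    and evaluate(psi, structure, assignment))
        _, name, variables = sentence  # ("rel", ...) atom
        return tuple(assignment[v] for v in variables) in structure["relations"][name]

    # "Every vertex has an outgoing edge" on a 2-cycle: forall x exists y E(x, y).
    graph = {"domain": {0, 1}, "relations": {"E": {(0, 1), (1, 0)}}}
    phi = ("forall", "x", ("exists", "y", ("rel", "E", ("x", "y"))))
    print(evaluate(phi, graph))  # True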
Abstract:
This paper is devoted to the investigation of nonnegative solutions and the stability and asymptotic properties of the solutions of fractional differential linear time-varying systems involving delayed dynamics. The dynamic systems are described based on q-calculus and Caputo fractional derivatives of any order.
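For reference, the two operators named in this abstract have standard textbook definitions (the paper's exact conventions may differ): the Caputo fractional derivative of order \alpha with n-1 < \alpha \le n, and the Jackson q-derivative of q-calculus:

    {}^{C}\!D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t (t-s)^{n-\alpha-1} f^{(n)}(s)\,ds, \qquad n-1 < \alpha \le n

    D_q f(t) = \frac{f(qt) - f(t)}{(q-1)\,t}, \qquad q \ne 1,\ t \ne 0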
Abstract:
Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out concurrently by the agents. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
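A hedged sketch of the round-robin idea with independent tabular Q-learners follows (the env interface, names, and parameters are assumptions; the MSAV veto mechanism and message-passing coordination are omitted):

    import random
    from collections import defaultdict

    def d_rr_q_learning(env, n_agents, episodes, alpha=0.1, gamma=0.95, epsilon=0.1):
        """Independent tabular Q-learners acting in a fixed cyclic order.

        Assumed (hypothetical) env interface: env.reset() -> state,
        env.actions(agent, state) -> list of actions,
        env.step(agent, action) -> (next_state, reward, done).
        States and actions must be hashable.
        """
        Q = [defaultdict(float) for _ in range(n_agents)]  # one table per agent
        for _ in range(episodes):
            state, done, turn = env.reset(), False, 0
            while not done:
                acts = env.actions(turn, state)
                if random.random() < epsilon:                      # explore
                    action = random.choice(acts)
                else:                                              # exploit
                    action = max(acts, key=lambda a: Q[turn][(state, a)])
                next_state, reward, done = env.step(turn, action)
                best_next = max((Q[turn][(next_state, a)]
                                 for a in env.actions(turn, next_state)), default=0.0)
                # Standard one-step Q-learning update for the acting agent only.
                Q[turn][(state, action)] += alpha * (
                    reward + gamma * best_next - Q[turn][(state, action)])
                state = next_state
                turn = (turn + 1) % n_agents  # round-robin: next agent's turn
        return Q

Because only one agent selects and executes an action at a time, each learner sees the others as part of a temporarily fixed environment during its turn, which is the intuition behind the abstract's non-stationarity claim.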