4 results for Analogy Reasoning
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
This Portfolio of Exploration (PoE) tracks a transformative learning and developmental journey directed at changing meaning-making structures and mental models within an innovation practice. The explicit purpose of the Portfolio is to develop new and different perspectives that enable the handling of new and more complex phenomena through self-transformation and increased emotional intelligence. The Portfolio responds to the question: ‘What are the key determinants that enable a Virtual Team (VT) to flourish, where flourishing means developing and delivering on the firm’s innovative imperatives?’ Furthermore, the PoE is structured as an investigation into how higher-order meaning making promotes ‘entrepreneurial services’ within an intra-firm virtual team, with a secondary aim of identifying how reasoning about trust influences KGPs to exchange knowledge. I have developed a framework which focuses specifically on the effectiveness of any firm’s VT through transforming the meaning making of the VT participants. I hypothesized that it is the way KGPs make meaning (reasoning about trust) which differentiates the firm as a growing firm in the sense of Penrosean resources: ‘inducement to expand and a limit of expansion’ (Penrose, 1959). Reasoning about trust is used as a higher-order meaning-making concept in line with Kegan’s (1994) conception of complex meaning making, which combines ideas and data in ways that transform meaning and leads participants to find new ways of generating knowledge. Simply put, it is the VT participants who develop higher-order meaning making who hold the capabilities to transform the firm from within, providing a unique competitive advantage that enables the firm to grow.
Abstract:
A notable feature of the surveillance case law of the European Court of Human Rights has been the tendency of the Court to focus on the “in accordance with the law” aspect of the Article 8 ECHR inquiry. This focus has been the subject of some criticism, but the impact of this approach on the manner in which domestic surveillance legislation has been formulated in the Party States has received little scholarly attention. This thesis addresses that gap in the literature through its consideration of the Interception of Postal Packets and Telecommunications Messages (Regulation) Act, 1993 and the Criminal Justice (Surveillance) Act, 2009. While both Acts provide several of the safeguards endorsed by the European Court of Human Rights, this thesis finds that they suffer from a number of crucial weaknesses that undermine the protection of privacy. This thesis demonstrates how the focus of the European Court of Human Rights on the “in accordance with the law” test has resulted in some positive legislative change. Notwithstanding this fact, it is maintained that the legality approach has gained prominence at the expense of a full consideration of the “necessary in a democratic society” inquiry. This has resulted in superficial legislative responses at the domestic level, including from the Irish government. Notably, through the examination of a number of more recent cases, this project discerns a significant alteration in the interpretive approach adopted by the European Court of Human Rights regarding the application of the necessity test. The implications of this development are considered and the outlook for Irish surveillance legislation is assessed.
Abstract:
In decision-making problems where we need to choose a particular decision or alternative from a set of possible choices, we often have preferences that determine whether we prefer one decision over another. When these preferences give us a complete ordering on the decisions, it is easy to choose the best decision or one of the best decisions. However, it often happens that the preference relation is only a partial order, and there is no single best decision. In this thesis, we look at what happens when we have such a partial order over a set of decisions, and in particular when we have multiple orderings on a set of decisions, and we present a framework for qualitative decision making. We look at the different natural notions of optimal decision that arise in this framework, which give us different optimality classes, and we examine the relationships between these classes. We then look in particular at a qualitative preference relation called Sorted-Pareto dominance, an extension of Pareto dominance, and we give a semantics for this relation as one that is compatible with any order-preserving mapping of an ordinal preference scale to a numerical one. We apply Sorted-Pareto dominance in a soft constraints setting, where we solve problems in which the soft constraints associate qualitative preferences with decisions in a decision problem. We also examine the Sorted-Pareto dominance relation in the context of our qualitative decision-making framework, looking at the relevant optimality classes for the Sorted-Pareto case; these give us classes of decisions that are necessarily optimal and classes that are optimal for some choice of mapping of an ordinal scale to a quantitative one. We provide some empirical analysis of Sorted-Pareto constraint problems and examine the optimality classes that result.
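To make the relation concrete, here is a minimal Python sketch (illustrative only, not taken from the thesis) comparing ordinary Pareto dominance with the sorted variant described above, assuming preference levels are encoded as integers on an ordinal cost scale where smaller is better.

def pareto_dominates(a, b):
    # Pareto dominance: a is no worse than b on every criterion
    # and strictly better on at least one (smaller = better).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def sorted_pareto_dominates(a, b):
    # Sorted-Pareto dominance: compare the multisets of preference levels by
    # sorting both vectors (worst level first) and applying Pareto dominance.
    return pareto_dominates(sorted(a, reverse=True), sorted(b, reverse=True))

# Hypothetical decisions on a 1 (best) .. 5 (worst) ordinal scale.
a = (2, 4, 1)
b = (4, 2, 3)
print(pareto_dominates(a, b))         # False: incomparable criterion-wise
print(sorted_pareto_dominates(a, b))  # True: sorted (4, 2, 1) is pointwise <= (4, 3, 2)

Because only the sorted multisets of levels are compared, any order-preserving numerical mapping of the scale preserves the comparison, which is a rough intuition for the semantics described in the abstract.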
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms for solving these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm to generate the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches: the first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before.

For decision making under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in ℝ^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
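As a rough illustration of the ϵ-covering idea mentioned above, the Python sketch below greedily thins a set of utility vectors so that every discarded vector is ϵ-covered by a kept one (assuming maximization and non-negative utilities). The data, the greedy ordering, and the covering test are illustrative assumptions rather than the algorithms used in the thesis.

def covers(v, u, eps):
    # v epsilon-covers u if scaling v up by (1 + eps) makes it at least
    # as good as u on every objective (maximizing, non-negative utilities).
    return all((1 + eps) * vi >= ui for vi, ui in zip(v, u))

def eps_covering(vectors, eps):
    # Greedily keep a vector only if no already-kept vector epsilon-covers it;
    # every discarded vector is then epsilon-covered by some kept one.
    kept = []
    for u in sorted(vectors, key=sum, reverse=True):  # larger vectors first
        if not any(covers(v, u, eps) for v in kept):
            kept.append(u)
    return kept

# Hypothetical maximal expected-utility vectors on two objectives.
frontier = [(10, 1), (9.8, 1.02), (6, 5), (5.9, 5.1), (1, 10)]
print(eps_covering(frontier, eps=0.05))  # keeps (10, 1), (6, 5), (1, 10)

Shrinking the maximal set this way trades a bounded (1 + ϵ) loss in each objective for a much smaller representation, which is what allows the larger problem instances to be solved.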