879 results for Deductive Reasoning
Abstract:
Most associative memory models perform a one-level mapping between predefined sets of input and output patterns and are unable to represent hierarchical knowledge. Complex AI systems allow hierarchical representation of concepts, but generally do not have learning capabilities. In this paper, a memory model is proposed which forms a concept hierarchy by learning sample relations between concepts. All concepts are represented in a concept layer. Relations between a concept and its defining lower-level concepts are chunked as cognitive codes represented in a coding layer. By updating memory contents in the concept layer through code firing in the coding layer, the system is able to perform an important class of commonsense reasoning, namely recognition and inheritance.
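The chunking-and-firing scheme described above lends itself to a compact illustration. The following minimal Python sketch is our construction, with assumed class and method names rather than the paper's actual network: each cognitive code stores the relation between a concept and its defining lower-level concepts, recognition is bottom-up code firing, and inheritance is top-down expansion.

# Minimal sketch of a two-layer concept/coding memory (illustrative only;
# the class names and the all-or-nothing firing rule are assumptions,
# not the paper's actual network dynamics).

class ConceptMemory:
    def __init__(self):
        # coding layer: one "cognitive code" per concept, chunking the
        # relation between a concept and its defining lower-level concepts
        self.codes = {}  # concept -> frozenset of defining concepts

    def learn(self, concept, defining_concepts):
        """Chunk a sample relation as a cognitive code."""
        self.codes[concept] = frozenset(defining_concepts)

    def recognize(self, active):
        """Bottom-up: fire every code whose defining concepts are all active."""
        active = set(active)
        changed = True
        while changed:
            changed = False
            for concept, parts in self.codes.items():
                if concept not in active and parts <= active:
                    active.add(concept)  # code fires, concept is recognized
                    changed = True
        return active

    def inherit(self, concept):
        """Top-down: expand a concept into the lower-level concepts it is built from."""
        result, stack = set(), [concept]
        while stack:
            for part in self.codes.get(stack.pop(), ()):
                if part not in result:
                    result.add(part)
                    stack.append(part)
        return result

m = ConceptMemory()
m.learn("bird", {"wings", "feathers"})
m.learn("penguin", {"bird", "swims"})
print(m.recognize({"wings", "feathers", "swims"}))  # includes 'bird' and 'penguin'
print(m.inherit("penguin"))  # {'bird', 'wings', 'feathers', 'swims'}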
Abstract:
This Portfolio of Exploration (PoE) tracks a transformative learning developmental journey directed at changing meaning-making structures and mental models within an innovation practice. The explicit purpose of the Portfolio is to develop new and different perspectives that enable the handling of new and more complex phenomena through self-transformation and increased development of emotional intelligence. The Portfolio responds to the question: 'What are the key determinants that enable a Virtual Team (VT) to flourish, where flourishing means developing and delivering on the firm's innovative imperatives?' Furthermore, the PoE is structured as an investigation into how higher-order meaning making promotes 'entrepreneurial services' within an intra-firm virtual team, with a secondary aim of identifying how reasoning about trust influences KGPs to exchange knowledge. I have developed a framework which focuses specifically on the effectiveness of a firm's VT through transforming the meaning making of the VT participants. I hypothesized that it is the way KGPs make meaning (reasoning about trust) which differentiates the firm as a growing firm in the sense of Penrosean resources: 'inducement to expand and a limit of expansion' (1959). Reasoning about trust is used as a higher-order meaning-making concept in line with Kegan's (1994) conception of complex meaning making, which is the combining of ideas and data in ways that transform meaning and lead participants to find new ways of generating knowledge. Simply, it is the VT participants who develop higher-order meaning making who hold the capabilities to transform the firm from within, providing a unique competitive advantage that enables the firm to grow.
Abstract:
A notable feature of the surveillance case law of the European Court of Human Rights has been the tendency of the Court to focus on the “in accordance with the law” aspect of the Article 8 ECHR inquiry. This focus has been the subject of some criticism, but the impact of this approach on the manner in which domestic surveillance legislation has been formulated in the Party States has received little scholarly attention. This thesis addresses that gap in the literature through its consideration of the Interception of Postal Packets and Telecommunications Messages (Regulation) Act, 1993 and the Criminal Justice (Surveillance) Act, 2009. While both Acts provide several of the safeguards endorsed by the European Court of Human Rights, this thesis finds that they suffer from a number of crucial weaknesses that undermine the protection of privacy. This thesis demonstrates how the focus of the European Court of Human Rights on the “in accordance with the law” test has resulted in some positive legislative change. Notwithstanding this fact, it is maintained that the legality approach has gained prominence at the expense of a full consideration of the “necessary in a democratic society” inquiry. This has resulted in superficial legislative responses at the domestic level, including from the Irish government. Notably, through the examination of a number of more recent cases, this project discerns a significant alteration in the interpretive approach adopted by the European Court of Human Rights regarding the application of the necessity test. The implications of this development are considered and the outlook for Irish surveillance legislation is assessed.
Abstract:
In decision-making problems where we need to choose a particular decision or alternative from a set of possible choices, we often have preferences which determine whether we prefer one decision over another. When these preferences give us a complete ordering on the decisions, it is easy to choose the best decision, or one of the best. However, it often occurs that the preference relation is only a partial order, and then we have no single best decision. In this thesis, we look at what happens when we have such a partial order over a set of decisions, in particular when we have multiple orderings on a set of decisions, and we present a framework for qualitative decision making. We look at the different natural notions of optimal decision that arise in this framework, which give us different optimality classes, and we examine the relationships between these classes. We then look in particular at a qualitative preference relation called Sorted-Pareto dominance, an extension of Pareto dominance, and we give a semantics for this relation as one that is compatible with any order-preserving mapping of an ordinal preference scale to a numerical one. We apply Sorted-Pareto dominance in a soft constraints setting, solving problems in which the soft constraints associate qualitative preferences to decisions in a decision problem. We also examine the Sorted-Pareto dominance relation in the context of our qualitative decision-making framework, looking at the relevant optimality classes for the Sorted-Pareto case, which gives us classes of decisions that are necessarily optimal, and decisions that are optimal for some choice of mapping from the ordinal scale to a quantitative one. We provide some empirical analysis of Sorted-Pareto constraint problems and examine the optimality classes that result.
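For a concrete reading of the two dominance relations just mentioned, here is a minimal Python sketch of ours. It assumes preferences are encoded as costs on a common ordinal scale where smaller is better, and it takes Sorted-Pareto dominance to be ordinary Pareto dominance applied after sorting both vectors; this is consistent with the semantics described above but is not the thesis's formal definition.

# Illustrative sketch of Pareto and Sorted-Pareto dominance over cost vectors.
# Assumptions (not taken from the thesis): preferences are integer costs on a
# common ordinal scale, and smaller values are better.

def pareto_dominates(u, v):
    """u is weakly better than v on every component and strictly better on at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def sorted_pareto_dominates(u, v):
    """Compare the multisets of costs: sort both vectors, then apply Pareto dominance.
    Per the abstract, this is compatible with any order-preserving mapping of the
    ordinal scale to a numerical one."""
    return pareto_dominates(sorted(u), sorted(v))

u, v = (1, 3, 2), (3, 2, 3)
print(pareto_dominates(u, v))         # False: the vectors are incomparable component-wise
print(sorted_pareto_dominates(u, v))  # True: (1, 2, 3) Pareto-dominates (2, 3, 3)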
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing the values of the objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ε-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
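Two of the ingredients mentioned above, eliminating dominated utility vectors and ε-coverings of the Pareto set, can be illustrated with a short sketch. The Python below is ours, not the thesis's algorithms; maximization and a multiplicative (1 + ε) covering criterion are assumptions.

# Illustrative sketch (not the thesis's algorithms): pruning a set of
# multi-objective utility vectors to its Pareto-undominated subset, and a
# naive epsilon-covering that thins the undominated set.  Maximization is
# assumed, and the (1 + eps) multiplicative criterion is an assumption about
# what "epsilon-covering" means here.

def dominates(u, v):
    """u Pareto-dominates v (maximization): >= everywhere, > somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def undominated(vectors):
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]

def eps_cover(vectors, eps):
    """Keep a subset C such that every vector is (1 + eps)-approximately
    covered, component-wise, by some member of C."""
    cover = []
    for u in sorted(vectors, reverse=True):
        if not any(all(c >= x / (1 + eps) for c, x in zip(rep, u)) for rep in cover):
            cover.append(u)
    return cover

points = [(4, 1), (3, 3), (1, 4), (3.9, 1.05), (2, 2)]
front = undominated(points)        # drops (2, 2), which (3, 3) dominates
print(front)
print(eps_cover(front, eps=0.1))   # (3.9, 1.05) is approximately covered by (4, 1)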
Abstract:
This study focuses on the substantial changes that characterize the shift of Vietnam's macroeconomic structures and the evolution of micro-structural interaction over the important period of 1991-2008. The results show that these events are completely distinct in terms of (i) economic nature; (ii) scale and depth of changes; (iii) start and end results; and (iv) requirements for macroeconomic decisions. The study rejects the suspicion of similarity between the contagion of the Asian financial crisis in 1997-98 and the economic chaos of the first half of 2008 (starting from late 2007). The depth and economic settings of, and the interconnection between, macro choices and micro decisions have all grown significantly, partly due to a much deeper level of integration of Vietnam into the world economy. On the one hand, this phenomenon raises the efficiency of macro-level policies, because the consideration of micro-structural factors within the framework has become increasingly critical. On the other hand, this is a unique opportunity for the macroeconomic mechanism of Vietnam to improve vastly, given the context in which the national economy has entered an ever-changing period under pressures of globalization and re-integration. The authors also hope to open up paths for further empirical verification and to stress the fact that macro policies will, from now on, have to be decided in line with changing micro-settings, which specify a market economy and decide the degree of success of any macroeconomic choice.
Abstract:
BACKGROUND: A hierarchical taxonomy of organisms is a prerequisite for semantic integration of biodiversity data. Ideally, there would be a single, expansive, authoritative taxonomy that includes extinct and extant taxa, information on synonyms and common names, and monophyletic supraspecific taxa that reflect our current understanding of phylogenetic relationships. DESCRIPTION: As a step towards development of such a resource, and to enable large-scale integration of phenotypic data across vertebrates, we created the Vertebrate Taxonomy Ontology (VTO), a semantically defined taxonomic resource derived from the integration of existing taxonomic compilations, and freely distributed under a Creative Commons Zero (CC0) public domain waiver. The VTO includes both extant and extinct vertebrates and currently contains 106,947 taxonomic terms, 22 taxonomic ranks, 104,736 synonyms, and 162,400 cross-references to other taxonomic resources. Key challenges in constructing the VTO included (1) extracting and merging names, synonyms, and identifiers from heterogeneous sources; (2) structuring hierarchies of terms based on evolutionary relationships and the principle of monophyly; and (3) automating this process as much as possible to accommodate updates in source taxonomies. CONCLUSIONS: The VTO is the primary source of taxonomic information used by the Phenoscape Knowledgebase (http://phenoscape.org/), which integrates genetic and evolutionary phenotype data across both model and non-model vertebrates. The VTO is useful for inferring phenotypic changes on the vertebrate tree of life, which enables queries for candidate genes for various episodes in vertebrate evolution.
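As a rough illustration of challenge (1), the hypothetical Python sketch below merges term records from two source taxonomies when they share a cross-reference identifier and pools their names as synonyms. The field names and the merge rule are ours, invented for illustration; they do not describe the actual VTO or Phenoscape build pipeline.

# Hypothetical sketch of one taxonomy-merging step: union term records by a
# shared cross-reference identifier and pool all names as synonyms.
# Field names and the merge rule are illustrative assumptions.

def merge_sources(*sources):
    """Union records by cross-reference ID; pool names into synonym sets."""
    merged = {}
    for source in sources:
        for record in source:
            for xref in record["xrefs"]:
                entry = merged.setdefault(xref, {"names": set(), "rank": None})
                entry["names"].update([record["name"], *record.get("synonyms", [])])
                entry["rank"] = entry["rank"] or record.get("rank")
    return merged

source_a = [{"name": "Danio rerio", "synonyms": ["zebrafish"],
             "rank": "species", "xrefs": ["NCBITaxon:7955"]}]
source_b = [{"name": "Danio rerio", "synonyms": ["zebra danio"],
             "rank": "species", "xrefs": ["NCBITaxon:7955"]}]
print(merge_sources(source_a, source_b))  # one term with three pooled names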
Abstract:
We present some results of a broader investigation whose general aim is to describe and characterize the inductive reasoning used by third- and fourth-year secondary students when solving tasks related to linear and quadratic sequences (Cañadas, 2007). We identify differences in the use of some of the steps considered in the description of inductive reasoning in the solution of two of the six problems posed to the students. We describe these differences and analyze them in terms of the characteristics of the problems.
Abstract:
The detailed study of difficulties and errors in young learners' comprehension is a relevant and productive research field in Mathematics Education. Studies in the field are numerous, although somewhat varied in approach. The present paper suggests methodological perspectives and principles that apply to this field of research; we also show an example based on school work. The use of figurate numbers as a representation system gives richer conceptual value, boosts visual reasoning and facilitates learner understanding.
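As a concrete instance of the figurate-number representation (our illustration, not an example drawn from the paper): the n-th triangular number can be read directly off a triangular dot arrangement, giving T_n = 1 + 2 + ... + n = n(n+1)/2, so T_4 = 1 + 2 + 3 + 4 = 10, and square numbers decompose visually into sums of consecutive odd numbers, n^2 = 1 + 3 + ... + (2n - 1).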
Abstract:
We present an analysis of the inductive reasoning of twelve Spanish secondary students in a mathematical problem-solving context. Students were interviewed while they worked on two different problems. Based on Pólya's steps and Reid's stages for a process of inductive reasoning, we propose a more precise categorization for analyzing this kind of reasoning in our particular context. In this paper we present some results of a wider investigation (Cañadas, 2002).
Abstract:
In this paper we present an analysis of the inductive reasoning of twelve secondary students in a mathematical problem-solving context. Students were asked to justify the result of adding two even numbers. Starting from the theoretical framework, which is based on Pólya's stages of inductive reasoning, and from our empirical work, we created a category system that allowed us to carry out a qualitative data analysis. We show in this paper some of the results obtained in a previous study.
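For reference, the target justification admits a one-line deductive argument (our illustration, not the students' work): if the two even numbers are written as 2m and 2n, then 2m + 2n = 2(m + n), which is again even; inductive reasoning instead starts from particular cases such as 4 + 6 = 10 before generalizing.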
Abstract:
Guest editorial
Abstract:
This paper describes the architecture of the case-based reasoning (CBR) component of Smartfire, a fire field modelling tool for use by members of the Fire Safety Engineering community who are not expert in modelling techniques. The CBR system captures the qualitative reasoning of an experienced modeller in the assessment of room geometries so as to set up the important initial parameters of the problem. The system relies on two important reasoning principles obtained from the expert: 1) there is a natural hierarchical retrieval mechanism which may be employed; and 2) much of the reasoning on a qualitative level is linear in nature, although the computational solution of the problem is non-linear. The paper describes the qualitative representation of geometric room information on which the system is based, and the principles on which the CBR system operates.
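A small Python sketch of the kind of two-stage retrieval this suggests (our illustration; the attribute names, the filter-then-rank scheme and the linear distance are assumptions, not Smartfire's implementation): cases are first filtered by a coarse qualitative class of the geometry, then ranked by a simple linear distance, echoing principles 1) and 2).

# Illustrative two-stage hierarchical case retrieval (assumed scheme):
# first filter cases by a coarse qualitative class of the room geometry,
# then rank the survivors by a simple linear distance on numeric features.

CASES = [
    {"shape": "rectangular", "volume": 60.0,  "openings": 1, "setup": "coarse-grid-A"},
    {"shape": "rectangular", "volume": 150.0, "openings": 3, "setup": "coarse-grid-B"},
    {"shape": "L-shaped",    "volume": 90.0,  "openings": 2, "setup": "partitioned-C"},
]

def retrieve(query, cases):
    # Level 1: hierarchical filter on the qualitative geometry class
    # (fall back to all cases if nothing matches).
    candidates = [c for c in cases if c["shape"] == query["shape"]] or cases
    # Level 2: linear similarity over roughly normalised numeric features.
    def distance(case):
        return (abs(case["volume"] - query["volume"]) / 100.0
                + abs(case["openings"] - query["openings"]))
    return min(candidates, key=distance)

print(retrieve({"shape": "rectangular", "volume": 70.0, "openings": 1}, CASES))
# -> the 60 m^3 rectangular case, whose stored set-up is reused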
Abstract:
This paper describes the approach to the modelling of experiential knowledge in an industrial application of Case-Based Reasoning (CBR). The CBR system involves retrieval techniques in conjunction with a relational database. The database is especially designed as a repository of experiential knowledge, and includes qualitative search indices. The system is intended to help design engineers and material engineers in the submarine cable industry. It consists of three parts: a materials database; a database of experiential knowledge; and a CBR system used to retrieve similar past designs based upon qualitative descriptions of components and materials. The system is currently undergoing user testing at the Alcatel Submarine Networks site in Greenwich.
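The following hypothetical Python sketch illustrates retrieval over qualitative search indices of the sort described; the descriptors, weights and scoring rule are invented for illustration and do not reflect the actual database schema or similarity measure.

# Illustrative retrieval over qualitative search indices (assumed
# representation): each past design is indexed by qualitative descriptors,
# and similarity is a weighted count of matching descriptors.

DESIGNS = [
    {"id": "D-101", "conductor": "copper",    "insulation": "polyethylene", "depth": "shallow"},
    {"id": "D-102", "conductor": "aluminium", "insulation": "polyethylene", "depth": "deep"},
]
WEIGHTS = {"conductor": 2.0, "insulation": 1.0, "depth": 1.0}

def similarity(query, design):
    return sum(w for field, w in WEIGHTS.items() if query.get(field) == design.get(field))

def retrieve(query, designs, top=1):
    return sorted(designs, key=lambda d: similarity(query, d), reverse=True)[:top]

print(retrieve({"conductor": "copper", "depth": "deep"}, DESIGNS))
# -> D-101: the weighted conductor match outweighs the depth match of D-102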
Abstract:
This paper describes the architecture of the knowledge-based system (KBS) component of Smartfire, a fire field modelling tool for use by members of the fire safety engineering community who are not expert in modelling techniques. The KBS captures the qualitative reasoning of an experienced modeller in the assessment of room geometries, so as to set up the important initial parameters of the problem. Fire modelling expertise is an example of geometric and spatial reasoning, which raises representational problems. The approach taken in this project is a qualitative representation of geometric room information based on Forbus' concept of a metric diagram. This takes the form of a coarse grid, partitioning the domain in each of the three spatial dimensions. Inference over the representation is performed using a case-based reasoning (CBR) component. The CBR component stores example partitions with key set-up parameters; this paper concentrates on the key parameter of grid cell distribution.
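A minimal Python sketch of the coarse-grid idea (the cell boundaries and the data layout are our assumptions, not Smartfire's representation): each spatial axis of the room is partitioned independently, and a case records the resulting grid cell distribution.

# Illustrative coarse-grid "metric diagram": the room domain is partitioned
# independently along each of the three spatial axes, and a case stores the
# resulting grid cell distribution.  Boundaries below are made-up examples.

def coarse_grid(extent, boundaries):
    """Split one axis of the room (0..extent) at the given interior boundaries."""
    cuts = [0.0, *sorted(boundaries), extent]
    return [(a, b) for a, b in zip(cuts, cuts[1:])]

room = {"x": 5.0, "y": 4.0, "z": 2.5}           # room dimensions in metres
partition = {
    "x": coarse_grid(room["x"], [1.0, 4.0]),    # e.g. finer cells near a doorway
    "y": coarse_grid(room["y"], [2.0]),
    "z": coarse_grid(room["z"], [0.5, 2.0]),    # thin cells near floor and ceiling
}
cell_distribution = {axis: len(cells) for axis, cells in partition.items()}
print(partition["x"])       # [(0.0, 1.0), (1.0, 4.0), (4.0, 5.0)]
print(cell_distribution)    # {'x': 3, 'y': 2, 'z': 3}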