893 results for mundane reasoning
Abstract:
Recent research on novice programmers has suggested that they pass through the neo-Piagetian stages of sensorimotor, preoperational, and concrete operational reasoning before eventually reaching programming competence at the formal operational stage. This paper presents empirical results in support of this neo-Piagetian perspective. Its major novel contribution is empirical results for exam questions aimed at testing novices for the concrete operational abilities to reason about quantities that are conserved, processes that are reversible, and properties that hold under transitive inference. While the questions we used had been proposed earlier by Lister, he did not present any data on how students performed on them. Our empirical results demonstrate that many students struggle to answer these questions despite their apparent simplicity. We then compare student performance on these questions with their performance on six "explain in plain English" questions.
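The abstract does not reproduce the exam questions themselves, but a hypothetical question in the same spirit, probing conservation and reversibility, might ask novices to trace a short sequence of assignments and explain how it can be undone. A minimal Python sketch of such a task (the variable names and values are illustrative, not from the paper):

```python
# Hypothetical task in the spirit described (not an actual exam question from the paper).
# Tracing: what are the values of x and y after these assignments run?
x = 7
y = 3
x = x + y   # x == 10
y = x - y   # y == 7  (the original value of x)
x = x - y   # x == 3  (the original value of y)
assert (x, y) == (3, 7)

# The swap "conserves" the pair of values, and the process is reversible:
# running the same three assignments again restores the original state.
x = x + y
y = x - y
x = x - y
assert (x, y) == (7, 3)
```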
Abstract:
Acquiring accurate user profiles for personalized text classification is a significant challenge, since users may be uncertain when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, the accuracy of ML-based methods often cannot be significantly improved because of the term independence assumption and the uncertainties associated with user feedback. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text, and constrains that specific knowledge with reasoning rules to eliminate conflicting information. We also developed a Dempster-Shafer (DS) approach as the means to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
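The abstract does not give the details of the DS model, but the core operation in any Dempster-Shafer approach is Dempster's rule of combination. A minimal sketch, assuming a simple two-hypothesis frame ("relevant" vs "irrelevant") and illustrative mass values that are not taken from the paper:

```python
# Minimal sketch of Dempster's rule of combination, the core operation behind a
# Dempster-Shafer evidence-combination approach. Frame, sources and mass values
# below are illustrative assumptions, not taken from the paper.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to masses)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass falling on contradictory evidence
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}  # normalise

# Evidence from two illustrative sources about whether a document is relevant.
frame = frozenset({"relevant", "irrelevant"})
m_source1 = {frozenset({"relevant"}): 0.6, frame: 0.4}
m_source2 = {frozenset({"irrelevant"}): 0.3, frame: 0.7}
print(combine(m_source1, m_source2))
```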
A qualitative think aloud study of the early neo-Piagetian stages of reasoning in novice programmers
Abstract:
Recent research indicates that some of the difficulties faced by novice programmers manifest very early in their learning. In this paper, we present data from think aloud studies that demonstrate the nature of those difficulties. In the think alouds, novices were required to complete short programming tasks that involved either hand executing ("tracing") a short piece of code or writing a single sentence describing the purpose of the code. We interpret our think aloud data within a neo-Piagetian framework, demonstrating that some novices reason at the sensorimotor and preoperational stages, not at the higher concrete operational stage at which most instruction is implicitly targeted.
Abstract:
Theme paper for the Curriculum Innovation and Enhancement theme. AIM: This paper reports on a research project that trialled an educational strategy implemented in an undergraduate nursing curriculum. The project aimed to explore the effectiveness of ‘think aloud’ as a strategy for improving clinical reasoning for students in simulated clinical settings. BACKGROUND: Nurses are required to apply and utilise critical thinking skills to enable clinical reasoning and problem solving in the clinical setting (Lasater, 2007). Nursing students are expected to develop and display clinical reasoning skills in practice, but may struggle to articulate the reasons behind decisions about patient care. The ‘think aloud’ approach is an innovative learning and teaching method which can create an environment suitable for developing clinical reasoning skills in students (Banning, 2008; Lee and Ryan-Wenger, 1997). This project used the ‘think aloud’ strategy within a simulation context to provide a safe learning environment in which third year students were assisted to uncover cognitive approaches to making effective patient care decisions, and to improve their confidence, clinical reasoning and active critical reflection about their practice. METHODS: In semester 2, 2011 at QUT, third year nursing students undertook high fidelity simulation (some for the first time), commencing in September 2011. There were two cohorts for strategy implementation (group 1 used think aloud as a strategy within the simulation; group 2 used no specific strategy beyond the nursing assessment frameworks used by all students) in relation to problem solving for patient needs. The think aloud strategy was described to students in their pre-simulation briefing, with time allowed for clarification of the strategy. All other aspects of the simulations remained the same (resources, suggested nursing assessment frameworks, simulation session duration, size of simulation teams, preparatory materials). Ethics approval was obtained for this project. RESULTS: Results of a qualitative analysis (in progress; to be completed by March 2012) of student and facilitator reports on students’ ability to meet the learning objectives of solving patient problems using clinical reasoning, and of their experience with the ‘think aloud’ method, will be presented. A comparison of clinical reasoning learning outcomes between the two groups will determine the effect on clinical reasoning for students responding to patient problems. CONCLUSIONS: In an environment of increasingly constrained clinical placement opportunities, exploration of alternative strategies to improve critical thinking skills and develop clinical reasoning and problem solving for nursing students is imperative in preparing nurses to respond to changing patient needs.
Abstract:
Identifying the design features that impact construction is essential to developing cost effective and constructible designs. The similarity of building components is a critical design feature that affects method selection, productivity, and ultimately construction cost and schedule performance. However, there is limited understanding of what constitutes similarity in the design of building components and limited computer-based support to identify this feature in a building product model. This paper contributes a feature-based framework for representing and reasoning about component similarity that builds on ontological modelling, model-based reasoning and cluster analysis techniques. It describes the ontology we developed to characterize component similarity in terms of the component attributes, the direction, and the degree of variation. It also describes the generic reasoning process we formalized to identify component similarity in a standard product model based on practitioners' varied preferences. The generic reasoning process evaluates the geometric, topological, and symbolic similarities between components, creates groupings of similar components, and quantifies the degree of similarity. We implemented this reasoning process in a prototype cost estimating application, which creates and maintains cost estimates based on a building product model. Validation studies of the prototype system provide evidence that the framework is general and enables a more accurate and efficient cost estimating process.
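As an illustration of the general idea (not the paper's formalism), grouping components by the similarity of selected attributes and reporting the resulting clusters can be sketched as follows; the component attributes and the distance threshold are hypothetical:

```python
# Illustrative sketch only: group building components whose selected attributes
# are close to one another. Attribute values and threshold are hypothetical.
from math import dist

components = {
    "beam_A": (6.0, 0.30, 0.45),   # (length, width, depth) in metres
    "beam_B": (6.0, 0.30, 0.50),
    "beam_C": (9.0, 0.40, 0.60),
}

def group_similar(items, threshold=0.5):
    """Greedy grouping: a component joins the first group whose seed is within threshold."""
    groups = []
    for name, attrs in items.items():
        for group in groups:
            seed = items[group[0]]
            if dist(attrs, seed) <= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

for g in group_similar(components):
    print(g)   # beams A and B group together; C forms its own group
```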
Abstract:
This thesis explored the knowledge and reasoning of young children in solving novel statistical problems, and the influence of problem context and design on their solutions. It found that young children's statistical competencies are underestimated, and that problem design and context facilitated children's application of a wide range of knowledge and reasoning skills, none of which had been taught. A qualitative design-based research method, informed by the Models and Modeling perspective (Lesh & Doerr, 2003), underpinned the study. Data modelling activities incorporating picture story books were used to contextualise the problems. Children applied real-world understanding to problem solving, including attribute identification, categorisation and classification skills. Intuitive and metarepresentational knowledge, together with inductive and probabilistic reasoning, was used to make sense of data, and a beginning awareness of statistical variation and informal inference was visible.
Abstract:
This book analyses the structure, form and language of a selected number of international and national legal instruments and reviews how an illustrative range of international and national judicial institutions have responded to the issues before them and the processes of legal reasoning engaged by them in reaching their decisions. This involves a very detailed discussion of these primary sources of international and national environmental law with a view to determining their jurisprudential architecture and the processes of reasoning expected of those responsible for implementing these architectural arrangements. This book is concerned not with the effectiveness or the quality of an environmental legal system but only with its jurisprudential characteristics and their associated processes of legal reasoning.
Abstract:
This paper offers a discussion of the "mundane" or quotidian aspects of software that might at first glance seem to be a fine example of the extraordinary. It looks at game worlds in terms of an ancient human desire to articulate place in the world, and pursues a design concept that resonates with this practice in order to enable a more mundane exploitation of such spatial representations: the claiming of place.
Abstract:
In this research paper, we study a simple programming problem that requires only knowledge of variables and assignment statements, and yet we found that some early novice programmers had difficulty solving it. We also present data from think aloud studies that demonstrate the nature of those difficulties. We interpret our data within a neo-Piagetian framework, which describes cognitive developmental stages through which students pass as they learn to program. We describe in detail think aloud sessions with novices who reason at the neo-Piagetian preoperational level. Those students exhibit two problems. First, they focus on very small parts of the code and lose sight of the "big picture". Second, they are prone to focus on superficial aspects of the task that are not functionally central to the solution. It is not until the transition into the concrete operational stage that decentration of focus occurs, that students gain the cognitive ability to reason about abstract quantities that are conserved, and that they become equipped to adapt skills to closely related tasks. Our results, and the neo-Piagetian framework on which they are based, suggest that changes are necessary in teaching practice to better support novices who have not yet reached the concrete operational stage.
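The abstract does not reproduce the exact problem, but a representative task requiring only variables and assignment statements, of the kind such studies typically use, is rearranging the values held in a few variables. A small, hypothetical Python sketch:

```python
# Hypothetical sketch of the kind of problem described (the abstract does not give
# the actual task): using only variables and assignment statements, rearrange the
# values so that each variable ends up holding another variable's original value.
a = 1
b = 2
c = 3

# One correct solution uses a temporary variable so that no value is overwritten
# before it has been copied; losing a value this way is a typical novice error.
temp = a
a = b
b = c
c = temp

assert (a, b, c) == (2, 3, 1)
```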
Abstract:
John Dewey’s pragmatist aesthetics is used as a conceptual basis for designing new technologies that support staff members’ mundane social interactions in an academic department. From this perspective, aesthetics is seen as a broader phenomenon that encompasses experiential aspects of staff members’ everyday lives, not only a look-and-feel aspect.
Abstract:
In attempting to build intelligent litigation support tools, we have moved beyond first generation, production rule legal expert systems. Our work integrates rule based and case based reasoning with intelligent information retrieval. When using the case based reasoning methodology, or in our case the specialisation of case based retrieval, we need to be aware of how to retrieve relevant experience. Our research, in the legal domain, specifies an approach to the retrieval problem which relies heavily on an extended object oriented/rule based system architecture that is supplemented with causal background information. We use a distributed agent architecture to help support the reasoning process of lawyers. Our approach to integrating rule based reasoning, case based reasoning and case based retrieval is contrasted with the CABARET and PROLEXS architectures, which rely on a centralised blackboard architecture. We discuss in detail how our various cooperating agents interact, and provide examples of the system at work. The IKBALS system uses a specialised induction algorithm to induce rules from cases. These rules are then used as indices during the case based retrieval process. Because we aim to build legal support tools which can be modified to suit various domains, rather than single purpose legal expert systems, we focus on principles behind developing legal knowledge based systems. The original domain chosen was the Accident Compensation Act 1989 (Victoria, Australia), which relates to the provision of benefits for employees injured at work. For various reasons, which are indicated in the paper, we changed our domain to that of the Credit Act 1984 (Victoria, Australia). This Act regulates the provision of loans by financial institutions. The rule based part of our system, which provides advice on the Credit Act, has been commercially developed in conjunction with a legal firm. We indicate how this work has led to the development of a methodology for constructing rule based legal knowledge based systems. We explain the process of integrating this existing commercial rule based system with the case based reasoning and retrieval architecture.
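The abstract does not describe IKBALS's induction algorithm or case representation, but the general idea of using knowledge induced from cases as indices for case based retrieval can be sketched roughly as follows; the attributes, values and cases are hypothetical:

```python
# Rough, hypothetical sketch of rules-as-indices for case based retrieval.
# The case attributes, outcomes and index scheme are illustrative only and are
# not taken from the IKBALS system described in the abstract.
cases = [
    {"id": 1, "loan_regulated": True,  "interest_disclosed": False, "outcome": "breach"},
    {"id": 2, "loan_regulated": True,  "interest_disclosed": True,  "outcome": "no breach"},
    {"id": 3, "loan_regulated": False, "interest_disclosed": False, "outcome": "no breach"},
]

# Each "induced rule" is reduced here to the attribute-value pattern that
# co-occurred with an outcome; the pattern indexes the cases it came from.
index = {}
for case in cases:
    key = (case["loan_regulated"], case["interest_disclosed"])
    index.setdefault(key, []).append(case["id"])

def retrieve(loan_regulated, interest_disclosed):
    """Return the ids of stored cases matching the queried fact pattern."""
    return index.get((loan_regulated, interest_disclosed), [])

print(retrieve(True, False))   # -> [1]: the most relevant prior experience
```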
Abstract:
In this paper we discuss the strengths and weaknesses of a range of artificial intelligence approaches used in legal domains. Symbolic reasoning systems which rely on deductive, inductive and analogical reasoning are described and reviewed. The role of statistical reasoning in law is examined, and the use of neural networks analysed. There is discussion of architectures for, and examples of, systems which combine a number of these reasoning strategies. We conclude that to build intelligent legal decision support systems requires a range of reasoning strategies.
Abstract:
Commercial legal expert systems are invariably rule based. Such systems are poor at dealing with open texture and the argumentation inherent in law. To overcome these problems we suggest supplementing rule based legal expert systems with case based reasoning or neural networks. Both case based reasoners and neural networks use cases, but in very different ways. We discuss these differences at length. In particular, we examine the role of explanation in existing expert systems methodologies. Because neural networks provide poor explanation facilities, we consider the use of Toulmin argument structures to support explanation (S. Toulmin, 1958). We illustrate our ideas with regard to a number of systems built by the authors.
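Toulmin's layout of argument (claim, data, warrant, backing, qualifier, rebuttal) lends itself to a simple data structure. A minimal sketch, with illustrative content not drawn from the authors' systems:

```python
# Minimal sketch of a Toulmin argument structure (Toulmin, 1958) of the kind
# proposed for supporting explanation; the example content is hypothetical.
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    claim: str                     # the conclusion being argued for
    data: list[str]                # the facts relied upon
    warrant: str                   # why the data support the claim
    backing: str = ""              # support for the warrant itself
    qualifier: str = "presumably"  # strength of the inference
    rebuttals: list[str] = field(default_factory=list)  # conditions defeating the claim

arg = ToulminArgument(
    claim="The contract term is unenforceable",
    data=["The term was not disclosed before signing"],
    warrant="Undisclosed terms generally do not bind the consumer",
    qualifier="probably",
    rebuttals=["unless the consumer had constructive notice of the term"],
)
print(f"{arg.qualifier}, {arg.claim}, because {', '.join(arg.data)}")
```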