952 results for reasoning about loops
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! provides early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
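The progressive-sampling workflow described above can be sketched in a few lines. This is a minimal illustration of the reuse-of-work idea, in which each progressive sample is a growing prefix of a shuffled dataset so earlier computation is never repeated; it is not NOW!'s data-parallel implementation, and all names are invented for the sketch.

```python
import random

def progressive_mean(data, sample_sizes, seed=0):
    """Estimate a mean over progressively larger samples.

    Each sample is a prefix of the next, so the partial sum computed
    for an earlier sample is reused rather than recomputed -- the
    reuse-of-work property that ad-hoc sampling workflows lack.
    """
    rng = random.Random(seed)           # fixed seed: repeatable semantics
    shuffled = data[:]
    rng.shuffle(shuffled)
    running_sum, consumed, results = 0.0, 0, []
    for n in sorted(sample_sizes):
        for x in shuffled[consumed:n]:  # process only the newly added rows
            running_sum += x
        consumed = n
        results.append((n, running_sum / n))
    return results

estimates = progressive_mean(list(range(1, 1001)), [10, 100, 1000])
```

Each entry in `estimates` is an early answer computed from a sample; the final entry covers the full data, so it is exact.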
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
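The multi-hop neighborhoods at the heart of this design can be made concrete with a toy extraction routine (all names here are hypothetical; NSCALE's declarative subgraph specification and distributed execution are far more involved):

```python
from collections import deque

def k_hop_subgraph(adj, center, k):
    """Return the vertices and edges of the k-hop neighborhood of
    `center` in an undirected graph given as an adjacency dict."""
    seen = {center}
    frontier = deque([(center, 0)])
    while frontier:                      # breadth-first expansion
        v, depth = frontier.popleft()
        if depth == k:
            continue
        for u in adj.get(v, ()):
            if u not in seen:
                seen.add(u)
                frontier.append((u, depth + 1))
    # induced edges: both endpoints inside the neighborhood
    edges = {(a, b) for a in seen for b in adj.get(a, ())
             if b in seen and a < b}
    return seen, edges

# Toy graph: path 1-2-3-4 plus the edge 1-5.
adj = {1: [2, 5], 2: [1, 3], 3: [2, 4], 4: [3], 5: [1]}
ego, ego_edges = k_hop_subgraph(adj, 1, 1)   # 1-hop ego network of vertex 1
```

An ego network analysis, for instance, would then run entirely within `ego`/`ego_edges` instead of touching one vertex's state at a time.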
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts.
I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
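The convexity that makes KGI tractable can be seen in a deliberately tiny hinge-loss MRF: each squared hinge potential max(0, ℓ(y))² is convex in the continuous truth values y ∈ [0, 1], so MAP inference is a convex program. The single-variable sketch below uses plain projected gradient descent and an invented rule with invented weights; real systems solve the full multi-variable problem with specialized convex optimizers.

```python
def hlmrf_map(potentials, y0=0.5, lr=0.05, steps=2000):
    """MAP inference for a one-variable hinge-loss MRF.

    Each potential (w, a, b) contributes w * max(0, a*y + b)**2 to the
    objective, which is convex in y; we minimize it by projected
    gradient descent over the interval [0, 1].
    """
    y = y0
    for _ in range(steps):
        grad = sum(2 * w * max(0.0, a * y + b) * a
                   for (w, a, b) in potentials)
        y = min(1.0, max(0.0, y - lr * grad))   # project onto [0, 1]
    return y

# Invented KGI-style rule: spouse(A,B) & votes(A) -> votes(B), with
# spouse(A,B)=1.0 and votes(A)=0.9 observed.  Grounding the rule gives
# the hinge max(0, 0.9 - y); a weaker prior max(0, y) pushes y down.
potentials = [(1.0, -1.0, 0.9),   # grounded rule,  weight 1.0
              (0.5,  1.0, 0.0)]   # negative prior, weight 0.5
votes_b = hlmrf_map(potentials)   # settles at the convex optimum, y = 0.6
```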
Abstract:
Coinduction is a proof rule, the dual of induction. It allows reasoning about non-well-founded structures such as lazy lists or streams, and is of particular use for reasoning about equivalences. A central difficulty in the automation of coinductive proof is the choice of a relation (called a bisimulation). We present an automation of coinductive theorem proving based on the idea of proof planning. Proof planning constructs the higher-level steps in a proof, using knowledge of the general structure of a family of proofs and exploiting this knowledge to control the proof search. Part of proof planning involves the use of failure information to modify the plan through a proof critic, which exploits the information gained from the failed proof attempt. Our approach was to develop a strategy that makes an initial simple guess at a bisimulation and then uses generalisation techniques, motivated by a critic, to refine this guess, so that a larger class of coinductive problems can be automatically verified. The implementation of this strategy has focused on the use of coinduction to prove the equivalence of programs in a small lazy functional language similar to Haskell. We have developed a proof plan for coinduction and a critic associated with this proof plan. These have been implemented in CoClam, an extended version of Clam, with encouraging results: the planner has been successfully tested on a number of theorems.
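The flavour of the equivalences such a system targets can be shown with lazy streams, here mimicked by Python generators since the thesis language is Haskell-like: the naturals defined by iteration versus corecursively, compared on a finite prefix. The prefix check is only an approximation of bisimilarity; a genuine coinductive proof exhibits a bisimulation covering all elements at once, which is exactly the relation-finding problem the proof critic addresses. The code is illustrative, not part of CoClam.

```python
from itertools import islice

def nats():
    """The stream 0, 1, 2, ... defined by iteration."""
    n = 0
    while True:
        yield n
        n += 1

def nats_corec():
    """The same stream defined corecursively, as in
    `nats = 0 : map (+1) nats` in a lazy functional language."""
    yield 0
    for n in nats_corec():
        yield n + 1

def prefix_equal(s1, s2, n):
    """Finite approximation of bisimilarity: first n elements agree."""
    return list(islice(s1, n)) == list(islice(s2, n))

same = prefix_equal(nats(), nats_corec(), 50)
```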
Abstract:
Coinduction is a method of growing importance in reasoning about functional languages, due to the increasing prominence of lazy data structures. Through the use of bisimulations, and proofs that bisimilarity is a congruence in various domains, it can be used to prove the equivalence of two processes. A coinductive proof requires a relation to be chosen which can be proved to be a bisimulation. We use proof planning to develop a heuristic method which automatically constructs a candidate relation. If this relation does not allow the proof to go through, a proof critic analyses the reasons why it failed and modifies the relation accordingly. Several proof tools have been developed to aid coinductive proofs, but all require user interaction; crucially, they require the user to supply an appropriate relation which the system can then prove to be a bisimulation.
Abstract:
Internship report for obtaining the master's degree in Pre-school Education and Teaching of the 1st Cycle of Basic Education
Abstract:
Base rate neglect on the mammography problem can be overcome by explicitly presenting a causal basis for the typically vague false-positive statistic. One account of this causal facilitation effect is that people make probabilistic judgements over intuitive causal models parameterized with the evidence in the problem. Poorly defined or difficult-to-map evidence interferes with this process, leading to errors in statistical reasoning. To assess whether the construction of parameterized causal representations is an intuitive or deliberative process, in Experiment 1 we combined a secondary load paradigm with manipulations of the presence or absence of an alternative cause in typical statistical reasoning problems. We found limited effects of a secondary load, no evidence that information about an alternative cause improves statistical reasoning, but some evidence that it reduces base rate neglect errors. In Experiments 2 and 3 where we did not impose a load, we observed causal facilitation effects. The amount of Bayesian responding in the causal conditions was impervious to the presence of a load (Experiment 1) and to the precise statistical information that was presented (Experiment 3). However, we found less Bayesian responding in the causal condition than previously reported. We conclude with a discussion of the implications of our findings and the suggestion that there may be population effects in the accuracy of statistical reasoning.
Abstract:
People often struggle when making Bayesian probabilistic estimates on the basis of competing sources of statistical evidence. Recently, Krynski and Tenenbaum (Journal of Experimental Psychology: General, 136, 430–450, 2007) proposed that a causal Bayesian framework accounts for people's errors in Bayesian reasoning and showed that, by clarifying the causal relations among the pieces of evidence, judgments on a classic statistical reasoning problem could be significantly improved. We aimed to understand whose statistical reasoning is facilitated by the causal structure intervention. In Experiment 1, although we observed causal facilitation effects overall, the effect was confined to participants high in numeracy. We did not find an overall facilitation effect in Experiment 2 but did replicate the earlier interaction between numerical ability and the presence or absence of causal content. This effect held when we controlled for general cognitive ability and thinking disposition. Our results suggest that clarifying causal structure facilitates Bayesian judgments, but only for participants with sufficient understanding of basic concepts in probability and statistics.
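The underlying task in these studies, typified by the classic mammography problem, is a single application of Bayes' rule. The figures below are the commonly cited illustrative ones (1% base rate, 80% hit rate, 9.6% false-positive rate), not data from the experiments above:

```python
def posterior(base_rate, hit_rate, false_pos_rate):
    """P(disease | positive test) by Bayes' rule."""
    true_pos = base_rate * hit_rate                 # P(positive and disease)
    false_pos = (1 - base_rate) * false_pos_rate    # P(positive, no disease)
    return true_pos / (true_pos + false_pos)

p = posterior(base_rate=0.01, hit_rate=0.80, false_pos_rate=0.096)
# p is roughly 0.078: most positive tests are false positives, which is
# what base rate neglect (answering close to 0.80) fails to register.
```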
Abstract:
This study looks at how upper secondary school teachers gender-stereotype aspects of students' mathematical reasoning. Girls were attributed gender symbols including insecurity, use of standard methods, and imitative reasoning. Boys were assigned symbols such as the use of multiple strategies, especially on the calculator, guessing, and chance-taking.
Abstract:
The subject of the text is the issue of the "political", which is defined as the nature and level of the final judgment and ultimate reasoning. The text attempts to distinguish this kind of the "political" within political science, focusing on: (1) the scientist as an agent of the final judgment and reasoning, (2) the subject of study of political science, and (3) "theoretical strategies" in the science of politics. The latter problem is discussed mainly using the example of Polish political science, covering among other things: (1) "the dilemma of scale", (2) limited operational (methodological and theoretical) capacity, (3) aesthetic imagery of political life, and (4) structural ignorance in the field of ontology, epistemology and methodology.
Abstract:
This talk proceeds from the premise that IR should engage in a more substantial dialogue with cognitive science. After all, how users decide relevance, or how they choose terms to modify a query, are processes rooted in human cognition. Recently, there has been a growing literature applying quantum theory (QT) to model cognitive phenomena. This talk will survey recent research, in particular the modelling of interference effects in human decision making. One aspect of QT will be illustrated: how quantum entanglement can be used to model word associations in human memory. The implications of this will be briefly discussed in terms of a new approach for modelling concept combinations. Tentative links to human abductive reasoning will also be drawn. The basic theme behind this talk is that QT can potentially provide a new genre of information processing models (including search) more aligned with human cognition.
Three primary school students’ cognition about 3D rotation in a virtual reality learning environment
Abstract:
This paper reports on three primary school students’ explorations of 3D rotation in a virtual reality learning environment (VRLE) named VRMath. When asked to investigate if you would face the same direction when you turn right 45 degrees first then roll up 45 degrees, or when you roll up 45 degrees first then turn right 45 degrees, the students found that the different order of the two turns ended up with different directions in the VRLE. This was contrary to the students’ prior predictions based on using pen, paper and body movements. The findings of this study showed the difficulty young children have in perceiving and understanding the non-commutative nature of 3D rotation and the power of the computational VRLE in giving students experiences that they rarely have in real life with 3D manipulations and 3D mental movements.
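The non-commutativity the students discovered can be verified directly with rotation matrices. The sketch below uses one plausible convention (a "turn right" as rotation about the vertical y-axis, a "roll up" as rotation about the sideways x-axis, facing along -z); the exact axes and signs are assumptions, not VRMath's definitions, but any consistent choice shows the two orders disagree:

```python
import math

def rot_y(deg):                      # 'turn' about the vertical axis
    a = math.radians(deg)
    return [[math.cos(a), 0.0, math.sin(a)],
            [0.0, 1.0, 0.0],
            [-math.sin(a), 0.0, math.cos(a)]]

def rot_x(deg):                      # 'roll' about the sideways axis
    a = math.radians(deg)
    return [[1.0, 0.0, 0.0],
            [0.0, math.cos(a), -math.sin(a)],
            [0.0, math.sin(a), math.cos(a)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

forward = [0.0, 0.0, -1.0]           # initial facing direction
# The rotation applied first sits rightmost in the matrix product.
turn_then_roll = apply(matmul(rot_x(45), rot_y(45)), forward)
roll_then_turn = apply(matmul(rot_y(45), rot_x(45)), forward)
# The two resulting facing directions differ: 3D rotations do not commute.
```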
Abstract:
AIMS This paper reports on the implementation of a research project that trials an educational strategy implemented over six months of an undergraduate third year nursing curriculum. This project aims to explore the effectiveness of ‘think aloud’ as a strategy for learning clinical reasoning for students in simulated clinical settings. BACKGROUND Nurses are required to apply and utilise critical thinking skills to enable clinical reasoning and problem solving in the clinical setting [1]. Nursing students are expected to develop and display clinical reasoning skills in practice, but may struggle to articulate the reasons behind decisions about patient care. For students learning to manage complex clinical situations, teaching approaches are required that make these instinctive cognitive processes explicit and clear [2-5]. In line with professional expectations, nursing students in third year at Queensland University of Technology (QUT) are expected to display clinical reasoning skills in practice. This can be a complex proposition for students in practice situations, particularly as the degree of uncertainty or decision complexity increases [6-7]. The ‘think aloud’ approach is an innovative learning/teaching method which can create an environment suitable for developing clinical reasoning skills in students [4, 8]. This project aims to use the ‘think aloud’ strategy within a simulation context to provide a safe learning environment in which third year students are assisted to uncover the cognitive approaches that best assist them to make effective patient care decisions, and to improve their confidence, clinical reasoning and active critical reflection on their practice. METHODS In semester 2, 2011 at QUT, third year nursing students will undertake high fidelity simulation, some for the first time, commencing in September 2011.
There will be two cohorts for strategy implementation (group 1 = use think aloud as a strategy within the simulation; group 2 = not given a specific strategy outside of nursing assessment frameworks) in relation to problem solving patient needs. Students will be briefed about the scenario, given a nursing handover, and placed into a simulation group and an observer group; the facilitator/teacher will run the simulation from a control room and will not have contact (as a ‘teacher’) with students during the simulation. Debriefing will then occur as a whole group outside the simulation room, where the session can be reviewed on screen. The think aloud strategy will be described to students in their pre-simulation briefing, allowing for clarification of the strategy at that time. All other aspects of the simulations remain the same (resources, suggested nursing assessment frameworks, simulation session duration, size of simulation teams, preparatory materials). RESULTS The methodology of the project and the challenges of implementation will be the focus of this presentation. This will include ethical considerations in designing the project, recruitment of students, and implementation of a voluntary research project within a busy educational curriculum which in third year targets 669 students over two campuses. CONCLUSIONS In an environment of increasingly constrained clinical placement opportunities, exploration of alternate strategies to improve critical thinking skills and develop clinical reasoning and problem solving for nursing students is imperative in preparing nurses to respond to changing patient needs.
References
1. Lasater, K. High-fidelity simulation and the development of clinical judgement: students' experiences. Journal of Nursing Education, 2007. 46(6): p. 269-276.
2. Lapkin, S., et al. Effectiveness of patient simulation manikins in teaching clinical reasoning skills to undergraduate nursing students: a systematic review. Clinical Simulation in Nursing, 2010. 6(6): p. e207-22.
3. Kaddoura, M. New graduate nurses' perceptions of the effects of clinical simulation on their critical thinking, learning, and confidence. The Journal of Continuing Education in Nursing, 2010. 41(11): p. 506.
4. Banning, M. The think aloud approach as an educational tool to develop and assess clinical reasoning in undergraduate students. Nurse Education Today, 2008. 28: p. 8-14.
5. Porter-O'Grady, T. Profound change: 21st century nursing. Nursing Outlook, 2001. 49(4): p. 182-186.
6. Andersson, A.K., M. Omberg, and M. Svedlund. Triage in the emergency department - a qualitative study of the factors which nurses consider when making decisions. Nursing in Critical Care, 2006. 11(3): p. 136-145.
7. O'Neill, E.S., N.M. Dluhy, and C. Chin. Modelling novice clinical reasoning for a computerized decision support system. Journal of Advanced Nursing, 2005. 49(1): p. 68-77.
8. Lee, J.E. and N. Ryan-Wenger. The "Think Aloud" seminar for teaching clinical reasoning: a case study of a child with pharyngitis. Journal of Pediatric Health Care, 1997. 11(3): p. 101-110.
Abstract:
Theme Paper for Curriculum innovation and enhancement theme AIM: This paper reports on a research project that trialled an educational strategy implemented in an undergraduate nursing curriculum. The project aimed to explore the effectiveness of ‘think aloud’ as a strategy for improving clinical reasoning for students in simulated clinical settings. BACKGROUND: Nurses are required to apply and utilise critical thinking skills to enable clinical reasoning and problem solving in the clinical setting (Lasater, 2007). Nursing students are expected to develop and display clinical reasoning skills in practice, but may struggle to articulate the reasons behind decisions about patient care. The ‘think aloud’ approach is an innovative learning/teaching method which can create an environment suitable for developing clinical reasoning skills in students (Banning, 2008; Lee and Ryan-Wenger, 1997). This project used the ‘think aloud’ strategy within a simulation context to provide a safe learning environment in which third year students were assisted to uncover cognitive approaches to assist in making effective patient care decisions, and to improve their confidence, clinical reasoning and active critical reflection on their practice. METHODS: In semester 2, 2011 at QUT, third year nursing students undertook high fidelity simulation (some for the first time), commencing in September 2011. There were two cohorts for strategy implementation (group 1 = used think aloud as a strategy within the simulation; group 2 = no specific strategy outside of the nursing assessment frameworks used by all students) in relation to problem solving patient needs. The think aloud strategy was described to students in their pre-simulation briefing, and time was allowed for clarification of the strategy. All other aspects of the simulations remained the same (resources, suggested nursing assessment frameworks, simulation session duration, size of simulation teams, preparatory materials).
Ethics approval has been obtained for this project. RESULTS: Results of a qualitative analysis (in progress; to be completed by March 2012) of student and facilitator reports on students' ability to meet the learning objectives of solving patient problems using clinical reasoning, and of their experience with the ‘think aloud’ method, will be presented. A comparison of clinical reasoning learning outcomes between the two groups will determine the effect on clinical reasoning for students responding to patient problems. CONCLUSIONS: In an environment of increasingly constrained clinical placement opportunities, exploration of alternate strategies to improve critical thinking skills and develop clinical reasoning and problem solving for nursing students is imperative in preparing nurses to respond to changing patient needs.