16 results for Learning Through Making

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to investigate the role of conversation in strategic change, so as to enhance both theory and practice in this respect. To examine how conversations shape change processes in practice, we reflect on an interpretive case study in a health care organization. Through an OD project complemented by semi-structured interviews with participants, we gained a set of data and experiences that allows us to inquire into the relationship between conversations and change in more depth.

Relevance:

100.00%

Publisher:

Abstract:

Conversation is central to the process of organizational learning and change. Drawing on the notion of reflective conversation, we describe an action research project, "learning through listening", in Omega, a residential healthcare organization. In this project, service users, staff, members of management committees, trustees, managers, and central office staff participated in listening to each other and in working together towards building the capacity to create their own vision of how the organization could move into the future, in keeping with its values and ethos. In doing so, they developed ways of engaging in reflective conversation that enabled progress towards a strategic direction.

Relevance:

90.00%

Publisher:

Abstract:

Operating room (OR) team safety training and learning in the field of dialysis access is well suited to the use of simulators, simulated case learning, and root cause analysis of adverse outcomes. The objectives of OR team training are to improve communication and leadership skills, to use checklists, and to prevent errors. Further objectives are to promote a change in attitudes towards vascular access through learning from mistakes in a nonpunitive environment, to positively impact employee performance, and to increase staff retention by making the workplace safer, more efficient, and user friendly.

Relevance:

50.00%

Publisher:

Abstract:

BACKGROUND: E-learning and blended learning approaches are gaining popularity in emergency medicine curricula. So far, few data are available on the impact of such approaches on procedural learning and skill acquisition, or on how they compare with traditional approaches.
OBJECTIVE: This study investigated the impact of a blended learning approach, including Web-based virtual patients (VPs) and standard pediatric basic life support (PBLS) training, on procedural knowledge, objective performance, and self-assessment.
METHODS: A total of 57 medical students were randomly assigned to an intervention group (n=30) and a control group (n=27). Both groups received paper handouts in preparation for simulation-based PBLS training. The intervention group additionally completed two Web-based VPs with embedded video clips. Measurements were taken at randomization (t0), after the preparation period (t1), and after hands-on training (t2). Clinical decision-making skills and procedural knowledge were assessed at t0 and t1. PBLS performance was scored for adherence to the correct algorithm, conformance to temporal demands, and the quality of procedural steps at t1 and t2. Participants' self-assessments were recorded at all three measurements.
RESULTS: Procedural knowledge of the intervention group was significantly superior to that of the control group at t1. At t2, the intervention group showed significantly better adherence to the algorithm and temporal demands, and better procedural quality of PBLS in objective measures, than the control group. These aspects already differed between the groups at t1 (after VPs, prior to practical training). Self-assessments differed significantly only at t1, in favor of the intervention group.
CONCLUSIONS: Training with VPs combined with hands-on training improves PBLS performance as judged by objective measures.

Relevance:

40.00%

Publisher:

Abstract:

Making research relevant to development is a complex, non-linear and often unpredictable process which requires very particular skills and strategies on the part of researchers. The National Centre of Competence in Research (NCCR) North-South provides financial and technical support for researchers so that they can cooperate effectively with policy-makers and practitioners. An analysis of 10 years of experience in translating research into development practice in the NCCR North-South revealed the following four strategies as particularly relevant: a) research orientation towards the needs and interests of partners; b) implementation of promising methods and approaches; c) communication and dissemination of research results; and d) careful analysis of the political context through monitoring and learning approaches. The NCCR North-South experience shows that "doing excellent research" is just one piece of the mosaic. It is equally important to join hands with non-academic partners from the very beginning of a research project in order to develop and test new pathways for sustainable development. Capacity building, in both the North and the South, enables researchers to do both: to do excellent research and to make it relevant for development.

Relevance:

40.00%

Publisher:

Abstract:

Training a system to recognize handwritten words is a task that requires a large amount of data with correct transcriptions. However, the creation of such a training set, including the generation of the ground truth, is tedious and costly. One way of reducing the high cost of labeled training data acquisition is to exploit unlabeled data, which can be gathered easily. Making use of both labeled and unlabeled data is known as semi-supervised learning. One of the most general versions of semi-supervised learning is self-training, where a recognizer iteratively retrains itself on its own output on new, unlabeled data. In this paper we propose to apply semi-supervised learning, and in particular self-training, to the problem of cursive, handwritten word recognition. The special focus of the paper is on retraining rules that define which data are actually used in the retraining phase. In a series of experiments it is shown that the performance of a neural network based recognizer can be significantly improved through the use of unlabeled data and self-training if appropriate retraining rules are applied.
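The self-training loop described above can be illustrated with a short, generic sketch in Python using scikit-learn. This is an assumed implementation, not the paper's recognizer or its specific retraining rules; a simple confidence threshold stands in for one possible rule that decides which automatically labeled samples are fed back into training.

```python
# Minimal self-training sketch (illustrative only; not the paper's recognizer).
# A base classifier is retrained on its own most confident predictions for
# unlabeled data; the confidence threshold acts as a simple retraining rule.
import numpy as np
from sklearn.neural_network import MLPClassifier

def self_train(X_labeled, y_labeled, X_unlabeled,
               confidence_threshold=0.9, max_rounds=5):
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    X_train, y_train = X_labeled.copy(), np.asarray(y_labeled).copy()
    pool = X_unlabeled.copy()

    for _ in range(max_rounds):
        model.fit(X_train, y_train)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confidence = proba.max(axis=1)
        pseudo_labels = model.classes_[proba.argmax(axis=1)]

        # Retraining rule: keep only predictions above the confidence threshold.
        keep = confidence >= confidence_threshold
        if not keep.any():
            break
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, pseudo_labels[keep]])
        pool = pool[~keep]

    return model
```

Stricter or looser retraining rules (e.g., adding only the top-k most confident samples per round) plug in at the `keep` step without changing the surrounding loop.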

Relevance:

40.00%

Publisher:

Abstract:

The twenty-first century has seen a further dramatic increase in the use of quantitative knowledge for governing social life, after its explosion in the 1980s. Indicators and rankings play an increasing role in the way governmental and non-governmental organizations distribute attention, make decisions, and allocate scarce resources. Quantitative knowledge promises to be more objective and straightforward, as well as more transparent and open for public debate, than qualitative knowledge, thus producing more democratic decision-making. However, we know little about the social processes through which this knowledge is constituted, or about its effects. Understanding how such numeric knowledge is produced and used is increasingly important as proliferating technologies of quantification alter modes of knowing in subtle and often unrecognized ways. This book explores the implications of the global multiplication of indicators as a specific technology of numeric knowledge production used in governance. Its combination of insights from the anthropology of law, history of science, science and technology studies, sociology of quantification, economics, and geography will appeal to those who are uncomfortable with the separation between 'theoretical' and 'empirical' approaches and with the current weakness of critiques that address the main trends shaping the relations between capitalism, markets, law and democracy. Its theoretical discussion of the nature and historical formation of quantification will appeal to those who ask questions such as, 'What is new or different about our contemporary reliance on quantitative knowledge?' Groundbreaking empirical case studies uncover the social work and politics that often go into the making of indicators and explore the far-reaching effects and impacts of these numerical representations in specific settings.

Relevance:

40.00%

Publisher:

Abstract:

The new computing paradigm known as cognitive computing attempts to imitate the human capabilities of learning, problem solving, and considering things in context. To do so, an application (a cognitive system) must learn from its environment (e.g., by interacting with various interfaces). These interfaces can run the gamut from sensors to humans to databases. Accessing data through such interfaces allows the system to conduct cognitive tasks that can support humans in decision-making or problem-solving processes. Cognitive systems can be integrated into various domains (e.g., medicine or insurance). For example, a cognitive system in cities can collect data, learn from various data sources, and then attempt to connect these sources to provide real-time optimization of subsystems within the city (e.g., the transportation system). In this study, we provide a methodology for integrating a cognitive system that allows data to be verbalized, making the causalities and hypotheses generated from the cognitive system more understandable to humans. We abstract a city subsystem (passenger flow for a taxi company) by applying fuzzy cognitive maps (FCMs). FCMs can be used as a mathematical tool for modeling complex systems built by directed graphs with concepts (e.g., policies, events, and/or domains) as nodes and causalities as edges. As a verbalization technique we introduce the restriction-centered theory of reasoning (RCT). RCT addresses the imprecision inherent in language by introducing restrictions. Using this underlying combinatorial design, our approach can handle large data sets from complex systems and make the output understandable to humans.
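As a rough illustration of the FCM formalism mentioned above, concepts can be held as a state vector and causalities as a signed weight matrix, with activations iterated through a squashing function until they settle. This is a hypothetical Python sketch under assumed concepts and weights, not the authors' taxi model or their RCT verbalization layer.

```python
# Minimal fuzzy cognitive map (FCM) sketch: concepts are nodes, signed edge
# weights encode causal influence, and concept activations are propagated
# through a sigmoid squashing function until they stabilize.
import numpy as np

def sigmoid(x, steepness=1.0):
    return 1.0 / (1.0 + np.exp(-steepness * x))

def run_fcm(weights, state, steps=20, tol=1e-4):
    """weights[i, j] is the causal influence of concept i on concept j."""
    for _ in range(steps):
        new_state = sigmoid(state @ weights + state)  # keep a self-memory term
        if np.max(np.abs(new_state - state)) < tol:
            return new_state
        state = new_state
    return state

# Hypothetical concepts for a taxi passenger-flow scenario:
# 0 = rain, 1 = passenger demand, 2 = available taxis, 3 = waiting time
W = np.array([
    [0.0,  0.6,  0.0,  0.0],   # rain increases demand
    [0.0,  0.0, -0.4,  0.7],   # demand depletes taxis and raises waiting time
    [0.0,  0.0,  0.0, -0.8],   # more available taxis lower waiting time
    [0.0,  0.0,  0.0,  0.0],
])
initial = np.array([1.0, 0.5, 0.5, 0.2])   # scenario: heavy rain
print(run_fcm(W, initial))
```

Verbalizing the converged activations (e.g., "waiting time is likely high when it rains") is where an approach such as RCT would attach in the authors' pipeline.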

Relevance:

40.00%

Publisher:

Abstract:

Libraries of learning objects may serve as a basis for deriving course offerings that are customized to the needs of different learning communities or even individuals. Several ways of organizing this course composition process are discussed. Course composition needs a clear understanding of the dependencies between the learning objects. We therefore discuss the metadata for object relationships proposed in different standardization projects, and especially those suggested in the Dublin Core Metadata Initiative. Based on these metadata, we construct adjacency matrices and graphs. We show how Gozinto-type computations can be used to determine direct and indirect prerequisites for certain learning objects. The metadata may also be used to define integer programming models which can be applied to support the instructor in formulating specifications for selecting objects, or which allow a computer agent to select learning objects automatically. Such decision models could also be helpful for a learner navigating through a library of learning objects. We also sketch a graph-based procedure for manual or automatic sequencing of the learning objects.
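A small Python sketch of the adjacency-matrix idea follows (an assumed illustration, not the authors' metadata schema or their integer programming models): with A[i][j] = 1 meaning that learning object i is a direct prerequisite of object j, a Gozinto-style transitive closure of A yields both direct and indirect prerequisites of any target object.

```python
# Illustrative prerequisite computation over a learning-object dependency graph.
# A[i][j] = 1 means object i is a direct prerequisite of object j; the transitive
# closure then exposes indirect prerequisites as well.
import numpy as np

def all_prerequisites(adjacency, target):
    """Return direct and indirect prerequisites of `target` via transitive closure."""
    n = adjacency.shape[0]
    reach = adjacency.astype(bool).copy()
    for k in range(n):                                   # Warshall-style closure
        reach |= np.outer(reach[:, k], reach[k, :])
    return {i for i in range(n) if reach[i, target]}

# Hypothetical library of five learning objects (0..4).
A = np.zeros((5, 5), dtype=int)
A[0, 2] = 1   # object 0 must precede object 2
A[1, 2] = 1
A[2, 4] = 1
A[3, 4] = 1
print(all_prerequisites(A, target=4))   # {0, 1, 2, 3}
```

The same reachability information could feed a selection or sequencing model, for instance as coverage constraints in an integer program that picks objects for a customized course.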