964 results for distributed cognition theory


Relevance:

100.00%

Publisher:

Abstract:

Manuscript 1: “Conceptual Analysis: Externalizing Nursing Knowledge” We use concept analysis to establish that the report tool nurses prepare, carry, reference, amend, and use as a temporary data repository is an example of a cognitive artifact. This tool, integrally woven throughout the work and practice of nurses, is important to cognition and clinical decision-making. Establishing the tool as a cognitive artifact will support new dimensions of study. Such studies can characterize how this report tool supports cognition, the internal representation of knowledge and skills, and the external representation of the nurse's knowledge. Manuscript 2: “Research Methods: Exploring Cognitive Work” The purpose of this paper is to describe a complex, cross-sectional, multi-method approach to the study of personal cognitive artifacts in the clinical environment. The complex data arrays present in these cognitive artifacts warrant the use of multiple methods of data collection. Use of a less robust research design may result in an incomplete understanding of the meaning, value, content, and relationships between personal cognitive artifacts in the clinical environment and the cognitive work of the user. Manuscript 3: “Making the Cognitive Work of Registered Nurses Visible” Purpose: Knowledge representations and structures are created and used by registered nurses to guide patient care. Understanding is limited regarding how these knowledge representations, or cognitive artifacts, contribute to working memory, prioritization, organization, cognition, and decision-making. The purpose of this study was to identify and characterize the role of a specific cognitive artifact, a knowledge representation and structure, as it contributed to the cognitive work of the registered nurse. Methods: Data collection was completed, using qualitative research methods, by shadowing and interviewing 25 registered nurses. Data analysis employed triangulation and iterative analytic processes.
Results: Nurse cognitive artifacts support recall, data evaluation, decision-making, organization, and prioritization. These cognitive artifacts demonstrated spatial, longitudinal, chronologic, visual, and personal cues to support the cognitive work of nurses. Conclusions: Nurse cognitive artifacts are an important adjunct to the cognitive work of nurses, and directly support patient care. Nurses need to be able to configure their cognitive artifact in ways that are meaningful and support their internal knowledge representations.

Relevance:

100.00%

Publisher:

Abstract:

Distributed cognition theory has played a guiding role in human-computer interaction research: it coordinates the human-computer dialogue and combines the respective strengths of humans and computers to solve problems. Although the resources model supported by distributed cognition theory has been successful in analyzing human-computer interaction, the model has shortcomings: it cannot support complex user tasks, and it lacks precise definitions of its elements, which has led to a degree of confusion in its representational forms. Using distributed cognition theory, we construct an extended resources model that establishes links between actions and representations in human-computer interaction, thereby guiding interface design and implementation. The extended resources model supports interface interaction actions from two aspects, static structure and interaction strategy, reducing the user's cognitive load during interaction. This research offers guidance for designing interfaces that fit human cognitive characteristics.

Relevance:

100.00%

Publisher:

Abstract:

Effective knowledge sharing underpins the day-to-day work activities in knowledge-intensive organizational environments. This paper integrates key concepts from the literature into a model that explains effective knowledge sharing in such environments. It is proposed that the effectiveness of knowledge sharing is determined by the maturity of informal and formal social networks and of a shared information- and knowledge-based artefact network (AN) in a particular work context. It is further proposed that facilitating mechanisms within the social and artefact networks, and mechanisms that link these networks, affect the overall efficiency of knowledge sharing in complex environments. Three case studies are used to illustrate the model, highlighting typical knowledge-sharing problems that result when certain model elements are absent or insufficient in a particular environment. The model is discussed in terms of diagnosing knowledge-sharing problems, organizational knowledge strategy, and the role of information and communication technology in knowledge sharing.

Relevance:

100.00%

Publisher:

Abstract:

People often use tools to search for information. To improve the quality of an information search, it is important to understand how internal information, stored in the user's mind, and external information, represented by the interface of the tool, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed prototype interfaces for the search tasks of relational data. These prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and performed one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance of relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are main factors in determining search efficiency and effectiveness.
In particular, the more external representations are used, the better the search task performance; the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
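The condition-by-condition comparison of response time and accuracy described above can be sketched as a small analysis script. The trial data, condition names, and `summarize` helper below are hypothetical illustrations, not the study's actual materials:

```python
from statistics import mean

# Hypothetical trials: (display, data scale, response time in s, correct?)
trials = [
    ("table", "nominal", 4.2, True),
    ("table", "nominal", 5.0, True),
    ("graph", "nominal", 5.1, False),
    ("graph", "nominal", 3.9, True),
]

def summarize(trials):
    """Group trials by (display, scale); report mean response time and accuracy."""
    by_condition = {}
    for display, scale, rt, correct in trials:
        by_condition.setdefault((display, scale), []).append((rt, correct))
    return {
        cond: {"mean_rt": mean(rt for rt, _ in obs),
               "accuracy": mean(c for _, c in obs)}
        for cond, obs in by_condition.items()
    }

summary = summarize(trials)
print(summary[("table", "nominal")])  # mean RT of about 4.6 s, accuracy 1.0
```

Comparing such per-condition summaries across display types and data scales is what lets the matching hypothesis (question type vs. data scale representation) be tested.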

Relevance:

90.00%

Publisher:

Abstract:

Frameworks such as activity theory, distributed cognition and structuration theory, amongst others, have shown that detailed study of the contextual settings where users work (or live) can help the design of interactive systems. However, these frameworks do not adequately account for the materiality (and embodiment) of those contextual settings. Within the IST-EU funded AMIDA project (Augmented Multiparty Interaction with Distance Access) we are looking into supporting meeting practices with distance access. Meetings are inherently embodied in everyday work life, and the material artefacts associated with meeting practices play a critical role in their formation. Our eventual goal is to develop a deeper understanding of the dynamic and embodied nature of meeting practices and to design technologies to support them. In this paper we introduce the notion of "artefact ecologies" as a conceptual base for understanding embodied meeting practices with distance access. An artefact ecology refers to a system consisting of different digital and physical artefacts, people, their work practices and values, and lays emphasis on the role artefacts play in embodiment, work coordination and supporting remote awareness. In the end we lay out our plans for designing technologies to support embodied meeting practices within the AMIDA project.

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this study is to analyze and develop various forms of abduction as a means of conceptualizing processes of discovery. Abduction was originally presented by Charles S. Peirce (1839-1914) as a "weak", third main mode of inference -- besides deduction and induction -- one which, he proposed, is closely related to many kinds of cognitive processes, such as instincts, perception, practices and mediated activity in general. Both abduction and discovery are controversial issues in philosophy of science. It is often claimed that discovery cannot be a proper subject area for conceptual analysis and, accordingly, abduction cannot serve as a "logic of discovery". I argue, however, that abduction gives essential means for understanding processes of discovery although it cannot give rise to a manual or algorithm for making discoveries. In the first part of the study, I briefly present how the main trend in philosophy of science has, for a long time, been critical towards a systematic account of discovery. Various models have, however, been suggested. I outline a short history of abduction; first Peirce's evolving forms of his theory, and then later developments. Although abduction has not been a major area of research until quite recently, I review some critiques of it and look at the ways it has been analyzed, developed and used in various fields of research. Peirce's own writings and later developments, I argue, leave room for various subsequent interpretations of abduction. The second part of the study consists of six research articles. First I treat "classical" arguments against abduction as a logic of discovery. I show that by developing strategic aspects of abductive inference these arguments can be countered. Nowadays the term 'abduction' is often used as a synonym for the Inference to the Best Explanation (IBE) model. 
I argue, however, that it is useful to distinguish between IBE ("Harmanian abduction") and "Hansonian abduction"; the latter concentrating on analyzing processes of discovery. The distinctions between loveliness and likeliness, and between potential and actual explanations are more fruitful within Hansonian abduction. I clarify the nature of abduction by using Peirce's distinction between three areas of "semeiotic": grammar, critic, and methodeutic. Grammar (emphasizing "Firstnesses" and iconicity) and methodeutic (i.e., a processual approach) especially, give new means for understanding abduction. Peirce himself held a controversial view that new abductive ideas are products of an instinct and an inference at the same time. I maintain that it is beneficial to make a clear distinction between abductive inference and abductive instinct, on the basis of which both can be developed further. Besides these, I analyze abduction as a part of distributed cognition which emphasizes a long-term interaction with the material, social and cultural environment as a source for abductive ideas. This approach suggests a "trialogical" model in which inquirers are fundamentally connected both to other inquirers and to the objects of inquiry. As for the classical Meno paradox about discovery, I show that abduction provides more than one answer. As my main example of abductive methodology, I analyze the process of Ignaz Semmelweis' research on childbed fever. A central basis for abduction is the claim that discovery is not a sequence of events governed only by processes of chance. Abduction treats those processes which both constrain and instigate the search for new ideas; starting from the use of clues as a starting point for discovery, but continuing in considerations like elegance and 'loveliness'. The study then continues a Peircean-Hansonian research programme by developing abduction as a way of analyzing processes of discovery.

Relevance:

90.00%

Publisher:

Abstract:

Concept mapping is a technique for visualizing the relationships between different concepts, and collaborative concept mapping is used to model knowledge and transfer expert knowledge. Because they lack certain features, existing systems cannot support collaborative concept mapping effectively. In this paper, we analyze the collaborative concept mapping process according to the theory of distributed cognition, and argue for the functions effective systems ought to include. A collaborative concept mapping system should have the following features: visualization of the concept map, flexible collaboration styles, support for natural interaction, knowledge management, and history management. Furthermore, we describe each feature in detail. Finally, a prototype system has been built to fully explore the above technologies.
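As a sketch of the kind of shared external representation such a system manages, the following minimal concept-map structure pairs labelled links between concepts with an operation log that could back the history-management feature. The class and method names are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class ConceptMap:
    concepts: set = field(default_factory=set)
    links: dict = field(default_factory=dict)    # (source, target) -> linking phrase
    history: list = field(default_factory=list)  # operation log for history management

    def link(self, source, target, phrase):
        """Record the labelled proposition: source --phrase--> target."""
        self.concepts.update({source, target})
        self.links[(source, target)] = phrase
        self.history.append(("link", source, target, phrase))

cmap = ConceptMap()
cmap.link("distributed cognition", "external artifacts", "relies on")
cmap.link("external artifacts", "concept maps", "include")
print(len(cmap.concepts))  # 3 distinct concepts
```

In a collaborative setting, replaying each participant's `history` entries is one simple way to merge or review contributions.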

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this study is to explore the nature and practice of leadership in Irish post-primary schools. It considers school leadership within the context of contemporary Distributed Leadership theory. Associated concepts such as Distributed Cognition and Activity Theory are used to frame the study. From a distributed perspective, it is now widely accepted that other agents (e.g. teachers) have a leadership role, as part of collaborative, participative and supportive learning communities. Thus, this study considers how principals interact and build leadership capacity throughout the school. The study draws on two main sources of evidence. In analysing the implications of accountability agendas for school leadership, there is an exploration of and focus on the conceptualisations of school leadership that are foregrounded in 21 WSE reports. Elements of Critical Discourse Analysis are employed as an investigative tool to decipher how the construction of leadership practice is produced. The second prong of the study explores leadership in 3 case-study post-primary schools. Leadership is a complex phenomenon and not easy to describe. The findings clarify, however, that school leadership is a construct beyond the scope of the principal alone. While there is widespread support for a distributed model of leadership, the concept does not explicitly form part of the discourse in the case-study schools. It is also evident that any attempt to understand leadership practice must connect local interpretations with broader discourses. The understanding and practice of leadership are best situated in their sociohistorical context. The study reveals that, in the Irish post-primary school, the historical dimension is very influential, while the situational setting, involving a particular set of agents and agendas, strongly shapes thinking and practices. This study is novel in that it synthesises two key sources of evidence.
It is of great value in that it teases out the various historical and situational aspects to enhance understandings of school leadership in contemporary Ireland. It raises important questions for policy, practice and further research.

Relevance:

90.00%

Publisher:

Abstract:

The subject of risk management has always interested me, especially after I lived through two devastating hurricanes and an earthquake in El Salvador. Although much has been written on the subject, often in connection with climate change, we do not know how governmental and civil organizations experience risk management on a day-to-day basis. Through an ethnographic study of the Civil Protection Commission of the Municipality of Tecoluca in El Salvador, I observed the processes put in place to identify and analyze the structural factors that create situations of vulnerability. To do so, I adopted an approach based on the study of interactions, mobilizing the theories of distributed cognition and the actor-network. As I show, risk management, viewed as a participatory process, is characterized, on the one hand, by cooperation and coordination among people and, on the other, by the contribution of tools, technologies, documents, and methods to the detection of risks. This requires the mobilization of knowledge that must be produced, shared, and distributed among the members of a group through the various artifacts, tools, methods, and technologies they mobilize and that mobilize them. In this respect, distributed cognition theory makes it possible to explore the interactions occurring within a work group by focusing on what contributes to the act of knowing, conceived as an activity that is not only individual but above all collective and distributed. Actor-network theory, for its part, allows me to show how, in carrying out this task (risk management), the active contribution of non-human actors, both in themselves and in their relations with human actors, takes part in the activity of risk detection and prevention.

Relevance:

90.00%

Publisher:

Abstract:

This text presents some concepts and theoretical frameworks useful for the analysis of work in ergonomics. The aim is to present the basic concepts for the study of work in the activity-ergonomics tradition, and to analyze in general terms some of the models used for the analysis of a work activity. It first addresses the theoretical principles of ergonomics and principles drawn from physiology, biomechanics, psychology, and sociology; it also presents the methodological approaches used in this tradition for the analysis of work activities. The starting premise is that an ergonomic study of work can be carried out from a dual perspective: the analytical perspective and the comprehensive perspective.

Relevance:

90.00%

Publisher:

Abstract:

Cognitive skills programmes for offenders, such as Reasoning and Rehabilitation (R & R), have been around for over 20 years and were developed in part to address offenders' poor reasoning and decision-making skills. In this paper we critically examine the theoretical underpinnings of the R & R programme in light of current theoretical developments and research from cognitive neuroscience, philosophy, biology, and psychology. After considering recent theoretical and empirical research on rationality, emotions, distributed cognition, and embodiment, we conclude with some thoughts about how to fine-tune cognitive skills programmes such as R & R in light of this research.

Relevance:

90.00%

Publisher:

Abstract:

Multi-camera on-site video technology and post-lesson video stimulated interviews were used in a purposefully inclusive research design to generate a complex data set amenable to parallel analyses from several complementary theoretical perspectives. The symposium reports the results of parallel analyses employing positioning theory, systemic functional linguistics, distributed cognition and representational analysis of the same nine-lesson sequence in a single science classroom during the teaching of a single topic: States of Matter. Without contesting the coherence and value of a well-constructed mono-theoretic research study, the argument is made that all such studies present an inevitably partial account of a setting as complex as the science classroom: privileging some aspects and ignoring others. In this symposium, the first presentation examined the rationale for multi-theoretic research designs, highlighting the dangers of the circular amplification of those constructs predetermined by the choice of theory and outlining the intended benefits of multi-theoretic designs that offer less partial accounts of classroom practice. The second and third presentations reported the results of analyses of the same lesson sequence on the topic “states of matter” using the analytical perspectives of positioning theory and systemic functional linguistics. The final presentation reported the comparative analysis of student learning of density over the same three lessons from distributed cognition and representational perspectives. The research design promoted a form of reciprocal interrogation, where the analyses provided insights into classroom practice and the comparison of the analyses facilitated the reflexive interrogation of the selected theories, while also optimally anticipating the subsequent synthesis of the interpretive accounts generated by each analysis of the same setting for the purpose of informing instructional advocacy.

Relevance:

90.00%

Publisher:

Abstract:

Reasoning under uncertainty is a human capacity that is necessary, yet often hidden, in software systems. Argumentation theory and logic make non-monotonic information explicit in order to enable automatic forms of reasoning under uncertainty. In human organizations, Distributed Cognition and Activity Theory explain how artifacts are fundamental to all cognitive processes. In this thesis, then, we seek to understand the use of cognitive artifacts in a new argumentation framework for an agent-based artificial society.
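A standard way of making such non-monotonic information explicit is a Dung-style abstract argumentation framework, where conclusions can be retracted when new attacking arguments arrive. The following sketch illustrates the general technique, not the thesis's own framework: it computes the grounded extension, the most skeptical set of arguments that survive all attacks, by iterating the characteristic function from the empty set:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of the framework (arguments, attacks).

    attacks is a set of (attacker, target) pairs. An argument is acceptable
    with respect to the current extension if every one of its attackers is
    itself attacked by some argument already in the extension.
    """
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if acceptable == extension:  # fixed point reached
            return extension
        extension = acceptable

# C attacks B, B attacks A: C is unattacked, C defeats B, so B's attack on A fails.
print(grounded_extension({"A", "B", "C"}, {("B", "A"), ("C", "B")}))  # {'A', 'C'}
```

Adding a new attacker of C would retract C (and, with it, A) from the extension, which is exactly the non-monotonic behavior the abstract refers to.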

Relevance:

90.00%

Publisher:

Abstract:

Negotiating boundaries: from state of affairs to matter of transit. The research deals with the everyday management of spatial uncertainty, starting with the wider historical question of terrains vagues (a French term for wastelands, dismantled areas and peripheral city voids, or interstitial spaces) and focusing later on a particular case study. The choice was intended to privilege a small place (a mouth of a lagoon crossing a beach) with ordinary features, instead of the aestheticized terrains vagues often witnessed through artistic media or architectural reflections. This place offered the chance to explore a particular dimension of indeterminacy, mostly related to a certain phenomenal instability of its limits, the hybrid character of its cultural status (neither natural nor artificial) and its crossover position as a transitional space between different tendencies and activities. The first, theoretical part of the research develops a semiotics of vagueness, examining the structuralist idea of relation in order to approach an interpretive notion of continuity and indeterminacy. This exploration highlights the key feature of actantial network distribution, which provides a bridge to the second, methodological part, dedicated to a "tuning" of the tools for the analysis. This section establishes a dialogue with current social sciences (such as Actor-Network Theory, Situated Action and Distributed Cognition) in order to define observational methods for the documentation of social practices that can be comprised within a semiotic ethnography framework. The last part, finally, focuses on the mediation and negotiation by which human actors interact with the varying conditions of the chosen environment, looking at people's movements through space, their embodied dealings with the boundaries, and the use of spatial artefacts as the framing infrastructure of the site.

Relevance:

90.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 fields to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist. Defining Data Quality for Clinical Research: A Concept Analysis Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This current lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization for data quality in clinical research.
While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions. Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature, and differed from the literature on 5 factors in the top 25%. The Delphi results refuted 7 factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors. Distributed Cognition Artifacts on Clinical Research Data Collection Forms Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction in 61% of the sampled data elements was high, exceedingly so in 9%.
Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping, or calculation. The representational analysis employed here can be used to identify data elements with high cognitive demands.
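The pooled error rates quoted above follow the usual normalization to errors per 10,000 fields: total errors across studies divided by total fields inspected, scaled by 10,000. A minimal sketch, with per-study counts invented purely for illustration (not taken from the review):

```python
# Hypothetical per-study counts; real pooling would draw these from the 93 papers.
studies = [
    {"method": "medical record abstraction", "errors": 251, "fields": 10_000},
    {"method": "single entry with checks",   "errors": 14,  "fields": 20_000},
    {"method": "double entry",               "errors": 3,   "fields": 15_000},
]

def pooled_rate(studies):
    """Pool studies by summing counts, then normalize to errors per 10,000 fields."""
    total_errors = sum(s["errors"] for s in studies)
    total_fields = sum(s["fields"] for s in studies)
    return total_errors / total_fields * 10_000

print(round(pooled_rate(studies), 1))  # 59.6 errors per 10,000 fields
```

Summing counts before dividing weights each study by its number of fields, which is why pooled rates can differ from a simple average of per-study rates.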