882 results for Social Information Processing Theory
Abstract:
The representation of business process models has been a continuing research topic for many years now. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. In addition, very few of these representations have undergone a thorough analysis or design process with reference to psychological theories on data and process visualization. We believe this dearth of visualization research has led to problems with BPM uptake in some organizations, as the representations can be difficult for stakeholders to understand, and it therefore remains an open research question for the BPM community. In addition, business analysts and process modeling experts themselves need visual representations that are able to assist with key BPM life cycle tasks in the process of generating optimal solutions. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential for BPM: to engage, to provide insight, and to promote collaboration amongst analysts and stakeholders alike. We believe this is a timely topic, with research relevant to this workshop emerging in a number of places around the globe. This is the second TAProViz workshop being run at BPM. The intention this year is to build on the results of last year's successful workshop by further developing this important topic and identifying the key research topics of interest to the BPM visualization community.
Abstract:
I develop a model of individuals’ intentions to discontinue information system use. Understanding these intentions is important because they give insights into users’ willingness to carry out system tasks, and provide a basis for maintenance decisions as well as possible replacement decisions. I offer a first conceptualization of factors determining users’ discontinuance intentions on the basis of existing literature on technology use, status quo bias and dual-factor concepts. The model is grounded in rational choice theory to distinguish determinants of a conscious decision between continuing and discontinuing IS use. I provide details on the empirical test of the model through a field study of IS users in a retail organization. The work will have implications for theory on information systems continuance and dual-factor logic in information system use. The empirical findings will provide suggestions for managers dealing with the cessation of information systems and with changes to work routines in organizations.
Abstract:
This thesis examined the extent to which individual differences, as conceptualised by the revised Reinforcement Sensitivity Theory, influenced young drivers' information processing and subsequent acceptance of anti-speeding messages. The findings of this multi-method approach highlighted the utility of combining objective measures (a cognitive response time task and electroencephalography) with self-report measures to assess message processing and message acceptance, respectively. This body of research indicated that responses to anti-speeding messages may differ depending on an individual's personality disposition. Overall, the research provided further insight into the development of message strategies to target high-risk drivers.
Abstract:
Social marketing by Western governments that uses fear tactics and threatening information to promote anti-drinking messages has polarized ‘binge drinking’ and ‘moderate drinking’ along a continuum that implies benefits and harms for both individuals and society. With the goal of extending insights into social marketing approaches that promote safer drinking cultures in Australia, we discuss findings from a study that examines alcohol consumers' moderate-drinking intentions. Applying the theory of planned behaviour and emotions theory, we discuss survey results from a sample of alcohol consumers, which demonstrate that positively framed value propositions that evoke happiness and love are more influential in consumers' processing of an alcohol moderation message. The key limitations of this study are the cross-sectional nature of the data and the focal dependent variable being behavioural intentions rather than behaviours. Research insight into the stronger influence of positive emotions on processing an alcohol moderation message establishes an important avenue for future social marketing communications that move beyond negative, avoidance appeals to promote behaviour change in drinkers. These research findings will benefit professionals involved in developing social change campaigns that promote and reinforce consumers' positive intentions with messages about the benefits of controlled, moderate drinking.
Abstract:
This study examines the diffusion of a web-based teaching innovation into primary and upper secondary school geography in 1998–2004. The work applied a model of the spread of educational innovations and the theory of the diffusion of innovations. The data were collected over seven years with questionnaires from pioneer teachers of web-based geography teaching, who returned 326 forms. The main research questions were: (1) What preconditions do pioneer teachers have for using web-based teaching in school geography? (2) Which applications do pioneer teachers use in web-based geography teaching, and in what ways? (3) What experiences have pioneer teachers gained from web-based geography teaching? The study found that an insufficient number of computers, and their absence from the subject classroom, hampered web-based geography teaching. The work developed a cube model of teachers' digital media skills, comprising technical skills, information processing skills and communication skills. Three user types of web-based teaching were distinguished among the teachers: information-oriented light users, communication-oriented basic users and collaboration-oriented power users. Web-based teaching was associated with intense positive and negative experiences. It brought joy and motivation to studying. It was regarded as an enriching addition that teachers wanted to integrate into instruction in a controlled manner. The pioneer teachers took up information available on networks and applied productivity software. They made a start in using virtual worlds that imitate reality: the globe reproduced in satellite images, digital maps and simulations. The teachers experimented with the social spaces of the web through real-time communication, discussion groups and groupware. Imagination-based virtual worlds received little attention, as the teachers hardly played entertainment games. They adopted occasional pieces of virtual worlds, depending on the hardware and software available. During the study, the conquest of virtual worlds progressed from exploiting digital information to communication applications and nascent collaboration. In this way the teachers extended their virtual territory to become dynamic actors in information networks and gained new means of satisfying the universal human need for connection with others. At the same time, the teachers were empowered from consumers of information into its producers, from objects into subjects. Web-based teaching opens up considerable opportunities for school geography. With mobile devices, information can be collected and stored in the field, and software can convert it from one form to another. Authentic and up-to-date materials on the Internet bring concreteness and interest to studying, while models, simulations and geographic information illustrate phenomena. Communication and collaboration tools, as well as social information spaces, strengthen cooperation. Keywords: web-based teaching, internet, virtual worlds, geography, innovations
Abstract:
We report on ongoing research to develop a design theory for classes of information systems that allow for work practices that exhibit a minimal harmful impact on the natural environment. We call such information systems Green IS. In this paper we describe the building blocks of our Green IS design theory, which develops prescriptions for information systems that allow for: (1) belief formation, action formation and outcome measurement relating to (2) environmentally sustainable work practices and environmentally sustainable decisions on (3) a macro or micro level. For each element, we specify structural features, symbolic expressions, user abilities and goals required for the affordances to emerge. We also provide a set of testable propositions derived from our design theory and declare two principles of implementation.
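Purely as an illustration of how the design theory's enumerated elements might be recorded side by side, a minimal sketch follows; the element names come from the abstract, but the field structure, the dataclass and the example instance are our own assumptions, not part of the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GreenISAffordance:
    """One element of the Green IS design theory, as enumerated in the abstract."""
    affordance: str          # belief formation, action formation, or outcome measurement
    level: str               # "macro" or "micro"
    structural_features: List[str] = field(default_factory=list)
    symbolic_expressions: List[str] = field(default_factory=list)
    user_abilities: List[str] = field(default_factory=list)
    user_goals: List[str] = field(default_factory=list)

# Hypothetical instance, for illustration only.
example = GreenISAffordance(
    affordance="outcome measurement",
    level="micro",
    structural_features=["per-task energy logging"],
    symbolic_expressions=["dashboard of emissions per work practice"],
    user_abilities=["can interpret emission indicators"],
    user_goals=["reduce the environmental impact of own work practices"],
)
print(example.affordance, "at the", example.level, "level")
```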
Abstract:
Background: Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight into the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery, in order to identify the gaps in medication-related information exchange that lead to medication errors in RACFs. Methods: The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic fieldwork over a period of five months (May–September 2011). Triangulated analysis of the data focused primarily on examining the transformation and exchange of information between different media across the process. Results: The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on communication channels with restricted bandwidth, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Conclusions: Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through the identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents’ safety.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and are often averse to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
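The second special case described above can be written compactly; the notation below is our own shorthand for the abstract's description, not the paper's formalism.

```latex
% Posterior after observing event E, given past action a:
%   \mu_B   : the Bayesian posterior
%   \mu^*_a : the belief that maximizes the conditional value of action a
%   \delta  : the agent's sensitivity to dissonance (assumed notation)
\[
  \mu_a(\cdot \mid E) \;=\; (1 - \delta)\,\mu_B(\cdot \mid E) \;+\; \delta\,\mu^*_a(\cdot \mid E),
  \qquad \delta \in [0, 1].
\]
```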
The second chapter characterizes a decision maker with sticky beliefs: that is, a decision maker who does not update enough in response to information, where 'enough' means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
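In the same spirit, the sticky-belief representation admits a one-line statement; again the symbols are ours, not the chapter's.

```latex
% \pi     : prior belief
% \pi_B   : Bayesian posterior after observing event E
% \lambda : belief stickiness (conservatism) parameter
\[
  \pi'(\cdot \mid E) \;=\; \lambda\,\pi(\cdot) \;+\; (1 - \lambda)\,\pi_B(\cdot \mid E),
  \qquad \lambda \in [0, 1].
\]
```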
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
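As a rough sketch of the kind of rule described above — retain the priors deemed plausible under a threshold, then apply Bayes' rule to each — the following assumes plausibility is judged by the probability each prior assigns to the observation, relative to the best-fitting prior; that particular test, and all names and numbers, are illustrative assumptions rather than the chapter's axiomatization.

```python
import numpy as np

def threshold_update(priors, likelihood, alpha):
    """Retain priors judged plausible under threshold alpha, then apply Bayes' rule.

    priors     : list of probability vectors over states
    likelihood : P(observation | state), one entry per state
    alpha      : threshold in [0, 1]; alpha -> 0 keeps every prior
                 (generalized Bayesian updating), alpha -> 1 keeps only the
                 best-fitting priors (maximum likelihood updating).
    """
    # Marginal probability each prior assigns to the observation.
    marginals = np.array([p @ likelihood for p in priors])
    best = marginals.max()
    return [p * likelihood / m                   # Bayes' rule, prior by prior
            for p, m in zip(priors, marginals)
            if m > 0 and m >= alpha * best]      # plausibility test

# Example: two states, three candidate priors, an observation favouring state 0.
priors = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.1, 0.9])]
likelihood = np.array([0.8, 0.2])
print(threshold_update(priors, likelihood, alpha=0.7))
```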
Abstract:
Synapses exhibit an extraordinary degree of short-term malleability, with release probabilities and effective synaptic strengths changing markedly over multiple timescales. From the perspective of a fixed computational operation in a network, this seems like a most unacceptable degree of added variability. We suggest an alternative theory according to which short-term synaptic plasticity plays a normatively justifiable role. This theory starts from the commonplace observation that the spiking of a neuron is an incomplete, digital report of the analog quantity that contains all the critical information, namely its membrane potential. We suggest that a synapse solves the inverse problem of estimating the pre-synaptic membrane potential from the spikes it receives, acting as a recursive filter. We show that the dynamics of short-term synaptic depression closely resemble those required for optimal filtering, and that they indeed support high quality estimation. Under this account, the local postsynaptic potential and the level of synaptic resources track the (scaled) mean and variance of the estimated presynaptic membrane potential. We make experimentally testable predictions for how the statistics of subthreshold membrane potential fluctuations and the form of the spiking non-linearity should be related to the properties of short-term plasticity in any particular cell type.
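To make the filtering idea concrete, here is a deliberately crude toy: a presynaptic potential follows Ornstein-Uhlenbeck dynamics, spikes are emitted through an exponential rate nonlinearity, and a recursive Gaussian estimator tracks a mean and a variance that are nudged by each spike. Every equation and parameter below is an illustrative caricature of the idea, not the paper's derivation or its optimal filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative only).
dt, T = 1e-3, 2.0                      # time step and duration (s)
tau, u_rest, sig = 0.02, -60.0, 2.0    # OU time constant (s), resting level and
                                       # stationary std of the potential (mV)
g0, du = 10.0, 1.0                     # exponential spiking nonlinearity (Hz, mV)

steps = int(T / dt)
u = u_rest                             # true presynaptic membrane potential
mu, var = u_rest, sig**2               # estimator state: mean and variance
err = np.zeros(steps)

for t in range(steps):
    # True potential: Ornstein-Uhlenbeck dynamics.
    u += (u_rest - u) / tau * dt + sig * np.sqrt(2 * dt / tau) * rng.standard_normal()
    # Presynaptic spiking: Poisson, rate grows exponentially with the potential.
    spike = rng.random() < g0 * np.exp((u - u_rest) / du) * dt
    # Crude recursive estimator (a caricature of the filtering idea only):
    mu += (u_rest - mu) / tau * dt             # between spikes, drift back to rest
    var += 2 * (sig**2 - var) / tau * dt       # and let uncertainty grow back
    if spike:
        mu += var / du                         # a spike pulls the estimate up,
        var *= 0.5                             # more strongly when uncertainty is high
    err[t] = u - mu

print("RMS estimation error: %.2f mV (prior std %.2f mV)" % (np.sqrt(np.mean(err**2)), sig))
```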
Abstract:
I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are:
- the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems;
- these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b);
- HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry.
The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.
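A minimal sketch of the kind of module the theory rests on: a (Hyper)BF network with Gaussian units and a weighted norm, fitted to examples of a multidimensional mapping. The Gaussian-unit and weighted-norm form follows Poggio and Girosi (1990); the toy data, the fixed centres and the plain least-squares fit are our simplifications (in a full HyperBF network the centres and the metric are also learned).

```python
import numpy as np

def hyperbf_design(X, centers, W):
    """Gaussian basis activations with weighted norm ||x - t||_W^2 = (x-t)^T W (x-t)."""
    diff = X[:, None, :] - centers[None, :, :]          # (n_samples, n_centers, dim)
    d2 = np.einsum('ncd,de,nce->nc', diff, W, diff)     # squared weighted distances
    return np.exp(-d2)

# Toy example: learn y = sin(x0) + 0.5*x1 from examples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

centers = X[rng.choice(len(X), 20, replace=False)]      # centres picked from the data
W = np.eye(2) / 0.5**2                                  # metric (here isotropic, width 0.5)

G = hyperbf_design(X, centers, W)
c, *_ = np.linalg.lstsq(G, y, rcond=None)               # output weights by least squares

X_test = rng.uniform(-3, 3, size=(50, 2))
y_test = np.sin(X_test[:, 0]) + 0.5 * X_test[:, 1]
pred = hyperbf_design(X_test, centers, W) @ c
print("test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```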
Abstract:
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
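The Item and Order (competitive queuing) principle mentioned above is simple enough to caricature in a few lines: a stored list is held as a primacy gradient of activities, and read-out repeatedly selects the most active item and then suppresses it, reproducing the stored order. The numbers and names below are illustrative only and do not reproduce LIST PARSE's laminar dynamics.

```python
import numpy as np

def cq_readout(activities, items):
    """Competitive-queuing read-out: pick the most active item, then suppress it."""
    act = np.array(activities, dtype=float)
    order = []
    while np.any(act > 0):
        winner = int(np.argmax(act))   # winner-take-all competition
        order.append(items[winner])
        act[winner] = 0.0              # self-inhibition after performance
    return order

# A stored list is encoded as a primacy gradient: earlier items are more active.
items = ["call", "write", "send", "file"]
primacy_gradient = [1.0, 0.8, 0.6, 0.4]
print(cq_readout(primacy_gradient, items))   # -> ['call', 'write', 'send', 'file']
```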
Abstract:
The Internet and World Wide Web have had, and continue to have, an incredible impact on our civilization. These technologies have radically influenced the way that society is organised and the manner in which people around the world communicate and interact. The structure and function of individual, social, organisational, economic and political life begin to resemble the digital network architectures upon which they are increasingly reliant. It is increasingly difficult to imagine how our ‘offline’ world would look or function without the ‘online’ world; it is becoming less meaningful to distinguish between the ‘actual’ and the ‘virtual’. Thus, the major architectural project of the twenty-first century is to “imagine, build, and enhance an interactive and ever changing cyberspace” (Lévy, 1997, p. 10). Virtual worlds are at the forefront of this evolving digital landscape. Virtual worlds have “critical implications for business, education, social sciences, and our society at large” (Messinger et al., 2009, p. 204). This study focuses on the possibilities of virtual worlds in terms of communication, collaboration, innovation and creativity. The concept of knowledge creation is at the core of this research. The study shows that scholars increasingly recognise that knowledge creation, as a socially enacted process, goes to the very heart of innovation. However, efforts to build upon these insights have struggled to escape the influence of the information processing paradigm of old and have failed to move beyond the persistent but problematic conceptualisation of knowledge creation in terms of tacit and explicit knowledge. Based on these insights, the study leverages extant research to develop the conceptual apparatus necessary to carry out an investigation of innovation and knowledge creation in virtual worlds. The study derives and articulates a set of definitions (of virtual worlds, innovation, knowledge and knowledge creation) to guide research. The study also leverages a number of extant theories in order to develop a preliminary framework to model knowledge creation in virtual worlds. Using a combination of participant observation and six case studies of innovative educational projects in Second Life, the study yields a range of insights into the process of knowledge creation in virtual worlds and into the factors that affect it. The study’s contributions to theory are expressed as a series of propositions and findings and are represented as a revised and empirically grounded theoretical framework of knowledge creation in virtual worlds. These findings highlight the importance of prior related knowledge and intrinsic motivation in shaping and stimulating knowledge creation in virtual worlds. At the same time, they highlight the importance of meta-knowledge (knowledge about knowledge) in guiding the knowledge creation process, whilst revealing the diversity of behavioural approaches actually used to create knowledge in virtual worlds. This theoretical framework is itself one of the chief contributions of the study, and the analysis explores how it can be used to guide further research in virtual worlds and on knowledge creation. The study’s contributions to practice are presented as an actionable guide to stimulating knowledge creation in virtual worlds. This guide utilises a theoretically based classification of four knowledge-creator archetypes (the sage, the lore master, the artisan, and the apprentice) and derives an actionable set of behavioural prescriptions for each archetype. The study concludes with a discussion of its implications for future research.
Abstract:
The Data-Information-Knowledge (DIK) chain, also known as the “Information Hierarchy” or the “Knowledge Pyramid”, is one of the most important models in Information Management and Knowledge Management. The structure of the chain has generally been defined as an architecture in which each element rests on the element immediately below it; however, there is no consensus on the definition of the elements, nor on the processes that transform an element at one level into an element at the next level. This article reviews the Data-Information-Knowledge chain, examining the most relevant definitions of its elements and of their articulation in the literature, in order to synthesize the most common senses. The elements of the DIK chain are analysed from the standpoint of Peirce's semiotics, an approach that allows us to clarify their meanings and to identify the differences, relations and roles they play in the chain from the point of view of pragmatism. Finally, a definition of the DIK chain is proposed, supported by Peirce's triadic categories of signs and unlimited semiosis, Stamper's levels of sign systems and Zeleny's metaphors.
Abstract:
Critical decisions are made by decision-makers throughout the life-cycle of large-scale projects. These decisions are crucial as they have a direct impact upon the outcome and the success of projects. To aid decision-makers in the decision making process, we present an evidential reasoning framework. This approach utilizes the Dezert-Smarandache theory to fuse heterogeneous evidence sources that suffer from levels of uncertainty, imprecision and conflict to provide beliefs for decision options. To analyze the impact of source reliability and priority upon the decision making process, a reliability discounting technique and a priority discounting technique are applied. A maximal consistent subset is constructed to aid in defining where discounting should be applied. Application of the evidential reasoning framework is illustrated using a case study based in the aerospace domain.
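The abstract mentions discounting evidence sources by their reliability before fusion. As a rough sketch of what reliability discounting means in classical belief-function terms — mass is transferred from each focal element to total ignorance in proportion to the source's unreliability — consider the following; the paper itself works in the Dezert-Smarandache framework, and the masses and reliability value here are made up for illustration.

```python
def discount(bba, reliability, frame):
    """Classical (Shafer-style) reliability discounting of a basic belief assignment.

    bba         : dict mapping focal elements (frozensets) to masses summing to 1
    reliability : alpha in [0, 1]; alpha = 1 keeps the source as-is,
                  alpha = 0 discards it entirely (all mass moves to ignorance)
    frame       : frozenset of all hypotheses (total ignorance)
    """
    discounted = {A: reliability * m for A, m in bba.items() if A != frame}
    discounted[frame] = reliability * bba.get(frame, 0.0) + (1.0 - reliability)
    return discounted

# Illustrative source over two decision options {a, b}, judged 80% reliable.
frame = frozenset({"a", "b"})
source = {frozenset({"a"}): 0.6, frozenset({"b"}): 0.3, frame: 0.1}
print(discount(source, reliability=0.8, frame=frame))
# masses become: {'a'}: 0.48, {'b'}: 0.24, {'a','b'}: 0.28
```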