50 results for "Fundamentals of computing theory"
Abstract:
Intracardiac organization indices such as atrial fibrillation (AF) cycle length (AFCL) have been used to track the efficiency of stepwise catheter ablation (step-CA) of long-standing persistent AF (pers-AF), however, with limited success. The timing between nearby bipolar intracardiac electrograms (EGMs) reflects the spatial dynamics of wavelets during AF. The extent of synchronization between EGMs is an indirect measure of AF spatial organization. The synchronization between nearby EGMs during step-CA of pers-AF was evaluated using new indices based on the cross-correlation. The first one (spar(W)) quantifies the sparseness of the cross-correlation of local activation times. The second one (OI(W)) reflects the local concentration around the largest peak of the cross-correlation. By computing their relative evolution during step-CA until AF termination (AF-term), we found that OI(W) appeared superior to AFCL and spar(W) for tracking the effect of step-CA "en route" to AF-term.
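The abstract does not give the exact formulas for spar(W) and OI(W), so the following Python sketch only illustrates the general idea under stated assumptions: sparseness is measured with a Hoyer-style L1/L2 ratio, and the organization index as the fraction of cross-correlation magnitude falling in a fixed window around the dominant peak. The function names, the window width, and the synthetic signals are hypothetical.

```python
import numpy as np

def cross_correlation(x, y):
    """Full normalized cross-correlation of two activation signals."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    return np.correlate(x, y, mode="full") / len(x)

def spar_w(w):
    """Sparseness of the cross-correlation (Hoyer-style L1/L2 ratio):
    close to 1 when the energy sits in one lag, close to 0 when spread."""
    a = np.abs(w)
    n = len(a)
    l1, l2 = a.sum(), np.sqrt((a ** 2).sum())
    return (np.sqrt(n) - l1 / (l2 + 1e-12)) / (np.sqrt(n) - 1)

def oi_w(w, half_width=5):
    """Organization index: fraction of |W| concentrated in a window
    around its largest peak (window width is an illustrative choice)."""
    a = np.abs(w)
    k = a.argmax()
    lo, hi = max(0, k - half_width), min(len(a), k + half_width + 1)
    return a[lo:hi].sum() / a.sum()

# Hypothetical example: two noisy, time-shifted activation signals.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 7) + 0.3 * rng.standard_normal(500)
w = cross_correlation(x, y)
print(round(spar_w(w), 3), round(oi_w(w), 3))
```

Well-synchronized EGM pairs concentrate the cross-correlation around a single lag, which raises both indices; disorganized activity spreads it across lags.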
Abstract:
Until recently, much of the discussion regarding the type of organization theory needed in management studies focused on normative vs. descriptive roles of management science. Some authors have noticed, however, that even a descriptive theory can have a normative impact. Among other uses, management theories are employed by practitioners to make sense of their identity and roles in given contexts, and thus guide their attitude, decision process, and behavior. The sensemaking potential of a theory might in this view represent an important element for predicting its adoption by practitioners. Accordingly, theories are needed that better grasp the increased complexity of today's business environment in order to be more relevant for practitioners. This article proposes a multi-faceted perspective of organizations. This implies leaving behind a simplistic view of organizations and building a 'cubist' conception. Picasso's cubist paintings are characterized by the use of multiple perspectives within a single drawing. Similarly, I argue here that managers must learn not only to take on multiple responsibilities in their work, but to develop an integrated conception of their managerial identity and of their organizations in which the multiple social and economic dimensions are enmeshed. Social entrepreneurship is discussed as an illustration of a typical multi-faceted business.
Abstract:
The dissertation investigates some relevant metaphysical issues arising in the context of spacetime theories. In particular, the inquiry focuses on general relativity and canonical quantum gravity. A formal definition of spacetime theory is proposed and, against this framework, an analysis of the notions of general covariance, symmetry and background independence is performed. It is argued that many conceptual issues in general relativity and canonical quantum gravity derive from putting excessive emphasis on general covariance as an ontological principle. An original metaphysical position grounded in scientific essentialism and causal realism (weak essentialism) is developed and defended. It is argued that, in the context of general relativity, weak essentialism supports spacetime substantivalism. It is also shown that weak essentialism escapes arguments from metaphysical underdetermination by positing a particular kind of causation, dubbed geometric. The proposed interpretive framework is then applied to Bohmian mechanics, pointing out that weak essentialism nicely fits into this theory. In the end, a possible Bohmian implementation of loop quantum gravity is considered, and such a Bohmian approach is interpreted in a geometric causal fashion. Under this interpretation, Bohmian loop quantum gravity straightforwardly commits us to an ontology of elementary extensions of space whose evolution is described by a non-local law. The causal mechanism underlying this evolution clarifies many conceptual issues related to the emergence of classical spacetime from the quantum regime. Although there is as yet no fully worked out physical theory of quantum gravity, it is argued that the proposed approach sets up a standard that proposals for a serious ontology in this field should meet.
Abstract:
With contributions from leading authors in the most important areas of current research, this book provides insight into the streams that are driving leadership theory and practice today. The Nature of Leadership, Second Edition provides students with an updated and complete yet concise handbook that solidifies and integrates the vast and disparate leadership literature.

Key Features of the Second Edition:
· Provides contributions from twenty-three subject-matter experts, ranging from the eminent to the up-and-coming, giving students an unsurpassed breadth of knowledge and perspective
· Organizes the material into three key thematic areas: Leadership Science, Nature, and Nurture; the Major Schools of Leadership; and Leadership and Special Domains
· Includes nine brand-new chapters that provide students with the state of the art of leadership theory and practice, such as evolutionary and biological perspectives, individual differences, and shared leadership
· Updates the content of seven retained chapters with reference to recent research and developments in the field
· Adds pedagogical features, including discussion questions, a list of practice-focused supplemental readings, and references to case studies
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single and fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
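As a rough illustration of the probabilistic strategy (not the paper's exact model), the Python sketch below computes a posterior distribution over N at a single locus from qualitative data only. It assumes unrelated contributors, independent allele draws from known population frequencies, and no dropout or drop-in; the allele frequencies and the prior are made up.

```python
from itertools import combinations

def likelihood(allele_freqs, n_contributors):
    """P(2N independent allele draws show exactly the observed allele set),
    computed by inclusion-exclusion over subsets of the observed alleles."""
    alleles = list(allele_freqs)
    m, draws = len(alleles), 2 * n_contributors
    total = 0.0
    for k in range(1, m + 1):
        for subset in combinations(alleles, k):
            p = sum(allele_freqs[a] for a in subset)
            total += (-1) ** (m - k) * p ** draws
    return max(total, 0.0)  # guard against tiny negative rounding error

def posterior_over_n(allele_freqs, prior):
    """Posterior P(N | observed alleles) given a dict prior {N: P(N)}."""
    unnorm = {n: prior[n] * likelihood(allele_freqs, n) for n in prior}
    z = sum(unnorm.values())
    return {n: v / z for n, v in unnorm.items()}

# Hypothetical single-locus example: four alleles seen, uniform prior on N = 2..5.
freqs = {"16": 0.10, "17": 0.25, "18": 0.20, "19": 0.05}
print(posterior_over_n(freqs, {n: 0.25 for n in range(2, 6)}))
```

Numbers of contributors that cannot account for the observed alleles (here, any N with 2N below the allele count) receive zero likelihood automatically, which is exactly where a fixed-N policy would hit the inferential impasse described above.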
Abstract:
The use of nursing theories and their associated benefits remain an area of repeated discussion in nursing science. One of the main objections is that nursing theories are abstract and therefore cannot be easily applied to practice. However, only a few indicators exist to help identify a theory's level of abstraction or its scope. In this article, questions based on the definitions and assumptions of theories have been developed. These questions have then been applied to three selected theories to investigate their level of abstraction and scope. A total of 18 questions divided into three domains were developed. The three domains were: "the purpose of the theory", "the aim of the theory" and "the description of the theory". The Theory of Adaptation by Sister Callista Roy, the Theory of Uncertainty by Merle M. Mishel and the Theory of the Omnipresence of Cancer by Maya Shaha were selected to be analysed along the three domains with the 18 questions.
Abstract:
This paper presents and discusses further aspects of the subjectivist interpretation of probability (also known as the 'personalist' view of probabilities) as initiated in earlier forensic and legal literature. It shows that operational devices to elicit subjective probabilities - in particular the so-called scoring rules - provide additional arguments in support of the standpoint according to which categorical claims of forensic individualisation do not follow from a formal analysis under that view of probability theory.
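A one-file illustration of why proper scoring rules discourage categorical claims: under the quadratic (Brier) rule sketched below, an expert whose genuine belief is p minimizes the expected penalty only by reporting p itself, so reporting certainty (probability 1) is penalized in expectation whenever the belief falls short of 1. The numbers are illustrative, not from the paper.

```python
def brier_penalty(reported_p, outcome):
    """Quadratic (Brier) penalty for reporting probability p on a binary event."""
    return (reported_p - outcome) ** 2

def expected_penalty(reported_p, true_belief):
    """Expected penalty when the event truly occurs with probability true_belief."""
    return (true_belief * brier_penalty(reported_p, 1)
            + (1 - true_belief) * brier_penalty(reported_p, 0))

# With a genuine belief of 0.7, any report other than 0.7 raises the expected penalty;
# in particular, the categorical report 1.0 fares worst of these options.
for p in (0.5, 0.7, 0.9, 1.0):
    print(p, round(expected_penalty(p, 0.7), 3))
```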
Abstract:
This paper examines the use of the medical metaphor in the early theories of crises. It first considers the borrowing of medical terminology and generic references to disease which, notwithstanding their relatively trivial character, illustrate how crises were originally conceived as disturbances (often of a political nature) to a naturally healthy system. Then it shows how a more specific metaphor, the fever of speculation, shifted the emphasis by treating prosperity as the diseased phase, to which crises are a remedy. The metaphor of the epidemic spreading of the disease introduced the theme of the cumulative character of both upswing and downswing, while the similitude with intermittent fevers accounted for the recurring nature of crises. Finally, the paper examines how medical reflections on the causality of diseases contributed to the epistemology of crisis theory, and reflects on the metaphysical shift accompanying the transition from theories of crises to theories of cycles.
Abstract:
Forensic science casework involves making a series of choices. The difficulty in making these choices lies in the inevitable presence of uncertainty, the unique context of circumstances surrounding each decision and, in some cases, the complexity due to numerous, interrelated random variables. Given that these decisions can lead to serious consequences in the administration of justice, forensic decision making should be supported by a robust framework that makes inferences under uncertainty and decisions based on these inferences. The objective of this thesis is to respond to this need by presenting a framework for making rational choices in decision problems encountered by scientists in forensic science laboratories. Bayesian inference and decision theory meet the requirements for such a framework. To attain its objective, this thesis consists of three propositions, advocating the use of (1) decision theory, (2) Bayesian networks, and (3) influence diagrams for handling forensic inference and decision problems. The results present a uniform and coherent framework for making inferences and decisions in forensic science using the above theoretical concepts. They describe how to organize each type of problem by breaking it down into its different elements, and how to find the most rational course of action by distinguishing between one-stage and two-stage decision problems and applying the principle of expected utility maximization. To illustrate the framework's application to the problems encountered by scientists in forensic science laboratories, theoretical case studies apply decision theory, Bayesian networks and influence diagrams to a selection of different types of inference and decision problems dealing with different categories of trace evidence. Two studies of the two-trace problem illustrate how the construction of Bayesian networks can handle complex inference problems, and thus overcome the hurdle of complexity that can be present in decision problems. Three studies - one on what to conclude when a database search provides exactly one hit, one on what genotype to search for in a database based on the observations made on DNA typing results, and one on whether to submit a fingermark to the process of comparing it with prints of its potential sources - explain the application of decision theory and influence diagrams to each of these decisions. The results of the theoretical case studies support the thesis's three propositions. Hence, this thesis presents a uniform framework for organizing and finding the most rational course of action in decision problems encountered by scientists in forensic science laboratories. The proposed framework is an interactive and exploratory tool for better understanding a decision problem so that this understanding may lead to better informed choices.
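As a minimal sketch of the expected-utility principle the thesis applies: pick the action a maximizing the sum over states s of P(s) x U(a, s). The states, actions, and utility numbers below are made up, loosely modeled on the fingermark-submission example; they are not values from the thesis.

```python
def expected_utility(utilities, probs):
    """Expected utility of one action: sum over states of P(state) * U(action, state)."""
    return sum(probs[s] * u for s, u in utilities.items())

def best_action(utility_table, probs):
    """Return the action maximizing expected utility."""
    return max(utility_table, key=lambda a: expected_utility(utility_table[a], probs))

# Hypothetical one-stage decision: submit a fingermark for comparison or not,
# with states "same source" / "different source" and illustrative utilities.
probs = {"same": 0.8, "different": 0.2}
utilities = {
    "submit":     {"same": 10.0, "different": -50.0},
    "not_submit": {"same": -5.0, "different": 0.0},
}
print(best_action(utilities, probs))  # -> "submit" (EU -2.0 vs -4.0)
```

Two-stage problems work the same way, except that the probabilities entering the second decision are posteriors updated on the outcome of the first stage.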
Abstract:
It is commonly found that young people tend to adopt the political party choice of their parents. However, far less is known about the applicability of this theory to radical right support. Using Swiss Household Panel data (1999-2007), this study empirically identifies the relationship between parents' preference for the Swiss radical right party SVP and their attitudes toward immigrants and the EU, and their offspring's preference for the SVP. Disaggregating fathers' and mothers' influence reveals that mothers' SVP support in particular plays a role in SVP support among young people, even after controlling for educational similarities. We also demonstrate that girls are more likely to be influenced by their mothers than are boys. Furthermore, parents' negative attitudes toward the EU exert a positive influence on their children's radical right voting, independent of their voting pattern.
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is known that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and the way ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures.

The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing, using diverse statistical measures, how it differs from biological or technological networks. Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. I also investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level of cooperation and its stability depend not only on the game played, but also on the evolutionary dynamic rules used and on how individual payoffs are calculated.
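The Nowak and May (1992) mechanism cited above is easy to reproduce. Below is a minimal Python sketch under illustrative assumptions: a torus grid with 8 neighbors, the weak Prisoner's Dilemma payoffs R=1, T=b, S=P=0, and a deterministic imitate-the-best-neighbor update. The parameter values are arbitrary, not taken from the thesis.

```python
import numpy as np

def spatial_pd(size=50, b=1.6, steps=50, coop_frac=0.5, seed=0):
    """Nowak-May style spatial Prisoner's Dilemma: players on a torus grid
    play all 8 neighbors (payoffs R=1, T=b, S=P=0) and then copy the
    highest-scoring strategy in their neighborhood each round."""
    rng = np.random.default_rng(seed)
    coop = rng.random((size, size)) < coop_frac  # True = cooperator
    shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    for _ in range(steps):
        payoff = np.zeros((size, size))
        for dx, dy in shifts:
            nb = np.roll(np.roll(coop, dx, 0), dy, 1)
            payoff += np.where(coop, nb * 1.0, nb * b)  # C earns 1 vs C; D earns b vs C
        best, best_pay = coop.copy(), payoff.copy()
        for dx, dy in shifts:
            nb_pay = np.roll(np.roll(payoff, dx, 0), dy, 1)
            nb_strat = np.roll(np.roll(coop, dx, 0), dy, 1)
            better = nb_pay > best_pay
            best_pay = np.where(better, nb_pay, best_pay)
            best = np.where(better, nb_strat, best)
        coop = best.astype(bool)
    return coop.mean()  # surviving fraction of cooperators

print(spatial_pd())
```

On the grid, cooperators survive in compact clusters whose interiors earn enough from mutual cooperation to resist invasion by boundary defectors, which is the spatial effect the abstract builds on.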
Abstract:
Game theory is a branch of applied mathematics used to analyze situations where two or more agents are interacting. Originally it was developed as a model for conflicts and collaborations between rational and intelligent individuals. Now it finds applications in the social sciences, economics, biology (particularly evolutionary biology and ecology), engineering, political science, international relations, computer science, and philosophy. Networks are an abstract representation of interactions, dependencies or relationships. Networks are extensively used in all the fields mentioned above and in many more. Much useful information about a system can be discovered by analyzing the current state of a network representation of that system. In this work we apply some of the methods of game theory to populations of agents that are interconnected. A population is represented by a network of players in which one player can interact with another only if there is a connection between them. In the first part of this work we show that the structure of the underlying network has a strong influence on the strategies that the players adopt to maximize their utility. We then introduce a supplementary degree of freedom by allowing the structure of the population to be modified during the simulations. This modification allows the players to reshape their environment in order to optimize the utility they can obtain.
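To make the co-evolving-network idea concrete, here is a minimal sketch, not the author's actual model: players on a graph accumulate Prisoner's Dilemma payoffs against their neighbors, and may then rewire a link away from a defecting neighbor. The payoff values, the rewiring probability, and the rule of reattaching to a random node are all illustrative assumptions.

```python
import random

def play_round(graph, strategy, b=1.5, rewire_p=0.3):
    """One round of a networked PD with co-evolving links: each player scores
    against its neighbors (R=1, T=b, S=P=0), then may cut a link to a
    defecting neighbor and reattach it to a random non-neighbor."""
    payoff = {v: 0.0 for v in graph}
    for v in graph:
        for u in graph[v]:
            if strategy[v] and strategy[u]:
                payoff[v] += 1.0   # mutual cooperation
            elif not strategy[v] and strategy[u]:
                payoff[v] += b     # defector exploits cooperator
    for v in list(graph):
        defecting = [u for u in graph[v] if not strategy[u]]
        if defecting and random.random() < rewire_p:
            old = random.choice(defecting)
            candidates = [w for w in graph if w not in graph[v] and w != v]
            if candidates:
                new = random.choice(candidates)
                graph[v].discard(old); graph[old].discard(v)
                graph[v].add(new); graph[new].add(v)
    return payoff

# Hypothetical 6-node ring (adjacency as dict of sets), alternating strategies.
graph = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
strategy = {i: i % 2 == 0 for i in range(6)}  # True = cooperate
print(play_round(graph, strategy))
```

The extra degree of freedom is visible in the second loop: the interaction structure itself becomes part of each player's optimization, rather than a fixed background.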
Abstract:
Arising from M. A. Nowak, C. E. Tarnita & E. O. Wilson, Nature 466, 1057-1062 (2010); Nowak et al. reply. Nowak et al. argue that inclusive fitness theory has been of little value in explaining the natural world, and that it has led to negligible progress in explaining the evolution of eusociality. However, we believe that their arguments are based upon a misunderstanding of evolutionary theory and a misrepresentation of the empirical literature. We will focus our comments on three general issues.
Abstract:
BACKGROUND: There is an emerging knowledge base on the effectiveness of strategies to close the knowledge-practice gap. However, less is known about how attributes of an innovation and other contextual and situational factors facilitate or impede an innovation's adoption. The Healthy Heart Kit (HHK) is a risk management and patient education resource for the prevention of cardiovascular disease (CVD) and the promotion of cardiovascular health. Although previous studies have demonstrated the HHK's content validity and practical utility, no published study has examined physicians' uptake of the HHK and the factors that shape its adoption. OBJECTIVES: Conceptually informed by Rogers' Diffusion of Innovations theory and the Theory of Planned Behaviour, this study had two objectives: (1) to determine whether specific attributes of the HHK, as well as contextual and situational factors, are associated with physicians' intention to use and actual usage of the HHK; and (2) to determine whether any contextual and situational factors are associated with individual or environmental barriers that prevent uptake of the HHK among those physicians who do not plan to use the kit. METHODS: A sample of 153 physicians who responded to an invitation letter sent to all family physicians in the province of Alberta, Canada was recruited for the study. Participating physicians were sent an HHK, and two months later a study questionnaire assessed primary factors relating to the physicians' clinical practice, attributes of the HHK (relative advantage, compatibility, complexity, trialability, observability), confidence and control in using the HHK, barriers to use, and individual attributes. All measures were used in a path analysis employing a causal model based on Rogers' Diffusion of Innovations theory and the Theory of Planned Behaviour. RESULTS: 115 physicians (a follow-up rate of 75%) completed the questionnaire. Use of the HHK was associated with intention to use the HHK, relative advantage, and years of experience. Relative advantage and the observability of the HHK's benefits were also significantly associated with physicians' intention to use the HHK. Physicians working in solo medical practices reported experiencing more individual and environmental barriers to using the HHK. CONCLUSION: The results of this study suggest that future information innovations must demonstrate an advantage over current resources, and the research evidence supporting an innovation must be clearly visible. The findings also suggest that the innovation adoption process has a social element, and that collegial interactions and discussions may facilitate that process. These results could be valuable to knowledge translation researchers and health promotion developers in future innovation adoption planning.