113 results for Conceptual Knowledge
at Université de Lausanne, Switzerland
Abstract:
1.6 STRUCTURE OF THIS THESIS
- Chapter 1 presents the motivations of this dissertation by illustrating two gaps in the current body of knowledge that are worth filling, describes the research problem addressed by this thesis, and presents the research methodology used to achieve this goal.
- Chapter 2 reviews the existing literature, showing that environment analysis is a vital strategic task, that it should be supported by adapted information systems, and that there is thus a need for a conceptual model of the environment that provides a reference framework for better integrating the various existing methods and a more formal definition of the various aspects to support the development of suitable tools.
- Chapter 3 proposes a conceptual model that specifies the various environmental aspects that are relevant for strategic decision making and how they relate to each other, and defines them in a more formal way that is better suited to information systems development.
- Chapter 4 is dedicated to the evaluation of the proposed model: it is applied to a concrete environment to assess its suitability for describing the current conditions and potential evolution of a real environment and to get an idea of its usefulness.
- Chapter 5 goes a step further by assembling a toolbox describing a set of methods that can be used to analyze the various environmental aspects put forward by the model, and by providing more detailed specifications for a number of them to show how our model can be used to facilitate their implementation as software tools.
- Chapter 6 describes a prototype of a strategic decision support tool that allows the analysis of some aspects of the environment that are not well supported by existing tools, namely the relationships between multiple actors and issues. The usefulness of this prototype is evaluated on the basis of its application to a concrete environment.
- Chapter 7 concludes this thesis by summarizing its various contributions and proposing further interesting research directions.
Abstract:
Despite the limited research on the effects of altitude (or hypoxic) training interventions on team-sport performance, players from all around the world engaged in these sports are now using altitude training more than ever before. In March 2013, an Altitude Training and Team Sports conference was held in Doha, Qatar, to establish a forum of research and practical insights into this rapidly growing field. The conference concluded with a round-table meeting in which the panellists engaged in focused discussions. This has resulted in the present position statement, designed to highlight some key issues raised during the debates and to integrate the ideas into a shared conceptual framework. The present signposting document has been developed for use by support teams (coaches, performance scientists, physicians, strength and conditioning staff) and other professionals who have an interest in the practical application of altitude training for team sports. After more than four decades of research, there is still no consensus on the optimal strategies to elicit the best results from altitude training in a team-sport population. However, this position statement discusses some recommended strategies for improving the acclimatisation process when training or competing at altitude and for potentially enhancing sea-level performance. It is our hope that this information will be intriguing, balanced and, more importantly, stimulating to the point that it promotes constructive discussion and serves as a guide for future research aimed at advancing the burgeoning body of knowledge in the area of altitude training for team sports.
Abstract:
Gestures are the first forms of conventional communication that young children develop in order to intentionally convey a specific message. However, at first, infants rarely communicate successfully with their gestures, prompting caregivers to interpret them. Although the role of caregivers in early communication development has been examined, little is known about how caregivers attribute a specific communicative function to infants' gestures. In this study, we argue that caregivers rely on the knowledge about the referent that is shared with infants in order to interpret what communicative function infants wish to convey with their gestures. We videotaped interactions from six caregiver-infant dyads playing with toys when infants were 8, 10, 12, 14, and 16 months old. We coded infants' gesture production and we determined whether caregivers interpreted those gestures as conveying a clear communicative function or not; we also coded whether infants used objects according to their conventions of use as a measure of shared knowledge about the referent. Results revealed an association between infants' increasing knowledge of object use and maternal interpretations of infants' gestures as conveying a clear communicative function. Our findings emphasize the importance of shared knowledge in shaping infants' emergent communicative skills.
Abstract:
Intrarenal neurotransmission implies the co-release of neuropeptides at the neuro-effector junction with direct influence on parameters of kidney function. The presence of an angiotensin (Ang) II-containing phenotype in catecholaminergic postganglionic and sensory fibers of the kidney, based on immunocytological investigations, has only recently been reported. These angiotensinergic fibers display a distinct morphology and intrarenal distribution, suggesting anatomical and functional subspecialization linked to neuronal Ang II expression. This review discusses the present knowledge concerning these fibers and their significance for renal physiology and the pathogenesis of hypertension in light of established mechanisms. The data suggest a new role of Ang II as a co-transmitter stimulating renal target cells or modulating nerve traffic from or to the kidney. Neuronal Ang II is likely to be an independent source of intrarenal Ang II. Further physiological experimentation will have to explore the role of the angiotensinergic renal innervation and integrate it into existing concepts.
Abstract:
The recent developments in high magnetic field 13C magnetic resonance spectroscopy with improved localization and shimming techniques have led to important gains in sensitivity and spectral resolution of 13C in vivo spectra in the rodent brain, enabling the separation of several 13C isotopomers of glutamate and glutamine. In this context, the assumptions used in spectral quantification might have a significant impact on the determination of the 13C concentrations and the related metabolic fluxes. In this study, the time-domain spectral quantification algorithm AMARES (advanced method for accurate, robust and efficient spectral fitting) was applied to 13C magnetic resonance spectroscopy spectra acquired in the rat brain at 9.4 T, following infusion of [1,6-13C2]glucose. Using both Monte Carlo simulations and in vivo data, the goals of this work were: (1) to validate the quantification of in vivo 13C isotopomers using AMARES; (2) to assess the impact of the prior knowledge on the quantification of in vivo 13C isotopomers using AMARES; (3) to compare AMARES and LCModel (linear combination of model spectra) for the quantification of in vivo 13C spectra. AMARES led to accurate and reliable 13C spectral quantification, similar to that obtained using LCModel, when the frequency shifts, J-coupling constants and phase patterns of the different 13C isotopomers were included as prior knowledge in the analysis.
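As a rough illustration of the time-domain model underlying AMARES-style fitting, the sketch below fits a sum of exponentially damped sinusoids to a synthetic FID with peak frequencies fixed as prior knowledge; the signal model, frequencies and all parameter values are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of AMARES-style time-domain fitting: a sum of exponentially
# damped sinusoids is fitted to a synthetic FID, with peak frequencies fixed
# as prior knowledge (loosely analogous to fixing frequency shifts and
# J-coupling patterns for 13C isotopomers). All values are illustrative.
import numpy as np
from scipy.optimize import least_squares

n, dt = 1024, 1e-3                 # number of samples and dwell time (s)
t = np.arange(n) * dt
freqs = np.array([40.0, 55.0])     # known peak frequencies (Hz) -> prior knowledge

def model(params, t, freqs):
    """FID as a sum of damped sinusoids; amplitude, damping, phase are free."""
    fid = np.zeros_like(t, dtype=complex)
    for k, f in enumerate(freqs):
        amp, damp, phase = params[3 * k: 3 * k + 3]
        fid += amp * np.exp(1j * phase) * np.exp((-damp + 2j * np.pi * f) * t)
    return fid

def residuals(params, t, freqs, data):
    r = model(params, t, freqs) - data
    return np.concatenate([r.real, r.imag])   # least_squares needs real residuals

# Synthetic data: two peaks plus complex Gaussian noise.
rng = np.random.default_rng(0)
true = np.array([1.0, 8.0, 0.1, 0.6, 12.0, -0.2])
data = model(true, t, freqs) + 0.02 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

fit = least_squares(residuals, x0=[0.5, 5.0, 0.0, 0.5, 5.0, 0.0],
                    args=(t, freqs, data))
print(fit.x.round(3))   # recovered amplitudes, dampings, phases
```

Fixing the frequencies shrinks the search space, which is the mechanism by which prior knowledge stabilizes the fit of overlapping multiplets.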
Abstract:
Åknes is an active, complex, large rockslide of approximately 30-40 Mm3 located within the Proterozoic gneisses of western Norway. The observed surface displacements indicate that this rockslide is divided into several blocks moving in different directions at velocities of between 3 and 10 cm per year. Because of regional safety issues and economic interests, this rockslide has been extensively monitored since 2004. Understanding the deformation mechanism is crucial for the implementation of a viable monitoring system. Detailed field investigations and the analysis of a digital elevation model (DEM) indicate that the movements and the block geometry are controlled by the main schistosity (S1) in the gneisses, folds, joints and regional faults. Such complex slope deformations exploit pre-existing structures, but also result in new failure surfaces and deformation zones, such as preferential rupture in fold-hinge zones. Our interpretation provides a consistent conceptual three-dimensional (3D) model for the movements measured by various methods, which is crucial for numerical stability modelling. In addition, this reinterpretation of the morphology confirms that several rockslides occurred from the Åknes slope in the past. They may be related to scars propagating along the vertical foliation in fold hinges. Finally, a model of the evolution of the Åknes slope is presented.
Abstract:
The aim of this study is to perform a thorough comparison of quantitative susceptibility mapping (QSM) techniques and their dependence on the assumptions made. The compared methodologies were: two iterative single-orientation methodologies, minimizing the l2 norm and the l1TV norm with prior knowledge of the edges of the object; one over-determined multiple-orientation method (COSMOS); and a newly proposed modulated closed-form solution (MCF). The performance of these methods was compared using a numerical phantom and in vivo high resolution (0.65 mm isotropic) brain data acquired at 7 T using a new coil combination method. For all QSM methods, the relevant regularization and prior-knowledge parameters were systematically changed in order to evaluate the optimal reconstruction in the presence and absence of a ground truth. Additionally, the QSM contrast was compared to conventional gradient recalled echo (GRE) magnitude and R2* maps obtained from the same dataset. The QSM reconstruction results of the single-orientation methods show comparable performance. The MCF method has the highest correlation (corrMCF = 0.95, r2MCF = 0.97) with the state-of-the-art method (COSMOS), with the additional advantage of extremely fast computation. The L-curve method gave the visually most satisfactory balance between reduction of streaking artifacts and over-regularization, with the latter being overemphasized when using the COSMOS susceptibility maps as ground truth. R2* and susceptibility maps, when calculated from the same datasets, although based on distinct features of the data, have a comparable ability to distinguish deep gray matter structures.
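For readers unfamiliar with closed-form dipole inversion, the sketch below implements a generic Tikhonov-regularized closed-form QSM reconstruction on a toy phantom. This is the general family to which a modulated closed-form (MCF) solution belongs, not the MCF method itself; the regularization weight, phantom and kernel conventions are assumptions.

```python
# Hedged sketch of closed-form regularized dipole inversion:
#   chi = F^-1[ D * F(field) / (D^2 + lambda) ]
# where D is the unit dipole kernel in k-space. Illustrative only.
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D = 1/3 - kz^2/k^2 in k-space (B0 along z)."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(s) for s in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    return 1.0 / 3.0 - kz**2 / k2

def qsm_closed_form(field, lam=0.05):
    """Tikhonov-regularized single-step inversion of the field map."""
    D = dipole_kernel(field.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(field) / (D**2 + lam)))

# Toy phantom: a susceptibility sphere, forward-simulated then inverted.
shape = (64, 64, 64)
z, y, x = np.indices(shape) - 32
chi_true = (x**2 + y**2 + z**2 < 8**2).astype(float)
field = np.real(np.fft.ifftn(dipole_kernel(shape) * np.fft.fftn(chi_true)))
chi_est = qsm_closed_form(field, lam=0.05)
print(float(np.corrcoef(chi_true.ravel(), chi_est.ravel())[0, 1]))
```

Sweeping `lam` and picking the corner of the residual-versus-regularization curve is the L-curve heuristic mentioned in the abstract.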
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. A dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far; in the case of imperfect information, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept that is capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players: before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated; Aumann's sufficient conditions for backward induction are also presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
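Since backward induction is the solution concept at the heart of Chapters 1 and 3, a minimal sketch of it on a finite perfect-information game tree may help; the two-player tree below is an invented example, not one from the thesis.

```python
# Minimal sketch of backward induction on a finite perfect-information game
# tree: solve leaves first, then let each player pick the action maximizing
# their own payoff given the already-solved subgames.
from dataclasses import dataclass, field

@dataclass
class Node:
    player: int = 0                                  # whose turn (ignored at leaves)
    children: dict = field(default_factory=dict)     # action -> Node
    payoffs: tuple = ()                              # non-empty only at leaves

def backward_induction(node):
    """Return (payoff vector, strategy dict mapping node id -> chosen action)."""
    if node.payoffs:
        return node.payoffs, {}
    best_action, best_payoffs, strategy = None, None, {}
    for action, child in node.children.items():
        payoffs, sub = backward_induction(child)
        strategy.update(sub)
        if best_payoffs is None or payoffs[node.player] > best_payoffs[node.player]:
            best_action, best_payoffs = action, payoffs
    strategy[id(node)] = best_action
    return best_payoffs, strategy

# Two-stage game: player 0 moves first, then player 1 replies.
tree = Node(player=0, children={
    "L": Node(player=1, children={"l": Node(payoffs=(2, 1)),
                                  "r": Node(payoffs=(0, 0))}),
    "R": Node(player=1, children={"l": Node(payoffs=(3, 0)),
                                  "r": Node(payoffs=(1, 2))}),
})
payoffs, strategy = backward_induction(tree)
print(payoffs)   # (2, 1): player 0 plays L, anticipating player 1's best replies
```

Aumann's result, discussed in Chapter 1, characterizes exactly this procedure epistemically: under common knowledge of rationality in a perfect-information game, play follows the backward-induction path.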
Abstract:
This paper explores the effects of human resource management (HRM) practices in Swiss small-to-medium enterprises (SMEs). More specifically, the main objective of this study is to assess the impact of HRM practices developed in Swiss SMEs upon the commitment of knowledge workers. Using data from a survey of over 198 knowledge workers, this study shows the importance of looking more closely at HRM practices and of investigating the impacts of the different HRM practices on employees' commitment. Results show, for example, that organisational support, procedural justice and the reputation of the organisation may clearly influence knowledge workers' commitment, whereas other HRM practices such as involvement in decision-making, skills management or even the degree of satisfaction with pay do not have any impact on knowledge workers' commitment.
Abstract:
OBJECTIVE: To assess the theoretical and practical knowledge of the Glasgow Coma Scale (GCS) by trained air-rescue physicians in Switzerland. METHODS: Prospective anonymous observational study with a specially designed questionnaire. General knowledge of the GCS and its use in a clinical case were assessed. RESULTS: Of 130 questionnaires sent out, 103 were returned (response rate of 79.2%) and analyzed. Theoretical knowledge of the GCS was consistent for registrars, fellows, consultants and private practitioners active in physician-staffed helicopters. The clinical case was wrongly scored by 38 participants (36.9%). Wrong evaluation of the motor component occurred in 28 questionnaires (27.2%), and 19 errors were made on the verbal score (18.5%). Errors were made most frequently by registrars (47.5%, p = 0.09), followed by fellows (31.6%, p = 0.67) and private practitioners (18.4%, p = 1.00). Consultants made significantly fewer errors than the rest of the participating physicians (0%, p < 0.05). No statistically significant differences were found between anesthetists, general practitioners, internal medicine trainees or others. CONCLUSION: Although the theoretical knowledge of the GCS by out-of-hospital physicians is correct, significant errors were made in scoring a clinical case. Less experienced physicians had a higher rate of errors. Further emphasis on teaching the GCS is mandatory.
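For reference, the GCS total is the sum of three components with standard ranges (eye 1-4, verbal 1-5, motor 1-6, total 3-15). A minimal sketch of that arithmetic follows; the validation helper is purely illustrative and is no substitute for the clinical scoring skill the study examines.

```python
# Minimal sketch of GCS arithmetic: validate each component against its
# standard range, then sum. Ranges are the standard published ones.
RANGES = {"eye": (1, 4), "verbal": (1, 5), "motor": (1, 6)}

def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS components after range-checking each one."""
    for name, value in {"eye": eye, "verbal": verbal, "motor": motor}.items():
        lo, hi = RANGES[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name} score {value} outside {lo}-{hi}")
    return eye + verbal + motor

print(gcs_total(eye=3, verbal=4, motor=5))   # -> 12
```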
Abstract:
Aim: The relative effectiveness of different methods of prevention of HIV transmission is a subject of debate that is renewed with the integration of each new method. The relative weight of values and evidence in decision-making is not always clearly defined. Debate is often confused, as the proponents of different approaches address the issue at different levels of implementation. This paper defines and delineates the successive levels of analysis of effectiveness, and proposes a conceptual framework to clarify debate. Method / Issue: Initially inspired by work on contraceptive effectiveness, a first version of the conceptual framework was published in 1993 with the definition of the Condom Effectiveness Matrix (Spencer, 1993). The framework has since integrated and further developed thinking around the distinctions made between efficacy and effectiveness, and has been applied to HIV prevention in general. Three levels are defined: theoretical effectiveness (ThE), use-effectiveness (UseE) and population use-effectiveness (PopUseE). For example, abstinence and faithfulness, as proposed in the ABC strategy, have relatively high theoretical effectiveness but relatively low effectiveness at subsequent levels of implementation. The reverse is true of circumcision. Each level is associated with specific forms of scientific enquiry and associated research questions: basic and clinical sciences with ThE; clinical and social sciences with UseE; epidemiology and social, economic and political sciences with PopUseE. Similarly, the focus of investigation moves from biological organisms, to the individual at the physiological and then psychological, social and ecological level, and finally takes as its perspective populations and societies as a whole. The framework may be applied to analyse issues in any approach. Hence, regarding consideration of HIV treatment as a means of prevention, examples of issues at each level would be: ThE: achieving adequate viral suppression and non-transmission to partners; UseE: facility and degree of adherence to treatment and medical follow-up; PopUseE: perceived validity of the strategy and feasibility of achieving adequate population coverage. Discussion: Use of the framework clarifies the questions that need to be addressed at all levels in order to improve effectiveness. Furthermore, the interconnectedness and complementary nature of research from the different scientific disciplines, and the relative contribution of each, become apparent. The proposed framework could bring greater rationality to the prevention-effectiveness debate and facilitate communication between stakeholders.
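The paper gives no formulas, but in the contraceptive-effectiveness tradition it draws on, the levels are often read multiplicatively: each level of implementation scales down the one above it. A toy sketch under that assumption, with invented numbers:

```python
# Toy numeric illustration of the three effectiveness levels (ThE, UseE,
# PopUseE). The multiplicative reading and all numbers are assumptions for
# illustration, not values or a formula from the paper.
def population_effect(theoretical: float, adherence: float, coverage: float) -> float:
    """Naive composition: each implementation level scales down the one above."""
    assert all(0.0 <= v <= 1.0 for v in (theoretical, adherence, coverage))
    return theoretical * adherence * coverage

# A method with near-perfect ThE but weak adherence and population coverage:
print(population_effect(theoretical=0.96, adherence=0.70, coverage=0.40))  # ~0.27
```

The point the framework makes survives the toy arithmetic: a method that dominates at the ThE level can still be outperformed at the PopUseE level by a method that is easier to adhere to or to scale.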
Abstract:
The capacity to learn to associate sensory perceptions with appropriate motor actions underlies the success of many animal species, from insects to humans. The evolutionary significance of learning has long been a subject of interest for evolutionary biologists, who emphasize the benefit yielded by learning under changing environmental conditions, where it is required to flexibly switch from one behavior to another. However, two unsolved questions are particularly important for improving our knowledge of the evolutionary advantages provided by learning, and are addressed in the present work. First, because it is possible to learn the wrong behavior when a task is too complex, the learning rules, and their underlying psychological characteristics, that generate truly adaptive behavior must be identified with greater precision, and must be linked to the specific ecological problems faced by each species. A framework for predicting behavior from the definition of a learning rule is developed here. Learning rules capture cognitive features such as the tendency to explore, or the ability to infer rewards associated with unchosen actions. It is shown that these features interact in a non-intuitive way to generate adaptive behavior in social interactions where individuals affect each other's fitness. Such behavioral predictions are used in an evolutionary model to demonstrate that, surprisingly, simple trial-and-error learning is not always outcompeted by more computationally demanding inference-based learning when population members interact in pairwise social interactions. A second question in the evolution of learning is its link with, and relative advantage compared to, other simpler forms of phenotypic plasticity. After providing a conceptual clarification of the distinction between genetically determined and learned responses to environmental stimuli, a new factor in the evolution of learning is proposed: environmental complexity. A simple mathematical model shows that a measure of environmental complexity, the number of possible stimuli in one's environment, is critical for the evolution of learning. In conclusion, this work opens avenues for modeling interactions between evolving species and their environment in order to predict how natural selection shapes animals' cognitive abilities.
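As a minimal sketch of the contrast the abstract draws between trial-and-error and inference-based learning, the two-action bandit below lets a learner either update only the action it tried, or additionally update the unchosen action from its (here simulated) foregone payoff. The environment, update rule and parameters are illustrative assumptions, not the thesis's model.

```python
# Minimal sketch: trial-and-error vs. inference-based value learning on a
# two-action Bernoulli bandit. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(1)
payoff = np.array([0.3, 0.7])          # true expected payoffs of the two actions

def run(inference: bool, steps=2000, lr=0.1, eps=0.1):
    q = np.zeros(2)                    # estimated action values
    for _ in range(steps):
        # epsilon-greedy choice: explore with probability eps, else exploit
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q))
        r = float(rng.random() < payoff[a])            # realized Bernoulli reward
        q[a] += lr * (r - q[a])                        # trial-and-error update
        if inference:                                  # inference-based learner also
            r_other = float(rng.random() < payoff[1 - a])   # infers foregone payoff
            q[1 - a] += lr * (r_other - q[1 - a])
    return q

print(run(inference=False).round(2), run(inference=True).round(2))
```

The inference-based learner converges on both action values; the trial-and-error learner only tracks what it samples, which is the computational asymmetry whose evolutionary payoff the thesis analyzes in pairwise social interactions.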
Abstract:
Risk management is often approached through linear methods that stress positioning and causal reasoning: to a given event correspond a given risk and a given consequence. The interrelationships between risks are often overlooked, and risks are rarely analyzed in their dynamic and nonlinear components. This work presents what systemic methods, and notably the study of complex systems, can bring to the understanding, management, anticipation and handling of business risks, both conceptually and in applied terms. Starting from the definitions of systems and risks in various domains, as well as the methods used to control risks, this work confronts this body of knowledge with the approaches of systemic analysis and complex systems modeling. By highlighting the sometimes reductive effects of corporate risk-assessment methods, as well as the limitations of risk universes due in particular to ill-suited definitions, this work also provides company management with a range of tools and approaches that better account for complexity, for managing risks and aligning strategy with risk management, together with methods for assessing a firm's level of maturity in risk management.