111 results for Threshold concept theory
at Université de Lausanne, Switzerland
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, strategic situations in which the players choose only once and simultaneously, and dynamic games, strategic situations involving sequential choices. Dynamic games can be further classified according to perfect and imperfect information: a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player is fully informed of all choices made so far, whereas under imperfect information some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. It is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. The ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players: before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
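Since backward induction recurs throughout the abstract above, a minimal sketch may help fix intuitions. The tree encoding, payoffs, and function names below are illustrative assumptions, not material from the thesis: rational play is computed by propagating optimal choices from the leaves of a perfect-information game tree back to the root.

```python
# Minimal sketch of backward induction on a finite perfect-information
# game tree. The dict-based tree and the payoffs are hypothetical.

def backward_induction(node):
    """Return the utility profile that rational play reaches from `node`."""
    if "payoffs" in node:                      # terminal node
        return node["payoffs"]
    player = node["player"]
    outcomes = {action: backward_induction(child)
                for action, child in node["children"].items()}
    # The player to move picks the action maximizing her own utility.
    best = max(outcomes, key=lambda a: outcomes[a][player])
    node["choice"] = best                      # record the rational choice
    return outcomes[best]

# Two-stage example: player 0 moves first, then player 1.
game = {"player": 0, "children": {
    "L": {"player": 1, "children": {
        "l": {"payoffs": (2, 1)}, "r": {"payoffs": (0, 0)}}},
    "R": {"payoffs": (1, 2)}}}
print(backward_induction(game))  # (2, 1): player 0 plays L, player 1 plays l
```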
Abstract:
Introduction. In autism and schizophrenia, attenuated/atypical functional hemispheric asymmetry and theory of mind impairments have been reported, suggesting common underlying neuroscientific correlates. We here investigated whether impaired theory of mind performance is associated with attenuated/atypical hemispheric asymmetry. An association may explain the co-occurrence of both dysfunctions in psychiatric populations. Methods. Healthy participants (n = 129) performed a left-hemisphere-dominant task (lateralised lexical decision) and a right-hemisphere-dominant task (lateralised face decision), as well as a visual cartoon task to assess theory of mind performance. Results. Linear regression analyses revealed inconsistent associations between theory of mind performance and functional hemispheric asymmetry: enhanced theory of mind performance was only associated with (1) faster right-hemisphere language processing, and (2) reduced right-hemisphere dominance for face processing (men only). Conclusions. The largely non-significant findings suggest that theory of mind and functional hemispheric asymmetry are unrelated. Instead of "overinterpreting" the two significant results, we discuss discrepancies in the previous literature relating to the problem of the theory of mind concept, the variety of tasks, and the lack of normative data. We also suggest how future studies could explore a possible link between hemispheric asymmetry and theory of mind.
Abstract:
The relational test concept ("test" in English; Weiss and Sampson, 1986 [16]) is presented. Its origins in Freud's writings are briefly traced, and its place within Weiss's theory of pathogenic beliefs is outlined. The remaining elements of Weiss's psychoanalytic theory (therapeutic goals, obstacles, traumas, insight) are also presented. Every step is illustrated with case examples drawn from the literature. A recent development of the test concept is presented and applied to the psychotherapy of personality disorders (Sachse, 2003 [14]). Finally, the authors give two brief examples of tests drawn from their own practice as psychotherapists and discuss the models by comparing them with one another. Conclusions are drawn concerning the usefulness of the test concept for psychotherapy practice and research.
Abstract:
This article builds on the recent policy diffusion literature and attempts to overcome one of its major problems, namely the lack of a coherent theoretical framework. The literature defines policy diffusion as a process in which policy choices are interdependent, and identifies several diffusion mechanisms that specify the link between the policy choices of the various actors. As these mechanisms are grounded in different theories, theoretical accounts of diffusion currently have little internal coherence. In this article we put forward an expected-utility model of policy change that is able to subsume all the diffusion mechanisms. We argue that the expected utility of a policy depends on both its effectiveness and the payoffs it yields, and we show that the various diffusion mechanisms operate by altering these two parameters. Each mechanism affects one of the two parameters, and does so in distinct ways. To account for aggregate patterns of diffusion, we embed our model in a simple threshold model of diffusion. Given the high complexity of the resulting process, strong analytical conclusions on aggregate patterns cannot be drawn without more extensive analysis, which is beyond the scope of this article. However, preliminary considerations indicate that a wide range of diffusion processes may exist and that convergence is only one possible outcome.
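To make the aggregate dynamics concrete, here is a minimal sketch of the kind of simple threshold model of diffusion the article refers to (in the spirit of Granovetter-style threshold models). The thresholds, seeding, and update rule are hypothetical illustrations, not the authors' specification: each actor adopts the policy once the share of prior adopters reaches that actor's individual threshold.

```python
# Sketch of a simple threshold model of diffusion (hypothetical
# thresholds and seeding, for intuition only).
import random

def diffuse(thresholds, max_rounds=1000):
    adopted = [t <= 0.0 for t in thresholds]        # unconditional adopters
    for _ in range(max_rounds):
        share = sum(adopted) / len(adopted)
        newly = [share >= t and not a for t, a in zip(thresholds, adopted)]
        if not any(newly):
            break                                    # the cascade has stopped
        adopted = [a or n for a, n in zip(adopted, newly)]
    return sum(adopted) / len(adopted)

random.seed(1)
# five unconditional adopters plus 95 actors with uniform random thresholds
thresholds = [0.0] * 5 + [random.random() for _ in range(95)]
print(f"final adoption share: {diffuse(thresholds):.2f}")
```

Depending on the draw of thresholds, the cascade may stall early or run to near-universal adoption, which mirrors the article's point that convergence is only one possible outcome.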
Abstract:
Carbon isotope ratio (CIR) analysis has been routinely and successfully used in sports drug testing for many years to uncover the misuse of endogenous steroids. One limitation of the method is the availability of steroid preparations exhibiting CIRs equal to endogenous steroids. To overcome this problem, hydrogen isotope ratios (HIR) of endogenous urinary steroids were investigated as a potential complement; results obtained from a reference population of 67 individuals are presented herein. An established sample preparation method was modified and improved to enable separate measurement of each analyte of interest where possible. From the fraction of glucuronidated steroids, pregnanediol, 16-androstenol, 11-ketoetiocholanolone, androsterone (A), etiocholanolone (E), dehydroepiandrosterone (D), 5α- and 5β-androstanediol, testosterone and epitestosterone were included. In addition, sulfate conjugates of A, E, D, epiandrosterone and 17α- and 17β-androstenediol were considered and analyzed after acidic solvolysis. The results enabled calculation of the first reference-population-based thresholds for the HIR of urinary steroids, which can readily be applied to routine doping control samples. Proof of concept was accomplished by investigating urine specimens collected after a single oral application of testosterone undecanoate. The HIR of most testosterone metabolites were found to be significantly influenced by the exogenous steroid, beyond the established threshold values. Additionally, one regular doping control sample with an extraordinary testosterone/epitestosterone ratio of 100, but without a suspicious CIR, was subjected to the complementary methodology of HIR analysis. The HIR data eventually provided evidence for the exogenous origin of the urinary testosterone metabolites. Although further investigations of HIR are advisable to corroborate the presented reference-population-based thresholds, the developed method proved to be a new tool supporting modern sports drug testing procedures.
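For intuition, a reference-population-based threshold of the kind mentioned above can be sketched as follows. The statistic used here (mean minus three standard deviations of the population values) and the numbers themselves are assumptions for illustration only; the paper's actual derivation is not reproduced.

```python
# Hedged sketch of deriving a reference-population-based decision limit.
# The mean - 3*SD rule and the delta-2H values are hypothetical.
import statistics

def reference_threshold(values, k=3.0):
    """Lower decision limit: population mean minus k standard deviations."""
    return statistics.mean(values) - k * statistics.stdev(values)

# Hypothetical delta-2H values (per mil) for one urinary steroid metabolite.
reference = [-275, -268, -281, -270, -265, -278, -272, -269, -274, -280]
limit = reference_threshold(reference)
print(f"flag samples with delta-2H below {limit:.1f} per mil")
```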
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose the Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
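The distinction drawn above can be made concrete in a short sketch. The data and helper below are illustrative assumptions: a one-sample, two-sided z-test (population standard deviation taken as known for simplicity) read both ways, as a Fisher-style p value measuring evidence, and as a Neyman-Pearson accept/reject decision against a preset Type I error level.

```python
# Contrast of the two usages on made-up numbers (sigma assumed known).
import math

def z_test(sample_mean, mu0, sigma, n):
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    # Fisher: the p value is the probability, under H0, of a statistic
    # at least as extreme as the one observed (two-sided here).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = z_test(sample_mean=10.4, mu0=10.0, sigma=1.0, n=25)
print(f"z = {z:.2f}, p = {p:.4f}")   # Fisher: strength of evidence

# Neyman-Pearson: fix alpha (the Type I error rate) in advance and
# reject H0 iff the statistic falls in the critical region |z| > z_crit.
alpha, z_crit = 0.05, 1.96
print("reject H0" if abs(z) > z_crit else "fail to reject H0")
```

Here z = 2.00 and p ≈ 0.046: the p value reports graded evidence, while the Neyman-Pearson procedure returns only the binary decision that |z| exceeds the preset critical value.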
Abstract:
After years of reciprocal lack of interest, if not opposition, neuroscience and psychoanalysis are poised for a renewed dialogue. This article discusses some aspects of Freudian metapsychology and its links with specific biological mechanisms. It highlights in particular how the physiological concept of homeostasis resonates with certain fundamental concepts of psychoanalysis. Similarly, the authors underline how Freud's and Damasio's theories of brain functioning display remarkable complementarities, especially through their common reference to Meynert and James. Furthermore, the Freudian theory of drives is discussed in the light of current neurobiological evidence on neural plasticity and trace formation, and on their relationships with the processes of homeostasis. The ensuing dynamics between traces and homeostasis opens novel avenues for considering inner life in reference to the establishment of fantasies unique to each subject. The lack of determinism, within a context of determinism, implied by plasticity and reconsolidation participates in the emergence of singularity, the creation of uniqueness and the unpredictable future of the subject. There is a gap in determinism inherent to biology itself. Uniqueness and discontinuity: this should today be the focus of the questions raised in neuroscience. Neuroscience needs to establish the new bases of a "discontinuous" biology. Psychoanalysis can offer neuroscience the possibility of thinking discontinuity. Neuroscience and psychoanalysis thus meet in an unexpected way with regard to discontinuity, and this is a new point of convergence between them.
Abstract:
In decision making, speed-accuracy trade-offs are well known and often inevitable, because accuracy depends on being well informed and gathering information takes time. However, trade-offs between speed and cohesion, that is, the degree to which a group remains together as a single entity as a result of its decision making, have been comparatively neglected. We combine theory and experimentation to show that in decision-making systems, speed-cohesion trade-offs are a natural complement to speed-accuracy trade-offs and are therefore of general importance. We then analyse the decision performance of 32 rock ant, Temnothorax albipennis, colonies in experiments in which the accuracy of collective decision making was held constant but time urgency varied. These experiments reveal for the first time an adaptive speed-cohesion trade-off in collective decision making, and how this is achieved. In accord with different time constraints, colonies can decide quickly at the cost of social unity, or they can decide slowly with much greater cohesion. We discuss the similarity between cohesion and the term precision as used in statistics and engineering. This emphasizes the generality of speed versus cohesion/precision trade-offs in decision making and decision implementation in other fields within animal behaviour, such as sexually selected motor displays and even certain aspects of birdsong. We also suggest that speed versus precision trade-offs may occur when individuals within a group need to synchronize their activity, and in collective navigation, cooperative hunting and certain escape behaviours.
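One way to build intuition for a speed-cohesion trade-off is a toy quorum model of nest-site choice. This is not the authors' analysis: the recruitment rule (a standard positive-feedback form) and all parameters are hypothetical. A low quorum lets the colony commit quickly while still divided; a high quorum takes longer but leaves more of the colony united behind one site.

```python
# Toy quorum model (not the paper's analysis): one scout commits per
# time step, favouring the currently more popular site (positive
# feedback). Quorum size trades decision speed against cohesion.
import random

def emigrate(quorum, k=5.0, seed=None):
    rng = random.Random(seed)
    pop = [0, 0]                              # scouts committed to each site
    steps = 0
    while max(pop) < quorum:
        steps += 1
        wa = (k + pop[0]) ** 2                # recruitment weights
        wb = (k + pop[1]) ** 2
        pop[0 if rng.random() < wa / (wa + wb) else 1] += 1
    return steps, max(pop) / sum(pop)         # time, share behind the winner

for q in (5, 50):
    runs = [emigrate(q, seed=i) for i in range(200)]
    t = sum(r[0] for r in runs) / len(runs)
    c = sum(r[1] for r in runs) / len(runs)
    print(f"quorum {q:2d}: mean decision time {t:5.1f}, mean cohesion {c:.2f}")
```

With the low quorum the colony decides in a handful of steps but is split almost evenly between sites; with the high quorum the positive feedback has time to break the symmetry, so the decision is slower but far more cohesive.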
Abstract:
The theory of language has occupied a special place in the history of Indian thought. Indian philosophers give particular attention to the analysis of the cognition obtained from language, known under the generic name of śābdabodha. This term is used to denote, among other things, the cognition episode of the hearer, the content of which is described in the form of a paraphrase of a sentence represented as a hierarchical structure. Philosophers submit the meaning of the component items of a sentence and their relationships to a thorough examination, and represent the content of the resulting cognition as a paraphrase centred on one meaning element, which is taken as the principal qualificand (mukhyaviśesya) qualified by the other meaning elements. This analysis is the object of continuous debate, over a period of more than a thousand years, between the philosophers of the schools of Mīmāmsā, Nyāya (mainly in its Navya form) and Vyākarana. While these philosophers are in complete agreement on the idea that the cognition of sentence meaning has a hierarchical structure, and share the concept of a single principal qualificand (qualified by other meaning elements), they strongly disagree on the question of which meaning element has this role and by which morphological item it is expressed. This disagreement is the central point of their debate and gives rise to competing versions of the theory. The Mīmāmsakas argue that the principal qualificand is what they call bhāvanā, 'bringing into being', 'efficient force' or 'productive operation', expressed by the verbal affix and distinct from the specific procedures signified by the verbal root; the Naiyāyikas generally take it to be the meaning of the word with the first case ending, while the Vaiyākaranas take it to be the operation expressed by the verbal root. All the participants rely on the Pāninian grammar, insofar as the Mīmāmsakas and Naiyāyikas do not compose a new grammar of Sanskrit, but they use different interpretive strategies to justify their views, which are often in overt contradiction with the interpretation of the Pāninian rules accepted by the Vaiyākaranas. In each of the three positions, weakness in one area is compensated by strength in another, and the cumulative force of the total argumentation shows that no position can be declared correct or overall superior to the others. This book is an attempt to understand this debate, and to show that, to make full sense of the irreconcilable positions of the three schools, one must go beyond linguistic factors and consider the very beginnings of each school's concern with the issue under scrutiny. The texts, and particularly the late texts of each school, present very complex versions of the theory, yet the key to understanding why these positions remain irreconcilable seems to lie elsewhere, in spite of extensive argumentation involving a great deal of linguistic and logical technicality. Historically, the theory arises first in Mīmāmsā (with Śabara and Kumārila), then in Nyāya (with Udayana), in a doctrinal and theological context, as a byproduct of the debate over Vedic authority. The Navya-Vaiyākaranas enter this debate last (with Bhattoji Dīksita and Kaunda Bhatta), with the declared aim of refuting the arguments of the Mīmāmsakas and Naiyāyikas by bringing to light the shortcomings in their understanding of Pāninian grammar.
The central argument has focused on the capacity of the initial contexts, with the network of issues to which the principal qualificand theory is connected, to render intelligible the presuppositions and aims behind the complex linguistic justification of the classical and late stages of this debate. Reading the debate in this light not only reveals the rationality and internal coherence of each position beyond the linguistic arguments, but makes it possible to understand why the thinkers of the three schools have continued to hold on to three mutually exclusive positions. They are defending not only their version of the principal qualificand theory, but (though not openly acknowledged) the entire network of arguments, linguistic and/or extra-linguistic, to which this theory is connected, as well as the presuppositions and aims underlying these arguments.
Abstract:
PURPOSE: All methods presented to date to map both conductivity and permittivity rely on multiple acquisitions to compute quantitatively the magnitude of the radiofrequency transmit field, B1+. In this work, we propose a method to compute both conductivity and permittivity based solely on relative receive coil sensitivities (B1-) that can be obtained in one single measurement, without the need to explicitly perform transmit/receive phase separation or to make assumptions regarding those phases. THEORY AND METHODS: To demonstrate the validity and the noise sensitivity of our method, we used electromagnetic finite-difference simulations of a 16-channel transceiver array. To experimentally validate our methodology at 7 Tesla, multi-compartment phantom data were acquired using a standard 32-channel receive coil system and two-dimensional (2D) and 3D gradient-echo acquisitions. The reconstructed electric properties were correlated with those measured using dielectric probes. RESULTS: The method was demonstrated both in simulations and in phantom data, with correlations to both the modeled and bench measurements being close to identity. The noise properties were modeled and understood. CONCLUSION: The proposed methodology allows the electrical properties of a sample to be determined quantitatively using any MR contrast, the only constraints being the need for 4 or more receive coils and high SNR. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
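For orientation, the computation underlying conventional Helmholtz-based electrical properties mapping can be sketched as follows; the article's relative receive-sensitivity method is a refinement of this idea and is not reproduced here, and the sign convention and discretization below are assumptions. In locally homogeneous tissue, a complex field map B satisfies ∇²B/B = −μ₀ω²ε_c with ε_c = ε − iσ/ω, so one map yields both conductivity and permittivity.

```python
# Hedged sketch of conventional Helmholtz-based electrical property
# mapping (not the paper's relative receive-sensitivity formulation).
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability (H/m)
EPS0 = 8.854e-12            # vacuum permittivity (F/m)

def electrical_properties(B, voxel, larmor_hz):
    """Conductivity (S/m) and relative permittivity from a complex field
    map B on an isotropic grid with spacing `voxel` (metres)."""
    w = 2 * np.pi * larmor_hz
    # Laplacian via repeated central differences along each axis.
    lap = sum(np.gradient(np.gradient(B, voxel, axis=a), voxel, axis=a)
              for a in range(B.ndim))
    eps_c = -lap / (MU0 * w ** 2 * B)        # complex permittivity
    return -w * eps_c.imag, eps_c.real / EPS0

# e.g., at 7 T (proton Larmor frequency ~297.2 MHz), with a field map:
# sigma, eps_r = electrical_properties(field_map, voxel=2e-3, larmor_hz=297.2e6)
```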
Abstract:
Ultrasound image reconstruction from the echoes received by an ultrasound probe after the transmission of diverging waves is an active area of research, because of its capacity to insonify large regions of interest at ultra-high frame rates using small phased arrays such as the ones used in echocardiography. Current state-of-the-art techniques are based on the emission of diverging waves and the use of delay-and-sum strategies applied to the received signals to reconstruct the desired image (DW/DAS). Recently, we introduced the Ultrasound Fourier Slice Imaging (UFSI) theory for ultrafast image reconstruction with linear acquisitions. In this study, we extend this theory to sectorial acquisitions thanks to the introduction of an explicit and invertible spatial transform. Starting from a diverging wave, we show that the direct use of UFSI theory, along with the application of the proposed spatial transform, allows the insonified medium to be reconstructed in the conventional Cartesian space. Simulations and experiments reveal the capacity of this new approach to obtain ultrafast imaging of competitive quality when compared with the current reference method.
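As a point of reference, the delay-and-sum baseline (DW/DAS) that the article compares against can be sketched in a few lines. The geometry, sampling convention, and function signature are simplified assumptions (no apodization, nearest-sample interpolation): for each pixel, the echo samples of all elements are summed at the round-trip delay implied by the diverging-wave geometry.

```python
# Simplified delay-and-sum (DAS) sketch for one diverging-wave transmit.
# The virtual source sits behind the array (z < 0) and t = 0 is taken
# when the wavefront crosses the array plane at x = 0.
import numpy as np

def das_image(rf, fs, c, elem_x, src, grid_x, grid_z):
    """rf: (n_elements, n_samples) echoes sampled at fs; elem_x: element
    x-positions; src: (x, z) virtual source with z < 0; returns an image
    on the (grid_z, grid_x) pixel grid."""
    img = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # Transmit delay: source-to-pixel path, referenced to t = 0.
            t_tx = (np.hypot(x - src[0], z - src[1]) + src[1]) / c
            for e, ex in enumerate(elem_x):
                t_rx = np.hypot(x - ex, z) / c       # pixel-to-element path
                s = int(round((t_tx + t_rx) * fs))   # nearest sample index
                if 0 <= s < rf.shape[1]:
                    img[iz, ix] += rf[e, s]          # coherent sum
    return img
```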
Abstract:
Business research and teaching institutions play an important role in shaping the way businesses perceive their relations to the broader society and its moral expectations. Hence, as ethical scandals have recently arisen in the business world, questions related to the civic responsibilities of business scholars and to the role business schools play in society have gained wider interest. In this article, I argue that these ethical shortcomings at least partly result from the mainstream business model, with its taken-for-granted basic assumptions such as specialization or the value-neutrality of business research. Redefining the roles and civic responsibilities of business scholars for business practice therefore implies a thorough analysis of these assumptions, if not their redefinition. The taken-for-grantedness of the mainstream business model is questioned by the transformation of the societal context in which business activities are embedded. Its value-neutrality, in turn, is challenged by self-fulfilling prophecy effects, which highlight the normative influence of business schools. In order to critically discuss some basic assumptions of mainstream business theory, I propose to draw parallels with the corporate citizenship concept and stakeholder theory. Their integrated approach to the relation between business practice and the broader society provides interesting insights for the social re-embedding of business research and teaching.
Abstract:
Introduction. Selective embolization of the left gastric artery (LGA) reduces levels of ghrelin and achieves significant short-term weight loss. However, embolization of the LGA would prevent the performance of bariatric procedures, because the high-risk leakage area (the gastroesophageal junction [GEJ]) would be devascularized. Aim. To assess an alternative vascular approach to modulating ghrelin levels through blood flow manipulation, consequently increasing the vascular supply to the GEJ. Materials and methods. A total of 6 pigs underwent laparoscopic clipping of the left gastroepiploic artery. Preoperative and postoperative CT angiographies were performed. Ghrelin levels were assessed perioperatively and then once per week for 3 weeks. Reactive oxygen species (ROS; expressed as ROS/mg of dry weight [DW]), mitochondrial respiratory rate, and capillary lactates were assessed on seromuscular biopsies before and 1 hour after clipping (T0 and T1) and after 3 weeks of survival (T2). A celiac trunk angiography was performed at 3 weeks. Results. Mean (± standard deviation) ghrelin levels were significantly reduced 1 hour after clipping (1902 ± 307.8 pg/mL vs 1084 ± 680.0; P = .04) and at 3 weeks (954.5 ± 473.2 pg/mL; P = .01). Mean ROS levels were statistically significantly decreased at the cardia at T2 when compared with T0 (0.018 ± 0.006 ROS/mg DW vs 0.02957 ± 0.0096 ROS/mg DW; P = .01) and T1 (0.0376 ± 0.008 ROS/mg DW; P = .007). Capillary lactates were significantly decreased after 3 weeks, and the mitochondrial respiratory rate remained constant over time at the cardia and pylorus, showing significant regional differences. Conclusions. Manipulation of the gastric flow targeting the gastroepiploic arcade induces ghrelin reduction. An endovascular approach is currently under evaluation.
Abstract:
Sleep spindles are synchronized 11-15 Hz electroencephalographic (EEG) oscillations predominant during non-rapid-eye-movement sleep (NREMS). Rhythmic bursting in the reticular thalamic nucleus (nRt), arising from the interplay between Ca(v)3.3-type Ca(2+) channels and Ca(2+)-dependent small-conductance type 2 (SK2) K(+) channels, underlies spindle generation. Correlative evidence indicates that spindles contribute to memory consolidation and to protection against environmental noise in human NREMS. Here, we describe a molecular mechanism through which spindle power is selectively extended, and we probed the actions of intensified spindling in the naturally sleeping mouse. Using electrophysiological recordings in acute brain slices from SK2 channel-overexpressing (SK2-OE) mice, we found that nRt bursting was potentiated and thalamic circuit oscillations were prolonged. Moreover, nRt cells were more resistant to the transition from burst to tonic discharge in response to gradual depolarization, mimicking transitions out of NREMS. Compared with wild-type littermates, chronic EEG recordings of SK2-OE mice contained less fragmented NREMS, while the NREMS EEG power spectrum was conserved. Furthermore, EEG spindle activity was prolonged at NREMS exit. Finally, when exposed to white noise, SK2-OE mice needed stronger stimuli to arouse. Increased nRt bursting thus strengthens spindles and improves sleep quality through mechanisms independent of EEG slow waves (<4 Hz), suggesting SK2 signaling as a new potential therapeutic target for sleep disorders and for neuropsychiatric diseases accompanied by weakened sleep spindles.