21 results for Critical Point Theory

at Université de Lausanne, Switzerland


Relevance:

80.00%

Publisher:

Abstract:

A liquid chromatography method coupled to mass spectrometry was developed for the quantification of bupropion, its metabolite hydroxy-bupropion, moclobemide, reboxetine and trazodone in human plasma. The analytical procedure was validated according to the Société Française des Sciences et Techniques Pharmaceutiques and the latest Food and Drug Administration guidelines. Sample preparation was performed with 0.5 mL of plasma extracted on a cation-exchange solid-phase 96-well plate. Separation was achieved in 14 min on a C18 XBridge column (2.1 mm × 100 mm, 3.5 μm) using a 50 mM ammonium acetate pH 9/acetonitrile mobile phase in gradient mode. The compounds of interest were analysed in single ion monitoring mode on a single quadrupole mass spectrometer operating in positive electrospray ionisation mode. Two ions were selected per molecule to increase the number of identification points and to minimise false positives. Since selectivity is always a critical point for routine therapeutic drug monitoring, more than sixty comedications commonly prescribed to the psychiatric population were tested. For each analyte, the analytical procedure was validated to cover the range of concentrations commonly measured in plasma samples: 1-400 ng/mL for reboxetine and bupropion, and 2-2000 ng/mL for hydroxy-bupropion, moclobemide and trazodone. For all investigated compounds, reliable performance in terms of accuracy, precision, trueness, recovery, selectivity and stability was obtained. One year after its implementation in routine use, the method demonstrated high robustness, with accurate values over the wide concentration range commonly observed in a psychiatric population.
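As a rough illustration (not part of the published method), the validated ranges quoted above can be encoded as a simple lookup so that routine results falling outside the validated calibration range are flagged for dilution or re-assay; the dictionary and helper function below are hypothetical.

```python
# Illustrative sketch only: encodes the validated calibration ranges quoted in the
# abstract and flags measurements outside them. Names are made up for the example.
VALIDATED_RANGES_NG_PER_ML = {
    "reboxetine": (1, 400),
    "bupropion": (1, 400),
    "hydroxy-bupropion": (2, 2000),
    "moclobemide": (2, 2000),
    "trazodone": (2, 2000),
}

def within_validated_range(analyte: str, concentration_ng_ml: float) -> bool:
    """Return True if the measured concentration lies inside the validated range."""
    low, high = VALIDATED_RANGES_NG_PER_ML[analyte]
    return low <= concentration_ng_ml <= high

if __name__ == "__main__":
    print(within_validated_range("trazodone", 850.0))   # True: inside 2-2000 ng/mL
    print(within_validated_range("reboxetine", 550.0))  # False: above 400 ng/mL
```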

Relevance:

80.00%

Publisher:

Abstract:

The complex network dynamics that arise from the interaction of the brain's structural and functional architectures give rise to mental function. Theoretical models demonstrate that the structure-function relation is maximal when the global network dynamics operate at a critical point of state transition. In the present work, we used a dynamic mean-field neural model to fit empirical structural connectivity (SC) and functional connectivity (FC) data acquired in humans and macaques, and developed a new iterative-fitting algorithm that optimizes the SC matrix on the basis of the FC matrix. A dramatic improvement in the fit between the two matrices was obtained by adding a small number of anatomical links, particularly cross-hemispheric connections, and by reweighting existing connections. We suggest that the notion of a critical working point, where the structure-function interplay is maximal, may provide a new way to link behavior and cognition, and a new perspective for understanding recovery of function in clinical conditions.
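As a loose illustration of such an iterative SC-to-FC fitting loop, the sketch below replaces the dynamic mean-field simulation with a deliberately simple linear surrogate and uses a greedy reweighting rule; it is not the authors' algorithm, and all function names and parameter values are assumptions made for the example.

```python
import numpy as np

def model_fc(sc: np.ndarray, g: float = 0.6) -> np.ndarray:
    """Toy surrogate for simulated FC: correlation matrix of the linear network
    x = g*SC x + noise, i.e. x = (I - g*SC)^(-1) noise. Stands in for the full
    dynamic mean-field simulation used in the study."""
    n = sc.shape[0]
    m = np.linalg.inv(np.eye(n) - g * sc)
    cov = m @ m.T
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

def iterative_sc_fit(sc0: np.ndarray, fc_emp: np.ndarray,
                     steps: int = 100, lr: float = 0.02) -> np.ndarray:
    """Greedy reweighting: strengthen SC entries where the model under-connects
    (allowing previously absent links to appear) and weaken over-connected ones."""
    sc = sc0.copy()
    for _ in range(steps):
        residual = fc_emp - model_fc(sc)
        sc = np.clip(sc + lr * residual, 0.0, None)   # keep weights non-negative
        np.fill_diagonal(sc, 0.0)
        rho = np.abs(np.linalg.eigvals(sc)).max()
        if rho > 1.5:                                 # keep g*SC spectral radius < 1
            sc *= 1.5 / rho
    return sc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 10
    sc_true = rng.random((n, n))
    sc_true = (sc_true + sc_true.T) / 2
    np.fill_diagonal(sc_true, 0.0)
    sc_true /= np.abs(np.linalg.eigvals(sc_true)).max()   # normalize spectral radius
    fc_emp = model_fc(sc_true)                            # "empirical" FC for the demo
    mask = np.triu(rng.random((n, n)) > 0.3, 1)
    sc_start = sc_true * (mask + mask.T)                  # SC with some links missing
    sc_fit = iterative_sc_fit(sc_start, fc_emp)
    before = np.linalg.norm(fc_emp - model_fc(sc_start))
    after = np.linalg.norm(fc_emp - model_fc(sc_fit))
    print(f"FC mismatch before fitting: {before:.3f}, after: {after:.3f}")
```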

Relevance:

80.00%

Publisher:

Abstract:

New methods and devices for pursuing performance enhancement through altitude training were developed in Scandinavia and the USA in the early 1990s. At present, several forms of hypoxic training and/or altitude exposure exist: traditional 'live high-train high' (LHTH), contemporary 'live high-train low' (LHTL), intermittent hypoxic exposure at rest (IHE) and intermittent hypoxic exposure during continuous sessions (IHT). Although substantial differences exist between these methods of hypoxic training and/or exposure, all have the same goal: to induce an improvement in athletic performance at sea level. They are also used in preparation for competition at altitude and/or for the acclimatization of mountaineers. The mechanisms underlying the effects of hypoxic training are widely debated. Although the popular view is that altitude training may lead to an increase in haematological capacity, this may not be the main, or the only, factor involved in the improvement of performance. Other central (such as ventilatory, haemodynamic or neural adaptation) or peripheral (such as muscle buffering capacity or economy) factors also play an important role. LHTL has been shown to be an efficient method. The optimal altitude for living high has been defined as 2200-2500 m to provide an optimal erythropoietic effect, and up to 3100 m for non-haematological parameters. The optimal duration at altitude appears to be 4 weeks for inducing accelerated erythropoiesis, whereas <3 weeks (i.e. 18 days) is long enough for beneficial changes in economy, muscle buffering capacity, the hypoxic ventilatory response or Na+/K+-ATPase activity. One critical point is the daily dose of altitude. A natural altitude of 2500 m for 20-22 h/day (in fact, travelling down to the valley only for training) appears sufficient to increase erythropoiesis and improve sea-level performance. 'Longer is better' as regards haematological changes, since additional benefits have been shown as hypoxic exposure increases beyond 16 h/day. The minimum daily dose for stimulating erythropoiesis seems to be 12 h/day. For non-haematological changes, a much shorter duration of exposure seems possible. Athletes could take advantage of IHT, which seems more beneficial than IHE for performance enhancement. The intensity of hypoxic exercise may play a role in adaptations at the molecular level in skeletal muscle tissue. There is clear evidence that intense exercise at high altitude stimulates muscle adaptations to a greater extent, for both aerobic and anaerobic exercise, and limits the decrease in power. So although IHT induces no increase in VO2max because of the low 'altitude dose', improvement in athletic performance is likely with high-intensity exercise (i.e. above the ventilatory threshold) owing to an increase in mitochondrial efficiency and pH/lactate regulation. We propose a new combination of hypoxic methods (which we suggest naming Living High-Training Low and High, interspersed; LHTLHi) combining LHTL (five nights at 3000 m and two nights at sea level) with training at sea level except for a few (2-3 per week) IHT sessions of supra-threshold training. This review also provides a rationale for combining the different hypoxic methods and suggests advances in both their implementation and their periodization during the yearly training programme of athletes competing in endurance, glycolytic or intermittent sports.
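As a toy illustration, the dose thresholds quoted in this abstract (living altitude, hours per day of exposure, total duration) can be wired into a simple checker for a proposed LHTL block; the function below is hypothetical and only restates the numbers above.

```python
# Illustrative sketch only: checks a proposed 'live high-train low' block against
# the rough erythropoietic thresholds quoted in the abstract. The function name
# and wiring are made up; the numbers come from the review's summary.
def lhtl_erythropoietic_check(altitude_m: float, hours_per_day: float, days: int) -> list[str]:
    notes = []
    if not 2200 <= altitude_m <= 2500:
        notes.append("living altitude outside the 2200-2500 m optimum for erythropoiesis")
    if hours_per_day < 12:
        notes.append("below the ~12 h/day minimum hypoxic dose for stimulating erythropoiesis")
    elif hours_per_day < 16:
        notes.append("haematological benefit likely smaller than with >16 h/day exposure")
    if days < 28:
        notes.append("shorter than the ~4 weeks suggested for accelerated erythropoiesis")
    return notes or ["meets the erythropoietic criteria quoted in the review"]

print(lhtl_erythropoietic_check(altitude_m=2500, hours_per_day=21, days=28))
print(lhtl_erythropoietic_check(altitude_m=1800, hours_per_day=10, days=18))
```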

Relevance:

40.00%

Publisher:

Abstract:

In this commentary, we argue that the term 'prediction' is overused when, in fact, following the foundational writings of de Finetti, the corresponding term should be 'inference'. In particular, we intend (i) to summarize and clarify the relevant subject matter on prediction from established statistical theory, and (ii) to point out the logic of this understanding with respect to practical uses of the term 'prediction'. Written from an interdisciplinary perspective, associating statistics and forensic science as an example, this discussion also connects to related fields such as medical diagnosis and other areas of application where reasoning based on scientific results is practiced in societally relevant contexts. This includes forensic psychology, which uses prediction as part of its vocabulary when dealing with matters that arise in the course of legal proceedings.

Relevance:

30.00%

Publisher:

Abstract:

The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation, and I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008), and argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the honeycomb. The second part of the thesis is a larger case study from physics: the genesis of general relativity (GR). Chapter 5 is a short introduction to the history, physics and mathematics relevant to the genesis of GR. Chapter 6 discusses the historical question of what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and propose some extensions of the IC. In chapter 8, I put the IC to work in the historical case study, the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen, and then analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics, namely the application of the Absolute Differential Calculus (ADC) to gravitational theory, meshes with the IC.
In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.

Relevance:

30.00%

Publisher:

Abstract:

Among PET radiotracers, FDG is widely accepted as an accurate oncology diagnostic tool, frequently helpful also in the evaluation of treatment response and in radiation therapy treatment planning for several cancer sites. By contrast, the reliability of Choline as a tracer for prostate cancer (PC) remains an object of debate for clinicians, including radiation oncologists. This review focuses on the available data about the potential impact of Choline-PET on the daily clinical practice of radiation oncologists managing PC patients. In summary, routine Choline-PET is not indicated for initial local T staging, but it appears better than conventional imaging for nodal staging and for all patients with suspected metastases. In these settings, Choline-PET has shown the potential to change patient management. A critical limitation remains spatial resolution, which restricts accuracy and reliability for small lesions. After a PSA rise, the choice of the trigger PSA value remains crucial. Indeed, the overall detection rate of Choline-PET increases significantly when the trigger PSA, or the doubling time, increases, but higher PSA levels are often a sign of metastatic spread, a contraindication for potentially curable local treatments such as radiation therapy. Even if several published data seem promising, the current role of PET in treatment planning for PC patients to be irradiated is still under investigation. Based on the available literature, all these issues are addressed and discussed in this review.

Relevance:

30.00%

Publisher:

Abstract:

Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. A dynamic game is said to exhibit perfect information whenever, at any point of the game, every player is fully informed about all choices that have been made so far; in the case of imperfect information, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. First, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. It is standard to formalize static games with the normal form and dynamic games with the extensive form. Second, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game, relative to various epistemic assumptions, constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players: before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states on which players base their decisions are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated; Aumann's sufficient conditions for backward induction are also presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Sufficient conditions for backward induction are then derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated; in particular, the epistemic-topological operator 'limit knowledge' is defined and some of its implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
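Since the thesis repeatedly refers to backward induction on dynamic games with perfect information, a minimal sketch of the textbook procedure on a toy two-player game tree may help; the data structure and payoffs below are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch: backward induction on a tiny perfect-information game tree.
from dataclasses import dataclass

@dataclass
class Node:
    player: int = None      # index of the player to move (None at a terminal node)
    children: dict = None   # action label -> child Node
    payoffs: tuple = None   # payoff profile at a terminal node

def backward_induction(node):
    """Payoff profile reached when every player chooses optimally at every node."""
    if node.payoffs is not None:
        return node.payoffs
    outcomes = {a: backward_induction(child) for a, child in node.children.items()}
    best_action = max(outcomes, key=lambda a: outcomes[a][node.player])
    return outcomes[best_action]

# Toy game: player 0 moves first, player 1 observes the move and responds.
game = Node(player=0, children={
    "L": Node(player=1, children={"l": Node(payoffs=(2, 1)), "r": Node(payoffs=(0, 0))}),
    "R": Node(player=1, children={"l": Node(payoffs=(1, 2)), "r": Node(payoffs=(3, 0))}),
})
print(backward_induction(game))   # -> (2, 1): player 0 plays L, player 1 replies l
```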

Relevance:

30.00%

Publisher:

Abstract:

This thesis suggests carrying on the philosophical work begun in Casati's and Varzi's seminal book Parts and Places, by extending their general reflections on the basic formal structure of spatial representation beyond mereotopology and absolute location to the question of perspectives and perspective-dependent spatial relations. We show how, on the basis of a conceptual analysis of such notions as perspective and direction, a mereotopological theory with convexity can express perspectival spatial relations in a strictly qualitative framework. We start by introducing a particular mereotopological theory, AKGEMT, and argue that it constitutes an adequate core for a theory of spatial relations. Two features of AKGEMT are of particular importance: AKGEMT is an extensional mereotopology, implying that sameness of proper parts is a sufficient and necessary condition for identity, and it allows for (lower-dimensional) boundary elements in its domain of quantification. We then discuss an extension of AKGEMT, AKGEMTS, which results from the addition of a binary segment operator whose interpretation is that of a straight line segment between mereotopological points. Based on existing axiom systems in standard point-set topology, we propose an axiomatic characterisation of the segment operator and show that it is strong enough to sustain complex properties of a convexity predicate and a convex hull operator. We compare our segment-based characterisation of the convex hull to Cohn et al.'s axioms for the convex hull operator, arguing that our notion of convexity is significantly stronger. The discussion of AKGEMTS defines the background theory of spatial representation on which the developments in the second part of this thesis are built. The second part deals with perspectival spatial relations in two-dimensional space, i.e., such relations as those expressed by 'in front of', 'behind', 'to the left/right of', etc., and develops a qualitative formalism for perspectival relations within the framework of AKGEMTS. Two main claims are defended in part 2: that perspectival relations in two-dimensional space are four-place relations of the kind R(x, y, z, w), to be read as x is R-related to y as z looks at w; and that these four-place structures can be satisfactorily expressed within the qualitative theory AKGEMTS. To defend these two claims, we start by arguing for a unified account of perspectival relations, thus rejecting the traditional distinction between 'relative' and 'intrinsic' perspectival relations. We present a formal theory of perspectival relations in the framework of AKGEMTS, deploying the idea that perspectival relations in two-dimensional space are four-place relations with a locational and a perspectival part, and show how this four-place structure leads to a unified framework of perspectival relations. Finally, we present a philosophical motivation for the idea that perspectival relations are four-place, cashing out the thesis that perspectives are vectorial properties and arguing that vectorial properties are relations between spatial entities. Using Fine's notion of "qua objects" for an analysis of points of view, we show how our four-place approach to perspectival relations compares to more traditional understandings.
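To make the segment-based notion of convexity concrete, the toy sketch below checks a finite grid region for convexity by testing whether the straight segment between every pair of its cells stays inside the region; the rasterisation helper is only an informal stand-in for the mereotopological segment operator, not a rendering of AKGEMTS.

```python
# Illustrative sketch: a region counts as convex here iff, for every pair of its
# cells, all grid cells met by the straight segment between them belong to it.
from itertools import combinations

def segment_cells(p, q, samples=200):
    """Grid cells crossed by the straight segment from p to q (by dense sampling)."""
    (x0, y0), (x1, y1) = p, q
    return {(round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))
            for t in (i / samples for i in range(samples + 1))}

def is_convex(region: set) -> bool:
    return all(segment_cells(p, q) <= region for p, q in combinations(region, 2))

square = {(x, y) for x in range(4) for y in range(4)}
l_shape = square - {(x, y) for x in range(2, 4) for y in range(2, 4)}
print(is_convex(square))   # True
print(is_convex(l_shape))  # False: segments across the notch leave the region
```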

Relevance:

30.00%

Publisher:

Abstract:

A critical feature of cooperative animal societies is the reproductive skew, a shorthand term for the degree to which a dominant individual monopolizes overall reproduction in the group. Our theoretical analysis of the evolutionarily stable skew in matrifilial (i.e., mother-daughter) societies, in which relatednesses to offspring are asymmetrical, predicts that reproductive skews in such societies should tend to be greater than those of semisocial societies (i.e., societies composed of individuals of the same generation, such as siblings), in which relatednesses to offspring are symmetrical. Quantitative data on reproductive skews in semisocial and matrifilial associations within the same species for 17 eusocial Hymenoptera support this prediction. Likewise, a survey of reproductive partitioning within 20 vertebrate societies demonstrates that complete reproductive monopoly is more likely to occur in matrifilial than in semisocial societies, also as predicted by the optimal skew model.
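As a purely illustrative aside (this is neither the skew index nor the optimal skew model used in the paper), the crudest way to quantify how strongly a dominant monopolizes reproduction is the share of the group's offspring attributable to the most productive individual:

```python
# Illustrative sketch only: the share of the group's offspring produced by the
# single most productive ("dominant") individual, as a crude monopolization measure.
def dominant_share(offspring_counts: list) -> float:
    total = sum(offspring_counts)
    return max(offspring_counts) / total if total else 0.0

print(dominant_share([9, 1, 0]))   # 0.9 -> high skew, near-complete monopoly
print(dominant_share([4, 3, 3]))   # 0.4 -> reproduction shared more evenly
```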

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Multiple electrode aggregometry (MEA) is a point-of-care test evaluating platelet function and the efficacy of platelet inhibitors. In MEA, the electrical impedance of whole blood is measured after addition of a platelet activator; reduced impedance implies platelet dysfunction or the presence of platelet inhibitors. MEA plays an increasingly important role in the management of perioperative platelet dysfunction. In vitro, midazolam, propofol, lidocaine and magnesium have known antiplatelet effects, and these may interfere with MEA interpretation. OBJECTIVE: To evaluate the extent to which MEA is modified in the presence of these drugs. DESIGN: An in-vitro study using blood collected from healthy volunteers. SETTING: Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland, 2010 to 2011. PATIENTS: Twenty healthy volunteers. INTERVENTION: Baseline MEA was measured using four activators: arachidonic acid, ADP, TRAP-6 and collagen. The study drugs were then added at three increasing, clinically relevant concentrations. MAIN OUTCOME MEASURE: MEA compared with baseline for each study drug. RESULTS: Midazolam, propofol and lidocaine showed no effect on MEA at any concentration. Magnesium at 2.5 mmol/l had a significant effect on the ADP and TRAP tests (31 ± 13 and 96 ± 39 AU, versus 73 ± 21 and 133 ± 28 AU at baseline, respectively), and a less pronounced effect at 1 mmol/l on the ADP test (39 ± 0 AU). CONCLUSION: Midazolam, propofol and lidocaine do not interfere with MEA measurement. In patients treated with high to normal doses of magnesium, MEA results for the ADP and TRAP tests should be interpreted with caution. TRIAL REGISTRATION: Clinicaltrials.gov (no. NCT01454427).

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is to discuss whether children have a capacity for deontic reasoning that is irreducible to mentalizing. The results of two experiments point to the existence of such non-mentalistic understanding and prediction of the behaviour of others. In Study 1, young children (3- and 4-year-olds) were told different versions of classic false-belief tasks, some of which were modified by the introduction of a rule or a regularity. When the task (a standard change of location task) included a rule, the performance of 3-year-olds, who fail traditional false-belief tasks, significantly improved. In Study 2, 3-year-olds proved to be able to infer a rule from a social situation and to use it in order to predict the behaviour of a character involved in a modified version of the false-belief task. These studies suggest that rules play a central role in the social cognition of young children and that deontic reasoning might not necessarily involve mind reading.

Relevance:

30.00%

Publisher:

Abstract:

Stoppa-Vaucher S, Ayabe T, Paquette J, Patey N, Francoeur D, Vuissoz J-M, Deladoëy J, Samuels ME, Ogata T, Deal CL. 46,XY gonadal dysgenesis: new SRY point mutation in two siblings with paternal germ line mosaicism. Familial recurrence risks are poorly understood in cases of de novo mutations. In the event of parental germ line mosaicism, recurrence risks can be higher than generally appreciated, with implications for genetic counseling and clinical practice. In the course of treating a female with pubertal delay and hypergonadotropic hypogonadism, we identified a new missense mutation in the SRY gene, leading to somatic feminization of this karyotypically normal XY individual. We tested a younger sister, despite her normal onset of puberty, who also possessed an XY karyotype and the same SRY mutation. Imaging studies in the sister revealed an ovarian tumor, which was removed. DNA from the father's blood possessed the wild-type SRY sequence, and paternity testing was consistent with the given family structure. A brother was 46,XY with a wild-type SRY sequence, strongly suggesting paternal Y-chromosome germ line mosaicism for the mutation. In disorders of sexual development (DSDs), early diagnosis is critical for optimal psychological development of the affected patients. In this case, preventive karyotypic screening allowed early diagnosis of a gonadal tumor in the sibling prior to the age of normal puberty. Our results suggest that cytological or molecular diagnosis should be offered to siblings of an affected DSD individual.

Relevance:

30.00%

Publisher:

Abstract:

The present paper studies the probability of ruin of an insurer when excess of loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings, and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
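The 'stochastic simulation of the risk process' mentioned as the computational alternative can be sketched for the plain Cramér-Lundberg model, i.e. without the excess of loss/reinstatement layer and without the paper's recursive algorithm; all parameter values below are illustrative.

```python
# Illustrative Monte Carlo sketch: finite-time ruin probability in the classical
# Cramer-Lundberg model. Surplus U(t) = u + c*t - sum of claims up to t, with
# Poisson claim arrivals (rate lam) and exponential claim sizes (mean mean_claim).
import random

def finite_time_ruin_probability(u=10.0, c=1.2, lam=1.0, mean_claim=1.0,
                                 horizon=50.0, n_paths=20_000, seed=1):
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            wait = rng.expovariate(lam)            # time until the next claim
            if t + wait > horizon:
                break                              # survived the time horizon
            t += wait
            surplus += c * wait                    # premiums earned since last claim
            surplus -= rng.expovariate(1.0 / mean_claim)   # claim payment
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

print(finite_time_ruin_probability())
```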

Relevance:

30.00%

Publisher:

Abstract:

In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine a critical region; if the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the difference observed, or one more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
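The distinction can be made concrete with a small worked example: for a one-sample z-test, the same statistic yields both a Fisher-style p value and a Neyman-Pearson accept/reject decision against a pre-chosen critical region. The numbers below are illustrative only.

```python
# Illustrative sketch: one-sample z-test of H0: mu = mu0 with known sigma.
# Fisher: report the p value as a graded measure of evidence against H0.
# Neyman-Pearson: fix alpha in advance, derive a critical region, reject iff the
# statistic falls inside it.
from statistics import NormalDist

def z_test(sample_mean, sigma, n, mu0=0.0, alpha=0.05):
    z = (sample_mean - mu0) / (sigma / n ** 0.5)
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p value
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)        # critical region: |z| > z_crit
    return {"z": z, "p_value": p_value, "reject_H0_at_alpha": abs(z) > z_crit}

print(z_test(sample_mean=0.4, sigma=1.0, n=30))
# Note: the p value is computed assuming H0 is true; it is NOT the probability
# that H0 is true.
```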