120 results for expected utility theory


Relevance:

20.00%

Publisher:

Abstract:

Purpose: To evaluate the feasibility, determine the optimal b-value, and assess the utility of 3-T diffusion-weighted MR imaging (DWI) of the spine in differentiating benign from pathologic vertebral compression fractures. Methods and Materials: Twenty patients with 38 vertebral compression fractures (24 benign, 14 pathologic) and 20 controls (total: 23 men, 17 women, mean age 56.2 years) were included from December 2010 to May 2011 in this IRB-approved prospective study. MR imaging of the spine was performed on a 3-T unit with T1-weighted, fat-suppressed T2-weighted, gadolinium-enhanced fat-suppressed T1-weighted and zoomed-EPI (2D RF excitation pulse combined with reduced field-of-view single-shot echo-planar readout) diffusion-weighted (b-values: 0, 300, 500 and 700 s/mm²) sequences. Two radiologists independently assessed zoomed-EPI image quality in random order using a 4-point scale (1 = excellent to 4 = poor). They subsequently measured apparent diffusion coefficients (ADCs) in normal vertebral bodies and compression fractures, in consensus. Results: Lower b-values correlated with better image quality scores, with significant differences between b = 300 (mean ± SD = 2.6 ± 0.8), b = 500 (3.0 ± 0.7) and b = 700 (3.6 ± 0.6) (all p < 0.001). Mean ADCs of normal vertebral bodies (n = 162) were 0.23, 0.17 and 0.11 × 10⁻³ mm²/s with b = 300, 500 and 700 s/mm², respectively. In contrast, mean ADCs were 0.89, 0.70 and 0.59 × 10⁻³ mm²/s for benign vertebral compression fractures and 0.79, 0.66 and 0.51 × 10⁻³ mm²/s for pathologic fractures with b = 300, 500 and 700 s/mm², respectively. No significant difference was found between ADCs of benign and pathologic fractures. Conclusion: 3-T DWI of the spine is feasible, and lower b-values (300 s/mm²) are recommended. However, our preliminary results show no advantage of DWI in differentiating benign from pathologic vertebral compression fractures.
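Although the abstract does not spell out the computation, ADC values like those reported are conventionally derived from the monoexponential DWI model S(b) = S0 · exp(−b · ADC), fitted log-linearly over the acquired b-values. A minimal Python sketch; only the b-values come from the abstract, and the signal intensities are invented for illustration:

```python
import numpy as np

# Monoexponential DWI model: S(b) = S0 * exp(-b * ADC).
# A log-linear least-squares fit over the acquired b-values yields the ADC.
b_values = np.array([0.0, 300.0, 500.0, 700.0])    # s/mm^2 (from the abstract)
signals = np.array([1000.0, 790.0, 705.0, 640.0])  # arbitrary units (hypothetical)

# ln S(b) = ln S0 - b * ADC, so a degree-1 fit returns -ADC as the slope.
slope, intercept = np.polyfit(b_values, np.log(signals), 1)
adc = -slope  # mm^2/s

print(f"ADC = {adc:.2e} mm^2/s")  # ~6.4e-04 mm^2/s for these made-up signals
```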

Relevance:

20.00%

Publisher:

Abstract:

What genotype should the scientist specify when conducting a database search to try to find the source of a low-template-DNA (lt-DNA) trace? In answering this question, the scientist makes a decision. Here, we approach this decision problem from a normative point of view by defining a decision-theoretic framework for answering it for one locus. This framework combines the probability distribution describing the uncertainty over the possible genotypes of the trace's donor with a loss function describing the scientist's preferences concerning the false exclusions and false inclusions that may result from the database search. According to this approach, the scientist should choose the genotype designation that minimizes the expected loss. To illustrate the results produced by this approach, we apply it to two hypothetical cases: (1) the case of observing one peak for allele x_i on a single electropherogram, and (2) the case of observing one peak for allele x_i on one replicate, and a pair of peaks for alleles x_i and x_j, i ≠ j, on a second replicate. Given that the probabilities of allele drop-out are defined as functions of the observed peak heights, the threshold values marking the turning points at which the scientist should switch from one designation to another are derived in terms of the observed peak heights. For each case, sensitivity analyses show the impact of the model's parameters on these threshold values. The results support the conclusion that the procedure should not rely on a single threshold value for making this decision for all alleles, all loci and in all laboratories.
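The decision rule at the core of this framework is compact: given a posterior over the donor's possible genotypes and a loss for each (designation, true genotype) pair, pick the designation with minimal expected loss. A minimal Python sketch; the probabilities and losses below are invented, whereas the paper derives drop-out probabilities from the observed peak heights:

```python
# Minimal expected-loss decision over genotype designations (one locus).
# All numbers are hypothetical illustrations, not values from the paper.

# Posterior over the trace donor's genotype after observing one peak for
# allele x_i (a second allele may have dropped out).
posterior = {
    ("xi", "xi"): 0.7,  # homozygote x_i,x_i
    ("xi", "x?"): 0.3,  # heterozygote x_i with a dropped-out allele
}

# Candidate search designations; "F" is a wildcard for a possibly dropped allele.
designations = ["xi,xi", "xi,F"]

def loss(designation, true_genotype):
    """Invented loss: penalize false exclusions more than false inclusions."""
    if designation == "xi,xi" and true_genotype == ("xi", "x?"):
        return 10.0  # searching x_i,x_i would falsely exclude the true donor
    if designation == "xi,F" and true_genotype == ("xi", "xi"):
        return 1.0   # the wildcard search risks additional false inclusions
    return 0.0       # designation compatible with the true genotype

expected_loss = {
    d: sum(p * loss(d, g) for g, p in posterior.items()) for d in designations
}
best = min(expected_loss, key=expected_loss.get)
print(expected_loss, "->", best)  # {'xi,xi': 3.0, 'xi,F': 0.7} -> 'xi,F'
```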

Relevance:

20.00%

Publisher:

Abstract:

Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far. In the case of imperfect information, by contrast, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. First, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Second, solution concepts are developed to solve models of games, in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability.

Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states on which players base their decisions are explicitly expressible in an epistemic framework.

In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated; Aumann's sufficient conditions for backward induction are also presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness; sufficient conditions for backward induction are then derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated; in particular, the epistemic-topological operator limit knowledge is defined and some implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
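Backward induction, for which the thesis studies epistemic sufficient conditions, is easy to state operationally: solve a perfect-information game tree from the leaves up, letting the player who moves at each node pick the continuation that maximizes her own payoff. A minimal Python sketch on a hypothetical two-player tree (not an example taken from the thesis):

```python
# Backward induction on a finite perfect-information game tree.
# The tree below is hypothetical; payoffs are (player 0, player 1) pairs.

def backward_induction(node):
    """Return the payoff vector reached under backward induction."""
    if "payoff" in node:      # leaf: nothing left to choose
        return node["payoff"]
    player = node["player"]   # index of the player moving at this node
    # Solve each subtree recursively; the mover picks the child whose
    # induced outcome maximizes her own coordinate of the payoff vector.
    outcomes = [backward_induction(child) for child in node["children"]]
    return max(outcomes, key=lambda payoff: payoff[player])

game = {
    "player": 0,
    "children": [
        {"payoff": (2, 0)},                  # player 0 ends the game
        {"player": 1, "children": [          # player 0 continues; player 1 moves
            {"payoff": (1, 3)},
            {"payoff": (3, 1)},
        ]},
    ],
}

# (2, 0): anticipating that player 1 would pick (1, 3), player 0 ends the game.
print(backward_induction(game))
```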

Relevance:

20.00%

Publisher:

Abstract:

Introduction: In my thesis I argue that economic policy is about both economics and politics. Consequently, analysing and understanding economic policy ideally has at least two parts. The economics part is centered on the expected impact of a specific policy on the real economy, both in terms of efficiency and equity; its insights indicate the direction in which the fine-tuning of economic policies should go. However, the fine-tuning of economic policies will most likely be subject to political constraints. In the politics part, therefore, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies.

The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue: by 2009, 22 countries were operating flat-rate income tax systems, as were 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk taking. The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, it is not the flattening of tax schedules that is key to successful tax reform, but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births.

The second part of my thesis, corresponding to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first chapter. Both economists and political scientists have done extensive research on how economic policies are formed, and researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach, while abstracting from institutional details. In contrast, political scientists' frameworks are generally richer in institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view, but I try to borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality.

In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of special interest groups' lobbying. Standard political economy theory treats the government as a single institutional actor which sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports. These models, however, lack important (institutional) features of reality. In my model, I therefore split the government into a legislative and an executive branch, both of which can be lobbied by special interest groups. Furthermore, the legislature has the option to delegate its trade policy authority to the executive, and I allow the executive to compensate the legislature in exchange for delegation. Despite ample anecdotal evidence, bargaining over the delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation has an impact on policy formation in that it leads to lower equilibrium tariffs compared to a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislature at the expense of the lobbies. The findings of this model can therefore shed light on why the U.S. Congress often delegates trade policy authority to the executive.

In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on policy-making at the level of the individual politician, exploring how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model of the second chapter show how campaign contributions from lobbies to politicians can influence economic policies, and there exists an abundant empirical literature that analyses ties between firms and politicians based on campaign contributions. However, the evidence on the impact of campaign contributions is mixed at best. In our paper, we analyse an alternative channel of influence in the form of personal connections between politicians and firms through board membership. We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect when politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.

Relevance:

20.00%

Publisher:

Abstract:

Sampling issues represent a topic of ongoing interest to the forensic science community, essentially because of their crucial role in laboratory planning and working protocols. For this purpose, the forensic literature has described thorough (Bayesian) probabilistic sampling approaches, which are now widely implemented in practice. They allow one, for instance, to obtain probability statements that parameters of interest (e.g., the proportion of a seizure of items that present particular features, such as an illegal substance) satisfy particular criteria (e.g., a threshold or an otherwise limiting value). Currently, there are many approaches that allow one to derive probability statements relating to a population proportion, but questions of how a forensic decision maker - typically a client of a forensic examination or a scientist acting on behalf of a client - actually ought to decide about a proportion or a sample size have remained largely unexplored to date. The research presented here addresses methodology from decision theory that may help to cope usefully with the wide range of sampling issues typically encountered in forensic science applications. The procedures explored in this paper enable scientists to address a variety of concepts, such as the (net) value of sample information, the (expected) value of sample information, and the (expected) decision loss. All of these aspects relate directly to questions that are regularly encountered in casework. Besides probability theory and Bayesian inference, the proposed approach requires some additional elements from decision theory that may increase the effort needed for practical implementation. In view of this challenge, the present paper emphasises the merits of graphical modelling concepts, such as decision trees and Bayesian decision networks, which can support forensic scientists in applying the methodology in practice. How this may be achieved is illustrated with several examples. The graphical devices invoked here also serve the purpose of supporting the discussion of the similarities, differences and complementary aspects of existing Bayesian probabilistic sampling criteria and the decision-theoretic approach proposed throughout this paper.
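To make one of these quantities concrete, the Python sketch below estimates the expected value of sample information (EVSI) for a stylized seizure problem: deciding whether the proportion of items bearing an illegal substance exceeds 0.5, under a Beta prior. The prior, losses and threshold are invented for illustration and are not taken from the paper, which works instead with decision trees and Bayesian decision networks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setup: decide whether the proportion theta of illegal items
# in a seizure exceeds 0.5, starting from a Beta(1, 1) prior. Losses invented.
a0, b0 = 1.0, 1.0   # prior Beta parameters
L_OVER = 10.0       # loss for declaring "above 0.5" when theta < 0.5
L_UNDER = 25.0      # loss for declaring "below 0.5" when theta >= 0.5

def bayes_risk(a, b):
    """Expected loss of the optimal decision under a Beta(a, b) belief."""
    p_below = stats.beta.cdf(0.5, a, b)    # P(theta < 0.5)
    return min(L_OVER * p_below,           # risk of declaring "above"
               L_UNDER * (1.0 - p_below))  # risk of declaring "below"

def preposterior_risk(n, sims=20_000):
    """Monte Carlo expected Bayes risk after inspecting n items."""
    theta = rng.beta(a0, b0, size=sims)    # draw states of nature from the prior
    k = rng.binomial(n, theta)             # simulated counts of positive items
    return np.mean([bayes_risk(a0 + ki, b0 + n - ki) for ki in k])

prior_risk = bayes_risk(a0, b0)            # risk of deciding without sampling
for n in (5, 20, 50):
    evsi = prior_risk - preposterior_risk(n)
    print(f"n = {n:3d}: EVSI ≈ {evsi:.2f}")
```

Comparing the EVSI against the cost of examining n items then yields a net value of sample information, which is the kind of trade-off the decision-theoretic sampling criteria formalize.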

Relevance:

20.00%

Publisher:

Abstract:

The study of sex allocation in social Hymenoptera (ants, bees, and wasps) provides an excellent opportunity for testing kin-selection theory and studying conflict resolution. A queen-worker conflict over sex allocation is expected because workers are more related to sisters than to brothers, whereas queens are equally related to daughters and sons. If workers fully control sex allocation, split sex ratio theory predicts that colonies with relatively high or low relatedness asymmetry (the relatedness of workers to females divided by the relatedness of workers to males) should specialize in females or males, respectively. We performed a meta-analysis to assess the magnitude of adaptive sex allocation biasing by workers and degree of support for split sex ratio theory in the social Hymenoptera. Overall, variation in relatedness asymmetry (due to mate number or queen replacement) and variation in queen number (which also affects relatedness asymmetry in some conditions) explained 20.9% and 5% of the variance in sex allocation among colonies, respectively. These results show that workers often bias colony sex allocation in their favor as predicted by split sex ratio theory, even if their control is incomplete and a large part of the variation among colonies has other causes. The explanatory power of split sex ratio theory was close to that of local mate competition and local resource competition in the few species of social Hymenoptera where these factors apply. Hence, three of the most successful theories explaining quantitative variation in sex allocation are based on kin selection.
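For concreteness, the standard haplodiploid relatedness values behind the queen-worker conflict (textbook values, not estimates from this meta-analysis) give, for a colony headed by a single once-mated queen:

```latex
% Relatedness asymmetry (RA) under haplodiploidy, single once-mated queen:
\mathrm{RA} = \frac{r_{\text{worker} \to \text{sisters}}}{r_{\text{worker} \to \text{brothers}}}
            = \frac{3/4}{1/4} = 3,
\qquad
\frac{r_{\text{queen} \to \text{daughters}}}{r_{\text{queen} \to \text{sons}}}
            = \frac{1/2}{1/2} = 1.
```

Multiple mating or queen replacement lowers workers' relatedness to the colony's females, reducing RA; it is precisely this among-colony variation that split sex ratio theory predicts workers should exploit.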

Relevance:

20.00%

Publisher:

Abstract:

An African oxalogenic tree, the iroko tree (Milicia excelsa), has the property of enhancing carbonate precipitation in tropical oxisols, where such accumulations are not expected given the acidic conditions in these types of soils. This uncommon process is linked to the oxalate-carbonate pathway, which increases soil pH through oxalate oxidation. In order to investigate the oxalate-carbonate pathway in the iroko system, fluxes of matter have been identified, described, and evaluated from the field scale down to the microscopic scale. In the first centimeters of the soil profile, decay of the organic matter releases whewellite crystals, mainly through the action of termites and saprophytic fungi. A concomitant flux of carbonate formed in wood tissues also contributes to the carbonate flux and is identified as a direct consequence of wood feeding by termites. Nevertheless, calcite biomineralization of the tree is not a consequence of in situ oxalate consumption, but rather is related to oxalate oxidation in the upper part of the soil. The consequence of this oxidation is the presence of carbonate ions in the soil solution pumped through the roots, leading to preferential mineralization of the roots and the trunk base. An ideal scenario for iroko biomineralization and soil carbonate accumulation starts with oxalatization: as the iroko tree grows, the flux of organic matter to the soil constitutes the litter, and an oxalate pool forms on the forest floor. Then, wood-rotting agents (mainly termites, saprophytic fungi, and bacteria) release significant amounts of oxalate crystals from decaying plant tissues. In addition, some of these agents (e.g. fungi) are themselves producers of oxalate. Both processes contribute to a soil pool of "available" oxalate crystals. Oxalate consumption by oxalotrophic bacteria can then start. Carbonate and calcium ions present in the soil solution represent the end products of the oxalate-carbonate pathway; the solution is pumped through the roots, leading to carbonate precipitation. The main pools of carbon are clearly identified as the organic matter (the tree and its organic products), the oxalate crystals, and the various carbonate features. A functional model based on field observations and diagenetic investigations, with δ¹³C signatures of the various compartments involved in the local carbon cycle, is proposed. It suggests that the iroko ecosystem can act as a long-term carbon sink, as long as the calcium source is related to non-carbonate rocks. Consequently, this carbon sink, driven by the oxalate-carbonate pathway around an iroko tree, constitutes a true carbon-trapping ecosystem as defined by ecological theory.
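The net chemistry behind this pH-driven sink can be summarized by the commonly cited overall reaction for the oxidation of calcium oxalate to calcite (a textbook summary, not an equation from this study):

```latex
% Net oxidation of calcium oxalate (whewellite, anhydrous formula) to calcite
\mathrm{CaC_2O_4} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; \mathrm{CaCO_3} + \mathrm{CO_2}
```

One of the two oxalate carbons is thus retained in the mineral phase while the other is respired, which is consistent with the long-term carbon sink described above, provided the calcium derives from non-carbonate rocks.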

Relevance:

20.00%

Publisher:

Abstract:

In principle, we should be glad that Eric Kmiec and his colleagues published in Science's STKE (1) a detailed experimental protocol of their gene repair method (2, 3). However, a careful reading of their contribution raises more doubts about the method. The research published in Science five years ago by Kmiec and his colleagues was said to demonstrate that chimeric RNA-DNA oligonucleotides could correct the mutation responsible for sickle cell anemia with 50% efficiency (4). Such a remarkable result prompted many laboratories to attempt to replicate the research or apply the method to their own systems. However, if the method worked at all, which it rarely did, the achieved efficiency was usually lower by several orders of magnitude. Now, in the Science STKE protocol, we are given crucial information about the method and about why it is supposedly so important to use these expensive chimeric RNA-DNA constructs. In the introduction we are told that an RNA-DNA duplex is more stable than a DNA-DNA duplex and so extends the half-life of the complexes formed between the targeted DNA and the chimeric RNA-DNA oligonucleotides. This logical explanation, however, conflicts with the statement in the section entitled "Transfection with Oligonucleotides and Plasmid DNA" that Kmiec and colleagues have recently demonstrated that classical single-stranded DNA oligonucleotides with a few protective phosphorothioate linkages have a "gene repair conversion frequency rivaling that of the RNA/DNA chimera". Indeed, the research cited for that result actually states that single-stranded DNA oligonucleotides are several-fold (3.7-fold) more efficient than the RNA-DNA chimeric constructs (5). If that is the case, it raises the question of why Kmiec and colleagues emphasize the importance of the RNA in their original chimeric constructs: their own new results show that modified single-stranded DNA oligonucleotides are more effective than the expensive RNA-DNA hybrids. Moreover, the current efficiency of gene repair by RNA-DNA hybrids, according to Kmiec and colleagues in their recent paper, is only 4 × 10⁻⁴, even after several hours of pre-selection permitting multiplication of the bacterial cells carrying the corrected plasmid (5). This efficiency is much lower than the 50% value reported five years ago, but it is assuredly much closer to reality.

Relevance:

20.00%

Publisher:

Abstract:

A growing number of studies have addressed the relationship between theory of mind (TOM) and executive functions (EF) in patients with acquired neurological pathology. In order to provide a global overview of the main findings, we conducted a systematic review of group studies in which we aimed to (1) evaluate the patterns of impaired and preserved abilities of both TOM and EF in groups of patients with acquired neurological pathology and (2) investigate the existence of particular relations between different EF domains and TOM tasks. The search was conducted in Pubmed/Medline. A total of 24 articles met the inclusion criteria. We considered for analysis classical, clinically accepted TOM tasks (first- and second-order false belief stories, the Faux Pas test, Happé's stories, the Mind in the Eyes task, and cartoon tasks) and EF domains (updating, shifting, inhibition, and access). The review suggests that (1) EF and TOM appear tightly associated, although the few dissociations observed suggest they cannot be reduced to a single function; (2) no executive subprocess could be specifically associated with TOM performance; and (3) the first-order false belief task and the Happé story task seem to be less sensitive to neurological pathologies and less associated with EF. Even though the analysis of the reviewed studies demonstrates a close relationship between TOM and EF in patients with acquired neurological pathology, the nature of this relationship must be further investigated. Studies investigating the ecological consequences of TOM and EF deficits, as well as intervention research, may bring further contributions to this question.