Abstract:
The ASTM standards on Writing Ink Identification (ASTM 1789-04) and on Writing Ink Comparison (ASTM 1422-05) are the most up-to-date guidelines that have been published on the forensic analysis of ink. These documents aim to cover most aspects of the forensic analysis of ink evidence, from the analysis of ink samples and the comparison of their analytical profiles (with the aim of differentiating them or not), through to the interpretation of the results of these examinations in a forensic context. In recent years, significant advances have been made in the technology available to forensic scientists, in the quality assurance requirements imposed on them, and in the frameworks used to interpret forensic evidence. This article reviews the two standards in the light of these developments and proposes some practical improvements in terms of the standardization of the analyses, the comparison of ink samples, and the interpretation of ink examinations. Some of these suggestions have already been incorporated in a DHS-funded project aimed at creating a digital ink library for the United States Secret Service.
Abstract:
Background: As imatinib pharmacokinetics are highly variable, plasma levels differ widely between patients on the same dosage. Retrospective studies in chronic myeloid leukemia (CML) patients showed significant correlations between low levels and suboptimal response, and between high levels and poor tolerability. Monitoring of plasma levels is thus increasingly advised, targeting trough concentrations of 1000 μg/L and above. Objectives: Our study was launched to assess the clinical usefulness of systematic imatinib TDM in CML patients. The present preliminary evaluation questions the appropriateness of dosage adjustment following plasma level measurement to reach the recommended trough level, while allowing an interval of 4-24 h after the last drug intake for blood sampling. Methods: Initial blood samples from the first 9 patients in the intervention arm were obtained 4-25 h after the last dose. Trough levels in 7 patients were predicted to lie significantly away from the target (6 <750 μg/L, and 1 >1500 μg/L with poor tolerance), based on a Bayesian approach using a population pharmacokinetic model. Individual dosage adjustments were implemented in 5 patients, who had a control measurement 1-4 weeks after the dosage change. Predicted trough levels were then compared with the earlier model-based extrapolations. Results: Before dosage adjustment, observed concentrations extrapolated to trough ranged from 359 to 1832 μg/L (median 710; mean 804, CV 53%) in the 9 patients. After dosage adjustment they were expected to lie between 720 and 1090 μg/L (median 878; mean 872, CV 13%). Observed levels of the 5 control measurements extrapolated to trough actually ranged from 710 to 1069 μg/L (median 1015; mean 950, CV 16%) and differed from the model-based predictions by 21 to 241 μg/L in absolute value (median 175; mean 157, CV 52%). Differences between observed and predicted trough levels were larger when the interval between the last drug intake and sampling was very short (~4 h). Conclusion: These preliminary results suggest that TDM of imatinib using a Bayesian interpretation is able to bring trough levels closer to 1000 μg/L (with the CV decreasing from 53% to 16%). While this may simplify blood collection in daily practice, as samples do not have to be drawn exactly at trough, the largest possible interval since the last drug intake remains preferable. This encourages the evaluation of the clinical benefit of a routine TDM intervention in CML patients, which the randomized Swiss I-COME study aims to do.
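The trough predictions above come from a Bayesian approach built on a population pharmacokinetic model. As a much simpler illustration of the underlying extrapolation step only (not the study's model), the sketch below assumes mono-exponential decay between the sampling time and the end of a once-daily dosing interval; the function name, the default 18 h half-life, and the example values are hypothetical.

```python
import math

def extrapolate_to_trough(conc_measured, t_sample_h, t_trough_h=24.0, half_life_h=18.0):
    """Extrapolate a measured imatinib concentration to the end of the dosing
    interval, assuming mono-exponential (first-order) decay from the sampling
    time onward.

    conc_measured : concentration measured t_sample_h after the last dose (ug/L)
    t_sample_h    : sampling time after the last dose (h)
    t_trough_h    : end of the dosing interval; 24 h for once-daily dosing
    half_life_h   : assumed terminal elimination half-life (hypothetical value)
    """
    k_el = math.log(2) / half_life_h          # first-order elimination rate constant
    remaining = t_trough_h - t_sample_h       # time left until trough
    return conc_measured * math.exp(-k_el * remaining)

# Example: 1500 ug/L measured 6 h after the last dose of a once-daily regimen
print(round(extrapolate_to_trough(1500.0, 6.0)))   # -> 750 ug/L with the assumed half-life
```

In the study itself, the individual elimination parameters are not fixed a priori but estimated by Bayesian updating against the population model, which is what allows sampling at flexible times after the last dose.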
Abstract:
INTRODUCTION: In November 2009, the "3rd Summit on Osteoporosis-Central and Eastern Europe (CEE)" was held in Budapest, Hungary. The conference aimed to tackle issues in osteoporosis management in CEE identified during the second CEE summit in 2008 and to agree on approaches that will allow the most efficient and cost-effective diagnosis and therapy of osteoporosis in CEE countries in the future. DISCUSSION: The following topics were covered: the past year's experience with the implementation of FRAX® into local diagnostic algorithms; causes of secondary osteoporosis as a FRAX® risk factor; bone turnover markers to estimate bone loss, assess fracture risk, or monitor therapies; the role of quantitative ultrasound in osteoporosis management; compliance and economic aspects of osteoporosis; and osteoporosis and genetics. The consensus and recommendations developed on these topics are summarised in the present progress report. CONCLUSION: Lectures on up-to-date data of topical interest, the distinct regional provenances of the participants, a special focus on practical aspects, an intense mutual exchange of individual experiences, strong interest in cross-border cooperation, and the readiness to learn from each other contributed considerably to the establishment of these recommendations. The "4th Summit on Osteoporosis-CEE", to be held in Prague, Czech Republic, in December 2010, will reveal whether these recommendations prove of value when implemented in clinical routine or whether further improvements are still required.
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. A dynamic game is said to exhibit perfect information whenever, at any point of the game, every player is fully informed about all choices that have been made so far; in the case of imperfect information, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which describes a game rather sparsely, merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in more detail as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games in the sense of identifying the choices that should be taken by rational players. The ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions, as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players: before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states, on which players base their decisions, are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness; sufficient conditions for backward induction are then derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some of its implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened, in the sense that possible contexts are provided in which agents can indeed agree to disagree.
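As background for readers unfamiliar with backward induction, which Chapters 1 and 3 study from an epistemic angle, the sketch below solves a finite dynamic game with perfect information by working the game tree from the leaves upward. It is an illustrative implementation using its own minimal data structures, not material from the thesis; the toy game at the end is hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, Tuple, Union

# A terminal node carries a utility vector (one payoff per player);
# a decision node names the player to move and maps actions to subtrees.
@dataclass
class Terminal:
    payoffs: Tuple[float, ...]

@dataclass
class Decision:
    player: int                                          # index into the payoff vector
    actions: Dict[str, Union["Terminal", "Decision"]]

def backward_induction(node):
    """Return (payoff vector, chosen path) of the backward-induction solution
    of a finite dynamic game with perfect information."""
    if isinstance(node, Terminal):
        return node.payoffs, []
    best_action, best_payoffs, best_path = None, None, None
    for action, subtree in node.actions.items():
        payoffs, path = backward_induction(subtree)      # solve the subgame first
        if best_payoffs is None or payoffs[node.player] > best_payoffs[node.player]:
            best_action, best_payoffs, best_path = action, payoffs, path
    return best_payoffs, [best_action] + best_path

# A two-stage example: player 0 moves first, player 1 responds.
game = Decision(0, {
    "In":  Decision(1, {"Left": Terminal((2, 1)), "Right": Terminal((0, 0))}),
    "Out": Terminal((1, 2)),
})
print(backward_induction(game))   # ((2, 1), ['In', 'Left'])
```

At each decision node the moving player selects the action whose already-solved subgame yields them the highest payoff; epistemic sufficient conditions for backward induction, such as Aumann's, aim to justify exactly this procedure in terms of the players' knowledge and beliefs.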
Abstract:
Purpose: An increase in the apparent diffusion coefficient (ADC) of treated hepatic malignancies compared with pre-therapeutic values has been interpreted as a sign of treatment success; however, the variability of ADC measurements remains unknown. Furthermore, ADC has usually been measured in the whole lesion, whereas measurements should probably be centered on the area with the most restricted diffusion (MRDA), as it represents potential residual tumor. Our objective was to compare the inter- and intraobserver variability of ADC measurements in the whole lesion and in the MRDA. Material and methods: Forty patients previously treated with chemoembolization or radiofrequency ablation were evaluated (20 on 1.5T and 20 on 3.0T). After consensual agreement on the best ADC image, two readers measured the ADC values using separate regions of interest that included the whole lesion and the whole MRDA without exceeding their borders. The same measurements were repeated two weeks later. The Spearman test and the Bland-Altman method were used. Results: Interobserver correlation for ADC measurements in the whole lesion and in the MRDA was 0.962 and 0.884, respectively; intraobserver correlation was 0.992 and 0.979, respectively. Interobserver limits of variability (x10^-3 mm^2/s) ranged from -0.25 to +0.28 in the whole lesion and from -0.51 to +0.46 in the MRDA; intraobserver limits were, respectively, -0.25 to +0.24 and -0.43 to +0.47. Conclusion: We observed good inter- and intraobserver correlation in ADC measurements. Nevertheless, a limited variability does exist, and it should be considered when interpreting ADC values of hepatic malignancies.
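The limits of variability quoted above are Bland-Altman style limits of agreement. A minimal sketch of that computation, run on hypothetical ADC values rather than the study's data, is shown below.

```python
import numpy as np

def bland_altman_limits(adc_reader1, adc_reader2):
    """Mean bias and 95% limits of agreement between two series of ADC
    measurements (same lesions, two readers or two reading sessions)."""
    a = np.asarray(adc_reader1, dtype=float)
    b = np.asarray(adc_reader2, dtype=float)
    diff = a - b
    bias = diff.mean()                 # systematic difference between readers
    sd = diff.std(ddof=1)              # spread of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical ADC values (x10^-3 mm^2/s) for five lesions, two readers
reader1 = [1.10, 1.35, 0.95, 1.60, 1.20]
reader2 = [1.05, 1.40, 1.00, 1.55, 1.18]
bias, lower, upper = bland_altman_limits(reader1, reader2)
print(f"bias={bias:.3f}, limits of agreement=[{lower:.3f}, {upper:.3f}]")
```

Assuming the standard Bland-Altman definition, the 1.96 x SD bounds around the mean difference correspond to the interobserver and intraobserver limits of variability reported in the abstract.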
Abstract:
For the general practitioner to be able to prescribe optimal therapy to individual hypertensive patients, he needs accurate information on the therapeutic agents he is going to administer, as well as practical treatment strategies. The information on drugs and drug combinations has to be applicable to the treatment of individual patients and not just to patient study groups. A basic requirement is knowledge of the dose-response relationship of each compound, in order to choose the optimal therapeutic dose. Contrary to general assumption, this key information is difficult to obtain and often not available to the physician for many years after the marketing of a drug. As a consequence, excessive doses are often used. Furthermore, the physician needs comparative data on the various antihypertensive drugs that are applicable to the treatment of individual patients. In order to minimize potential side effects due to unnecessary combinations of compounds, the strategy of sequential monotherapy is proposed, with the goal of treating as many patients as possible with monotherapy at optimal doses. More drug trials of crossover design and more individualized analyses of the results are badly needed to provide the physician with information he can use in daily practice. In this time of continuous, intensive development of new antihypertensive agents, much could be gained in enhanced efficacy and reduced incidence of side effects by taking a closer look at the drugs already available and using them more appropriately in individual patients.
Abstract:
In the past few decades, the rise of criminal, civil and asylum cases involving young people lacking valid identification documents has generated an increase in the demand for age estimation. The chronological age, or the probability that an individual is older or younger than a given age threshold, is generally estimated by means of statistical methods based on observations of specific physical attributes. Among these statistical methods, those developed in the Bayesian framework allow users to provide coherent and transparent assignments which fulfill forensic and medico-legal purposes. The application of the Bayesian approach is facilitated by the use of probabilistic graphical tools, such as Bayesian networks. The aim of this work is to test the performance of the Bayesian network for age estimation recently presented in the scientific literature in classifying individuals as older or younger than 18 years of age. For these exploratory analyses, a sample related to the ossification status of the medial clavicular epiphysis, available in the scientific literature, was used. The results obtained in the classification are promising: in the criminal context, the Bayesian network achieved, on average, a rate of correct classifications of approximately 97%, whilst in the civil context, the rate is, on average, close to 88%. These results encourage further development and testing of the method in order to support its practical application in casework.
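The published network is more elaborate than what can be shown here, but the core inference it performs is a Bayesian update of the age class given an observed ossification stage. The sketch below is a minimal two-node illustration with placeholder priors and likelihoods; none of the numbers come from the cited clavicular sample.

```python
# Minimal two-node Bayesian network: age class -> ossification stage.
# All probabilities below are hypothetical placeholders, not the published model.
prior = {"under_18": 0.5, "over_18": 0.5}          # prior on the age class

# P(stage | age class) for the medial clavicular epiphysis (illustrative values)
likelihood = {
    "under_18": {"stage_1": 0.55, "stage_2": 0.35, "stage_3": 0.09, "stage_4": 0.01},
    "over_18":  {"stage_1": 0.05, "stage_2": 0.20, "stage_3": 0.40, "stage_4": 0.35},
}

def posterior_over_18(observed_stage):
    """P(over 18 | observed ossification stage) by Bayes' rule."""
    joint = {age: prior[age] * likelihood[age][observed_stage] for age in prior}
    return joint["over_18"] / sum(joint.values())

for stage in ("stage_1", "stage_2", "stage_3", "stage_4"):
    print(stage, round(posterior_over_18(stage), 3))
```

The different correct-classification rates reported for the criminal and civil contexts suggest that different decision thresholds are applied to this kind of posterior in the two settings, reflecting the different consequences of misclassification.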
Abstract:
The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation. I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008). I argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the bee's honeycomb. The second part of the thesis is a larger case study from physics: the genesis of general relativity (GR). Chapter 5 is a short introduction to the history, physics and mathematics relevant to the genesis of GR. Chapter 6 discusses the historical question of what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and propose some extensions of the IC. In chapter 8, I put the Inferential Conception to work in the historical case study of the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. Then I analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics - the application of the Absolute Differential Calculus (ADC) to gravitational theory - meshes with the IC.
In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.
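For readers unfamiliar with the predator-prey model analyzed in chapter 3, a standard modern formulation (not necessarily the notation of Volterra's original account) reads

\[
\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,
\]

where x denotes prey density, y predator density, and α, β, γ, δ are positive parameters governing prey growth, predation, predator reproduction and predator mortality.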
Abstract:
BACKGROUND: Many emergency department (ED) providers do not follow guideline recommendations for the use of the pneumonia severity index (PSI) to determine the initial site of treatment for patients with community-acquired pneumonia (CAP). We identified the reasons why ED providers hospitalize low-risk patients or manage higher-risk patients as outpatients. METHODS: As part of a trial to implement a PSI-based guideline for the initial site of treatment of patients with CAP, we analyzed data for patients managed at 12 EDs allocated to the high-intensity guideline implementation strategy arm of the study. The guideline recommended outpatient care for low-risk patients (nonhypoxemic patients with a PSI risk classification of I, II, or III) and hospitalization for higher-risk patients (hypoxemic patients or patients with a PSI risk classification of IV or V). We asked providers who made guideline-discordant decisions on the site of treatment to detail their reasons for nonadherence to the guideline recommendations. RESULTS: There were 1,306 patients with CAP (689 low-risk patients and 617 higher-risk patients). Among these patients, physicians admitted 258 (37.4%) of the 689 low-risk patients and treated 20 (3.2%) of the 617 higher-risk patients as outpatients. The most commonly reported reasons for admitting low-risk patients were the presence of a comorbid illness (178 [71.5%] of 249 patients); a laboratory value, vital sign, or symptom that precluded ED discharge (73 patients [29.3%]); or a recommendation from a primary care or consulting physician (48 patients [19.3%]). Higher-risk patients were most often treated as outpatients because of a recommendation by a primary care or consulting physician (6 [40.0%] of 15 patients). CONCLUSION: ED providers hospitalize many low-risk patients with CAP, most frequently because of a comorbid illness. Although higher-risk patients are infrequently treated as outpatients, this decision is often based on the request of an involved physician.
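The guideline's site-of-treatment rule, as stated above, can be written as a simple decision function. The sketch below only illustrates that rule; the function name and interface are hypothetical, and a real implementation would also need the PSI scoring itself as well as the clinical overrides described in the results.

```python
def recommended_site_of_treatment(psi_risk_class, hypoxemic):
    """Initial site-of-treatment recommendation from the trial's guideline:
    outpatient care for non-hypoxemic patients in PSI risk class I-III,
    hospitalization for hypoxemic patients or PSI risk class IV-V.

    psi_risk_class : int, 1-5 (PSI risk classes I-V)
    hypoxemic      : bool
    """
    if hypoxemic or psi_risk_class >= 4:
        return "hospitalize"
    return "outpatient"

print(recommended_site_of_treatment(2, hypoxemic=False))  # outpatient
print(recommended_site_of_treatment(3, hypoxemic=True))   # hospitalize
```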
Abstract:
Introduction: In my thesis I argue that economic policy is all about economics and politics. Consequently, analysing and understanding economic policy ideally involves at least two parts. The economics part is centered on the expected impact of a specific policy on the real economy, in terms of both efficiency and equity. The insights of this part indicate the direction in which the fine-tuning of economic policies should go. However, the fine-tuning of economic policies will most likely be subject to political constraints. That is why, in the politics part, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies. The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue: by 2009, 22 countries were operating flat-rate income tax systems, as were 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk taking. The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, it is not the flattening of tax schedules that is key to successful tax reforms, but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births. The second part of my thesis, which corresponds to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first chapter. Both economists and political scientists have done extensive research on how economic policies are formed, and researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach, while abstracting from institutional details. In contrast, political scientists' frameworks are generally richer in institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view. However, I try to borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality.
In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of lobbying by special interest groups. Standard political economy theory treats the government as a single institutional actor which sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports. However, these models lack important (institutional) features of reality. That is why, in my model, I split the government into a legislative and an executive branch, which can both be lobbied by special interest groups. Furthermore, the legislature has the option to delegate its trade policy authority to the executive, and I allow the executive to compensate the legislature in exchange for delegation. Despite ample anecdotal evidence, bargaining over the delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation has an impact on policy formation in that it leads to lower equilibrium tariffs compared to a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislature at the expense of the lobbies. The findings of this model can therefore shed light on why the U.S. Congress often delegates trade policy authority to the executive. In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on policy-making at the level of the individual politician, exploring how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model of the second chapter show how campaign contributions from lobbies to politicians can influence economic policies. There exists an abundant empirical literature that analyses ties between firms and politicians based on campaign contributions. However, the evidence on the impact of campaign contributions is mixed, at best. In our paper, we analyse an alternative channel of influence in the shape of personal connections between politicians and firms through board membership. We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect whereby politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.
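The basic shape of the final chapter's empirical exercise can be pictured as a discrete-choice regression of the bailout vote on a direct-connection indicator and a peer-connectedness measure. The sketch below is not the paper's specification; the data file, the variable names, and the way the leverage effect is proxied are all hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per member of parliament.
#   voted_yes       : 1 if the politician supported the Swissair bailout
#   board_connected : 1 if the politician sat on the board of an affected firm
#   connected_peers : share of the politician's parliamentary network (e.g. party
#                     or committee colleagues) who are themselves board-connected
df = pd.read_csv("swissair_vote.csv")   # hypothetical file

# Direct effect: board_connected; indirect "leverage" effect: connected_peers.
model = smf.logit("voted_yes ~ board_connected + connected_peers", data=df).fit()
print(model.summary())
```

The identification in the paper rests on the bailout vote serving as a natural experiment; a faithful replication would also require the controls and network definitions used there.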
Abstract:
Background: Cardiovascular magnetic resonance (CMR) has recently emerged as a robust and reliable technique to assess coronary artery disease (CAD). A negative perfusion CMR test predicts low event rates of 0.3-0.5%/year. Invasive coronary angiography (CA) nevertheless remains the "gold standard" for the evaluation of CAD in many countries. Objective: To assess, from a health care payer perspective, the costs of two work-up strategies for known or suspected CAD in the European CMR registry: strategy 1, CA for all patients; strategy 2, CA only for patients diagnosed positive for ischemia on a prior CMR. Method and results: Using data from the European CMR registry (20 hospitals, 11'040 consecutive patients), we calculated the proportions of patients with known or suspected CAD (n=2'717) who were diagnosed positive (20.6%), uncertain (6.5%), and negative (72.9%) after the CMR test. No other medical test was performed in patients who were negative for ischemia. Patients with a positive diagnosis underwent coronary angiography. Those with an uncertain diagnosis underwent additional tests (84.7% stress echocardiography, 13.1% CCT, 2.3% SPECT), whose costs were added to the costs of the CMR strategy. Cost information for these tests in Germany and Switzerland was used, and a sensitivity analysis was performed for inpatient CA. Costs are reported in the accompanying figure. Discussion: The CMR strategy costs less than the CA strategy for the health insurance systems of both Germany and Switzerland. Besides being less costly, the CMR strategy is non-invasive, involves no radiation exposure, and yields additional information on cardiac function, viability, valves, and the great vessels. Expanding the use of CMR instead of CA might therefore reduce costs while improving patient safety and comfort and making better use of hospital resources.
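The cost comparison above reduces to an expected-cost-per-patient calculation using the reported diagnostic proportions. The sketch below illustrates only the structure of that calculation; the unit costs are hypothetical placeholders, whereas the study uses actual German and Swiss tariffs.

```python
# Proportions from the registry sub-sample reported above
p_positive, p_uncertain, p_negative = 0.206, 0.065, 0.729

# Hypothetical unit costs (the study uses actual German and Swiss tariffs)
cost_cmr, cost_ca, cost_extra_test = 800.0, 3000.0, 500.0

# Strategy 1: invasive coronary angiography for every patient
cost_strategy_ca = cost_ca

# Strategy 2: CMR first; CA only if positive, and an additional non-invasive
# test (stress echo, CCT or SPECT) if the CMR result is uncertain
cost_strategy_cmr = (cost_cmr
                     + p_positive * cost_ca
                     + p_uncertain * cost_extra_test)

print(f"CA-for-all strategy : {cost_strategy_ca:.0f} per patient")
print(f"CMR-first strategy  : {cost_strategy_cmr:.0f} per patient")
```

With these placeholder tariffs the CMR-first strategy costs roughly half as much per patient as sending every patient to CA, but the actual ranking depends entirely on the real tariffs applied.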