974 results for Linear decision rules
Abstract:
An important statistical development of the last 30 years has been the advance in regression analysis provided by generalized linear models (GLMs) and generalized additive models (GAMs). Here we introduce a series of papers prepared within the framework of an international workshop entitled "Advances in GLMs/GAMs modeling: from species distribution to environmental management", held in Riederalp, Switzerland, 6-11 August 2001. We first discuss some general uses of statistical models in ecology and provide a short review of several key examples of the use of GLMs and GAMs in ecological modeling efforts. We next present an overview of GLMs and GAMs, and discuss some of the related statistics used for predictor selection, model diagnostics, and evaluation. Included is a discussion of several newer approaches applicable to GLMs and GAMs, such as ridge regression, an alternative to stepwise selection of predictors, and methods for the identification of interactions through a combined use of regression trees and several other approaches. We close with an overview of the papers and of how we feel they advance our understanding of the application of these methods to ecological modeling.
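As a minimal illustration of the GLM machinery the overview covers (a sketch on simulated presence/absence data, not code from the workshop papers), a binomial GLM with logit link can be fit by iteratively reweighted least squares:

```python
import numpy as np

def fit_logistic_glm(X, y, n_iter=25):
    """Fit a binomial GLM (logit link) by iteratively reweighted least squares."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta                       # linear predictor
        mu = 1.0 / (1.0 + np.exp(-eta))      # inverse logit
        W = mu * (1.0 - mu)                  # IRLS weights
        z = eta + (y - mu) / W               # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated species presence/absence: probability rises with one environmental predictor
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x)))
y = rng.binomial(1, p)
beta = fit_logistic_glm(x[:, None], y)
```

With enough data the IRLS estimates recover the simulated coefficients (0.5, 1.5) closely; GAMs extend the same fitting loop by replacing the linear predictor with smooth functions of the covariates.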
Abstract:
Over the last decade, diagnostic options and the introduction of novel treatments have expanded the armamentarium in the management of malignant glioma. Combined chemoradiotherapy has become the standard of care in glioblastoma up to the age of 70 years, while treatment in elderly patients or in patients with lower-grade glioma is less well defined. Molecular markers define different disease subtypes and allow for adapted treatment selection. This review focuses on simple questions arising in the daily management of patients.
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected from the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions among the system's components. Beyond the actual nature of those interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. The topology is also a key factor in explaining their extraordinary flexibility and resilience to perturbations in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and on information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions among cells are governed by an underlying structure, usually a regular one. To increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one.
The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and the tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time-evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
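The kind of global task such cellular automata perform (e.g. density classification: the cells must collectively decide whether the initial configuration holds a majority of 1s) can be sketched with a plain local-majority rule on a ring. This naive rule fails on many inputs, which is precisely why the thesis evolves better topologies, but it handles this easy instance:

```python
import numpy as np

def step_majority(cells):
    """One synchronous update: each cell adopts the majority of its 3-cell neighborhood."""
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    return ((left + cells + right) >= 2).astype(int)

def run_ca(cells, steps):
    """Iterate the local rule; on the density task the CA should reach all-1s or all-0s."""
    for _ in range(steps):
        cells = step_majority(cells)
    return cells

# Seven of ten cells start at 1; the ring settles to all 1s within a few steps.
init = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
final = run_ca(init, 20)
```

Rewiring which cells each cell reads (the "topology") while keeping the local rule fixed is exactly the degree of freedom the evolutionary algorithm in the thesis optimizes.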
Abstract:
The purpose of this chapter is to implement Iowa Code chapter 316 and sections 6B.42, 6B.45, 6B.54 and 6B.55, as required by the Uniform Relocation Assistance and Real Property Acquisition Policies Act of 1970, Pub. L. 91-646, as amended by the Uniform Relocation Act Amendments of 1987, Title IV, Pub. L. No. 100-17, Sec. 104, Pub. L. 105-117, and federal regulations adopted pursuant thereto.
Abstract:
Can rules be used to shield public resources from political interference? The Brazilian constitution and national tax code stipulate that revenue-sharing transfers to municipal governments be determined by the size of counties in terms of estimated population. In this paper I document that the population estimates which went into the transfer allocation formula for the year 1991 were manipulated, resulting in significant transfer differentials over the entire 1990s. I test whether, conditional on county characteristics that might account for the manipulation, center-local party alignment, party popularity, and the extent of interparty fragmentation at the county level are correlated with estimated populations in 1991. Results suggest that revenue-sharing transfers were targeted at right-wing national deputies in electorally fragmented counties as well as at aligned local executives.
Abstract:
Résumé: This work aims to clarify the contradictory findings in the literature concerning patients' needs to be informed and to participate in decision-making. The literature emphasizes the content of information as a basis for decision-making, although there is evidence that other meanings of information matter to patients. The thesis further attempts to identify ways to better meet patients' preferences for information and participation. The work focused in particular on palliative care. A literature review provides an overview of palliative care, of patient information, and of patient participation in treatment decisions. This review summarizes the findings of previous studies and proposes a theoretical model of information, decision-making, and the relationship between the two. Within this work, two empirical studies used written questionnaires addressed to members of the public and to health care professionals, covering Switzerland and the United Kingdom in order to identify possible differences between the two countries. The surveys focused on patients suffering from lung cancer. The instruments used in these studies were taken from the literature to make them comparable. The questionnaire response rate was 30-40%.
A majority of survey participants considered that patients should: - collaborate in decisions about their treatment - receive as much information as possible, positive as well as negative - receive all the types of information mentioned in the questionnaire (about the illness, the diagnosis and the treatments), taking into account the diversity of patients' priorities - be supported by health professionals, their family, their friends and/or people suffering from the same illness. In addition, survey participants identified various meanings that information may have for patients suffering from a serious illness. These include, among others: - an aid to treatment decision-making - the possibility of maintaining control over the situation - the building of the patient-caregiver relationship - encouragement to make plans for the future - an influence on emotional state - help in understanding the illness and its impact - a potential source of confusion and of anxiety. Most of the proposed meanings are positive. The results suggest that different meanings may coexist at a given moment and change over time. A model of coping with the disclosure of a serious diagnosis is then developed and discussed. This model is based on the literature and integrates the findings of the empirical studies carried out within this work. This work also analyses the preferred sources of information and support, and the factors that may influence or impede information and participation preferences. Both groups of participants regarded specialist doctors as the best source of information. With regard to support, views diverged between members of the public and health professionals: in general, supportive roles appear poorly defined among professionals.
Barriers to adequate patient information frequently appear to be linked to the characteristics of professionals and to organisational problems. Progress in this area would contribute to improving the care provided to patients. Finally, the limitations of the empirical studies are discussed. These include, among others, the limited representativeness of the participants and the objections of some participant groups to certain details of the questionnaires. Summary: The present thesis follows a call from the current body of literature to better understand patient needs for information and for participation in decision-making, as previous research findings had been contradictory. Information so far seems to have been considered essentially as a means to making treatment decisions, despite certain evidence that it may have a number of other values to patients. Furthermore, the thesis aims to identify ways to optimise meeting patient preferences for information and participation in treatment decisions. The current field of interest is palliative care. An extensive literature review depicts the background of current concepts of palliative care, patient information and patient involvement in treatment decisions. It also draws together results from previous studies and develops a theoretical model of information, decision-making, and the relationship between them. This is followed by two empirical studies collecting data from members of the general public and health care professionals by means of postal questionnaires. The professional study covers both Switzerland and the United Kingdom in order to identify possible differences between countries. Both studies focus on newly diagnosed lung cancer patients. The instruments used were taken from the literature to make them comparable. The response rate in both surveys was 30-40%, as expected, sufficient to allow statistical tests to be performed.
A third study, addressed to lung cancer patients themselves, turned out to require too much time within the frame available. A majority of both study populations thought that patients should: - have a collaborative role in treatment-related decision-making - receive as much information as possible, good or bad - receive all types of information mentioned in the questionnaire (about illness, tests, and treatment), although priorities varied across the study populations - be supported by health professionals, family members, friends and/or others with the same illness. Furthermore, they identified various 'meanings' information may have to patients with a serious illness. These included: - being an aid in treatment-related decision-making - allowing control to be maintained over the situation - helping the patient-professional relationship to be constructed - allowing plans to be made - being positive for the patient's emotional state - helping the illness and its impact to be understood - being a source of anxiety - being a potential source of confusion to the patient. Meanings were mostly positive. It was suggested that different meanings could co-exist at a given time and that they might change over time. A model of coping with the disclosure of a serious diagnosis is then developed. This model is based on existing models of coping with threatening events, as taken from the literature [ref. 77, 78], and integrates findings from the empirical studies. The thesis then analyses the remaining aspects apparent from the two surveys. These range from the identification of preferred information and support providers to factors influencing or impeding information and participation preferences. Specialist doctors were identified by both study populations as the best information providers, whilst with regard to support provision views differed between the general public and health professionals.
A need for better definition of supportive roles among health care workers seemed apparent. Barriers to information provision often seem related to health professional characteristics or organisational difficulties, and improvements in the latter field could well help optimise patient care. Finally, limitations of the studies are discussed, including questions of representativeness of certain results and difficulties with, or objections against, questionnaire details by some groups of respondents.
Abstract:
The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in the model performance. A classification problem is posed to find the regions of well-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide whether a forward model simulation is to be computed for a particular generated model. SVM is particularly designed to tackle classification problems in high-dimensional space in a non-parametric and non-linear way. SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost function classification and therefore provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response due to combinations of parameter values.
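A toy version of this classification step might look as follows (a sketch under simplifying assumptions: a linear soft-margin SVM trained by Pegasos-style sub-gradient descent on a simulated two-parameter model space; the paper's setup, with a kernel SVM and a fluid-flow forward model, is richer):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient descent for a linear soft-margin SVM.
    Labels y must be in {-1, +1}; returns (weights, bias)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)               # decaying step size
            if y[i] * (X[i] @ w + b) < 1:       # margin violated: hinge-loss subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

# Simulated parameter space: models with theta1 + theta2 > 1 are "good fits" (+1)
rng = np.random.default_rng(1)
theta = rng.uniform(0, 1, size=(400, 2))
labels = np.where(theta.sum(axis=1) > 1.0, 1, -1)
w, b = train_linear_svm(theta, labels)
pred = np.where(theta @ w + b > 0, 1, -1)
accuracy = (pred == labels).mean()
```

New candidate parameter points falling near the learned decision boundary are exactly those with the most uncertain cost-function class, and so are the natural ones to send to the expensive forward simulation next.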
Abstract:
We present a polyhedral framework for establishing general structural properties of optimal solutions of stochastic scheduling problems in which multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's), taking as input the linear objective coefficients, which (1) determines whether the optimal LP solution is achievable by a policy in the given family; and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable-region approach) in dynamic and stochastic optimization.
Abstract:
General Introduction. This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access in a statistically significant and quantitatively large proportion. Part I. In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Pr. Olivier Cadot, Céline Carrère and Pr. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows.
First, it shows, in the case of PANEURO, that the R-index is useful to summarize how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence to making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs. The methodology for this exercise is as follows: in step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs.
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system, and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the potential model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs used before 1997 and the "single list" RoOs used since 1997. Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs.
The estimates suggest that much of the market access conferred by the EAs, outside sensitive sectors, was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II. The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Pr. Olivier Cadot and Pr. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases.
Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than of other measures.
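The count-data step can be sketched as follows (simulated data, not the chapter's database; names and coefficients are illustrative): a Poisson GLM with log link, fit by iteratively reweighted least squares, regressing revocation counts on initiations lagged five years.

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50):
    """Poisson GLM (log link) fit by iteratively reweighted least squares."""
    X = np.column_stack([np.ones(len(X)), X])  # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                  # mean under log link
        W = mu                                 # IRLS weights for Poisson
        z = X @ beta + (y - mu) / mu           # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Simulated panel: revocations this year driven by initiations five years earlier
rng = np.random.default_rng(2)
initiations_lag5 = rng.poisson(10, size=500)
mu = np.exp(0.1 + 0.08 * initiations_lag5)
revocations = rng.poisson(mu)
beta = fit_poisson_glm(initiations_lag5[:, None], revocations)
```

The chapter's test then amounts to comparing the lagged-initiations coefficient estimated on pre-agreement and post-agreement subsamples; a one-for-one five-year cycle would push that relationship toward unit pass-through, which the actual estimates fall well short of.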
Abstract:
In this paper, I consider a general and informationally efficient approach to determine the optimal access rule and show that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. My approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules, which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set there is a unique access rule that achieves the Ramsey outcome as the unique equilibrium as long as there exists at least a mild degree of substitutability among networks' services.
The hematology laboratory in blood doping (BD): 2014 update on the Athlete Biological Passport (ABP)
Abstract:
Introduction: Blood doping (BD) is the use of Erythropoiesis-Stimulating Agents (ESAs) and/or transfusion to increase aerobic performance in athletes. Direct toxicologic techniques are insufficient to unmask sophisticated doping protocols. The Hematological Module of the ABP (World Anti-Doping Agency) associates decision-support technology and expert assessment to indirectly detect the hematological effects of BD. Methods: The ABP module is based on blood parameters, under strict pre-analytical and analytical rules for collection, storage and transport at 2-12°C, and internal and external QC. Accuracy, reproducibility and interlaboratory harmonization fulfill forensic standards. Blood samples are collected in competition and out of competition. Primary parameters for longitudinal monitoring are: - hemoglobin (HGB); - reticulocyte percentage (RET%); - OFF score, an indicator of suppressed erythropoiesis, calculated as HGB(g/L) - 60·√(RET%). Statistical calculation predicts individual expected limits by probabilistic inference. Secondary parameters are RBC, HCT, MCHC, MCH, MCV, RDW and IRF. ABP profiles flagged as atypical are reviewed by experts in hematology, pharmacology, sports medicine or physiology, and classified as: - normal; - suspect (to target); - likely due to BD; - likely due to pathology. Results: Thousands of athletes worldwide are currently monitored. Since 2010, at least 35 athletes have been sanctioned and others are being prosecuted on the sole basis of an abnormal ABP, with a 240% increase in positivity to direct tests for ESAs thanks to improved targeting of suspicious athletes (WADA data). Specific doping scenarios have been identified by the experts (Table and Figure). Figure: typical HGB and RET profiles in two highly suspicious athletes. A. Sample 2: simultaneous increases in HGB and RET (likely ESA stimulation) in a male. B. Samples 3, 6 and 7: "OFF" picture, with high HGB and low RET, in a female. Sample 10: normal HGB and increased RET (ESA or blood withdrawal).
Conclusions: ABP is a powerful tool for indirect doping detection, based on the recognition of specific, unphysiological changes triggered by blood doping. The effect of factors of heterogeneity, such as sex and altitude, must also be considered. Schumacher YO, et al. Drug Test Anal 2012, 4:846-853. Sottas PE, et al. Clin Chem 2011, 57:969-976.
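The OFF-score formula above is simple enough to state directly (a sketch with illustrative values; the actual passport compares each score with Bayesian individual expected limits rather than a fixed cutoff):

```python
from math import sqrt

def off_score(hgb_g_per_l, ret_pct):
    """OFF score: HGB (g/L) minus 60 times the square root of the reticulocyte %."""
    return hgb_g_per_l - 60.0 * sqrt(ret_pct)

# High hemoglobin with suppressed reticulocytes yields a high (suspicious) OFF score,
# the "OFF" picture described in the figure caption above (illustrative values only).
suppressed = off_score(170.0, 0.4)   # e.g. after transfusion or ESA withdrawal
baseline = off_score(150.0, 1.0)     # unremarkable profile
```

The square-root term rescales the reticulocyte percentage so that a drop in erythropoietic activity and a rise in hemoglobin both push the score in the same (upward) direction.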