888 results for Mathematical argumentation
Abstract:
BACKGROUND The number of patients in need of second-line antiretroviral drugs is increasing in sub-Saharan Africa. We aimed to project the need for second-line antiretroviral therapy in adults in sub-Saharan Africa up to 2030. METHODS We developed a simulation model for HIV and applied it to each sub-Saharan African country. We used the WHO country intelligence database to estimate the number of adult patients receiving antiretroviral therapy from 2005 to 2014. We fitted the number of adult patients receiving antiretroviral therapy to observed estimates, and predicted first-line and second-line needs between 2015 and 2030. We present results for sub-Saharan Africa and eight selected countries. We present 18 scenarios, combining the availability of viral load monitoring, speed of antiretroviral scale-up, and rates of retention and switching to second-line therapy. HIV transmission was not included. FINDINGS Depending on the scenario, 8·7-25·6 million people are expected to receive antiretroviral therapy in 2020, of whom 0·5-3·0 million will be receiving second-line antiretroviral therapy. The proportion of patients on treatment receiving second-line therapy was highest (15·6%) in the scenario with perfect retention and immediate switching, no further scale-up, and universal routine viral load monitoring. In 2030, the estimated range of patients receiving antiretroviral therapy will remain constant, but the number of patients receiving second-line antiretroviral therapy will increase to 0·8-4·6 million (6·6-19·6%). The need for second-line antiretroviral therapy was two to three times higher if routine viral load monitoring was implemented throughout the region, compared with a scenario of no further viral load monitoring scale-up. For each monitoring strategy, the future proportion of patients receiving second-line antiretroviral therapy differed only minimally between countries.
INTERPRETATION Donors and countries in sub-Saharan Africa should prepare for a substantial increase in the need for second-line drugs during the next few years as access to viral load monitoring improves. An urgent need exists to decrease the costs of second-line drugs. FUNDING World Health Organization, Swiss National Science Foundation, National Institutes of Health.
Abstract:
The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independent of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties. A finite composition of simplifications can also be accomplished with some single simplification. Also, if one partition is a simplification of the other, the simplified partition is defined to be less than the other partition according to the simp relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions can be proven to have not only a partial-order structure but also a complete lattice structure. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects with their attribute n-tuple codings contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies.
Though the work does not focus mainly on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and least upper bound, which provides the validity for exploring queries in this regard.
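The filter-then-amalgamate operation described above is easy to make concrete. Below is a minimal Python sketch of a "simplification" acting on a partition of a finite set; the function name and argument conventions are illustrative, not taken from the work itself.

```python
from functools import reduce

def simplify(blocks, keep, merge_groups):
    """Simplify a partition: first filter (drop elements outside `keep`),
    then amalgamate (unify the listed groups of surviving blocks).

    blocks: list of disjoint frozensets covering the ground set
    keep: set of elements that survive the filtering step
    merge_groups: lists of indices into the filtered blocks to unify
    """
    filtered = [b & keep for b in blocks]
    filtered = [b for b in filtered if b]          # drop emptied blocks
    used = {i for group in merge_groups for i in group}
    out = [reduce(frozenset.union, (filtered[i] for i in group))
           for group in merge_groups]
    out += [b for i, b in enumerate(filtered) if i not in used]
    return out

P = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5})]
Q = simplify(P, keep={1, 2, 3, 4}, merge_groups=[[0, 1]])
# element 5 is filtered out, then blocks {1,2} and {3} are amalgamated:
# Q partitions {1, 2, 3, 4} as {{1, 2, 3}, {4}}
```

Under the simp relation of the abstract, Q is then less than P. The closure property claimed above (a finite composition of simplifications is itself a simplification) can be checked on small examples by composing two such calls.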
Abstract:
With the aim of understanding the mechanism of molecular evolution, mathematical problems on the evolutionary change of DNA sequences are studied. The problems studied and the results obtained are as follows: (1) Estimation of evolutionary distance between nucleotide sequences. Studying the pattern of nucleotide substitution for the case of unequal substitution rates, a new mathematical formula for estimating the average number of nucleotide substitutions per site between two homologous DNA sequences is developed. It is shown that this formula has a wider applicability than currently available formulae. A statistical method for estimating the number of nucleotide changes due to deletion and insertion is also developed. (2) Biases of the estimates of nucleotide substitutions obtained by the restriction enzyme method. The deviation of the estimate of nucleotide substitutions obtained by the restriction enzyme method from the true value is investigated theoretically. It is shown that the amount of the deviation depends on the nucleotides in the recognition sequence of the restriction enzyme used, unequal rates of substitution among different nucleotides, and nucleotide frequencies, but the primary factor is the unequal rates of nucleotide substitution. When many different kinds of enzymes are used, however, the amount of average deviation is generally small. (3) Distribution of restriction fragment lengths. To see the effect of undetectable restriction fragments and fragment differences on the estimate of nucleotide differences, the theoretical distribution of fragment lengths is studied. This distribution depends on the type of restriction enzymes used as well as on the relative frequencies of four nucleotides.
It is shown that undetectability of small fragments or fragment differences gives a serious underestimate of nucleotide substitutions when the length-difference method of estimation is used, but the extent of underestimation is small when the site-difference method is used. (4) Evolutionary relationships of DNA sequences in finite populations. A mathematical theory on the expected evolutionary relationships among DNA sequences (nucleons) randomly chosen from the same or different populations is developed under the assumption that the evolutionary change of nucleons is determined solely by mutation and random genetic drift. (Author's abstract exceeds stipulated maximum length; discontinued here with permission of the author.)
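For context on point (1): the simplest of the "currently available formulae" that the new unequal-rates formula generalizes is the classic Jukes–Cantor estimator, which assumes equal substitution rates among the four nucleotides — exactly the assumption relaxed above. A minimal sketch of that baseline (not the new formula developed in this work):

```python
import math

def jukes_cantor(seq_a, seq_b):
    """Estimate substitutions per site between two aligned sequences,
    assuming equal substitution rates among all four nucleotides."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    # p: observed proportion of sites that differ
    p = sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)
    if p >= 0.75:
        raise ValueError("too diverged: estimator undefined for p >= 3/4")
    return -0.75 * math.log(1 - 4.0 * p / 3.0)

d = jukes_cantor("ACGTACGTAC", "ACGTACGTTT")   # 2 of 10 sites differ
# d exceeds the raw difference p = 0.2 because the formula corrects
# for multiple (hidden) substitutions at the same site
```

The correction grows quickly with divergence, which is why the choice of formula — and the biases analyzed in points (2) and (3) — matters for distant sequences.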
Abstract:
The most common pattern of classroom discourse follows a three-part exchange of teacher initiation, student response, and teacher evaluation or follow-up (IRE/IRF) (Cazden, 2001). Although sometimes described as encouraging illusory understanding (Lemke, 1990), triadic exchanges can mediate meaning (Nassaji & Wells, 2000). This paper focuses on one case from a study of discursive practices of seven middle grades teachers identified for their expertise in mathematics instruction. The central result of the study was the development of a model to explain how teachers use discourse to mediate mathematical meaning in whole group instruction. Drawing on the model for analysis, thick descriptions of one teacher’s skillful orchestration of triadic exchanges that enhance student understanding of mathematics are presented.
Abstract:
In spite of the movement to turn political science into a real science, various mathematical methods that are now staples of physics, biology, and even economics remain uncommon in political science, especially in the study of civil war. This study seeks to apply such methods - specifically, ordinary differential equations (ODEs) - to model civil war based on what one might dub the capabilities school of thought, which roughly states that civil wars end only when one side's ability to make war falls far enough to make peace truly attractive. I construct several different ODE-based models and test them all to see which best predicts the instantaneous capabilities of both sides of the Sri Lankan civil war in the period from 1990 to 1994, given parameters and initial conditions. The model that the tests declare most accurate gives very accurate predictions of state military capabilities and reasonable short-term predictions of cumulative deaths. Analysis of the model reveals the importance of rebel finances to the sustainability of insurgency; most notably, the number of troops required to put down the Tamil Tigers falls by nearly a full order of magnitude when Tiger foreign funding is stopped. The study thus demonstrates that accurate foresight may come from relatively simple dynamical models, and implies the great potential of advanced and currently unconventional non-statistical mathematical methods in political science.
Abstract:
Adolf Böhm
Abstract:
In epidemiology literature, it is often required to investigate the relationships between means where the levels of experiment are actually monotone sets forming a partition on the range of sampling values. Given this need, the analysis of these group means is generally performed using classical analysis of variance (ANOVA). However, this method has never been challenged. In this dissertation, we formulate and present our examination of its validity. First, the classical assumptions of normality and constant variance are not always true. Second, under the null hypothesis of equal means, the test statistic for the classical ANOVA technique is still valid. Third, when the hypothesis of equal means is rejected, the classical analysis techniques for hypotheses of contrasts are not valid. Fourth, under the alternative hypothesis, we can show that the monotone property of levels leads to the conclusion that the means are monotone. Fifth, we propose an appropriate method for handling the data in this situation.
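When the alternative hypothesis implies monotone means, as in the fourth point above, a standard estimation tool is isotonic regression via the pool-adjacent-violators algorithm (PAVA). The sketch below fits a nondecreasing sequence to observed group means; it is offered as one conventional approach to monotone-mean problems, not necessarily the method the dissertation proposes.

```python
def pava(means):
    """Pool-adjacent-violators: least-squares fit of a nondecreasing
    sequence to a list of group means (unit weight per group)."""
    blocks = []                      # each block: [pooled mean, group count]
    for m in means:
        blocks.append([m, 1])
        # merge backwards while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    fitted = []
    for m, n in blocks:              # expand pooled blocks back out
        fitted.extend([m] * n)
    return fitted

fitted = pava([1.0, 3.0, 2.0, 4.0])
# the out-of-order pair (3.0, 2.0) is pooled to its average 2.5
```

The fit leaves already-monotone data untouched and pools any adjacent groups whose sample means violate the assumed ordering.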
Abstract:
Objectives. This paper seeks to assess the effect on statistical power of regression model misspecification in a variety of situations. Methods and results. The effect of misspecification in regression can be approximated by evaluating the correlation between the correct specification and the misspecification of the outcome variable (Harris 2010). In this paper, three misspecified models (linear, categorical and fractional polynomial) were considered. In the first section, the mathematical method of calculating the correlation between correct and misspecified models with simple mathematical forms was derived and demonstrated. In the second section, data from the National Health and Nutrition Examination Survey (NHANES 2007-2008) were used to examine such correlations. Our study shows that, compared with linear or categorical models, the fractional polynomial models, with higher correlations, provided a better approximation of the true relationship, as illustrated by LOESS regression. In the third section, we present the results of simulation studies demonstrating that misspecification in regression can produce marked decreases in power with small sample sizes. However, the categorical model had the greatest power, ranging from 0.877 to 0.936 depending on sample size and outcome variable used. The power of the fractional polynomial model was close to that of the linear model, ranging from 0.69 to 0.83, and appeared to be affected by the increased degrees of freedom of this model. Conclusion. Correlations between alternative model specifications can be used to provide a good approximation of the effect of misspecification on statistical power when the sample size is large. When model specifications have known simple mathematical forms, such correlations can be calculated mathematically. Actual public health data from NHANES 2007-2008 were used as examples to demonstrate situations with unknown or complex correct model specification.
Simulation of power for misspecified models confirmed the results based on correlation methods, but also illustrated the effect of model degrees of freedom on power.
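The correlation-based approximation described above is easy to demonstrate. In the toy example below the true specification is quadratic; a linear term plays the badly misspecified model, and an absolute-value term stands in for a flexible (fractional-polynomial-like) specification. The functional forms and the standard-normal design are illustrative assumptions, not the NHANES analysis.

```python
import math
import random

random.seed(1)
xs = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def corr(u, v):
    """Pearson correlation of two equal-length samples."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

true_spec = [x * x for x in xs]     # correct functional form
linear = xs                          # misspecified: linear in x
flexible = [abs(x) for x in xs]      # flexible surrogate for x**2

r_linear = corr(true_spec, linear)   # near zero for symmetric x
r_flex = corr(true_spec, flexible)   # high: captures most of the signal
```

The squared correlation approximates the fraction of the true signal a misspecified model retains, which is why the low-correlation linear form loses power while the flexible form does not — mirroring the paper's fractional-polynomial finding.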
Abstract:
Researchers have long believed the concept of "excitement" in games to be subjective and difficult to measure. This paper presents the development of a mathematically computable index that measures this concept from the viewpoint of an audience. One of the key aspects of the index is the differential of the probability of "winning" before and after one specific "play" in a given game. If that play shifts the probability of winning sharply in either direction, the audience will feel the game to be "exciting." The index makes a large contribution to the study of games and enables researchers to compare and analyze the "excitement" of various games. It may be applied to many fields, especially the area of welfare economics, ranging from allocative efficiency to axioms of justice and equity.
A Mathematical Representation of "Excitement" in Games: A Contribution to the Theory of Game Systems
Abstract:
Researchers have long believed the concept of "excitement" in games to be subjective and difficult to measure. This paper presents the development of a mathematically computable index that measures the concept from the viewpoint of an audience and from that of a player. One of the key aspects of the index is the differential of the probability of "winning" before and after one specific "play" in a given game. The index makes a large contribution to the study of games and enables researchers to compare and analyze the "excitement" of various games. It may be applied in many fields, especially the area of welfare economics, with applications ranging from allocative efficiency to axioms of justice and equity.
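The core idea in both abstracts — a play is exciting insofar as it moves the win probability — can be captured in a few lines. The index below simply sums the absolute changes in P(win) across plays; the papers' actual index may weight or normalize these differentials differently, so treat this as an illustrative sketch.

```python
def excitement_index(win_probs):
    """win_probs: P(side A wins) evaluated after each successive play.
    The index accumulates the magnitude of each play's probability swing."""
    return sum(abs(after - before)
               for before, after in zip(win_probs, win_probs[1:]))

blowout = [0.5, 0.7, 0.85, 0.95, 1.0]   # steady drift toward a win
comeback = [0.5, 0.2, 0.6, 0.3, 1.0]    # large swings on every play
# the comeback game scores higher: its plays move P(win) far more
```

Both games end in a win for the same side, yet the index separates them — which is what lets researchers compare the "excitement" of different games or rule systems.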
Abstract:
A study of the manoeuvrability of a riverine support patrol vessel is made to derive a mathematical model and simulate manoeuvres with this ship. The vessel is mainly characterized by its wide beam and its unconventional propulsion system, that is, pump-jet type azimuthal propulsion. By processing experimental data and the ship characteristics with diverse formulae to find the proper hydrodynamic coefficients and propulsion forces, a system of three differential equations is completed and tuned to carry out simulations of the turning test. The simulation accepts variable speed, jet angle and water depth as input parameters, and its output consists of time series of the state variables and a plot of the simulated path and heading of the ship during the manoeuvre. Thanks to the data of full-scale trials previously performed with the studied vessel, a validation process was carried out, which shows a good fit between simulated and full-scale experimental results, especially for the turning diameter.
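A reduced version of such a turning-test simulation is sketched below: the three-equation manoeuvring model collapsed to a first-order Nomoto yaw response driven by a fixed jet angle. The gain K, time constant T, and speed are illustrative values, not the patrol vessel's identified coefficients.

```python
import math

def turning_test(speed=5.0, delta=math.radians(25.0),
                 K=0.2, T=8.0, dt=0.1, t_end=200.0):
    """First-order Nomoto model: T*r' + r = K*delta, psi' = r.
    The ship advances at constant speed along heading psi.
    Returns the simulated (x, y) track and the final yaw rate."""
    r = psi = x = y = 0.0
    track = [(x, y)]
    for _ in range(int(t_end / dt)):
        r += dt * (K * delta - r) / T       # yaw-rate response to jet angle
        psi += dt * r                        # heading
        x += dt * speed * math.cos(psi)      # advance along heading
        y += dt * speed * math.sin(psi)
        track.append((x, y))
    return track, r

track, r_final = turning_test()
# once the transient decays, the ship settles into a circle whose
# diameter is roughly 2*speed/(K*delta)
```

With these illustrative values the steady yaw rate is K·delta ≈ 0.087 rad/s, giving a turning diameter of about 115 m — the kind of quantity the study validates against full-scale trials.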
Abstract:
This thesis investigates which parameters most critically condition the results the European vehicle fleet obtains in pedestrian-protection tests, under the 2003 European pedestrian-protection regulation (Directive CE 2003/102) and the subsequent 2009 Regulation (Regulation CE 2009/78). First, the context of pedestrian protection in Europe is analysed, reviewing the history of the different proposed test procedures as well as the changes (and the reasons for them) they underwent throughout the process of defining the European regulations. Using the information available from more than 400 of these tests, stiffness corridors have been developed for the front ends of the different segments of the European vehicle fleet, which is one of the most relevant results of this thesis. Subsequently, this thesis carried out a detailed accident study of pedestrian-impact scenarios, identifying their most relevant characteristics, the population groups at highest risk, and the most important injury types (in frequency and severity), which laid the groundwork for analysing with mathematical models the extent to which the proposed test methods actually take these factors into account. These analyses would not have been possible without the new tools presented in this thesis, which allow the mathematical model of any vehicle and any adult pedestrian to be built instantly in order to analyse their interaction.
Thus, this thesis has developed a fast methodology for building vehicle mathematical models on demand, of any make and model and with the desired geometric and stiffness characteristics, so that any vehicle can be represented mathematically. Likewise, it has investigated how the behaviour of the human body evolves with ageing and has implemented an age-scaling capability in the MADYMO multibody pedestrian model (already scalable in size) to allow any adult pedestrian (in gender and age) to be modelled ad hoc. Finally, this thesis has also carried out, using finite-element models of the human body, several studies on the biomechanics of the most frequent injuries in this type of accident (to the legs and head), with the aim of improving the test procedures so that they better predict the injuries to be avoided. Within the time frame and boundary conditions of this thesis, efforts were focused on reinforcing some critical but specific aspects of how to improve the head test and, above all, on proposing viable solutions with real added value for the legform-to-bumper test, without changing its essence but proposing a new, improved impactor that incorporates an extra mass representing the upper body and is valid for the entire European vehicle fleet regardless of front-end geometry.