713 results for Programming (Mathematics)
Abstract:
The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation. I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008). I argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the bee's honeycomb. The second part of the thesis is a larger case study from physics: the genesis of general relativity (GR).
Chapter 5 is a short introduction to the history, physics and mathematics relevant to the genesis of general relativity (GR). Chapter 6 discusses the historical question of what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and propose some extensions of the IC. In chapter 8, I put the IC to work in the historical case study: the genesis of GR. I analyze three historical episodes using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. I then analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics - the application of the Absolute Differential Calculus (ADC) to gravitational theory - meshes with the IC. In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.
Abstract:
In the line opened by Kalai and Muller (1997), we explore new conditions on preference domains which make it possible to avoid Arrow's impossibility result. In our main theorem, we provide a complete characterization of the domains admitting nondictatorial Arrovian social welfare functions with ties (i.e. including indifference in the range) by introducing a notion of strict decomposability. In the proof, we use integer programming tools, following an approach first applied to social choice theory by Sethuraman, Teo and Vohra (2003, 2006). In order to obtain a representation of Arrovian social welfare functions whose range can include indifference, we generalize Sethuraman et al.'s work and specify integer programs in which variables are allowed to assume values in the set {0, 1/2, 1}: indeed, we show that there exists a one-to-one correspondence between the solutions of an integer program defined on this set and the set of all Arrovian social welfare functions, without restrictions on the range.
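The role of the value set {0, 1/2, 1} can be illustrated with a toy enumeration. This is a hypothetical sketch, not the paper's actual integer program: we encode society's verdict on each ordered pair of alternatives as 0 (ranked below), 1/2 (tie) or 1 (ranked above), and keep exactly the assignments satisfying a complementarity constraint and transitivity, which recovers the weak orders on the alternatives.

```python
from itertools import product

# Toy illustration (not the authors' program): a variable v[x, y] in
# {0, 1/2, 1} records whether society ranks x below y (0), ties them (1/2),
# or ranks x above y (1).
ALTS = "abc"
PAIRS = [(x, y) for x in ALTS for y in ALTS if x != y]

def feasible(v):
    # complementarity: v[x, y] + v[y, x] = 1 (a tie contributes 1/2 twice)
    if any(v[x, y] + v[y, x] != 1 for x, y in PAIRS):
        return False
    # transitivity of weak preference: x >= y and y >= z imply x >= z
    for x, y, z in product(ALTS, repeat=3):
        if len({x, y, z}) == 3:
            if v[x, y] >= 0.5 and v[y, z] >= 0.5 and v[x, z] < 0.5:
                return False
    return True

# enumerate all {0, 1/2, 1} assignments; the feasible ones are exactly
# the weak orders (total preorders) on the three alternatives
orders = [dict(zip(PAIRS, vals))
          for vals in product((0, 0.5, 1), repeat=len(PAIRS))
          if feasible(dict(zip(PAIRS, vals)))]
print(len(orders))  # 13 weak orders on three alternatives
```

Dropping the value 1/2 from the enumeration leaves only the 6 strict orders, which corresponds to the original Sethuraman et al. setting without ties in the range.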
Abstract:
Using the integer programming approach introduced by Sethuraman, Teo, and Vohra (2003), we extend the analysis of the preference domains containing an inseparable ordered pair, initiated by Kalai and Ritz (1978). We show that these domains admit not only Arrovian social welfare functions "without ties," but also Arrovian social welfare functions "with ties," since they satisfy the strict decomposability condition introduced by Busetto, Codognato, and Tonin (2012). Moreover, we go further in the comparison between Kalai and Ritz (1978)'s inseparability and Arrow (1963)'s single-peak restrictions, showing that the former condition is more "respectable," in the sense of Muller and Satterthwaite (1985).
Abstract:
There are controversial reports about the effect of aging on movement preparation, and it is unclear to what extent cognitive and/or motor-related cerebral processes may be affected. This study examines age effects on electro-cortical oscillatory patterns during various motor programming tasks, in order to assess potential differences according to the mode of action selection. Twenty elderly (EP, 60-84 years) and 20 young (YP, 20-29 years) participants with normal cognition underwent 3 pre-cued response tasks (S1-S2 paradigm). S1 carried either complete information on response side (Full; stimulus-driven motor preparation), no information (None; general motor alertness), or required free selection of the response side (Free; internally-driven motor preparation). The electroencephalogram (EEG) was recorded using 64 surface electrodes. Alpha (8-12 Hz) desynchronization (ERD)/synchronization (ERS) and motor-related amplitude asymmetries (MRAA) were analyzed during the S1-S2 interval. Reaction times (RTs) to S2 were slower in EP than in YP, and in None than in the other 2 tasks. There was an Age x Task interaction due to increased RTs in Free compared to Full in EP only. Central bilateral and midline activation (alpha ERD) was smaller in EP than in YP in None. In Full, just before S2, readiness to move was reflected by posterior midline inhibition (alpha ERS) in both groups; in Free, such inhibition was present only in YP. Moreover, MRAA showed lateralization of motor activity in both groups in Full, but only in YP in Free. The results indicate reduced recruitment of motor regions for motor alertness in the elderly. They further show less efficient cerebral processes underlying free selection of movement in the elderly, suggesting a reduced capacity for internally-driven action with age.
Abstract:
The Statmedia 3 project has definitively consolidated the proposal for the courses Biostatistics (Biology), Data Analysis (Environmental Sciences) and Mathematical Statistics (Diploma programme), renewing part of the material created with Statmedia 2. Mathematics for Environmental Sciences and Introduction to Probability (Statistics degree) have also been added. The previous MQD project covered only practical sessions, whereas this project enables a diverse offering of individualized activities. Individualization means that each student receives a personalized case assignment with different data. Activities can be scheduled in class or outside it, but the key to an activity's success is that students see their work recognized in continuous assessment. Students rate Statmedia very positively, and we observe an improvement in academic results. Statmedia 3 has involved a major effort on the software side of the project; the mix of technologies we use is state of the art: Ajax, servlets and Java applets. We have developed an on-line assistant for designing documents and planning activities that eases the instructors' workload. Our front-line participation in the convergence process towards the European Higher Education Area (EEES) has allowed us to anticipate some changes, and has led the teaching staff of the Statistics Department to adopt Statmedia as an essential methodology in its teaching plans. The project continues in a fourth consecutive MQD project, in which we will deploy the newly implemented technology. The main objective will be to provide the courses of the 7 degree programmes in which the department participates with individualized activities in the form of practical cases, problems and various tests.
The collection of material stored in our library, built up over almost ten years of continuous work, together with the accumulated experience of how to use Statmedia most efficiently, began to be exploited in the new degree programmes in the 2009-2010 academic year.
Abstract:
In this work we present a Web-based tool developed to reinforce the teaching and learning of introductory programming courses. From the teacher's perspective, the system introduces important gains with respect to the classical teaching methodology: it reinforces lecture and laboratory sessions, makes it possible to give personalized attention to students, assesses their degree of participation and, most importantly, performs a continuous assessment of each student's progress. From the student's perspective, it provides a learning framework, consisting of a help environment and a correction environment, which facilitates their personal work. With this tool, students are more motivated to program.
Abstract:
The evaluation of large projects raises well-known difficulties because, by definition, such projects modify the current price system; their public evaluation presents additional difficulties because they also modify the shadow prices that would exist without the project. This paper first analyzes the basic methodologies applied until the late 1980s, based either on integrating projects into optimization models or on iterative procedures with information exchange between two organizational levels. The newer methodologies applied since then are based on variational inequalities, bilevel programming, and linear or nonlinear complementarity. Their foundations and various applications to project evaluation are explored. In fact, these new tools are closely related to one another and can treat more complex cases involving, for example, the reaction of agents to policies or the presence of multiple agents in an environment characterized by common functions representing demands or constraints on polluting emissions.
Abstract:
Business process designers take into account the resources that processes will need but, owing to the variable cost of certain inputs (such as energy) and other circumstances, this scheduling must be done at business process enactment time. In this report we formalize the energy-aware resource cost, including time- and usage-dependent rates. We also present a constraint programming approach and an auction-based approach to the problem, together with a comparison of the two approaches and of the algorithms proposed to solve them.
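The interaction of time- and usage-dependent rates can be sketched with a toy scheduler. All data here (prices, activity names, durations) are hypothetical, and exhaustive search stands in for the report's constraint programming and auction methods: the point is only that the cost of an assignment depends both on when an activity runs and on how long it holds the resource.

```python
from itertools import product

# Hypothetical data: energy price per hour (time-dependent rate) and
# activity durations on one shared resource (usage-dependent cost).
PRICE = [3, 3, 1, 1, 2, 2, 5, 5]        # price for hours 0..7
ACTIVITIES = {"pack": 2, "weld": 3}     # activity -> duration in hours

def cost(start, duration):
    # energy cost of occupying the resource over [start, start + duration)
    return sum(PRICE[t] for t in range(start, start + duration))

def best_schedule(activities, horizon=len(PRICE)):
    """Exhaustive search over start times (a stand-in for constraint
    propagation): the single resource forbids overlapping activities."""
    names = list(activities)
    best = None
    for starts in product(range(horizon), repeat=len(names)):
        intervals = [(s, s + activities[n]) for n, s in zip(names, starts)]
        if any(e > horizon for _, e in intervals):
            continue
        # no-overlap constraint on the shared resource
        if any(a0 < b1 and b0 < a1
               for i, (a0, a1) in enumerate(intervals)
               for (b0, b1) in intervals[i + 1:]):
            continue
        total = sum(cost(s, activities[n]) for n, s in zip(names, starts))
        if best is None or total < best[0]:
            best = (total, dict(zip(names, starts)))
    return best

print(best_schedule(ACTIVITIES))  # → (9, {'pack': 1, 'weld': 3})
```

Note that the cheapest schedule shifts both activities toward the low-price hours while respecting the resource constraint; a real constraint solver reaches the same optimum without enumerating every combination.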
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Existing polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the best of the highest scoring genes ending at each compatible preceding exon. The algorithm described here relies on the simple fact that this highest scoring gene can be stored and updated, which requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm does not assume an underlying gene structure model; instead, the definition of valid gene structures is given externally in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
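The linear sweep described above can be sketched as follows. This is a simplified illustration, not the published implementation: it chains plain (acceptor, donor, score) exons with additive scores, ignoring frame compatibility and the external Gene Model rules.

```python
def best_chain_score(exons):
    """Score of the highest-scoring chain of non-overlapping exons.

    exons: list of (acceptor, donor, score) triples with acceptor < donor;
    an exon may follow another if its acceptor lies strictly downstream of
    the predecessor's donor. After the two sorts, the sweep is linear: the
    best score over all "closed" upstream exons is stored and updated
    rather than recomputed for every candidate.
    """
    n = len(exons)
    by_acc = sorted(range(n), key=lambda i: exons[i][0])  # scan order
    by_don = sorted(range(n), key=lambda i: exons[i][1])  # release order
    chain = [0.0] * n       # chain[i] = best chain score ending at exon i
    best_prefix = 0.0       # best chain score among fully upstream exons
    j = 0
    for i in by_acc:
        acc, don, score = exons[i]
        # every exon whose donor precedes this acceptor is fully upstream;
        # its chain score is already final, so fold it into best_prefix
        while j < n and exons[by_don[j]][1] < acc:
            best_prefix = max(best_prefix, chain[by_don[j]])
            j += 1
        chain[i] = score + best_prefix
    return max(chain, default=0.0)

# (1,3):5 and (4,6):4 chain together (total 9), but the single
# overlapping exon (2,5):10 wins
print(best_chain_score([(1, 3, 5), (4, 6, 4), (2, 5, 10)]))  # → 10.0
```

The two pointers advance monotonically over the acceptor-sorted and donor-sorted views of the same exon set, which is the "simultaneous scan by increasing acceptor and donor position" of the abstract.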
Abstract:
Models incorporating more realistic customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve: column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving the CDLP based on segments and their consideration sets. SDCP is a relaxation of the CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with the CDLP for the case of non-overlapping segments. If the number of elements in a segment's consideration set is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound (i) by simulation, in the randomized concave programming (RCP) method, and (ii) by adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). The latter approach turns out to be very effective, essentially attaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why the CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of the MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets from the literature.
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than the CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts, developed for the CDLP, remain valid for our new formulation. Finally, we perform extensive numerical comparisons of the various bounds to evaluate their performance.
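For reference, the segment-level choice model underlying the CDLP and its relaxations can be sketched as follows. The MNL form itself is standard; the product names and preference weights below are hypothetical. A segment considers only the products in its consideration set, and splits its purchase probability among the offered products it considers plus a no-purchase option.

```python
def mnl_probabilities(weights, consideration, offer, v0=1.0):
    """MNL choice probabilities for one customer segment.

    weights: dict product -> preference weight; v0 is the no-purchase
    weight. A product influences the segment only if it is both offered
    and in the segment's consideration set. The returned dict maps each
    purchasable product (and None, for no purchase) to its probability.
    """
    considered = [j for j in offer if j in consideration]
    denom = v0 + sum(weights[j] for j in considered)
    probs = {j: weights[j] / denom for j in considered}
    probs[None] = v0 / denom
    return probs

w = {"A": 2.0, "B": 1.0, "C": 1.0}
p = mnl_probabilities(w, consideration={"A", "B"}, offer={"A", "B", "C"})
# the segment ignores C even when offered:
print(p["A"], p["B"], p[None])  # → 0.5 0.25 0.25
```

Overlapping consideration sets couple segments through the shared offer set, which is precisely what makes the CDLP column generation hard in the overlapping case.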
Abstract:
We present a new unifying framework for investigating throughput-WIP (Work-in-Process) optimal control problems in queueing systems, based on reformulating them as linear programming (LP) problems with special structure. We show that if a throughput-WIP performance pair in a stochastic system satisfies the Threshold Property we introduce in this paper, then the problem of optimizing a linear objective of throughput-WIP performance can be reformulated as a (semi-infinite) LP problem over a polygon with special structure (a threshold polygon). The strong structural properties of such polygons explain the optimality of threshold policies for optimizing linear performance objectives: their vertices correspond to the performance pairs of threshold policies. In this framework we analyze the versatile input-output queueing intensity control model introduced by Chen and Yao (1990), obtaining a variety of new results, including (a) an exact reformulation of the control problem as an LP problem over a threshold polygon; (b) an analytical characterization of the Min WIP function (giving the minimum WIP level required to attain a target throughput level); (c) an LP Value Decomposition Theorem that relates the objective value under an arbitrary policy to that of a given threshold policy (thus revealing the LP interpretation of Chen and Yao's optimality conditions); (d) diminishing returns and invariance properties of throughput-WIP performance, which underlie threshold optimality; and (e) a unified treatment of the time-discounted and time-average cases.
Abstract:
This paper introduces the use of Total Unduplicated Reach and Frequency (TURF) analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for a solution compared with the algorithms used to date. We present the results obtained with our exact algorithm, which proves extremely efficient both in obtaining optimal solutions and in computing time, even for very large instances of the problem at hand. Furthermore, the proposed technique allows the model to be extended to overcome the main drawbacks that TURF analysis presents in practice.
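The reach objective that the paper encodes as a binary linear program can be sketched on toy data. The respondents and product variants below are hypothetical, and plain enumeration stands in for the paper's exact algorithm; it computes the same optimum but is only viable for small instances.

```python
from itertools import combinations

# Hypothetical survey data: respondent -> set of acceptable variants
LIKES = {
    "r1": {"lemon", "mint"},
    "r2": {"mint"},
    "r3": {"cherry"},
    "r4": {"lemon", "cherry"},
    "r5": {"mint", "cherry"},
}

def best_line(variants, k):
    """Pick the k variants that reach the most respondents, where a
    respondent is 'reached' if at least one chosen variant is acceptable
    to them (the unduplicated reach of TURF analysis)."""
    def reach(line):
        return sum(1 for liked in LIKES.values() if liked & set(line))
    return max(combinations(sorted(variants), k), key=reach)

variants = {v for liked in LIKES.values() for v in liked}
print(best_line(variants, 2))  # → ('cherry', 'mint')
```

In the binary LP view, a 0/1 variable per variant marks inclusion in the line and a 0/1 variable per respondent marks being reached, linked by covering constraints; the enumeration above is simply a brute-force check of that objective.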