923 results for Random Integer Partition


Relevance: 20.00%

Abstract:

In recent years there has been renewed interest in Mixed Integer Non-Linear Programming (MINLP) problems. This can be explained by several factors: (i) the performance of solvers handling non-linear constraints has improved considerably; (ii) the growing awareness that many real-world applications can be modeled as MINLP problems; (iii) the challenging nature of this very general class of problems. MINLP problems are well known to be NP-hard, since they generalize MILP problems, which are themselves NP-hard; in general, MINLPs are also hard to solve in practice. We address non-convex MINLPs, i.e., those whose continuous relaxations are non-convex: the presence of non-convexities in the model usually makes these problems even harder to solve. The aim of this Ph.D. thesis is to give a flavor of the different approaches one can use to attack MINLP problems with non-convexities, with special attention to real-world problems. In Part 1 of the thesis we introduce the problem and present three special cases of general MINLPs together with the most common methods used to solve them; these techniques play a fundamental role in the solution of general MINLP problems. We then describe algorithms addressing general MINLPs. Parts 2 and 3 contain the main contributions of the thesis. In particular, Part 2 presents four different methods aimed at solving different classes of MINLP problems. Part 3 is devoted to real-world applications: two problems and the corresponding MINLP approaches are presented, namely Scheduling and Unit Commitment for Hydro-Plants and Water Network Design. The results show that each of these methods has advantages and disadvantages; the method adopted to solve a real-world problem should therefore be tailored to the characteristics, structure and size of the problem. Part 4 consists of a brief review of tools commonly used for general MINLP problems, whose use and development (especially of open-source software) formed an integral part of this Ph.D. work; we present the main characteristics of the solvers for each special case of MINLP.
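For reference, a generic MINLP can be written in the standard form below; this generic model is an editorial illustration, not a formulation taken from the thesis:

\min_{x,\,y} \; f(x,y) \quad \text{s.t.} \quad g_i(x,y) \le 0, \;\; i = 1,\dots,m, \qquad x \in \mathbb{Z}^{n}, \;\; y \in \mathbb{R}^{p}

When f or any g_i is non-convex, the continuous relaxation obtained by dropping the integrality requirement on x is itself a non-convex NLP, which is what makes this class particularly hard.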

Relevance: 20.00%

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer them; as a consequence, a huge number of papers are published and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we must handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have to tackle the problem, and we are "forced" to use general-purpose techniques. The second arises when mixed integer programming is used to address a problem with some structure; in this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above situations. The first part of the work focuses on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions that truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of the weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. As stated, the chapter is still a work in progress; it presents a possible way of generating two-row cuts from the simplex tableau based on lattice-free triangles, together with some preliminary computational results.
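To fix notation for the disjunctive-cut discussion above: for a split disjunction (\pi^\top x \le \pi_0) \vee (\pi^\top x \ge \pi_0 + 1) over P = \{x \ge 0 : Ax \ge b\}, the textbook cut-generating LP (CGLP) reads as follows. This is the standard form from the literature, with the most common normalization in the last row, not necessarily the exact variant used in the thesis:

\min \; \alpha^\top x^* - \beta \quad \text{s.t.} \quad
\alpha \ge A^\top u - u_0 \pi, \quad \beta \le b^\top u - u_0 \pi_0,
\quad \alpha \ge A^\top v + v_0 \pi, \quad \beta \le b^\top v + v_0 (\pi_0 + 1),
\quad \textstyle\sum_i u_i + \sum_i v_i + u_0 + v_0 = 1, \quad u, v, u_0, v_0 \ge 0

A negative optimal value yields a disjunctive cut \alpha^\top x \ge \beta violated by the fractional point x^*; the last row is precisely the kind of normalization condition, truncating the disjunctive cone, whose choice Chapter 3 analyzes.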
The second part of the thesis focuses instead on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution), in which a class of exponentially large neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of the time windows; it leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut methods in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter we show how some basic ideas (in particular, the use of general-purpose cutting planes) can improve on the branch-and-cut methods proposed in the literature.

Relevance: 20.00%

Abstract:

Combinatorial Optimization is a branch of optimization dealing with problems in which the set of feasible solutions is discrete. Routing problems form a well-studied branch of Combinatorial Optimization concerned with deciding the best way to visit the nodes (customers) of a network; they appear in many real-world applications, including transportation and telephone or electronic data networks. Over the years, many solution procedures have been introduced for different routing problems. Some are exact approaches that solve the problems to optimality; others rely on heuristic or metaheuristic search to find optimal or near-optimal solutions. There is also a less-studied family of methods that combines heuristic and exact approaches, including for problems in the Combinatorial Optimization area. The aim of this dissertation is to develop solution procedures based on the combination of heuristic and Integer Linear Programming (ILP) techniques for some important problems in routing optimization. In this approach, given an initial feasible solution to be (possibly) improved, the method follows a destroy-and-repair paradigm: the given solution is randomly destroyed (i.e., customers are removed at random) and then repaired by solving an ILP model, in an attempt to find a new improved solution.
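The sketch below illustrates the destroy-and-repair loop just described; the callback names and the generic solver interface are editorial assumptions, not the dissertation's actual code:

import random

def destroy_and_repair(initial, cost, build_ilp, solve_ilp,
                       destroy_fraction=0.2, iterations=100, seed=0):
    """Sketch of a destroy-and-repair loop for a VRP-like problem.

    initial    -- starting feasible solution: a list of routes (lists of customers)
    cost       -- callback mapping a solution to its objective value
    build_ilp  -- callback: (partial_solution, removed_customers) -> ILP model
    solve_ilp  -- callback: ILP model -> repaired solution, or None on failure
    """
    rng = random.Random(seed)
    best, best_cost = initial, cost(initial)
    for _ in range(iterations):
        # Destroy: remove a random subset of customers from the incumbent.
        customers = [c for route in best for c in route]
        k = max(1, int(destroy_fraction * len(customers)))
        removed = set(rng.sample(customers, k))
        partial = [[c for c in route if c not in removed] for route in best]
        # Repair: reinsert the removed customers by (heuristically) solving
        # an ILP model over the resulting reinsertion neighborhood.
        candidate = solve_ilp(build_ilp(partial, removed))
        if candidate is not None and cost(candidate) < best_cost:
            best, best_cost = candidate, cost(candidate)
    return best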

Relevance: 20.00%

Abstract:

In this work, the influence of copolymers on the interfacial tension σ of homopolymer blends was investigated for the system poly(ethylene oxide) / poly(propylene oxide) (PEO/PPO). The additives were triblock copolymers EO-block-PO-block-EO and PO-block-EO-block-PO, diblock copolymers S-block-EO, and random copolymers EO-ran-PO. The additives were chosen such that pairs of additives differ in exactly one property (composition, chain length, block arrangement) while being comparable in all other parameters. The interfacial tension was measured as a function of temperature using the pendant-drop method, with the denser polymer, PEO, forming the drop phase and PPO the matrix phase. For the measurements on the ternary systems, the additive was added at various concentrations to either one or both homopolymer phases. The concentration dependence of σ is well described both by the model of Tang and Huang and by a Langmuir-type ansatz. To investigate the relation between σ and the phase behavior, cloud-point curves at 100 °C were recorded for some of the ternary systems. Comparing the phase diagrams with the corresponding values of σ indicates that an additive reduces σ effectively precisely when it is added to a homopolymer with which it is only partially miscible, since the driving force for segregation to the interface is then particularly strong. The previously known phenomenon that the measured interfacial tension can depend on which phase initially contains the additive was examined in detail. It is assumed that the system does not always reach thermodynamic equilibrium and that the observed effect is due to the attainment of stationary states. This behavior can be described by a model involving the viscosity ratio of the homopolymers and the partition coefficient of the copolymer between the homopolymer phases. From solubility parameters, the binary interaction parameter χ(PEO/PPO) = 0.18 was estimated and used to compute theoretical values of σ between PEO and PPO according to the models of Roe and of Helfand and Tagami. Comparison with the experimental data for the binary system shows that both approaches yield σ values of the right order of magnitude, with Roe's approach proving particularly suitable; the temperature dependence of the interfacial tension, however, is not captured correctly by either approach. With the model of Helfand and Tagami, an interfacial thickness of 7.9 Å and the density profile of the interface were calculated. For the copolymers EO92PO56EO92 and S9EO22 (the indices give the number of monomer units), the interfacial excess concentrations, the critical micelle concentration, and the area available to an additive molecule at the interface could be determined. Comparing different copolymers with respect to their ability to reduce σ effectively shows that, for triblock copolymers, the block arrangement plays a minor role compared with the composition. With increasing chain length, the effectiveness as a compatibilizer increases for both block copolymers and random copolymers.
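For reference, the Helfand-Tagami results mentioned above take the following closed form in the simplest symmetric case (equal statistical segment length b and monomer density \rho_0 for both homopolymers, infinite molecular weight); this is the standard textbook version, quoted here as an illustration rather than the exact variant used in the work:

\sigma = \rho_0\, b\, k_B T \sqrt{\chi / 6}, \qquad d = \frac{2b}{\sqrt{6\chi}}

With the estimated χ = 0.18, the second expression is the relation connecting the interaction parameter to an interfacial thickness of the order quoted above (7.9 Å).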

Relevance: 20.00%

Abstract:

This thesis investigates Decomposition and Reformulation approaches for solving Integer Linear Programming problems. This method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. Until now, however, the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework that applies this concept to a generic MIP problem: the new approach is capable of automatically decomposing and reformulating the input problem, can be applied as a black-box solution algorithm, and works as a complement and an alternative to standard solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the resulting partially convexified polyhedron(s). For a given MIP, several decompositions can be defined depending on which sets of constraints we choose to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points of the slaves). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated problem (the master) and checking, by pricing, whether a variable with favorable (negative) reduced cost exists; if so, it is added to the master, which is solved again (column generation); otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated into a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a significant speed-up in solution time, especially when the convex hull of the slaves is easy to characterize, usually thanks to its special structure.
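A minimal sketch of the resulting pricing loop, assuming a minimization problem and generic callbacks for the LP solver and the slave (pricing) oracle; the interface is an editorial assumption, not software from the thesis:

def column_generation(master, solve_lp, price, add_column,
                      tol=1e-6, max_iters=1000):
    """Dantzig-Wolfe pricing loop (minimization assumed).

    master     -- restricted master problem over a subset of extreme points
    solve_lp   -- callback: master -> (objective value, dual values)
    price      -- callback: duals -> (column, reduced_cost), the best extreme
                  point of the slave(s), found by optimizing over their
                  (structurally easy) feasible regions
    add_column -- callback adding a new extreme-point column to the master
    """
    for _ in range(max_iters):
        _, duals = solve_lp(master)
        column, reduced_cost = price(duals)
        if reduced_cost >= -tol:
            break  # no column with negative reduced cost: relaxation solved
        add_column(master, column)
    return master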

Relevance: 20.00%

Abstract:

One often hears the curious short claim: «Random is better than...». Why is randomness a good solution to a certain engineering problem? There are many possible answers, and all of them depend on the topic under consideration. In this thesis I discuss two crucial topics that benefit from randomizing some of the waveforms involved in signal manipulation. In particular, the advantages come from shaping the second-order statistics of antipodal sequences used in intermediate signal-processing stages. The first topic is in the area of analog-to-digital conversion and is named Compressive Sensing (CS). CS is a novel paradigm in signal processing that merges signal acquisition and compression, allowing a signal to be acquired directly in compressed form. In this thesis, after an ample description of the CS methodology and its related architectures, I present a new approach that achieves high compression by designing the second-order statistics of a set of additional waveforms involved in the signal acquisition/compression stage. The second topic addressed in this thesis is in the area of communication systems; in particular, I focus on ultra-wideband (UWB) systems. One option for producing and decoding UWB signals is direct-sequence spreading with multiple access based on code division (DS-CDMA). Focusing on this methodology, I address the coexistence of a DS-CDMA system with a narrowband interferer by minimizing the joint effect of multiple-access interference (MAI) and narrowband interference (NBI) on a simple matched-filter receiver. I show that, when the statistical properties of the spreading sequences are suitably designed, performance improvements are possible with respect to a system exploiting chaos-based sequences that minimize MAI only.
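As a toy illustration of acquiring a signal directly in compressed form with antipodal waveforms, consider the sketch below. The sizes, the i.i.d. (unshaped) ±1 sensing matrix and the matching-pursuit decoder are editorial assumptions chosen only to make the example self-contained; the thesis instead shapes the second-order statistics of these sequences:

import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 4            # signal length, measurements, sparsity
x = np.zeros(n)                  # k-sparse signal in the canonical basis
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Antipodal (+/-1) sensing waveforms, i.i.d. here for simplicity.
Phi = rng.choice([-1.0, 1.0], size=(m, n))

y = Phi @ x                      # compressed acquisition: m << n measurements

# Naive sparse recovery via orthogonal matching pursuit (a standard CS
# decoder, used here only to close the loop of the example).
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ sol
x_hat = np.zeros(n)
x_hat[support] = sol
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))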

Relevance: 20.00%

Abstract:

Packing is important in many industrial sectors, such as mining, pharmaceuticals and, above all, the space sector, since it makes it possible to maximize the loading fraction of the solid propellant of a rocket, yielding better performance and considerable economic advantages. The thesis work presented here is a study of the random packing, in particular the Random Close Packing case, of a solid propellant; to this end, a C++ code was implemented at the hangar of the Scuola di Ingegneria ed Architettura in Forlì. The main objective was to find the particle-size distribution of the ammonium perchlorate and aluminum particles that minimizes the empty space left between the particles.

Relevance: 20.00%

Abstract:

This thesis deals with three different physical models, each involving a random component tied to a cubic lattice. First, we study a model used in numerical calculations of Quantum Chromodynamics, in which random gauge fields are distributed on the bonds of the lattice. The formulation of the model is fitted into the mathematical framework of ergodic operator families. We prove that, for small coupling constants, the ergodicity of the underlying probability measure is indeed ensured and that the integrated density of states of the Wilson-Dirac operator exists. The physical situations treated in the next two chapters are more similar to one another: in both cases the principal idea is to study a fermion system in a cubic crystal with impurities, modeled by a random potential located at the lattice sites. In the second model we apply the Hartree-Fock approximation to such a system. For the case of reduced Hartree-Fock theory at positive temperature and fixed chemical potential, we consider the limit of an infinite system and show the existence and uniqueness of minimizers of the Hartree-Fock functional. In the third model we formulate the fermion system algebraically via C*-algebras. The question posed here is how to calculate the heat production of the system under the influence of an external electromagnetic field. We show that, in the regime of linear response, the heat production corresponds exactly to what is empirically predicted by Joule's law.
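For the reader's orientation, a random potential located at the lattice sites, as in the second and third models, is commonly written as an Anderson-type operator; the form below is the standard one and is given only as a generic illustration:

H_\omega = -\Delta + \lambda V_\omega, \qquad (V_\omega \psi)(x) = \omega_x\, \psi(x), \quad x \in \mathbb{Z}^3,

where \Delta is the discrete Laplacian, (\omega_x) is a family of i.i.d. random variables modeling the impurities, and \lambda is the coupling strength.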

Relevance: 20.00%

Abstract:

This work focuses mainly on poly(L-lactide) (PLLA), a material suitable for multiple applications, with performance comparable to that of petrochemical polymers (PP, PS, PET, etc.), readily recyclable and also compostable. However, PLLA has certain shortcomings that limit its applications: it is a hard, brittle, hydrophobic polymer with very low elongation at break, it crystallizes slowly, and it takes a long time to degrade. The properties of PLLA may be modified by copolymerization (random, block, and graft) of L-lactide monomers with other co-monomers. This thesis studies the crystallization and morphology of random poly(L-lactide-ran-ε-caprolactone) copolymers with different compositions of the two monomers, since the physical, mechanical, optical and chemical properties of a material depend on this behavior. Thermal analyses were performed by differential scanning calorimetry (DSC) and thermogravimetry (TGA) to observe the behavior associated with the different copolymer compositions. The crystallization kinetics and morphology of poly(L-lactide-ran-ε-caprolactone) were investigated by polarized-light optical microscopy (PLOM) and DSC, observing crystallization from the melt. It was observed that thermal degradation decreases with increasing amounts of PCL in the copolymer. Studies of the crystallization kinetics showed that small quantities of PCL in the copolymer increase the overall crystallization rate and the crystal growth rate, both of which decrease at higher PCL contents.

Relevance: 20.00%

Abstract:

People suffering from end-stage renal disease face two possible treatments: dialysis or an organ transplant. If they wish to pursue the latter, besides being placed on the waiting list for deceased donors, they can look for a person, such as a spouse, relative or friend, willing to donate a kidney. However, the transplant is not always feasible: donor and recipient may be incompatible in blood group or tissue type. Kidney Exchange Programs (KEPs) arose in response to this problem. Widely established in several European countries and worldwide, a KEP pools donor/recipient pairs in this situation in order to carry out, and maximize the number of, crossed kidney exchanges between compatible pairs. This thesis explores the possibility of merging donor/recipient pairs from several countries into a single international pool; the goal, naturally, is to obtain an ever larger number of transplants. The study addresses, from a mathematical point of view, the issues raised by such a collaboration: the countries that agree to participate in such a program must be guaranteed not only an advantage, but also that this advantage is fairly distributed among all participating countries.
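For context, the optimization problem underlying a KEP is usually stated as the textbook cycle formulation below, where V is the set of donor/recipient pairs and \mathcal{C} the set of feasible exchange cycles (of bounded length) in the compatibility graph; it is quoted as a generic illustration, not as the exact model studied in the thesis:

\max \sum_{c \in \mathcal{C}} w_c\, x_c \quad \text{s.t.} \quad \sum_{c \,:\, i \in c} x_c \le 1 \;\; \forall i \in V, \qquad x_c \in \{0,1\} \;\; \forall c \in \mathcal{C}

Each pair takes part in at most one selected cycle; choosing w_c as the number of transplants in cycle c maximizes the total number of transplants, while an international pool additionally requires fairness constraints on how this total is split among the participating countries.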

Relevance: 20.00%

Abstract:

The goal of this thesis work is to implement a computational code, based on the Lubachevsky-Stillinger algorithm, able to predict the volume fraction occupied by the solid particles that make up the grain of solid-propellant rocket motors. Particular attention is paid to the random sphere packing problem (Random Close Packing) that this algorithm models, and to the hypotheses under which the model can be applied to the problem at hand. The procedures followed to obtain the numerical simulation results are described and motivated, together with the limits of the model used and the improvements introduced for a more efficient and faster execution.
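The sketch below conveys the idea behind the algorithm in a strongly simplified, time-stepped form: spheres grow slowly in a periodic box while overlaps are relaxed by pushing particles apart. The real Lubachevsky-Stillinger algorithm is event-driven (it advances from collision to collision); all parameters here are illustrative:

import numpy as np

def ls_like_packing(n=200, steps=3000, growth=3e-5, seed=0):
    """Simplified Lubachevsky-Stillinger-style packing in a periodic unit box."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 3))       # sphere centers
    r = 0.0                        # common radius, grown over time
    for _ in range(steps):
        r += growth
        # Pairwise separations with minimum-image (periodic) convention.
        d = pos[:, None, :] - pos[None, :, :]
        d -= np.round(d)
        dist = np.linalg.norm(d, axis=-1)
        np.fill_diagonal(dist, np.inf)
        # Push apart every overlapping pair (center distance < 2r).
        i, j = np.where(np.triu(dist < 2 * r, k=1))
        if i.size:
            overlap = (2 * r - dist[i, j])[:, None]
            push = d[i, j] / dist[i, j][:, None] * overlap / 2
            np.add.at(pos, i, push)
            np.add.at(pos, j, -push)
            pos %= 1.0
    return n * (4.0 / 3.0) * np.pi * r**3   # packing fraction

print("estimated packing fraction:", ls_like_packing())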

Relevance: 20.00%

Abstract:

This work presents the design of a computational core for impedance measurements on the skin using pseudo-random signals. The measurement is performed by applying the random signal to the impedance and recovering the impulse response through a convolution operation. The computational core was implemented in VHDL.
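As a numerical illustration of the measurement principle (pseudo-random excitation, impulse response recovered by a convolution-type operation), here is a sketch in Python; the sequence length, the toy FIR impedance model and the use of cross-correlation are editorial assumptions:

import numpy as np

rng = np.random.default_rng(0)

N = 4095                          # length of the pseudo-random excitation
x = rng.choice([-1.0, 1.0], N)    # antipodal pseudo-random sequence

# Toy impedance model: a short FIR impulse response (assumed for illustration).
h = np.array([0.9, 0.5, 0.2, 0.05])

# Measured response = excitation convolved with the impulse response (+ noise).
y = np.convolve(x, h)[:N] + 0.01 * rng.standard_normal(N)

# Because x is white (delta-like autocorrelation), cross-correlating the
# response with the excitation recovers h, up to scaling by the sequence energy.
h_hat = np.correlate(y, x, mode="full")[N - 1 : N - 1 + len(h)] / N
print(np.round(h_hat, 3))         # should approximate h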

Relevance: 20.00%

Abstract:

This thesis studies the onset of critical events in a simple Integrate-and-Fire neural model based on Markovian stochastic processes defined on a network. The electrical neural signal is modeled as a flow of particles. Attention is focused on the transient phase of the system, seeking phenomena similar to neural synchronization, which can be regarded as a critical event. Particularly simple networks were studied, and the proposed model was found to produce cascade effects in the neural activity due to Self-Organized Criticality (the self-organization of the system into unstable states); such effects are not observed in Random Walks on the same networks. A small random stimulus was seen to generate remarkable fluctuations in the activity of the network, especially when the system is in a phase at the edge of equilibrium. The activity peaks detected in this way were interpreted as avalanches of neural signal, a phenomenon related to synchronization.
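To make the cascade mechanism concrete, the toy model below implements sandpile-style integrate-and-fire dynamics on a ring: a small random stimulus occasionally triggers chains of firings, the analogue of the signal avalanches discussed above. The topology, threshold and driving are illustrative choices, not the thesis's actual model:

import numpy as np

def avalanche_sizes(n=100, threshold=4, drops=20000, seed=0):
    """Toy integrate-and-fire cascades on a ring network."""
    rng = np.random.default_rng(seed)
    load = np.zeros(n, dtype=int)   # accumulated 'particles' per node
    sizes = []
    for _ in range(drops):
        load[rng.integers(n)] += 1  # small random stimulus
        size = 0
        unstable = np.flatnonzero(load >= threshold)
        while unstable.size:
            for i in unstable:
                # Fire: redistribute part of the load to the two neighbours;
                # the rest is dissipated, so every cascade terminates.
                load[i] -= threshold
                load[(i - 1) % n] += 1
                load[(i + 1) % n] += 1
                size += 1
            unstable = np.flatnonzero(load >= threshold)
        sizes.append(size)
    return sizes

sizes = avalanche_sizes()
print("largest avalanche:", max(sizes))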

Relevance: 20.00%

Abstract:

In most real-life environments, mechanical or electronic components are subjected to vibrations. Some of these components may have to pass qualification tests to verify that they can withstand the fatigue damage they will encounter during their operational life. In order to conduct a reliable test, the environmental excitations can be taken as a reference to synthesize the test profile: this procedure is referred to as “test tailoring”. For cost and feasibility reasons, accelerated qualification tests are usually performed: the duration of the original excitation, which acts on the component for its entire life cycle, typically hundreds or thousands of hours, is reduced. In particular, the “Mission Synthesis” procedure quantifies the damage induced by the environmental vibration through two functions: the Fatigue Damage Spectrum (FDS) quantifies the fatigue damage, while the Maximum Response Spectrum (MRS) quantifies the maximum stress. A new random Power Spectral Density (PSD) can then be synthesized with the same amount of induced damage but a specified, shorter duration, in order to conduct accelerated tests. In this work, the Mission Synthesis procedure is applied to so-called Sine-on-Random vibrations, i.e., excitations in which deterministic contributions, in the form of sine tones typically due to rotating parts of the system (e.g., helicopters, engine-mounted components, …), are superimposed on random vibrations. In fact, a proper test tailoring should preserve not only the accumulated fatigue damage but also the “nature” of the excitation (here, the sinusoidal components superimposed on the random process) in order to obtain reliable results. The classic time-domain approach is taken as a reference for comparing different methods for the FDS calculation in the presence of Sine-on-Random vibrations. A methodology to compute a Sine-on-Random specification based on a mission FDS is then presented.
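For reference, for a purely random excitation with one-sided acceleration PSD G(f), the FDS of a linear single-degree-of-freedom system with natural frequency f_n and quality factor Q is commonly written in the narrow-band form below (after Lalanne), assuming a Basquin law N S^b = C with S = K z, Miner's rule and Rayleigh-distributed peaks; the Sine-on-Random case treated in the thesis modifies the response statistics entering this expression:

\mathrm{FDS}(f_n) = \frac{f_n\, T}{C}\,\bigl(K \sqrt{2}\,\sigma_z\bigr)^{b}\,\Gamma\!\Bigl(1 + \frac{b}{2}\Bigr), \qquad \sigma_z^{2} = \frac{Q\, G(f_n)}{32\,\pi^{3} f_n^{3}},

where T is the excitation duration and \sigma_z the RMS relative displacement of the SDOF response.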