863 results for Combinatorial Optimization
Abstract:
Magdeburg, Univ., Faculty of Mathematics, habilitation thesis, 2012
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aimed to improve its adequacy for the problem at hand. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about a one percent difference in the objective function cost value.
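For orientation only, a minimal sketch of the underlying idea: the paper's ASMPSO is not reproduced here, so the snippet below shows a generic global-best PSO with a linearly shrinking velocity limit standing in for the adaptive velocity-limit mechanism, and a sphere function as a placeholder for the scheduling cost model.

    import random

    def pso(cost, dim, n_particles=30, iters=200, seed=0):
        # Generic global-best PSO with a linearly shrinking velocity limit;
        # a stand-in for the adaptive mechanism described in the abstract.
        rng = random.Random(seed)
        pos = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=cost)[:]
        for t in range(iters):
            vmax = 10.0 * (1.0 - t / iters) + 0.1  # velocity limit shrinks over time
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (gbest[d] - pos[i][d]))
                    vel[i][d] = max(-vmax, min(vmax, vel[i][d]))  # clamp velocity
                    pos[i][d] += vel[i][d]
                if cost(pos[i]) < cost(pbest[i]):
                    pbest[i] = pos[i][:]
                    if cost(pbest[i]) < cost(gbest):
                        gbest = pbest[i][:]
        return gbest

    # Toy usage: sphere function as a placeholder for the scheduling cost model.
    print(pso(lambda x: sum(v * v for v in x), dim=5))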
Abstract:
CD8+ cytolytic T lymphocytes (CTL) are the main effector cells of the adaptive immune system against infection and tumors. The recent identification of molecularly defined human tumor Ags recognized by autologous CTL has opened new opportunities for the development of Ag-specific cancer vaccines. Despite extensive work, however, the number of CTL-defined tumor Ags that are suitable targets for the vaccination of cancer patients is still limited, especially because of the laborious and time-consuming nature of the procedures currently used for their identification. The use of combinatorial peptide libraries in positional scanning format (Positional Scanning Synthetic Combinatorial Libraries, PS-SCL) has recently been proposed as an alternative approach for the identification of these epitopes. To validate this approach, we analyzed in detail the recognition of PS-SCL by tumor-reactive CTL clones specific for multiple well-defined tumor-associated Ags (TAA) as well as by tumor-reactive CTL clones of unknown specificity. The results of these analyses revealed that, for all the TAA-specific clones studied, most of the amino acids composing the native antigenic peptide sequences could be identified through the use of PS-SCL. Based on the data obtained from the screening of PS-SCL, we could design peptide analogs of increased antigenicity as well as cross-reactive analog peptides containing multiple amino acid substitutions. In addition, the results of PS-SCL screening combined with a recently developed biometric data analysis (PS-SCL-based biometric database analysis) allowed the identification of the native peptides in public protein databases among the 30 most active sequences, and this was the case for all the TAA studied. More importantly, the screening of PS-SCL with a tumor-reactive CTL clone of unknown specificity resulted in the identification of the actual epitope. Overall, these data encourage the use of PS-SCL not only for the identification and optimization of tumor-associated CTL epitopes, but also for the analysis of degeneracy in T lymphocyte receptor (TCR) recognition of tumor Ags.

[Lay summary] CD8+ cytolytic T cells are white blood cells that are the main effectors against infections and tumors. Immunologists have sought for years to identify molecules expressed and presented at the surface of tumors that can be recognized by CD8+ cytolytic T cells, which can then kill those tumors specifically. Such molecules are the basis for the development of cancer vaccines, since they could be injected into patients to induce an anti-tumor response. At present, very few molecules capable of stimulating the immune system against tumors are known, because the techniques developed to date for their identification are complex and time-consuming. A new technique has recently been proposed for the identification of this type of molecule, based on the use of peptide libraries. These libraries represent all possible combinations of the building blocks of the molecules sought. The first part of this study consisted of validating this new technique using CD8+ cytolytic T cells capable of killing tumor cells by recognizing a known molecule present at their surface. We showed that the use of the libraries identifies most of the building blocks of the molecule recognized by the CD8+ cytolytic T cells used. The second part of this study consisted of searching for potentially active molecules among proteins in public databases, using a computer program that ranks molecules by their predicted biological activity. Among thousands of database molecules, those recognized by our CD8+ cytolytic T cells were found among the most active. More interestingly, combining these two techniques allowed us to identify the molecule recognized by a population of CD8+ cytolytic T cells with anti-tumor activity but unknown specificity. Our results encourage the use of such libraries to find and optimize molecules specifically recognized by tumor-killing CD8+ cytolytic T cells.
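A minimal sketch of the kind of database scoring described above, under a simplifying assumed additive model: each position of a candidate peptide contributes the activity measured for the corresponding PS-SCL mixture, and database peptides are ranked by total score. The activity values and peptides below are invented placeholders.

    # Hypothetical per-position activity table derived from a PS-SCL screen
    # (position -> amino acid -> relative activity); values are illustrative only.
    activity = [
        {'A': 0.1, 'L': 0.9, 'Y': 0.3},
        {'L': 0.8, 'M': 0.7, 'V': 0.2},
        {'D': 0.6, 'E': 0.5, 'G': 0.1},
    ]

    def score(peptide, activity):
        # Additive scoring model: residues not seen in the screen contribute zero.
        return sum(pos.get(aa, 0.0) for pos, aa in zip(activity, peptide))

    # Rank a (toy) peptide database and keep the 30 highest-scoring candidates.
    database = ['ALD', 'LLE', 'YMG', 'LVD']
    top = sorted(database, key=lambda p: score(p, activity), reverse=True)[:30]
    print(top)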
Abstract:
This thesis studies the use of heuristic algorithms in a number of combinatorial problems that occur in various resource-constrained environments. Such problems occur, for example, in manufacturing, where a restricted number of resources (tools, machines, feeder slots) are needed to perform some operations. Many of these problems turn out to be computationally intractable, and heuristic algorithms are used to provide efficient, yet sub-optimal, solutions. The main goal of the present study is to build upon existing methods to create new heuristics that provide improved solutions for some of these problems. All of these problems occur in practice, and one of the motivations of our study was the request for improvements from industrial sources. We approach three different resource-constrained problems. The first, the tool switching and loading problem, occurs especially in the assembly of printed circuit boards. This problem has to be solved when an efficient, yet small, primary storage is used to access resources (tools) from a less efficient (but unlimited) secondary storage area. We study various forms of the problem and provide improved heuristics for its solution. Second, the nozzle assignment problem is concerned with selecting a suitable set of vacuum nozzles for the arms of a robotic assembly machine. It turns out to be a special case of the MINMAX resource allocation formulation of the apportionment problem and can be solved efficiently and optimally. We construct an exact algorithm specialized for nozzle selection and provide a proof of its optimality. Third, the problem of feeder assignment and component tape construction occurs when electronic components are inserted and certain component types cause tape movement delays that can significantly impact the efficiency of printed circuit board assembly. Here, careful selection of component slots in the feeder improves the tape movement speed. We provide a formal proof that this problem is of the same complexity as the turnpike problem (a well-studied geometric optimization problem), and provide a heuristic algorithm for it.
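The tool switching problem mentioned above is usually attacked in two layers: sequencing the jobs, then deciding which tools to keep in the magazine. For a fixed job sequence, the classical Keep-Tool-Needed-Soonest (KTNS) policy is optimal; a minimal sketch of that baseline (not the thesis's improved heuristics) follows.

    def ktns(job_tools, capacity):
        # Keep-Tool-Needed-Soonest: for a fixed job sequence, when the magazine
        # overflows, evict the resident tool whose next use lies furthest ahead.
        # Returns the total number of tool loads (conventions on counting the
        # initial load differ across papers).
        magazine, loads = set(), 0
        for i, tools in enumerate(job_tools):
            needed = set(tools)
            for t in needed - magazine:
                if len(magazine) >= capacity:
                    def next_use(tool):
                        for j in range(i + 1, len(job_tools)):
                            if tool in job_tools[j]:
                                return j
                        return float('inf')  # never needed again
                    magazine.remove(max(magazine - needed, key=next_use))
                magazine.add(t)
                loads += 1
        return loads

    # Toy instance: four jobs, magazine of capacity 3.
    print(ktns([{1, 2, 3}, {2, 4}, {1, 3, 4}, {2, 3}], capacity=3))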
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, collect-and-place-type gantry machines are discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one-PCB, one-machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests confirm that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications where the developed and applied solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in practical PCB assembly production and are readily applicable in the PCB manufacturing industry.
Abstract:
In the late seventies, Megiddo proposed a way to use an algorithm for the problem of minimizing a linear function $a_0 + a_1 x_1 + \cdots + a_n x_n$ subject to certain constraints to solve the problem of minimizing a rational function of the form $(a_0 + a_1 x_1 + \cdots + a_n x_n)/(b_0 + b_1 x_1 + \cdots + b_n x_n)$ subject to the same set of constraints, assuming that the denominator is always positive. Using a rather strong assumption, Hashizume et al. extended Megiddo's result to include approximation algorithms. Their assumption essentially asks for the existence of good approximation algorithms for optimization problems with possibly negative coefficients in the (linear) objective function, which is rather unusual for most combinatorial problems. In this paper, we present an alternative extension of Megiddo's result for approximations that avoids this issue and applies to a large class of optimization problems. Specifically, we show that, if there is an $\alpha$-approximation for the problem of minimizing a nonnegative linear function subject to constraints satisfying a certain increasing property, then there is an $\alpha$-approximation ($1/\alpha$-approximation) for the problem of minimizing (maximizing) a nonnegative rational function subject to the same constraints. Our framework applies to covering problems and network design problems, among others.
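The parametric idea behind such results can be illustrated with Dinkelbach's classical iteration (a relative of Megiddo's construction, not the paper's framework): minimizing the rational objective reduces to repeatedly minimizing $a(x) - \lambda\, b(x)$, a linear objective whose coefficients may turn negative, which is exactly the difficulty the paper's alternative extension is designed to avoid. A sketch with the feasible set given as an explicit list:

    def dinkelbach(feasible, a, b, a0=0.0, b0=1.0, eps=1e-9):
        # Minimize (a0 + a.x) / (b0 + b.x) over an explicitly listed feasible set,
        # assuming the denominator is positive everywhere on it.
        A = lambda x: a0 + sum(ai * xi for ai, xi in zip(a, x))
        B = lambda x: b0 + sum(bi * xi for bi, xi in zip(b, x))
        x = feasible[0]
        lam = A(x) / B(x)
        while True:
            # Linear subproblem: the coefficients of a(x) - lam*b(x) can be
            # negative, the issue the paper's framework circumvents.
            x = min(feasible, key=lambda y: A(y) - lam * B(y))
            if A(x) - lam * B(x) >= -eps:
                return x, lam
            lam = A(x) / B(x)

    # Toy instance over three 0/1 vectors; returns ((0, 1), 1/3).
    print(dinkelbach([(1, 0), (0, 1), (1, 1)], a=(3.0, 1.0), b=(1.0, 2.0)))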
Abstract:
Antimicrobial peptides (AMPs) are effector molecules of the innate immune systems found in different groups of organisms, including microorganisms, plants, insects, amphibians and humans. These peptides exhibit several structural motifs, but the most abundant AMPs assume an amphipathic alpha-helical structure. The alpha-helix-forming antimicrobial peptides are excellent candidates for protein engineering aimed at optimizing their biological activity and target specificity. Several approaches are now available, and this review deals with the use of combinatorial synthesis and directed evolution to provide a high-throughput source of antimicrobial peptide analogues with enhanced lytic activity and specificity.
Abstract:
The procurement of transportation services via large-scale combinatorial auctions involves several complex decisions whose outcomes strongly influence the performance of the tender process. This paper examines the shipper's task of selecting a subset of the submitted bids that efficiently trades off total procurement cost against expected carrier performance. To solve this bi-objective winner determination problem, we propose a Pareto-based greedy randomized adaptive search procedure (GRASP). As a post-optimizer we use a path relinking procedure which is hybridized with branch-and-bound. Several variants of this algorithm are evaluated by means of artificial test instances which comply with important real-world characteristics. The two best variants prove superior to a previously published Pareto-based evolutionary algorithm.
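A minimal sketch of a Pareto-based GRASP construction phase under simplifying assumptions (every lane must be coverable by the bid set, a bid is a lane set with a price and a carrier-performance score, and the path relinking / branch-and-bound post-optimizer is omitted):

    import random

    def grasp(bids, lanes, iters=200, alpha=0.3, seed=1):
        # bids: list of (lane_set, price, performance). Each GRASP iteration
        # draws a random cost/performance scalarization, builds one covering
        # solution from a restricted candidate list, and archives it if it is
        # nondominated (minimize cost, maximize performance).
        rng = random.Random(seed)
        archive = []  # list of (cost, performance, chosen_bids)
        for _ in range(iters):
            w = rng.random()
            uncovered, chosen, cost, perf = set(lanes), [], 0.0, 0.0
            while uncovered:
                cands = [b for b in bids if b[0] & uncovered]
                cands.sort(key=lambda b: (w * b[1] - (1 - w) * b[2])
                                         / len(b[0] & uncovered))
                pick = rng.choice(cands[:max(1, int(alpha * len(cands)))])
                chosen.append(pick)
                uncovered -= pick[0]
                cost, perf = cost + pick[1], perf + pick[2]
            if not any(c <= cost and p >= perf for c, p, _ in archive):
                # drop archive members the new solution dominates, then add it
                archive = [e for e in archive if not (cost <= e[0] and perf >= e[1])]
                archive.append((cost, perf, chosen))
        return archive

    bids = [({'L1', 'L2'}, 10.0, 0.9), ({'L2', 'L3'}, 8.0, 0.5), ({'L1', 'L3'}, 9.0, 0.7)]
    print([(c, p) for c, p, _ in grasp(bids, {'L1', 'L2', 'L3'})])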
Abstract:
In this study, we present a framework based on ant colony optimization (ACO) for tackling combinatorial problems. ACO algorithms have been applied to many different problems, with research focusing on algorithmic variants that obtain high-quality solutions. Usually, the implementations are redone for each new problem, even when they keep the same details of the ACO algorithm. Our goal, however, is to build a sustainable framework for applications to permutation problems. We concentrate on understanding the behavior of pheromone trails and the specific methods that can be combined with them. Eventually, we will propose an automatic offline configuration tool to build an effective algorithm.
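A minimal sketch of the family's common core under assumed details (Ant System-style construction with 1/distance visibility and a best-so-far pheromone deposit), illustrating the pheromone-trail behavior the framework focuses on:

    import random

    def aco_permutation(dist, n_ants=20, iters=100, rho=0.1, seed=0):
        # Ants build permutations proportionally to pheromone * visibility;
        # trails evaporate, then the best-so-far tour deposits pheromone.
        n = len(dist)
        rng = random.Random(seed)
        tau = [[1.0] * n for _ in range(n)]
        best, best_len = None, float('inf')
        for _ in range(iters):
            for _ant in range(n_ants):
                tour = [rng.randrange(n)]
                unvisited = set(range(n)) - {tour[0]}
                while unvisited:
                    i = tour[-1]
                    weights = [(j, tau[i][j] / (dist[i][j] + 1e-9)) for j in unvisited]
                    r = rng.random() * sum(w for _, w in weights)
                    for j, w in weights:  # roulette-wheel selection
                        r -= w
                        if r <= 0:
                            break
                    tour.append(j)
                    unvisited.discard(j)
                length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
                if length < best_len:
                    best, best_len = tour[:], length
            for i in range(n):              # evaporation ...
                for j in range(n):
                    tau[i][j] *= 1 - rho
            for k in range(n):              # ... then best-so-far deposit
                i, j = best[k], best[(k + 1) % n]
                tau[i][j] += 1.0 / best_len
        return best, best_len

    dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
    print(aco_permutation(dist))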
Abstract:
This paper presents an adaptation of the Cross-Entropy (CE) method to optimize fuzzy logic controllers. The CE method is a recently developed optimization technique based on a general Monte-Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling. This work applies the method to optimize the input gains, the location and size of each variable's membership function sets, and the weight of each rule in the rule base of a fuzzy logic controller (FLC). The control system presented in this work was designed to command the orientation of an unmanned aerial vehicle (UAV) so as to modify its trajectory and avoid collisions. An onboard forward-looking camera was used to sense the environment of the UAV. The information extracted by the image processing algorithm is the only input of the fuzzy control approach for avoiding collision with a predefined object. Real tests with a quadrotor corroborate the improved behavior of the optimized controllers at different stages of the optimization process.
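A minimal sketch of the CE machinery under assumed details (Gaussian sampling with an elite-fraction update; the cost function below is a placeholder standing in for a closed-loop evaluation of the candidate FLC parameters):

    import random, statistics

    def cross_entropy(cost, dim, iters=50, pop=100, elite_frac=0.1, seed=0):
        # Sample candidate parameter vectors, keep the elite fraction, refit
        # the sampling distribution to the elites, and repeat until it
        # concentrates around a good region.
        rng = random.Random(seed)
        mu, sigma = [0.0] * dim, [1.0] * dim
        n_elite = max(2, int(pop * elite_frac))
        for _ in range(iters):
            samples = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)]
                       for _ in range(pop)]
            elite = sorted(samples, key=cost)[:n_elite]
            for d in range(dim):
                vals = [s[d] for s in elite]
                mu[d] = statistics.fmean(vals)
                sigma[d] = statistics.pstdev(vals) + 1e-6  # keep some exploration
        return mu

    # Placeholder cost: distance from a fictitious target vector, standing in
    # for a simulated collision-avoidance run with the candidate controller.
    target = [0.5, -1.2, 2.0]
    print(cross_entropy(lambda x: sum((a - b) ** 2 for a, b in zip(x, target)), dim=3))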
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and we present new methodologies for each of them. The techniques based on extensions of intervals have allowed accurate models of signal and quantization noise propagation to be obtained in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is controlled and the combinatorial explosion is therefore minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This Ph.D. Thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level in the simulations for the final results of the optimization process, we can use more relaxed levels, which in turn imply a considerably smaller number of samples, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. The aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
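A toy sketch of the incremental idea (not HOPLITE's actual flow): greedily shave fractional bits off a trivial datapath y = a*b + c, validating each move by a Monte-Carlo simulation whose sample count starts small and grows as the search approaches its final answer.

    import random

    def quantize(x, frac_bits):
        step = 2.0 ** -frac_bits
        return round(x / step) * step

    def mc_error(widths, n, rng):
        # Worst observed error of the quantized datapath y = a*b + c against
        # the double-precision reference, over n random inputs.
        wa, wb, wc = widths
        worst = 0.0
        for _ in range(n):
            a, b, c = (rng.uniform(-1, 1) for _ in range(3))
            approx = quantize(a, wa) * quantize(b, wb) + quantize(c, wc)
            worst = max(worst, abs(approx - (a * b + c)))
        return worst

    def greedy_wordlengths(err_budget, start=16, floor=2, seed=0):
        rng = random.Random(seed)
        widths = [start] * 3
        n = 200                    # relaxed confidence in early iterations
        improved = True
        while improved:
            improved = False
            for i in range(3):
                trial = widths[:]
                trial[i] -= 1
                if trial[i] >= floor and mc_error(trial, n, rng) <= err_budget:
                    widths, improved = trial, True
            n = min(2 * n, 20000)  # tighten confidence as the search converges
        return widths

    print(greedy_wordlengths(err_budget=1e-3))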
Abstract:
The paper considers a vector discrete optimization problem with linear-fractional criterion functions on a feasible set that has the combinatorial properties of combinations. Structural properties of the feasible solution domain and of the Pareto-optimal (efficient), weakly efficient, and strictly efficient solution sets are examined. A relation between vector optimization problems on a combinatorial set of combinations and on a continuous feasible set is established. A possible approach to solving a multicriteria combinatorial problem with linear-fractional criterion functions on a set of combinations is proposed.
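In symbols, the problem class discussed above can be stated as follows (a generic formulation with assumed notation, writing $E$ for the combinatorial set of combinations and $\operatorname{vmin}$ for vector minimization in the Pareto sense):

    \[
      \operatorname*{vmin}_{x \in E} \; F(x) = \bigl(F_1(x), \dots, F_\ell(x)\bigr),
      \qquad
      F_i(x) = \frac{c_{i0} + \sum_{j=1}^{n} c_{ij}\, x_j}
                    {d_{i0} + \sum_{j=1}^{n} d_{ij}\, x_j},
      \qquad
      d_{i0} + \sum_{j=1}^{n} d_{ij}\, x_j > 0 \ \ \text{on } E .
    \]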
Abstract:
The major barrier to practical optimization of pavement preservation programming has always been that, for formulations where the identity of individual projects is preserved, the solution space grows exponentially with the problem size, to an extent where it can become unmanageable by traditional analytical optimization techniques within reasonable limits. This is the problem of combinatorial explosion, that is, the exponential growth of the number of combinations. The relatively large number of constraints often present in real-life pavement preservation programming problems, and the trade-off considerations required between preventive maintenance, rehabilitation and reconstruction, are further factors that contribute to the solution complexity. In this research study, a new integrated multi-year optimization procedure was developed to solve network-level pavement preservation programming problems through cost-effectiveness-based evolutionary programming analysis, using the Shuffled Complex Evolution (SCE) algorithm. A case study problem was analyzed to illustrate the robustness and consistency of the SCE technique in solving network-level pavement preservation problems. The output of this program is a list of maintenance and rehabilitation (M&R) treatment strategies for each identified segment of the network in each programming year, and the impact on the overall performance of the network, in terms of the performance levels under the recommended optimal M&R strategy. The results show that the SCE is very efficient and consistent in the simultaneous consideration of the trade-offs between various pavement preservation strategies, while preserving the identity of the individual network segments. The flexibility of the technique is also demonstrated, in the sense that, by suitably coding the problem parameters, it can be used to solve several forms of pavement management programming problems. It is recommended that, for large networks, some sort of decomposition technique be applied to aggregate sections which exhibit similar performance characteristics into links, such that whatever M&R alternative is recommended for a link can be applied to all the sections connected to it. In this way the problem size, and hence the solution time, can be greatly reduced to a more manageable solution space. The study concludes that the robust search characteristics of SCE are well suited to solving the combinatorial problems in long-term network-level pavement M&R programming and provide a rich area for future research.
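A minimal sketch of the SCE loop under assumed simplifications (one reflection step per competitive-complex-evolution round, a random restart in place of the full simplex contraction, and a sphere function standing in for the cost-effectiveness objective):

    import random

    def sce(cost, dim, n_complexes=4, complex_size=10, iters=50, seed=0):
        # Shuffled Complex Evolution sketch: deal the sorted population into
        # complexes, evolve each complex by simplex-style reflection of its
        # worst point, then merge (shuffle) and repeat.
        rng = random.Random(seed)
        rand_point = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
        pop = [rand_point() for _ in range(n_complexes * complex_size)]
        for _ in range(iters):
            pop.sort(key=cost)
            complexes = [pop[k::n_complexes] for k in range(n_complexes)]
            for comp in complexes:
                for _ in range(complex_size):
                    comp.sort(key=cost)
                    centroid = [sum(p[d] for p in comp[:-1]) / (len(comp) - 1)
                                for d in range(dim)]
                    refl = [2 * centroid[d] - comp[-1][d] for d in range(dim)]
                    # accept the reflection if it improves, else restart the point
                    comp[-1] = refl if cost(refl) < cost(comp[-1]) else rand_point()
            pop = [p for comp in complexes for p in comp]  # shuffle
        return min(pop, key=cost)

    print(sce(lambda x: sum(v * v for v in x), dim=4))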