1000 results for combinatorial design
Abstract:
Combinatorial Optimization is becoming ever more crucial these days. From the natural sciences to economics, through urban administration and personnel management, methodologies and algorithms with a strong theoretical background and consolidated real-world effectiveness are increasingly in demand, in order to find good solutions to complex strategic problems quickly. Resource optimization is nowadays a fundamental ground on which successful projects are built. From the theoretical point of view, Combinatorial Optimization rests on stable and strong foundations that allow researchers to face ever more challenging problems. From the application point of view, however, the rate of theoretical development has not kept pace with that of modern hardware technologies, especially in the processor industry. In this work we propose new parallel algorithms designed to exploit the parallel architectures now available on the market. We found that, by exposing the inherent parallelism of some resolution techniques (such as Dynamic Programming), the computational benefits are remarkable, lowering execution times by more than an order of magnitude and allowing instances of dimensions not previously tractable to be addressed. We approached four notable Combinatorial Optimization problems: the Packing Problem, the Vehicle Routing Problem, the Single Source Shortest Path Problem, and a Network Design problem. For each of these problems we propose a collection of effective parallel solution algorithms, either for solving the full problem (Guillotine Cuts and SSSPP) or for enhancing a fundamental part of the solution method (VRP and ND). We support our claims with computational results for all problems, either on standard benchmarks from the literature or, when possible, on data from real-world applications, where speed-ups of one order of magnitude are usually attained, not uncommonly scaling up to factors of 40.
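As a hedged illustration of the kind of parallelism alluded to above (not the authors' actual implementation), the sketch below parallelizes one relaxation stage of a Bellman-Ford-style dynamic program for the single-source shortest path problem: within a stage the per-edge relaxations are independent, so they can be mapped over worker processes and combined with a minimum reduction. The graph representation, the static edge chunking, and the use of Python's ProcessPoolExecutor are illustrative assumptions.

```python
# Hypothetical sketch: one parallel relaxation stage of Bellman-Ford (Jacobi-style).
# The per-edge candidate distances within a stage are independent, so they can be
# computed in parallel and merged with a min-reduction.
from concurrent.futures import ProcessPoolExecutor
import math

def relax_chunk(args):
    """Return candidate distances produced by one chunk of edges."""
    edges, dist = args
    cand = {}
    for u, v, w in edges:
        if dist[u] + w < cand.get(v, math.inf):
            cand[v] = dist[u] + w
    return cand

def parallel_bellman_ford(n, edges, source, workers=4):
    dist = [math.inf] * n
    dist[source] = 0.0
    chunks = [edges[i::workers] for i in range(workers)]   # static edge partition
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(n - 1):                              # at most n-1 stages
            results = pool.map(relax_chunk, [(c, dist) for c in chunks])
            new_dist = dist[:]
            for cand in results:                            # min-reduction of candidates
                for v, d in cand.items():
                    if d < new_dist[v]:
                        new_dist[v] = d
            if new_dist == dist:                            # fixed point: stop early
                break
            dist = new_dist
    return dist

if __name__ == "__main__":
    edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0), (2, 3, 8.0)]
    print(parallel_bellman_ford(4, edges, source=0))        # [0.0, 3.0, 1.0, 8.0]
```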
Abstract:
When designing metaheuristic optimization methods, there is a trade-off between application range and effectiveness. For large real-world instances of combinatorial optimization problems, out-of-the-box metaheuristics often fail, and optimization methods need to be adapted to the problem at hand. Knowledge about the structure of high-quality solutions can be exploited by introducing a so-called bias into one of the components of the metaheuristic used. These problem-specific adaptations increase search performance. This thesis analyzes the characteristics of high-quality solutions for three constrained spanning tree problems: the optimal communication spanning tree problem, the quadratic minimum spanning tree problem, and the bounded diameter minimum spanning tree problem. Several relevant tree properties that should be examined when analyzing a constrained spanning tree problem are identified. Based on the insights gained into the structure of high-quality solutions, efficient and robust solution approaches are designed for each of the three problems. Experimental studies analyze the performance of the developed approaches compared to the current state of the art.
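As a hedged sketch of what a structural bias in a metaheuristic component can look like (a generic illustration, not the thesis's actual operators), the code below builds random spanning trees with a Kruskal-style construction whose edge-selection probabilities are biased toward low-weight edges, since high-quality solutions to many constrained spanning tree problems tend to favour such edges. The rank-based bias exponent and the union-find helper are assumptions made for the example.

```python
# Hypothetical sketch: biased randomized spanning tree construction.
# Edges are sampled with probability decreasing in their weight rank, so the
# construction component of a metaheuristic favours structures that resemble
# high-quality solutions (here: mostly low-weight edges).
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def biased_spanning_tree(n, edges, bias=2.0, rng=random):
    """edges: list of (u, v, weight); bias > 0 steers selection toward cheap edges."""
    ranked = sorted(edges, key=lambda e: e[2])
    weights = [1.0 / (r + 1) ** bias for r in range(len(ranked))]  # rank-based bias
    parent = list(range(n))
    tree = []
    while len(tree) < n - 1:
        (u, v, w), = rng.choices(ranked, weights=weights, k=1)
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                      # keep the edge only if it joins two components
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

if __name__ == "__main__":
    edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.5), (1, 3, 4.0)]
    print(biased_spanning_tree(4, edges, bias=3.0))
```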
Abstract:
Offset printing is a common method to produce large amounts of printed matter. We consider a real-world offset printing process that is used to imprint customer-specific designs on napkin pouches. The printing technology used yields a number of specific constraints. The planning problem consists of allocating designs to printing-plate slots such that the given customer demand for each design is fulfilled, all technological and organizational constraints are met, and the total overproduction and setup costs are minimized. We formulate this planning problem as a mixed-binary linear program, and we develop a multi-pass matching-based savings heuristic. We report computational results for a set of problem instances devised from real-world data.
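The following sketch only illustrates the general shape of a matching-based savings heuristic; the paper's multi-pass structure, cost model, and plate constraints are not reproduced. Pairwise savings of printing two designs on a shared plate are computed with a placeholder cost function, and a maximum-weight matching selects the most beneficial pairs. Both cost functions and the use of networkx's max_weight_matching are assumptions.

```python
# Hypothetical sketch of a matching-based savings heuristic.
# savings(a, b) = cost(a alone) + cost(b alone) - cost(a and b combined);
# pairs with positive savings are selected via maximum-weight matching.
import itertools
import networkx as nx

def single_cost(demand, run_size=1000, setup=50.0, over_unit=0.1):
    """Placeholder cost of printing one design on its own plate (assumption)."""
    runs = -(-demand // run_size)                 # ceiling division
    return setup + over_unit * (runs * run_size - demand)

def pair_cost(d1, d2, run_size=1000, setup=50.0, over_unit=0.1):
    """Placeholder cost of two designs sharing a plate (same run count for both)."""
    runs = -(-max(d1, d2) // run_size)
    over = (runs * run_size - d1) + (runs * run_size - d2)
    return setup + over_unit * over

def savings_matching(demands):
    g = nx.Graph()
    for a, b in itertools.combinations(demands, 2):
        s = single_cost(demands[a]) + single_cost(demands[b]) - pair_cost(demands[a], demands[b])
        if s > 0:
            g.add_edge(a, b, weight=s)
    return nx.max_weight_matching(g)              # set of paired designs

if __name__ == "__main__":
    demands = {"A": 900, "B": 950, "C": 400, "D": 2100}
    print(savings_matching(demands))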
Abstract:
In this study, we present a framework based on ant colony optimization (ACO) for tackling combinatorial problems. ACO algorithms have been applied to many different problems, with a focus on algorithmic variants that obtain high-quality solutions. Usually, the implementations are redone for each problem, even if the details of the ACO algorithm remain the same. However, our goal is to create a sustainable framework for applications to permutation problems. We concentrate on understanding the behavior of pheromone trails and the specific methods that can be combined with them. Finally, we will propose an automatic offline configuration tool to build an effective algorithm.
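To make the pheromone mechanics concrete, here is a hedged, generic ACO sketch for permutation problems (an illustration of the standard scheme, not the framework described above): ants build permutations position by position with probabilities proportional to pheromone values, pheromone evaporates, and the best solution found so far reinforces its entries. The position-based pheromone model, the parameter values, and the toy cost function are assumptions.

```python
# Hypothetical sketch: a minimal ACO for permutation problems.
# tau[p][i] is the pheromone for placing item i at position p.
import random

def construct(tau, rng):
    n = len(tau)
    remaining = list(range(n))
    perm = []
    for p in range(n):
        weights = [tau[p][i] for i in remaining]
        item, = rng.choices(remaining, weights=weights, k=1)
        remaining.remove(item)
        perm.append(item)
    return perm

def aco(cost, n, ants=20, iters=200, rho=0.1, q=1.0, seed=0):
    rng = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]
    best_perm, best_cost = None, float("inf")
    for _ in range(iters):
        colony = [construct(tau, rng) for _ in range(ants)]
        for perm in colony:
            c = cost(perm)
            if c < best_cost:
                best_perm, best_cost = perm, c
        # evaporation followed by best-so-far reinforcement
        for p in range(n):
            for i in range(n):
                tau[p][i] *= (1.0 - rho)
        for p, i in enumerate(best_perm):
            tau[p][i] += q / (1.0 + best_cost)
    return best_perm, best_cost

if __name__ == "__main__":
    target = [3, 1, 4, 0, 2]
    toy_cost = lambda perm: sum(a != b for a, b in zip(perm, target))  # Hamming distance to a target
    print(aco(toy_cost, n=5))
```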
Abstract:
Molecular analysis of complex modular structures, such as promoter regions or multi-domain proteins, often requires the creation of families of experimental DNA constructs having altered composition, order, or spacing of individual modules. Generally, creation of every individual construct of such a family uses a specific combination of restriction sites. However, convenient sites are not always available and the alternatives, such as chemical resynthesis of the experimental constructs or engineering of different restriction sites onto the ends of DNA fragments, are costly and time consuming. A general cloning strategy (nucleic acid ordered assembly with directionality, NOMAD; WWW resource locator http://Lmb1.bios.uic.edu/NOMAD/NOMAD.html) is proposed that overcomes these limitations. Use of NOMAD ensures that the production of experimental constructs is no longer the rate-limiting step in applications that require combinatorial rearrangement of DNA fragments. NOMAD manipulates DNA fragments in the form of "modules" having a standardized cohesive end structure. Specially designed "assembly vectors" allow for sequential and directional insertion of any number of modules in an arbitrary predetermined order, using the ability of type IIS restriction enzymes to cut DNA outside of their recognition sequences. Studies of regulatory regions in DNA, such as promoters, replication origins, and RNA processing signals, construction of chimeric proteins, and creation of new cloning vehicles, are among the applications that will benefit from using NOMAD.
Abstract:
A major problem in de novo design of enzyme inhibitors is the unpredictability of the induced fit, with the shape of both ligand and enzyme changing cooperatively and unpredictably in response to subtle structural changes within a ligand. We have investigated the possibility of dampening the induced fit by using a constrained template as a replacement for adjoining segments of a ligand. The template preorganizes the ligand structure, thereby organizing the local enzyme environment. To test this approach, we used templates consisting of constrained cyclic tripeptides, formed through side chain to main chain linkages, as structural mimics of the protease-bound extended beta-strand conformation of three adjoining amino acid residues at the N- or C-terminal sides of the scissile bond of substrates. The macrocyclic templates were derivatized to a range of 30 structurally diverse molecules via focused combinatorial variation of nonpeptidic appendages incorporating a hydroxyethylamine transition-state isostere. Most compounds in the library were potent inhibitors of the test protease (HIV-1 protease). Comparison of crystal structures for five protease-inhibitor complexes containing an N-terminal macrocycle and three protease-inhibitor complexes containing a C-terminal macrocycle establishes that the macrocycles fix their surrounding enzyme environment, thereby permitting independent variation of acyclic inhibitor components with only local disturbances to the protease. In this way, the location in the protease of various acyclic fragments on either side of the macrocyclic template can be accurately predicted. This type of templating strategy minimizes the problem of induced fit, reducing unpredictable cooperative effects in one inhibitor region caused by changes to adjacent enzyme-inhibitor interactions. This idea might be exploited in template-based approaches to inhibitors of other proteases, where a beta-strand mimetic is also required for recognition, and also other protein-binding ligands where different templates may be more appropriate.
Abstract:
The cyclotides are a recently discovered family of miniproteins that contain a head-to-tail cyclized backbone and a knotted arrangement of disulfide bonds. They are approximately 30 amino acids in size and are present in high abundance in plants from the Violaceae, Rubiaceae, and Cucurbitaceae families, with individual plants containing a suite of up to 100 cyclotides. They have a diverse range of biological activities, including uterotonic, anti-HIV, antitumor, and antimicrobial activities, although their natural function is likely that of defending their host plants from pathogens and pests. This review focuses on the structural aspects of cyclotides, which may be thought of as a natural combinatorial peptide template in which a wide range of amino acids is displayed on a compact molecular core made up of the cyclic cystine knot structural motif. Cyclotides are exceptionally stable and are resistant to denaturation via thermal, chemical, or enzymatic treatments. The structural features that contribute to their remarkable stability are described in this review. (c) 2006 Wiley Periodicals, Inc.
Abstract:
One hundred sixty-eight multiply substituted 1,4-benzodiazepines have been prepared by a five-step solid-phase combinatorial approach using SynPhase crowns as a solid support and a hydroxymethyl-phenoxy-acetamido linkage (Wang linker). The substituents of the 1,4-benzodiazepine scaffold were varied at the 3-, 5-, 7-, and 8-positions, and the combinatorial library was evaluated in a cholecystokinin (CCK) radioligand binding assay. 3-Alkylated 1,4-benzodiazepines with selectivity towards the CCK-B (CCK2) receptor were optimized with respect to the lipophilic side chain, the ketone moiety, and the stereochemistry at the 3-position. Various novel 3-alkylated compounds were synthesized, and (S)-3-propyl-5-phenyl-1,4-benzodiazepin-2-one, (S)-NV-A, showed CCK-B-selective binding at about 180 nM. Fifty-eight compounds of this combinatorial library were purified by preparative TLC, and 25 compounds were isolated and fully characterized by TLC, IR, APCI-MS, and 1H/13C-NMR spectroscopy.
Abstract:
Partially supported by the Bulgarian Science Fund contract with TU Varna, No 487.
Abstract:
This thesis focuses on the development of algorithms that will allow protein design calculations to incorporate more realistic modeling assumptions. Protein design algorithms search large sequence spaces for protein sequences that are biologically and medically useful. Better modeling could improve the chance of success in designs and expand the range of problems to which these algorithms are applied. I have developed algorithms to improve modeling of backbone flexibility (DEEPer) and of more extensive continuous flexibility in general (EPIC and LUTE). I’ve also developed algorithms to perform multistate designs, which account for effects like specificity, with provable guarantees of accuracy (COMETS), and to accommodate a wider range of energy functions in design (EPIC and LUTE).
Abstract:
The quality of a heuristic solution to an NP-hard combinatorial problem is difficult to assess. A few studies have advocated and tested statistical bounds as a method for such assessment. These studies indicate that statistical bounds are superior to the more widely known and used deterministic bounds. However, previous studies have been limited to a few metaheuristics and combinatorial problems, and hence the general performance of statistical bounds in combinatorial optimization remains an open question. This work complements the existing literature on statistical bounds by testing them with the metaheuristic Greedy Randomized Adaptive Search Procedures (GRASP) on four combinatorial problems. Our findings confirm previous results that statistical bounds are reliable for the p-median problem, and we note that they also appear reliable for the set covering problem. For the quadratic assignment problem, statistical bounds obtained from a genetic algorithm have previously been found reliable, whereas in this work they are found to be less reliable. Finally, we provide statistical bounds for four 2-path network design problem instances for which the optimum is currently unknown.
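For context, the sketch below shows the generic GRASP loop on a toy p-median instance (the standard textbook scheme, not the study's implementation): each iteration builds a solution greedily but randomized through a restricted candidate list, improves it by swap-based local search, and records the best value. Statistical bounds are then typically estimated from the distribution of such independent replicate values. The RCL parameter, the swap neighbourhood, and the toy instance are assumptions.

```python
# Hypothetical sketch: GRASP for a toy p-median problem.
# Each replicate value it returns could feed an extreme-value (statistical) bound.
import random

def pmedian_cost(medians, dist):
    return sum(min(dist[c][m] for m in medians) for c in range(len(dist)))

def greedy_randomized(dist, p, alpha, rng):
    medians = set()
    while len(medians) < p:
        cand = [(pmedian_cost(medians | {m}, dist), m)
                for m in range(len(dist)) if m not in medians]
        cand.sort()
        cutoff = cand[0][0] + alpha * (cand[-1][0] - cand[0][0])
        rcl = [m for c, m in cand if c <= cutoff]          # restricted candidate list
        medians.add(rng.choice(rcl))
    return medians

def local_search(medians, dist):
    improved = True
    while improved:
        improved = False
        for out in list(medians):
            for inn in range(len(dist)):
                if inn in medians:
                    continue
                trial = (medians - {out}) | {inn}
                if pmedian_cost(trial, dist) < pmedian_cost(medians, dist):
                    medians, improved = trial, True
                    break                                   # restart scan after an improving swap
            if improved:
                break
    return medians

def grasp(dist, p, iters=50, alpha=0.3, seed=0):
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        sol = local_search(greedy_randomized(dist, p, alpha, rng), dist)
        c = pmedian_cost(sol, dist)
        if c < best_cost:
            best, best_cost = sol, c
    return best, best_cost

if __name__ == "__main__":
    pts = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
    dist = [[abs(ax - bx) + abs(ay - by) for bx, by in pts] for ax, ay in pts]
    print(grasp(dist, p=2))
```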
Abstract:
Targeted cancer therapy aims to disrupt aberrant cellular signalling pathways. Biomarkers are surrogates of pathway state, but there is limited success in translating candidate biomarkers to clinical practice due to the intrinsic complexity of pathway networks. Systems biology approaches afford better understanding of complex, dynamical interactions in signalling pathways targeted by anticancer drugs. However, adoption of dynamical modelling by clinicians and biologists is impeded by model inaccessibility. Drawing on computer games technology, we present a novel visualisation toolkit, SiViT, that converts systems biology models of cancer cell signalling into interactive simulations that can be used without specialist computational expertise. SiViT allows clinicians and biologists to directly introduce, for example, loss-of-function mutations and specific inhibitors. SiViT animates the effects of these introductions on pathway dynamics, suggesting further experiments and assessing candidate biomarker effectiveness. In a systems biology model of Her2 signalling we experimentally validated predictions using SiViT, revealing the dynamics of biomarkers of drug resistance and highlighting the role of pathway crosstalk. No model is ever complete: the iteration of real data and simulation facilitates continued evolution of more accurate, useful models. SiViT will make accessible libraries of models to support preclinical research, combinatorial strategy design and biomarker discovery.
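As a hedged, minimal illustration of the kind of simulation such a toolkit animates (a toy model, not SiViT's Her2 model), the sketch below integrates a two-species signalling ODE in which a drug parameter scales an activation rate, so pathway dynamics can be compared with and without the inhibitor. The species, rate constants, and the use of scipy's solve_ivp are assumptions.

```python
# Hypothetical sketch: effect of an inhibitor on a toy signalling ODE.
# r = active receptor, s = downstream activated signal; `inhibition` in [0, 1]
# scales the receptor activation rate, mimicking a specific inhibitor.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, y, inhibition):
    r, s = y
    k_act, k_deact = 1.0 * (1.0 - inhibition), 0.5   # receptor activation / deactivation
    k_sig, k_decay = 2.0, 0.8                        # downstream signalling / decay
    dr = k_act * (1.0 - r) - k_deact * r
    ds = k_sig * r * (1.0 - s) - k_decay * s
    return [dr, ds]

def simulate(inhibition, t_end=10.0):
    sol = solve_ivp(pathway, (0.0, t_end), [0.0, 0.0],
                    args=(inhibition,), t_eval=np.linspace(0.0, t_end, 50))
    return sol.t, sol.y

if __name__ == "__main__":
    for inhib in (0.0, 0.8):
        t, y = simulate(inhib)
        print(f"inhibition={inhib}: final signal level = {y[1, -1]:.3f}")
```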
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes. We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. 
We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
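As a hedged illustration of the connectivity notions these results revolve around (not the thesis's algorithms), the snippet below uses networkx to check whether a weighted toy graph is 2-vertex-connected, to compute the edge connectivity of one vertex pair via max-flow, and to report the cost-per-vertex "density" used by pruning arguments of this kind. The toy graph is an assumption.

```python
# Hypothetical sketch: checking the connectivity notions from the abstract on a toy graph.
import networkx as nx

g = nx.Graph()
g.add_weighted_edges_from([
    (0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 0, 2.5), (1, 3, 1.5),  # a small 2-connected graph
])

# 2-vertex-connected: removing any single vertex leaves the rest connected.
print("2-vertex-connected:", nx.is_biconnected(g))

# k-edge-connectivity of a pair u, v: max number of edge-disjoint u-v paths (max-flow).
print("edge connectivity of (0, 2):", nx.edge_connectivity(g, 0, 2))

# 'Density' in the sense used by the pruning process: total edge cost per vertex.
density = g.size(weight="weight") / g.number_of_nodes()
print("cost density:", density)
```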
Abstract:
This paper proposes and investigates a metaheuristic tabu search algorithm (TSA) that generates optimal or near-optimal solution sequences for the feedback length minimization problem (FLMP) associated with a design structure matrix (DSM). The FLMP is a non-linear combinatorial optimization problem belonging to the NP-hard class, and therefore finding an exact optimal solution is very hard and time consuming, especially on medium and large problem instances. First, we introduce the subject and review the related literature and problem definitions. Using the tabu search method (TSM) paradigm, the paper then presents a new tabu search algorithm that generates optimal or sub-optimal solutions for the feedback length minimization problem, using two different neighborhoods based on swapping two activities and shifting an activity to a different position. Furthermore, the paper includes numerical results for analyzing the performance of the proposed TSA and for setting appropriate values of its parameters. We then compare our results on benchmark problems with those already published in the literature. We conclude that the proposed tabu search algorithm is very promising because it outperforms the existing methods and because no other tabu search method for the FLMP has been reported in the literature. Applied to the process layer of multidimensional design structure matrices, the proposed tabu search algorithm proves to be a key optimization method for optimal product development.
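To make the two neighborhoods concrete, here is a hedged sketch of a tabu search over activity orderings (a generic illustration, not the paper's algorithm or parameter settings): the objective charges each feedback mark, i.e. each DSM entry where an earlier activity depends on a later one, by the distance it spans; moves are swaps of two activities and shifts of one activity to a new position; recently moved activities are tabu unless the move improves the best-known solution. The feedback-length definition, the DSM orientation convention, and the tenure value are assumptions.

```python
# Hypothetical sketch: tabu search for feedback length minimization over a DSM.
# dsm[i][j] = 1 means activity i needs information from activity j (assumption);
# a feedback mark occurs when j is scheduled after i, and is charged by its distance.
import random

def feedback_length(order, dsm):
    pos = {act: p for p, act in enumerate(order)}
    return sum(pos[j] - pos[i]
               for i in range(len(dsm)) for j in range(len(dsm))
               if dsm[i][j] and pos[j] > pos[i])

def neighbors(order):
    n = len(order)
    for a in range(n):                         # swap neighborhood
        for b in range(a + 1, n):
            new = order[:]
            new[a], new[b] = new[b], new[a]
            yield new, order[a]
    for a in range(n):                         # shift neighborhood
        for b in range(n):
            if a == b:
                continue
            new = order[:a] + order[a + 1:]
            new.insert(b, order[a])
            yield new, order[a]

def tabu_search(dsm, iters=300, tenure=7, seed=0):
    rng = random.Random(seed)
    order = list(range(len(dsm)))
    rng.shuffle(order)
    best, best_cost = order[:], feedback_length(order, dsm)
    tabu = {}                                  # activity -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for new, moved in neighbors(order):
            cost = feedback_length(new, dsm)
            allowed = tabu.get(moved, -1) < it or cost < best_cost   # aspiration criterion
            if allowed:
                candidates.append((cost, new, moved))
        if not candidates:
            break
        cost, order, moved = min(candidates, key=lambda c: c[0])
        tabu[moved] = it + tenure
        if cost < best_cost:
            best, best_cost = order[:], cost
    return best, best_cost

if __name__ == "__main__":
    dsm = [[0, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 0]]
    print(tabu_search(dsm))
```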