882 results for Large-Scale Optimization
Abstract:
The current cosmological dark sector (dark matter plus dark energy) is challenging our understanding of the physical processes taking place in the Universe. Recently, some authors tried to falsify the basic underlying assumptions of this dark matter–dark energy paradigm. In this Letter, we show that oversimplifications of the measurement process may produce false positives in any consistency test based on the globally homogeneous and isotropic Λ cold dark matter (ΛCDM) model and its expansion history based on distance measurements. In particular, when local inhomogeneity effects due to clumped matter or voids are taken into account, an apparent violation of the basic assumptions (Copernican Principle) seems to be present. Conversely, the amplitude of the deviations also probes the degree of reliability underlying the phenomenological Dyer–Roeder procedure by confronting its predictions with the accuracy of the weak lensing approach. Finally, a new method is devised to reconstruct the effects of the inhomogeneities in a ΛCDM model, and some suggestions on how to distinguish clumpiness (or void) effects from those of different cosmologies are discussed.
The boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems
Abstract:
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large, solving the subproblem becomes difficult; therefore, the effectiveness of this approach is associated with the boundedness of the penalty parameters. In this paper, it is proved that under more natural assumptions than the ones employed until now, penalty parameters are bounded. For proving the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
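The outer loop described above can be sketched as follows. This is a minimal sketch with a scalar toy problem and an exact subproblem solver; the parameter names and the safeguarded penalty-update rule are generic PHR-style choices for illustration, not the paper's exact algorithm:

```python
import math

def augmented_lagrangian(solve_sub, h, lam=0.0, rho=10.0,
                         tau=0.5, gamma=10.0, tol=1e-8, max_outer=50):
    """PHR-type augmented Lagrangian outer loop for one equality
    constraint h(x) = 0.  solve_sub(lam, rho) returns an (approximate)
    minimizer of the subproblem; the penalty rho is increased only when
    infeasibility fails to drop by the factor tau -- the update rule
    whose boundedness is at issue.
    """
    h_prev = math.inf
    x = solve_sub(lam, rho)
    for _ in range(max_outer):
        x = solve_sub(lam, rho)
        infeas = abs(h(x))
        if infeas <= tol:
            break
        if infeas > tau * h_prev:   # insufficient feasibility progress
            rho *= gamma            # increase the penalty parameter
        lam += rho * h(x)           # first-order multiplier update
        h_prev = infeas
    return x, lam, rho

# Toy problem: minimize (x - 2)^2 subject to x - 1 = 0 (solution x = 1,
# multiplier lam = 2).  The subproblem
#   min_x (x-2)^2 + lam*(x-1) + (rho/2)*(x-1)^2
# is quadratic, so its exact minimizer is available in closed form.
solve = lambda lam, rho: (4.0 - lam + rho) / (2.0 + rho)
x, lam, rho = augmented_lagrangian(solve, lambda x: x - 1.0)
print(round(x, 6), round(lam, 4), rho)
```

On this toy problem the infeasibility shrinks by a constant factor at every outer iteration, so the safeguard never fires and the penalty parameter stays at its initial value, which is exactly the bounded-penalty behavior the paper aims to guarantee under weaker assumptions.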
Abstract:
Among various nanoparticles, noble metal nanoparticles have attracted considerable attention due to their optical, catalytic and conducting properties. This work focused on the development of an innovative synthesis method for the preparation of metal nanosuspensions of Au, Ag and Cu, in order to achieve stable sols with features suitable for industrial scale-up of the processes. The research was developed in collaboration with a company interested in the large-scale production of the studied nanosuspensions. In order to develop a commercial process, high solid concentration, long-term colloidal stability and particle size control are required. Two synthesis routes, differing in the solvents used, were implemented: polyol-based and water-based synthesis. To achieve process intensification, microwave heating was applied. As a result, colloidal nanosuspensions with suitable dimensions, good optical properties, very high solid content and good stability were synthesized by simple and environmentally friendly methods. In particular, owing to some promising results, an optimized synthesis process has been patented. Both the water-based and polyol-based syntheses, developed in the presence of a reducing agent and a chelating polymer, allowed particle size control and colloidal stability to be achieved by tuning the different parameters. Furthermore, it was verified that the microwave device, thanks to its rapid and homogeneous heating, provides advantages over conventional heating. To optimize the final suspension properties, the effect of different parameters (temperature, time, precursor concentrations, etc.) was studied for each synthesis, and through a targeted optimization effort proper control of the nucleation and growth processes was achieved. XRD analysis confirmed that the obtained nanoparticles were the desired metal phases, even at the lowest synthesis temperatures.
The particles showed diameters, measured by STEM and dynamic light scattering (DLS), ranging from 10 to 60 nm. Surface plasmon resonance (SPR) was monitored by UV-VIS spectroscopy, confirming its dependence on nanoparticle size and shape. Moreover, the reaction yield was assessed by ICP analysis performed on the unreacted metal cations. Finally, thermal conductivity and antibacterial activity characterizations of the copper and silver sols, respectively, are ongoing in order to assess their application as nanofluids in heat transfer processes and as antibacterial agents.
Abstract:
The production, segregation and migration of melt and aqueous fluids (henceforth called liquid) play an important role in the transport of mass and energy within the mantle and the crust of the Earth. Many properties of large-scale liquid migration processes, such as the permeability of a rock matrix or the initial segregation of newly formed liquid from the host rock, depend on the grain-scale distribution and behaviour of the liquid. Although the general mechanisms of liquid distribution at the grain scale are well understood, the influence of potentially important modifying processes such as static recrystallization, deformation, and chemical disequilibrium on the liquid distribution is not well constrained. For this thesis, analogue experiments were used that made it possible to investigate the interplay of these different mechanisms in situ. In high-temperature environments where melts are produced, the grain-scale distribution in “equilibrium” is fully determined by the liquid fraction and the ratio between the solid–solid and the solid–liquid surface energy. The latter is commonly expressed as the dihedral or wetting angle between two grains and the liquid phase (Chapter 2). The interplay of this “equilibrium” liquid distribution with ongoing surface-energy-driven recrystallization is investigated in Chapters 4 and 5 with experiments using norcamphor plus ethanol liquid. Ethanol in contact with norcamphor forms a wetting angle of about 25°, similar to reported angles of rock-forming minerals in contact with silicate melt. The experiments in Chapter 4 show that previously reported disequilibrium features such as trapped liquid lenses, fully wetted grain boundaries, and large liquid pockets can be explained by the interplay of the liquid with ongoing recrystallization. Closer inspection of dihedral angles in Chapter 5 reveals that the wetting angles are themselves modified by grain coarsening.
Ongoing recrystallization constantly moves liquid-filled triple junctions, thereby altering the wetting angles dynamically as a function of the triple-junction velocity. A polycrystalline aggregate at elevated temperature will therefore always display a range of equilibrium and dynamic wetting angles, rather than a single wetting angle as previously thought. For the deformation experiments, partially molten KNO3–LiNO3 samples were used in addition to norcamphor–ethanol samples (Chapter 6). Three deformation regimes were observed. At a high bulk liquid fraction (>10 vol.%), the aggregate deformed by compaction and granular flow. At a “moderate” liquid fraction, the aggregate deformed mainly by grain boundary sliding (GBS) localized into conjugate shear zones. At a low liquid fraction, the grains of the aggregate formed a supporting framework that deformed internally by crystal-plastic deformation or diffusion creep. Liquid segregation was most efficient during framework deformation, while GBS led to slow liquid segregation or even liquid dispersion in the deforming areas.
Abstract:
The thesis analyzes the relationships between agricultural development processes and the use of natural resources, particularly energy resources, at the international (developing and developed countries), national (Italy), regional (Emilia Romagna) and farm levels, with the aim of evaluating the eco-efficiency of agricultural development processes, its evolution over time, and the main dynamics, also in relation to the problems of dependence on fossil resources, food security, and the substitution between agricultural areas devoted to human and to animal nutrition. For the two macroeconomic case studies, the methodology known as "SUMMA" (SUstainability Multi-method, multi-scale Assessment; Ulgiati et al., 2006) was adopted, which integrates a series of impact categories from life cycle assessment (LCA), cost-benefit evaluations, and the global analytical perspective of emergy accounting. The large-scale analysis was further enriched by a local-scale case study of a farm producing milk and renewable electricity (photovoltaic and biogas). This study, carried out by means of LCA and contingent valuation, assessed the environmental, economic and social effects of scenarios for reducing dependence on fossil sources. The macroeconomic case studies show that, despite policies supporting increased efficiency and "green" forms of production, agriculture at the global level continues to evolve with increasing dependence on fossil energy sources. The first effects of the European agricultural policies towards greater sustainability nevertheless seem to be emerging for the European countries. Overall, the energy footprint remains high, since the continued mechanization of agricultural processes must necessarily draw on energy sources that substitute for human labour. Agricultural land is decreasing in the European countries analyzed and in Italy, increasing the risks of food insecurity, since the national population is instead growing.
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter. The difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first is a threshold-based method using traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
Abstract:
Organic electronics is an emerging field with a vast number of applications having high potential for commercial success. Although enormous progress has been made in this research area, many organic electronic applications such as organic opto-electronic devices, organic field-effect transistors and organic bioelectronic devices still require further optimization to fulfill the requirements for successful commercialization. The main bottleneck that hinders large-scale production of these devices is their performance and stability. The performance of organic devices largely depends on the charge transport processes occurring at the interfaces of the various materials they are composed of. As a result, the key ingredient needed for a successful improvement in the performance and stability of organic electronic devices is an in-depth knowledge of the interfacial interactions and the charge transport phenomena taking place at the different interfaces. The aim of this thesis is to address the role of the various interfaces between different materials in determining the charge transport properties of organic devices. In this framework, I chose an Organic Field Effect Transistor (OFET) as a model system for this study, since an OFET, being made up of stacked layers of various materials, offers several interfaces that can be investigated. In order to probe the intrinsic properties that govern charge transport, we have to be able to carry out a thorough investigation of the interactions taking place down at the thickness of the accumulation layer. However, since organic materials are highly unstable in ambient conditions, it is nearly impossible to investigate the intrinsic properties of the material without the influence of extrinsic factors such as air, moisture and light.
For this reason, I have employed an in situ real-time electrical characterization technique, which enables electrical characterization of the OFET during the growth of the semiconductor.
Abstract:
In the oil and gas industry, pore-scale imaging and simulation are on their way to becoming routine applications. Their further potential can be exploited in the environmental field, e.g. for the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) provides a non-destructive 3D imaging technique that is also frequently used to investigate the internal structure of geological samples. The first goal of this dissertation was the implementation of an image-processing technique that removes the beam-hardening artefacts of X-ray computed tomography and simplifies the segmentation of its data. The second goal of this work was to investigate the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modelling in pore spaces using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the centre, and the different phases could be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least squares support vector machine (LSSVM) method is characterized by a modular structure and is very well suited to pattern recognition and classification. For this reason, the LSSVM method was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but in that case requires longer computation times, so that multidimensional training data sets have to be used.
The dynamics of the immiscible phases air and water were investigated by a combination of the pore-morphology and lattice Boltzmann methods for drainage and imbibition processes in 3D data sets of soils obtained by synchrotron-based XCT. Although the pore-morphology method is a simple approach of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused mainly by the connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water saturation level and was compared with macroscopic calculations from empirical models; the data agree well, especially at high water saturations. In order to predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry and fluid flow velocity was studied in a soil aggregate, using the microorganism Escherichia coli as an example. The asymmetric, long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and the heterogeneity of the flow field. It was observed that the biocolloid residence time is a function of the pressure gradient as well as of the colloid size. Our modelling results agree very well with previously published data.
Abstract:
Image-based modeling of tumor growth combines methods from cancer simulation and medical imaging. In this context, we present a novel approach to adapt a healthy brain atlas to MR images of tumor patients. In order to establish correspondence between a healthy atlas and a pathologic patient image, tumor growth modeling in combination with registration algorithms is employed. In a first step, the tumor is grown in the atlas based on a new multi-scale, multi-physics model including growth simulation from the cellular level up to the biomechanical level, accounting for cell proliferation and tissue deformations. Large-scale deformations are handled with an Eulerian approach for finite element computations, which can operate directly on the image voxel mesh. Subsequently, dense correspondence between the modified atlas and patient image is established using nonrigid registration. The method offers opportunities in atlas-based segmentation of tumor-bearing brain images as well as for improved patient-specific simulation and prognosis of tumor progression.
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
Abstract:
Increasingly, regression models are used when residuals are spatially correlated. Prominent examples include studies in environmental epidemiology to understand the chronic health effects of pollutants. I consider the effects of residual spatial structure on the bias and precision of regression coefficients, developing a simple framework in which to understand the key issues and derive informative analytic results. When the spatial residual is induced by an unmeasured confounder, regression models with spatial random effects and closely-related models such as kriging and penalized splines are biased, even when the residual variance components are known. Analytic and simulation results show how the bias depends on the spatial scales of the covariate and the residual; bias is reduced only when there is variation in the covariate at a scale smaller than the scale of the unmeasured confounding. I also discuss how the scales of the residual and the covariate affect efficiency and uncertainty estimation when the residuals can be considered independent of the covariate. In an application on the association between black carbon particulate matter air pollution and birth weight, controlling for large-scale spatial variation appears to reduce bias from unmeasured confounders, while increasing uncertainty in the estimated pollution effect.
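The scale argument above can be illustrated with a toy 1-D simulation (purely illustrative; the paper's analytic framework is far more general, and all numbers here are invented): when the covariate varies only at the confounder's large spatial scale, the regression slope is badly biased, while adding covariate variation at a smaller scale dilutes the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
s = np.linspace(0, 1, n)                 # 1-D spatial coordinate

# Unmeasured confounder varying only at a large spatial scale.
u = np.sin(2 * np.pi * s)

def ols_slope(x, y):
    """Slope from simple least-squares regression of y on x."""
    x_c = x - x.mean()
    return np.dot(x_c, y - y.mean()) / np.dot(x_c, x_c)

beta = 1.0                               # true covariate effect
eps = 0.1 * rng.standard_normal(n)

# Case 1: covariate shares the confounder's large scale -> strong bias.
x_large = np.sin(2 * np.pi * s + 0.3)
y1 = beta * x_large + u + eps

# Case 2: covariate also varies at a much smaller scale -> bias diluted.
x_small = x_large + np.sin(40 * np.pi * s)
y2 = beta * x_small + u + eps

bias_large = abs(ols_slope(x_large, y1) - beta)
bias_small = abs(ols_slope(x_small, y2) - beta)
print(bias_large > bias_small)           # small-scale variation reduces bias
```

The small-scale component of `x_small` is nearly orthogonal to the smooth confounder, so it contributes variance to the covariate without contributing covariance with the confounder, shrinking the bias term.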
Abstract:
Accurate seasonal to interannual streamflow forecasts based on climate information are critical for optimal management and operation of water resources systems. Considering that most water supply systems are multipurpose, operating these systems to meet increasing demand under the growing stresses of climate variability and climate change, population and economic growth, and environmental concerns can be very challenging. This study investigated improvements in water resources systems management through the use of seasonal climate forecasts. Hydrological persistence (streamflow and precipitation) and large-scale recurrent oceanic-atmospheric patterns such as the El Niño/Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO), the Atlantic Multidecadal Oscillation (AMO), the Pacific North American (PNA) pattern, and customized sea surface temperature (SST) indices were investigated for their potential to improve streamflow forecast accuracy and increase forecast lead time in a river basin in central Texas. First, an ordinal polytomous logistic regression approach is proposed as a means of incorporating multiple predictor variables into a probabilistic forecast model. Forecast performance is assessed through a cross-validation procedure, using distributions-oriented metrics, and implications for decision making are discussed. Results indicate that, of the predictors evaluated, only hydrologic persistence and Pacific Ocean sea surface temperature patterns associated with ENSO and PDO provide forecasts which are statistically better than climatology. Secondly, a class of data mining techniques, known as tree-structured models, is investigated to address the nonlinear dynamics of climate teleconnections and screen promising probabilistic streamflow forecast models for river-reservoir systems. Results show that the tree-structured models can effectively capture the nonlinear features hidden in the data.
Skill scores of probabilistic forecasts generated by both classification trees and logistic regression trees indicate that seasonal inflows throughout the system can be predicted with sufficient accuracy to improve water management, especially in the winter and spring seasons in central Texas. Lastly, a simplified two-stage stochastic economic-optimization model was proposed to investigate improvement in water use efficiency and the potential value of using seasonal forecasts, under the assumption of optimal decision making under uncertainty. Model results demonstrate that incorporating the probabilistic inflow forecasts into the optimization model can provide a significant improvement in seasonal water contract benefits over climatology, with lower average deficits (increased reliability) for a given average contract amount, or improved mean contract benefits for a given level of reliability compared to climatology. The results also illustrate the trade-off between the expected contract amount and reliability, i.e., larger contracts can be signed at greater risk.
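The two-stage contract decision described above can be sketched in miniature. This is a hedged illustration only: the scenario inflows, probabilities, price, and deficit penalty are all made up, and the study's actual economic-optimization model is much richer than this single-variable version.

```python
import numpy as np

# Hypothetical probabilistic seasonal inflow forecast (illustrative values).
inflows = np.array([40.0, 70.0, 100.0, 130.0])   # inflow scenarios
probs   = np.array([0.2, 0.3, 0.3, 0.2])         # scenario probabilities

PRICE, PENALTY = 1.0, 2.5    # benefit per unit contracted, cost per unit deficit

def expected_benefit(contract):
    # Second stage: a deficit occurs in scenarios where inflow < contract.
    deficits = np.maximum(contract - inflows, 0.0)
    return PRICE * contract - PENALTY * np.dot(probs, deficits)

# First stage: choose the contract amount maximizing expected benefit.
candidates = np.linspace(0.0, inflows.max(), 261)
best = max(candidates, key=expected_benefit)
print(best)
```

With these numbers the optimal contract sits at the second-lowest scenario inflow: contracting beyond it makes the marginal deficit penalty (2.5 × cumulative shortfall probability) exceed the marginal benefit, which is the expected-contract-versus-reliability trade-off the abstract describes.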
Abstract:
Heat transfer is considered one of the most critical issues for the design and implementation of large-scale microwave heating systems, in which improving the microwave absorption of materials and suppressing uneven temperature distribution are the two main objectives. The present work focuses on the analysis of heat transfer in microwave heating for achieving highly efficient microwave-assisted steelmaking through investigations of the following aspects: (1) characterization of microwave dissipation using the derived equations, (2) quantification of magnetic loss, (3) determination of the microwave absorption properties of materials, (4) modeling of microwave propagation, (5) simulation of heat transfer, and (6) improvement of microwave absorption and heating uniformity. Microwave heating is attributed to the heat generation in materials, which depends on the microwave dissipation. To theoretically characterize microwave heating, simplified equations were derived for determining the transverse electromagnetic mode (TEM) power penetration depth, the microwave field attenuation length, and the half-power depth of microwaves in materials having both magnetic and dielectric responses. This was followed by the development of a simplified equation for quantifying magnetic loss in materials under microwave irradiation, to demonstrate the importance of magnetic loss in microwave heating. Permittivity and permeability measurements of various materials, namely hematite, magnetite concentrate, wüstite, and coal, were performed, and microwave loss calculations for these materials were carried out. It is suggested that magnetic loss can play a major role in the heating of magnetic dielectrics. Microwave propagation in various media was predicted using the finite-difference time-domain method. For lossy magnetic dielectrics, the dissipation of microwaves in the medium is ascribed to the decay of both electric and magnetic fields.
The heat transfer process in microwave heating of magnetite, which is a typical magnetic dielectric, was simulated by using an explicit finite-difference approach. It is demonstrated that the heat generation due to microwave irradiation dominates the initial temperature rise in the heating and the heat radiation heavily affects the temperature distribution, giving rise to a hot spot in the predicted temperature profile. Microwave heating at 915 MHz exhibits better heating homogeneity than that at 2450 MHz due to larger microwave penetration depth. To minimize/avoid temperature nonuniformity during microwave heating the optimization of object dimension should be considered. The calculated reflection loss over the temperature range of heating is found to be useful for obtaining a rapid optimization of absorber dimension, which increases microwave absorption and achieves relatively uniform heating. To further improve the heating effectiveness, a function for evaluating absorber impedance matching in microwave heating was proposed. It is found that the maximum absorption is associated with perfect impedance matching, which can be achieved by either selecting a reasonable sample dimension or modifying the microwave parameters of the sample.
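The depth quantities in point (1) can be sketched from the standard plane-wave relations for a lossy magnetic dielectric. This is a generic textbook form, not the thesis's own derived simplified equations, and the example permittivity/permeability values below are assumptions, not measured data from the work:

```python
import cmath
import math

C0 = 2.99792458e8  # speed of light in vacuum, m/s

def tem_depths(freq_hz, eps_r, mu_r):
    """Plane-wave (TEM) depths in a lossy magnetic dielectric.

    eps_r, mu_r are complex relative permittivity/permeability in the
    eps' - j*eps'' convention.  The field attenuation constant is
    alpha = Re{ j * (omega/c0) * sqrt(mu_r * eps_r) }.  Returned are the
    field attenuation length 1/alpha, the power penetration depth
    1/(2*alpha), and the half-power depth ln(2)/(2*alpha), in metres.
    """
    omega = 2.0 * math.pi * freq_hz
    alpha = (1j * (omega / C0) * cmath.sqrt(mu_r * eps_r)).real
    return 1.0 / alpha, 1.0 / (2.0 * alpha), math.log(2.0) / (2.0 * alpha)

# Illustrative material parameters only (hypothetical magnetic dielectric):
props = (7.0 - 2.0j, 2.0 - 1.0j)
d915 = tem_depths(915e6, *props)
d2450 = tem_depths(2450e6, *props)
print(d915[1] > d2450[1])   # lower frequency penetrates deeper
```

For fixed material properties the attenuation constant scales with frequency, so the 915 MHz penetration depth exceeds the 2450 MHz one, consistent with the better heating homogeneity at 915 MHz noted above.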
Abstract:
Synthetic oligonucleotides and peptides have found wide applications in industry and academic research labs. There are ~60 peptide drugs on the market and over 500 under development. The global annual sale of peptide drugs in 2010 was estimated to be $13 billion. There are three oligonucleotide-based drugs on the market; among them, the newly FDA-approved Kynamro was predicted to reach $100 million in annual sales. The annual sale of oligonucleotides to academic labs was estimated at $700 million. Both bio-oligomers are mostly synthesized on automated synthesizers using solid phase synthesis technology, in which nucleoside or amino acid monomers are added sequentially until the desired full-length sequence is reached. The additions cannot be complete, which generates truncated, undesired failure sequences. For almost all applications, these impurities must be removed. The most widely used method is HPLC. However, the method is slow, expensive, labor-intensive, not amenable to automation, difficult to scale up, and unsuitable for high-throughput purification. It needs large capital investment and consumes large volumes of harmful solvents. The purification costs are estimated to be more than 50% of total production costs. Other methods for bio-oligomer purification also have drawbacks, and are less favored than HPLC for most applications. To overcome the problems of known biopolymer purification technologies, we have developed two non-chromatographic purification methods: (1) catching failure sequences by polymerization, and (2) catching full-length sequences by polymerization. In the first method, a polymerizable group is attached to the failure sequences of the bio-oligomers during automated synthesis; purification is achieved by simply polymerizing the failure sequences into an insoluble gel and extracting the full-length sequences.
In the second method, a polymerizable group is attached to the full-length sequences, which are then incorporated into a polymer; impurities are removed by washing, and the pure product is cleaved from the polymer. These methods need no chromatography, so the drawbacks of HPLC no longer apply. Using them, purification is achieved by simple manipulations such as shaking and extraction. They are therefore suitable for large-scale purification of oligonucleotide and peptide drugs, and also ideal for high-throughput purification, which is currently in high demand for research projects involving total gene synthesis. The dissertation presents the details of the development of these techniques. Chapter 1 introduces oligodeoxynucleotides (ODNs) and their synthesis and purification. Chapter 2 describes detailed studies of using the catching-failure-sequences-by-polymerization method to purify ODNs. Chapter 3 describes the further optimization of this ODN purification technology to the level of practical use. Chapter 4 presents the use of the catching-full-length-sequences-by-polymerization method for ODN purification with an acid-cleavable linker. Chapter 5 introduces peptides and their synthesis and purification. Chapter 6 describes studies using the catching-full-length-sequences-by-polymerization method for peptide purification.
Abstract:
The procurement of transportation services via large-scale combinatorial auctions involves a couple of complex decisions whose outcome highly influences the performance of the tender process. This paper examines the shipper's task of selecting a subset of the submitted bids which efficiently trades off total procurement cost against expected carrier performance. To solve this bi-objective winner determination problem, we propose a Pareto-based greedy randomized adaptive search procedure (GRASP). As a post-optimizer we use a path relinking procedure which is hybridized with branch-and-bound. Several variants of this algorithm are evaluated by means of artificial test instances which comply with important real-world characteristics. The two best variants prove superior to a previously published Pareto-based evolutionary algorithm.
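The greedy randomized construction at the heart of a GRASP can be sketched on a toy, single-objective winner determination instance. Everything here is invented for illustration (bid data, the cost-per-lane scoring rule, the RCL parameter); the paper's bi-objective procedure with path relinking and branch-and-bound is far richer:

```python
import random

def grasp_cover(lanes, bids, alpha=0.5, iters=50, seed=1):
    """GRASP sketch for a set-covering-style winner determination:
    select bids (each covering a set of lanes at a cost) so that every
    lane is covered, at minimum total cost.
    """
    rng = random.Random(seed)
    best_cost, best_sel = float("inf"), None
    for _ in range(iters):
        uncovered, sel, cost = set(lanes), [], 0.0
        while uncovered:
            # Greedy score: cost per newly covered lane.
            scored = [(b["cost"] / len(uncovered & b["lanes"]), b)
                      for b in bids if uncovered & b["lanes"]]
            scored.sort(key=lambda t: t[0])
            # Restricted candidate list: best fraction, chosen at random.
            rcl = scored[:max(1, int(alpha * len(scored)))]
            _, pick = rng.choice(rcl)
            sel.append(pick["id"])
            cost += pick["cost"]
            uncovered -= pick["lanes"]
        if cost < best_cost:
            best_cost, best_sel = cost, sorted(sel)
    return best_cost, best_sel

bids = [
    {"id": "b1", "lanes": {1, 2}, "cost": 5.0},
    {"id": "b2", "lanes": {2, 3}, "cost": 4.0},
    {"id": "b3", "lanes": {1, 2, 3}, "cost": 8.0},
    {"id": "b4", "lanes": {1}, "cost": 2.0},
]
print(grasp_cover({1, 2, 3}, bids))
```

A full Pareto-based variant would keep an archive of non-dominated (cost, expected carrier performance) solutions instead of a single best cost, and would feed that archive into the path-relinking post-optimizer.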