11 results for Single-commodity capacitated network design problem
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been extensively investigated for such problems. Such optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and we demonstrate that, using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds only small constants and lower-order terms to the theoretical bound.
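To make the encoding concrete, below is a minimal sketch of a node-depth style tree representation and the prune-and-graft move that EAs apply to it; the (node, depth) list layout and helper names are illustrative assumptions, and the paper's NDDR additionally stores node degrees and uses further engineering to reach the O(√n) average-case bound.

```python
# A simplified node-depth style encoding: a tree is a preorder list of
# (node, depth) pairs; a mutation move prunes a subtree and grafts it
# under a new parent.

def subtree_span(nd, i):
    """Return j such that nd[i:j] is the subtree rooted at position i."""
    d = nd[i][1]
    j = i + 1
    while j < len(nd) and nd[j][1] > d:
        j += 1
    return j

def prune_and_graft(nd, i, p):
    """Detach the subtree at position i and re-attach it as the first
    child of the node at position p (p must lie outside the subtree)."""
    j = subtree_span(nd, i)
    assert p < i or p >= j, "new parent must lie outside the moved subtree"
    sub, rest = nd[i:j], nd[:i] + nd[j:]
    if p >= j:
        p -= j - i                      # positions shift after removal
    shift = rest[p][1] + 1 - sub[0][1]  # re-root one level below new parent
    sub = [(v, d + shift) for v, d in sub]
    return rest[:p + 1] + sub + rest[p + 1:]

# tree: 0 has children 1 and 2; node 3 hangs under 2
nd = [(0, 0), (1, 1), (2, 1), (3, 2)]
print(prune_and_graft(nd, 3, 1))  # node 3 moves under node 1
```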
Abstract:
Over the past few years, the field of global optimization has been very active, producing different kinds of deterministic and stochastic algorithms for optimization in the continuous domain. These days, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Differential evolution (DE) algorithms are a family of evolutionary optimization techniques that use a rather greedy and less stochastic approach to problem solving than classical evolutionary algorithms. The main idea is to construct, at each generation and for each element of the population, a mutant vector through a mutation operation that adds differences between randomly selected elements of the population to another element. Owing to its simple implementation, minimal mathematical overhead, and good optimization capability, DE has attracted considerable attention. This paper proposes a new approach to solve electromagnetic design problems that combines the DE algorithm with a generator of chaotic sequences. The approach is tested on the design of a loudspeaker model with 17 degrees of freedom to show its applicability to electromagnetic problems. The results show that the DE algorithm with chaotic sequences yields better, or at least similar, results than the standard DE algorithm and other evolutionary algorithms available in the literature.
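As a concrete illustration of the combination described above, here is a minimal sketch of DE/rand/1/bin in which the scale factor F is driven by a logistic-map chaotic sequence instead of a fixed constant; the sphere function stands in for the 17-parameter loudspeaker model, which the abstract does not specify, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def chaotic_de(f, dim, pop_size=30, gens=200, cr=0.9):
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    z = 0.7                                    # logistic-map state
    for _ in range(gens):
        for i in range(pop_size):
            z = 4.0 * z * (1.0 - z)            # chaotic sequence in (0, 1)
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i],
                                    3, replace=False)
            mutant = pop[r1] + z * (pop[r2] - pop[r3])   # F = chaotic value
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True    # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                   # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

best_x, best_f = chaotic_de(lambda x: np.sum(x**2), dim=5)
print(best_f)   # close to 0 for the sphere function
```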
Abstract:
In this paper, a general scheme for generating extra cuts during the execution of a Benders decomposition algorithm is presented. These cuts are based on feasible and infeasible master-problem solutions generated by means of a heuristic. The article includes general guidelines and a case study with a fixed-charge network design problem. Computational tests with instances of this problem show the efficiency of the strategy. The most important aspect of the proposed ideas is their generality, which allows them to be used in virtually any Benders decomposition implementation.
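To fix ideas, the toy sketch below runs a textbook Benders loop on a tiny mixed-binary instance with complete recourse and, in the spirit of the scheme above, adds an extra optimality cut from a heuristically perturbed master solution alongside the cut of the master optimum; the instance data, the bit-flip heuristic, and the enumeration-based master are illustrative assumptions, not the paper's fixed-charge network design implementation.

```python
# Toy Benders loop: min c@y + q@x  s.t.  A x >= b - B y, x >= 0, y binary.
# Complete recourse and q >= 0, so theta >= 0 is a valid initial bound.
import itertools
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 1.0])                 # master (design) costs
q = np.array([1.0, 1.0])                 # subproblem costs
b = np.array([4.0, 3.0])
A = np.eye(2)
B = np.array([[2.0, 1.0], [1.0, 2.0]])

def dual_cut(y):
    """Solve the subproblem's LP dual: value and dual vector u, giving
    the optimality cut theta >= u @ (b - B @ y)."""
    rhs = b - B @ y
    res = linprog(-rhs, A_ub=A.T, b_ub=q, bounds=[(0, None)] * 2,
                  method="highs")
    return -res.fun, res.x

cuts, ub = [], np.inf
for _ in range(20):
    def master_obj(y):                   # master solved by enumeration
        theta = max((u @ (b - B @ np.asarray(y)) for u in cuts), default=0.0)
        return c @ np.asarray(y) + max(theta, 0.0)
    y = min(itertools.product([0, 1], repeat=2), key=master_obj)
    lb = master_obj(y)
    # cut at the master optimum plus an EXTRA cut at a heuristically
    # perturbed master solution (here simply: flip the first bit)
    for cand in (y, (1 - y[0], y[1])):
        val, u = dual_cut(np.asarray(cand, dtype=float))
        ub = min(ub, c @ np.asarray(cand) + val)
        cuts.append(u)
    if ub - lb < 1e-6:
        break
print(y, ub)                             # optimal design (1, 1), cost 4.0
```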
Abstract:
Setup operations are significant in some production environments, and production plans must account for features such as the conservation of the setup state across periods through setup carryover and crossover. Modelling setup crossover allows more flexible decisions and is essential for problems with long setup times. This paper proposes two models for the capacitated lot-sizing problem with backlogging and setup carryover and crossover. The first is in line with other models from the literature, whereas the second uses a disaggregated setup variable that tracks the starting and completion times of the setup operation. This innovative approach permits a more compact formulation. Computational results show that the proposed models outperform other state-of-the-art formulations.
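For orientation, the sketch below shows the basic cost bookkeeping of a lot-sizing plan with backlogging, where inventory may go negative at a backlog penalty; the data are hypothetical, and the carryover/crossover decisions that distinguish the paper's two MIP formulations are not modelled here.

```python
# Minimal evaluator for the cost of one single-item production plan.

def clsp_cost(demand, produce, setup, s_cost, h_cost, b_cost):
    """Setup + holding + backlog cost of a plan over the horizon."""
    inv, cost = 0, 0
    for t, d in enumerate(demand):
        cost += s_cost * setup[t]            # setup performed in period t
        inv += produce[t] - d                # inventory balance
        cost += h_cost * max(inv, 0) + b_cost * max(-inv, 0)
    return cost

demand  = [40, 60, 30, 50]
produce = [50, 40, 50, 40]                   # candidate plan
setup   = [1, 1, 1, 1]                       # 1 if a setup occurs in t
print(clsp_cost(demand, produce, setup, s_cost=100, h_cost=2, b_cost=8))
```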
Abstract:
Background: Propolis is a natural product made from plant resins collected by honeybees (Apis mellifera) from various plant sources. Our previous studies indicated that propolis sensitivity depends on mitochondrial function and that vacuolar acidification and autophagy are important for the yeast cell death caused by propolis. Here, we extended our understanding of propolis-mediated cell death in the yeast Saccharomyces cerevisiae by applying systems biology tools to analyse the transcriptional profile of cells exposed to propolis. Methods: We used transcriptional profiling of S. cerevisiae exposed to propolis and validated our findings by real-time PCR of selected genes. Systems biology tools (a physical protein-protein interaction [PPPI] network) were applied to analyse the propolis-induced transcriptional behaviour, aiming to identify which pathways are modulated by propolis in S. cerevisiae and potentially influence cell death. Results: We observed 1,339 genes modulated in at least one time point when compared to the reference time (propolis-untreated samples) (t-test, p-value 0.01). Enrichment analysis performed with the Gene Ontology (GO) Term Finder tool showed enrichment of several biological categories among the genes up-regulated in the microarray hybridization, such as transport, transmembrane transport, and response to stress. Real-time RT-PCR analysis of selected genes showed that our microarray hybridization approach was capable of providing information about S. cerevisiae gene expression modulation with a considerably high level of confidence. Finally, the design and global topological analysis of a physical protein-protein interaction (PPPI) network stressed the importance of these pathways in the response of S. cerevisiae to propolis and correlated with the transcriptional data obtained through the microarray analysis. Conclusions: In summary, our data indicate that propolis affects several pathways in the eukaryotic cell. The most prominent pathways are related to oxidative stress, the mitochondrial electron transport chain, vacuolar acidification, regulation of macroautophagy associated with protein targeting to the vacuole, the cellular response to starvation, and negative regulation of transcription from the RNA polymerase II promoter. Our work emphasizes again the importance of S. cerevisiae as a model system for understanding at the molecular level the mechanism whereby propolis causes cell death in this organism at the concentration tested herein. Our study is the first to investigate systematically, using functional genomics, how propolis influences and modulates the mRNA abundance of an organism, and it may stimulate further work on propolis-mediated cell death mechanisms in fungi.
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear and involves several constraints and objectives. Two multi-objective evolutionary algorithms that use node-depth encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) a hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a multi-objective evolutionary algorithm based on subpopulation tables that uses NDE, named MEAN. Further challenges remain, namely designing SR plans for larger systems that are as good as those for relatively smaller ones, and plans for multiple faults that are as good as those for a single fault. In order to tackle both challenges, this paper proposes a method that combines NSGA-N, MEAN and a new heuristic. The heuristic focuses the application of NDE operators on alarming network zones according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3,860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases only moderately with the number of faults.
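Both NSGA-N and MEAN rank candidate restoration plans by Pareto dominance; the sketch below shows this non-dominated sorting step on toy objective vectors (e.g. number of switching operations and out-of-service load), which are illustrative assumptions, with the NDE-encoded network machinery omitted.

```python
# Minimal non-dominated sorting for minimization objectives.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(pop):
    fronts, remaining = [], list(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# e.g. (switching operations, out-of-service load) per candidate plan
plans = [(4, 10), (6, 3), (5, 5), (7, 12)]
print(non_dominated_fronts(plans))   # [[0, 1, 2], [3]]
```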
Abstract:
The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
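As a concrete reference point, the sketch below evaluates the weighted earliness-tardiness objective of a job sequence under non-identical ready times and a common due date, and brute-forces the best sequence of a tiny instance; the data are hypothetical, and the evaluator inserts no deliberate idle time beyond waiting for release dates, which the paper's constructive heuristics may handle differently.

```python
from itertools import permutations

def schedule_cost(seq, r, p, alpha, beta, d):
    """Weighted earliness + tardiness of jobs processed in order seq,
    starting each job no earlier than its ready time."""
    t, cost = 0, 0
    for j in seq:
        t = max(t, r[j]) + p[j]                  # completion time of job j
        cost += alpha[j] * max(d - t, 0) + beta[j] * max(t - d, 0)
    return cost

r, p = [0, 2, 1], [3, 2, 4]                      # ready / processing times
alpha, beta, d = [1, 2, 1], [3, 1, 2], 6         # E/T weights, common due date
best = min(permutations(range(3)),
           key=lambda s: schedule_cost(s, r, p, alpha, beta, d))
print(best, schedule_cost(best, r, p, alpha, beta, d))
```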
Abstract:
We present a family of networks whose local interconnection topologies are generated by the root vectors of a semi-simple complex Lie algebra. The Cartan classification theorem for those algebras ensures that these families of interconnection topologies are exhaustive. The global arrangement of the network is defined in terms of integer or half-integer weight lattices. The mesh and torus topologies that network millions of processing cores, such as those in the IBM BlueGene series, are the simplest members of this category. The symmetries of the root systems of an algebra, manifested by their Weyl group, lend great convenience to the design and analysis of hardware architecture, algorithms and programs.
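As a small illustration of the construction, the sketch below generates the root vectors of the type-A series (A_{n-1} has the n(n-1) roots e_i - e_j); treating each weight-lattice point as a node whose neighbours are x + r for each root r is an illustrative reading of the abstract, and the other Cartan types (B, C, D, E, F, G) would be generated analogously from their own root systems.

```python
import itertools
import numpy as np

def a_roots(n):
    """Root vectors e_i - e_j of A_{n-1}, embedded in R^n."""
    roots = []
    for i, j in itertools.permutations(range(n), 2):
        v = np.zeros(n)
        v[i], v[j] = 1.0, -1.0
        roots.append(v)
    return np.array(roots)

roots = a_roots(3)                   # the 6 roots of A_2
x = np.array([1.0, 0.0, -1.0])       # one lattice node
neighbours = x + roots               # its local interconnection pattern
print(len(roots), neighbours.shape)  # 6 (6, 3)
```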
Abstract:
Background: Carpal tunnel syndrome is the most common neuropathy of the upper extremity, resulting from compression of the median nerve at wrist level. Clinical studies are essential to provide evidence on the use of therapeutic resources for the early restoration of peripheral nerve function. Low-level laser therapy has been widely investigated in research related to nerve regeneration. It is therefore suggested that low-level laser therapy associated with other conservative rehabilitation techniques may positively affect symptoms and overall hand function in compressive neuropathies such as carpal tunnel syndrome. The aim of this study is to evaluate the effectiveness of low-level laser therapy in addition to orthoses therapy and home orientations in patients with carpal tunnel syndrome. Methods/Design: Patients older than 18 years with a clinical diagnosis of carpal tunnel syndrome and no comorbidities will be included. A physiotherapist will conduct the intervention, with a blinded evaluator. Randomization will allocate the patients to each group: with or without low-level laser therapy. All patients will receive orthoses therapy and home orientations. Outcomes will be assessed through: a pain visual analogue scale, the Semmes-Weinstein monofilament threshold sensibility test, Pinch Gauge T, the Boston Carpal Tunnel Questionnaire and the two-point discrimination test. Discussion: This paper describes the design of a randomized controlled trial which aims to assess the effectiveness of conservative treatment combined with low-level laser therapy for patients with carpal tunnel syndrome. Trial registration: Brazilian Clinical Trials Registry (ReBec) - 75ddtf / Universal Trial Number: U1111-1121-5184
Abstract:
Current SoC design trends are characterized by the integration of a larger number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements, and in this scenario networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications in the same SoC. The experimental results show that M²AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch-and-bound approach and by 29.3% and 36.1% compared to the genetic approach.
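To make the mapping objective tangible, the toy sketch below scores one IP-to-tile assignment on a 2D mesh by summing bandwidth times XY-routing hop count over all communicating pairs; the traffic table and tile layout are hypothetical, and this hop-count proxy stands in for the latency and power models that M²AIA and the genetic and branch-and-bound mappers actually optimize.

```python
def hops(t1, t2, width):
    """XY-routing hop distance between tiles t1 and t2 on a width-wide mesh."""
    x1, y1 = t1 % width, t1 // width
    x2, y2 = t2 % width, t2 // width
    return abs(x1 - x2) + abs(y1 - y2)

def mapping_cost(mapping, traffic, width):
    """Sum of bandwidth * hop distance over all communicating IP pairs."""
    return sum(bw * hops(mapping[a], mapping[b], width)
               for (a, b), bw in traffic.items())

traffic = {("cpu", "mem"): 100, ("cpu", "dsp"): 40, ("dsp", "mem"): 60}
mapping = {"cpu": 0, "mem": 1, "dsp": 2}   # tile ids on a 2x2 mesh
print(mapping_cost(mapping, traffic, width=2))   # 260
```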
Abstract:
In deterministic optimization, the uncertainties of the structural system (e.g. dimensions, model, material, loads) are not explicitly taken into account; hence, the resulting optimal solutions may lead to reduced reliability levels. The objective of reliability-based design optimization (RBDO) is to optimize structures while guaranteeing that a minimum level of reliability, chosen a priori by the designer, is maintained. Since reliability analysis using the First Order Reliability Method (FORM) is an optimization procedure itself, RBDO (in its classical version) is a double-loop strategy: the reliability analysis (inner loop) and the structural optimization (outer loop). The coupling of these two loops leads to very high computational costs. To reduce the computational burden of RBDO based on FORM, several authors propose decoupling the structural optimization and the reliability analysis. These procedures may be divided into two groups: (i) serial single-loop methods and (ii) unilevel methods. The basic idea of serial single-loop methods is to decouple the two loops and solve them sequentially, until some convergence criterion is achieved. Unilevel methods, on the other hand, employ different strategies to obtain a single optimization loop that solves the RBDO problem. This paper presents a review of such RBDO strategies. A comparison of the performance (computational cost) of the main strategies is presented for several variants of two benchmark problems from the literature and for a structure modeled using the finite element method.
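For reference, the sketch below implements the FORM inner loop that the classical double-loop strategy nests inside the design optimization: the standard Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration for the reliability index beta; the limit-state function and its gradient are toy stand-ins for a real structural model.

```python
# Find the most probable point u* on g(u) = 0 in standard normal space;
# the reliability index is beta = ||u*||.
import numpy as np

def form_hlrf(g, grad_g, n, tol=1e-8, it_max=100):
    u = np.zeros(n)
    for _ in range(it_max):
        gv, gr = g(u), grad_g(u)
        u_new = (gr @ u - gv) / (gr @ gr) * gr   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

g = lambda u: 3.0 - u[0] - u[1]                  # toy linear limit state
grad = lambda u: np.array([-1.0, -1.0])
beta, u_star = form_hlrf(g, grad, n=2)
print(beta)                                      # ~2.1213 = 3/sqrt(2)
```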