24 results for Steiner tree problem
in Greenwich Academic Literature Archive - UK
Abstract:
The concept of 'nested methods' is adopted to solve the location-routeing problem. Unlike the sequential and iterative approaches, in this method we treat the routeing element as a sub-problem within the larger problem of location. Efficient techniques that take into account the above concept and which use a neighbourhood structure inspired by computational geometry are presented. A simple version of tabu search is also embedded into our methods to improve the solutions further. Computational testing is carried out on five sets of problems of 400 customers with five levels of depot fixed costs, and the results obtained are encouraging.
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule whose makespan is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998
Abstract:
In this paper the many-to-many location routing problem is introduced, and its relationship to various problems in distribution management is emphasised. Useful mathematical formulations, which can easily be extended to cater for other related problems, are produced. Techniques for tackling this complex distribution problem are also outlined.
Abstract:
Data from three forest sites in Sumatra (Batang Ule, Pasirmayang and Tebopandak) have been analysed and compared for the effects of sample-area cut-off and tree-diameter cut-off. An 'extended inverted exponential model' is shown to be well suited to fitting tree-species-area curves. The model yields species carrying capacities of 680 for Batang Ule, 380 species for Pasirmayang, and 35 for Tebopandak (tree diameter >10cm). It would seem that in terms of species carrying capacity, Tebopandak and Pasirmayang are rather similar, and both less diverse than the hilly Batang Ule site. In terms of conservation policy, this would mean that rather more emphasis should be put on conserving hilly sites on a granite substratum. For Pasirmayang with tree diameter >3cm, the asymptotic species number estimate is 567, considerably higher than the estimate of 387 species for trees with diameter >10cm. It is clear that the diameter cut-off has a major impact on the estimate of the species carrying capacity. A conservative estimate of the total number of tree species in the Pasirmayang region is 632 species. In sampling exercises, the diameter cut-off should not be chosen lightly, and it may be worth adopting field sampling procedures which involve some subsampling of the primary sample area, where the diameter cut-off is set much lower than in the primary plots.
Abstract:
The main interest in the assessment of forest species diversity for conservation purposes is in the rare species. The main problem in the tropical rain forests is that most of the species are rare. Assessment of species diversity in the tropical rain forests is therefore often concerned with estimating that which is not observed in recorded samples. Statistical methodology is therefore required to try to estimate the truncated tail of the species frequency distribution, or to estimate the asymptote of species/diversity-area curves. A Horvitz-Thompson estimator of the number of unobserved ("virtual") species in each species intensity class is proposed. The approach allows an extended definition of diversity (or generalised Rényi entropy). The paper presents a case study from data collected in Jambi, Sumatra, and the "extended diversity measure" is used on the species data.
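As a rough illustration of the Horvitz-Thompson idea (the symbols D_k, \pi_k, \lambda_k and A below are assumed for exposition and are not taken from the paper): if each species belonging to intensity class k is detected in the recorded sample with probability \pi_k (for instance \pi_k = 1 - e^{-\lambda_k A} under Poisson sampling of an area A with class intensity \lambda_k), and D_k species of that class are actually observed, then

\hat{S}_k = \frac{D_k}{\pi_k}, \qquad \hat{V}_k = \hat{S}_k - D_k = D_k\,\frac{1-\pi_k}{\pi_k},

so \hat{V}_k estimates the unobserved ("virtual") species in class k, and a diversity measure such as a generalised Rényi entropy can then be computed over the observed-plus-virtual frequency distribution.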
Abstract:
This paper presents a genetic algorithm for finding a constrained minimum spanning tree. The problem is of relevance in the design of minimum cost communication networks, where there is a need to connect all the terminals at a user site to a terminal concentrator in a multipoint (tree) configuration, while ensuring that link capacity constraints are not violated. The approach used maintains a distinction between genotype and phenotype, which produces superior results to those found using a direct representation in a previous study.
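A minimal sketch of the genotype/phenotype separation in this setting, assuming a permutation genotype and a greedy capacity-respecting decoder (both are illustrative assumptions; the paper's actual encoding, operators and constraint model may differ):

# Hypothetical sketch: genotype/phenotype GA for a capacitated spanning tree
# (terminals attached to a central concentrator, node 0). The permutation
# genotype and greedy decoder are assumptions made for illustration only.
import random

def decode(perm, cost, demand, capacity):
    """Phenotype: attach terminals in genotype order to the cheapest feasible
    parent; each subtree hanging off the root must keep total demand <= capacity."""
    parent = {0: None}      # node -> parent in the tree (0 is the concentrator)
    subtree = {0: None}     # node -> which root-subtree it belongs to
    load = {}               # root-subtree -> accumulated demand
    total = 0.0
    for v in perm:
        best, best_c = None, float("inf")
        for u in parent:                      # candidate attachment points
            s = v if u == 0 else subtree[u]   # subtree v would join
            if load.get(s, 0) + demand[v] > capacity:
                continue
            if cost[u][v] < best_c:
                best, best_c = u, cost[u][v]
        if best is None:                      # no feasible parent
            return None, float("inf")
        parent[v] = best
        subtree[v] = v if best == 0 else subtree[best]
        load[subtree[v]] = load.get(subtree[v], 0) + demand[v]
        total += best_c
    return parent, total

def order_crossover(p1, p2):
    """OX: copy a slice from p1, fill the remaining positions in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    rest = [g for g in p2 if g not in hole]
    return rest[:a] + p1[a:b] + rest[a:]

def evolve(cost, demand, capacity, pop_size=30, generations=200):
    terminals = list(demand)                  # terminal ids (root 0 excluded)
    pop = [random.sample(terminals, len(terminals)) for _ in range(pop_size)]
    fit = lambda perm: decode(perm, cost, demand, capacity)[1]
    for _ in range(generations):
        pop.sort(key=fit)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            child = order_crossover(p1, p2)
            if random.random() < 0.2:         # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fit)
    return decode(best, cost, demand, capacity)

# Tiny made-up example: 5 unit-demand terminals, subtree capacity 2.
pts = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (0, 1), 4: (1, 1), 5: (2, 1)}
cost = {u: {v: abs(pu[0] - pv[0]) + abs(pu[1] - pv[1])
            for v, pv in pts.items()} for u, pu in pts.items()}
demand = {t: 1 for t in range(1, 6)}
tree, c = evolve(cost, demand, capacity=2)
print(tree, c)

The point of such an indirect representation is that crossover and mutation act on permutations, which are always decodable, while the decoder guarantees that every phenotype is a capacity-feasible tree.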
Abstract:
The paper considers the job shop scheduling problem to minimize the makespan. It is assumed that each job consists of at most two operations, one of which is to be processed on one of m ⩾ 2 machines, while the other operation must be performed on a single bottleneck machine, the same for all jobs. For this strongly NP-hard problem we present two heuristics with improved worst-case performance. One of them guarantees a worst-case performance ratio of 3/2. The other algorithm creates a schedule whose makespan exceeds the largest machine workload by at most the length of the largest operation.
Abstract:
This paper studies the problem of scheduling jobs in a two-machine open shop to minimize the makespan. Jobs are grouped into batches and are processed without preemption. A batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. For this NP-hard problem, we propose a linear-time heuristic algorithm that creates a group technology schedule, in which no batch is split into sub-batches. We demonstrate that our heuristic is a 5/4-approximation algorithm. Moreover, we show that no group technology algorithm can guarantee a worst-case performance ratio less than 5/4.
Abstract:
This paper considers the problem of processing n jobs in a two-machine non-preemptive open shop to minimize the makespan, i.e., the maximum completion time. One of the machines is assumed to be non-bottleneck. It is shown that, unlike its flow shop counterpart, the problem is NP-hard in the ordinary sense. On the other hand, the problem is shown to be solvable by a dynamic programming algorithm that requires pseudopolynomial time. The latter algorithm can be converted into a fully polynomial approximation scheme. An O(n log n) approximation algorithm is also designed which finds a schedule with makespan at most 5/4 times the optimal value, and this bound is tight.
Abstract:
The paper considers a problem of scheduling n jobs in a two-machine open shop to minimize the makespan, provided that preemption is not allowed and interstage transportation times are involved. This problem is known to be unary NP-hard. We present an algorithm that requires O(n log n) time and provides a worst-case performance ratio of 3/2.
Abstract:
We motivate, derive, and implement a multilevel approach to the travelling salesman problem. The resulting algorithm progressively coarsens the problem, initialises a tour, and then employs either the Lin-Kernighan (LK) or the Chained Lin-Kernighan (CLK) algorithm to refine the solution on each of the coarsened problems in reverse order. In experiments on a well-established test suite of 80 problem instances we found multilevel configurations that either improved the tour quality by over 25% compared with the standard CLK algorithm using the same amount of execution time, or achieved approximately the same tour quality over seven times more rapidly. Moreover, the multilevel variants appear to do far better on the more clustered instances, with which the LK and CLK algorithms have the most difficulty.
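The multilevel scheme itself is easy to sketch. The toy version below coarsens by merging nearest pairs of (super-)cities and refines each level with plain 2-opt, used here purely as a stand-in for the LK/CLK refinement described in the paper; the actual coarsening and refinement there are more sophisticated.

# Hypothetical multilevel TSP sketch: merge nearest pairs to coarsen, build a
# trivial tour at the coarsest level, then expand level by level, refining
# with 2-opt (a simple substitute for Lin-Kernighan / Chained LK).
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, pos):
    return sum(dist(pos[tour[i]], pos[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pos):
    """Repeatedly reverse tour segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if a == d:
                    continue
                if dist(pos[a], pos[c]) + dist(pos[b], pos[d]) < \
                   dist(pos[a], pos[b]) + dist(pos[c], pos[d]):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

def coarsen(pos):
    """Pair each point with its nearest unmatched neighbour.
    Returns coarse positions and, for each coarse node, its finer children."""
    unmatched = set(range(len(pos)))
    coarse_pos, children = [], []
    while unmatched:
        i = unmatched.pop()
        if unmatched:
            j = min(unmatched, key=lambda k: dist(pos[i], pos[k]))
            unmatched.remove(j)
            coarse_pos.append(((pos[i][0] + pos[j][0]) / 2,
                               (pos[i][1] + pos[j][1]) / 2))
            children.append([i, j])
        else:
            coarse_pos.append(pos[i])
            children.append([i])
    return coarse_pos, children

def multilevel_tsp(pos, coarsest=4):
    levels = []                       # (positions, children) per coarse level
    cur = pos
    while len(cur) > coarsest:
        cur, ch = coarsen(cur)
        levels.append((cur, ch))
    tour = list(range(len(cur)))      # trivial tour at the coarsest level
    for coarse_pos, ch in reversed(levels):
        tour = two_opt(tour, coarse_pos)               # refine this level
        tour = [fine for c in tour for fine in ch[c]]  # expand to next level
    return two_opt(tour, pos)                          # final refinement

# Example: 60 random points in the unit square.
random.seed(1)
cities = [(random.random(), random.random()) for _ in range(60)]
t = multilevel_tsp(cities)
print(len(t), round(tour_length(t, cities), 3))

Because each level's tour seeds the next finer level, refinement at the finest level starts from a globally sensible tour rather than from scratch, which is the essence of the multilevel approach.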
Abstract:
Numerical solutions of realistic 2-D and 3-D inverse problems may require a very large amount of computation. A two-level concept of parallelism is often used to solve such problems. The primary level uses the problem-partitioning concept, which is a decomposition based on the mathematical/physical problem. The secondary level utilizes the widely used data-partitioning concept. A theoretical performance model is built based on the two-level parallelism. The observed performance results, obtained from a network of general-purpose Sun SPARCstations, are compared with the theoretical values. Restrictions of the theoretical model are also discussed.
Abstract:
The SB distributional model of Johnson's 1949 paper was introduced by a transformation to normality, that is, z ~ N(0, 1), consisting of a linear scaling to the range (0, 1), a logit transformation, and an affine transformation, z = γ + δu. The model, in its original parameterization, has often been used in forest diameter distribution modelling. In this paper, we define the SB distribution in terms of the inverse transformation from normality, including an initial linear scaling transformation, u = γ′ + δ′z (δ′ = 1/δ and γ′ = −γ/δ). The SB model in terms of the new parameterization is derived, and maximum likelihood estimation schemes are presented for both model parameterizations. The statistical properties of the two alternative parameterizations are compared empirically on 20 data sets of diameter distributions of Changbai larch (Larix olgensis Henry). The new parameterization is shown to be statistically better than Johnson's original parameterization for the data sets considered here.
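For concreteness, writing \xi for the location (lower bound) and \lambda for the range of the diameter data (symbols assumed here, not taken from the abstract), the two parameterizations can be written as

z = \gamma + \delta \,\ln\!\frac{x - \xi}{\xi + \lambda - x}, \qquad z \sim N(0,1),

for Johnson's original form, and, in the inverse direction,

x = \xi + \frac{\lambda}{1 + e^{-u}}, \qquad u = \gamma' + \delta' z, \quad \delta' = 1/\delta, \ \gamma' = -\gamma/\delta,

so that the new parameters (\gamma', \delta') act directly on the standard normal variate before the logistic back-transformation to the diameter scale.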