916 results for heuristic
Abstract:
Much work has been done on learning from failure in search to boost solving of combinatorial problems, such as clause-learning and clause-weighting in boolean satisfiability (SAT), nogood and explanation-based learning, and constraint weighting in constraint satisfaction problems (CSPs). Many of the top solvers in SAT use clause learning to good effect. A similar approach (nogood learning) has not had as large an impact in CSPs. Constraint weighting is a less fine-grained approach where the information learnt gives an approximation as to which variables may be the sources of greatest contention. In this work we present two methods for learning from search using restarts, in order to identify these critical variables prior to solving. Both methods are based on the conflict-directed heuristic (weighted-degree heuristic) introduced by Boussemart et al. and are aimed at producing a better-informed version of the heuristic by gathering information through restarting and probing of the search space prior to solving, while minimizing the overhead of these restarts. We further examine the impact of different sampling strategies and different measurements of contention, and assess different restarting strategies for the heuristic. Finally, two applications for constraint weighting are considered in detail: dynamic constraint satisfaction problems and unary resource scheduling problems.
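For readers unfamiliar with the weighted-degree (dom/wdeg) heuristic that both methods build on, a minimal sketch follows: every constraint carries a weight that is incremented whenever propagating it causes a dead end, and branching prefers the variable with the smallest ratio of domain size to weighted degree. The class, function and variable names are illustrative only and do not come from the thesis.

```python
from collections import defaultdict

class WdegState:
    def __init__(self, constraints):
        # constraints: mapping constraint_id -> set of variable names in its scope
        self.constraints = constraints
        self.weight = defaultdict(lambda: 1)   # all weights start at 1

    def record_failure(self, constraint_id):
        """Called when propagating this constraint wipes out a domain."""
        self.weight[constraint_id] += 1

    def select_variable(self, domains, unassigned):
        """Pick the unassigned variable minimising |dom(x)| / wdeg(x)."""
        def wdeg(var):
            return sum(self.weight[c]
                       for c, scope in self.constraints.items()
                       if var in scope and len(scope & unassigned) > 1)
        return min(unassigned,
                   key=lambda v: len(domains[v]) / max(wdeg(v), 1))
```

Restart-based variants of the kind the abstract describes would carry these weights (or a probe-derived version of them) across restarts so that the final run starts from an informed ordering.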
Abstract:
With the proliferation of mobile wireless communication and embedded systems, energy efficiency becomes a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics which are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? Delay-driven power optimisation and power-driven delay optimisation are proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under the zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the globally optimal solution under the energy constraint is obtained. Uniform Cost Search (UCS) is a search algorithm used for traversing or searching a weighted tree or graph. We have used the Uniform Cost Search algorithm to search the AIG network for a specific node order in which to apply the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared with the best known ABC results. Our approach is also applied to a number of processors with combinational and sequential components, and significant savings are achieved.
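To make the simulated-annealing step concrete, here is a generic acceptance loop of the kind the abstract describes, swapping two positions in a node ordering and accepting worsening moves with a temperature-dependent probability. The cost callbacks power_of and delay_of stand in for AIG-based estimators; they are assumptions of this sketch, not part of ABC or of the thesis.

```python
import math
import random

def anneal_ordering(initial_order, power_of, delay_of, delay_limit,
                    t0=1.0, cooling=0.95, steps_per_temp=50, t_min=1e-3):
    """Generic simulated annealing: minimise estimated switching power under a
    delay constraint by swapping two positions in a node (re)ordering.
    `power_of` and `delay_of` are hypothetical evaluation callbacks."""
    current, best = list(initial_order), list(initial_order)
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            candidate = list(current)
            i, j = random.sample(range(len(candidate)), 2)
            candidate[i], candidate[j] = candidate[j], candidate[i]
            if delay_of(candidate) > delay_limit:
                continue                           # reject: delay constraint violated
            delta = power_of(candidate) - power_of(current)
            # always accept improvements; accept worsenings with prob. exp(-delta/t)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if power_of(current) < power_of(best):
                    best = list(current)
        t *= cooling                               # geometric cooling schedule
    return best
```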
Abstract:
The thesis examines cultural processes underpinning the emergence, institutionalisation and reproduction of class boundaries in Limerick city. The research aims to bring a new understanding to the contemporary context of the city’s urban regeneration programme. Acknowledging and recognising other contemporary studies of division and exclusion, the thesis creates a distinctive approach which focuses on uncovering the cultural roots of inequality, educational disadvantage, stigma and social exclusion, and the dynamics of their social reproduction. Using Bateson’s concept of schismogenesis (1953), the thesis looks to the persistent but fragmented culture of community and develops a heuristic ‘symbolic order of the city’. This is defined as “…a cultural structure, the meaning making aspect of hierarchy, the categorical structures of world understanding, the way Limerick people understand themselves, their local and larger world” (p. 37). This provides a very different departure point for exploring the basis for urban regeneration in Limerick (and everywhere). The central argument is that if we want to understand the present (multiple) crises in Limerick we need to understand the historical, anthropological and recursive processes underpinning ‘generalised patterns of rivalry and conflict’. In addition to exploring the historical roots of status and stigma in Limerick, the thesis explores the mythopoesis of persistent, recurrent narratives and labels that mark the boundaries of the city’s identities. The thesis examines the cultural and social function of ‘slagging’ as a vernacular and highly particularised ironic, ritualised and often ‘cruel’ medium of communication (and often of exclusion). This is combined with an etymology of the vocabulary of Limerick slang and its mythological base. By tracing the origins of many normalised patterns of Limerick speech and ‘sayings’, whose roots have long since been forgotten, the thesis demonstrates how they perform a significant contemporary function in maintaining and reinforcing symbolic mechanisms of inclusion/exclusion. The thesis combines historical and archival data with biographical interviews and ethnographic data, married to a deep historical hermeneutic analysis of this political community.
Abstract:
We describe a general technique for determining upper bounds on maximal values (or lower bounds on minimal costs) in stochastic dynamic programs. In this approach, we relax the nonanticipativity constraints that require decisions to depend only on the information available at the time a decision is made and impose a "penalty" that punishes violations of nonanticipativity. In applications, the hope is that this relaxed version of the problem will be simpler to solve than the original dynamic program. The upper bounds provided by this dual approach complement lower bounds on values that may be found by simulating with heuristic policies. We describe the theory underlying this dual approach and establish weak duality, strong duality, and complementary slackness results that are analogous to the duality results of linear programming. We also study properties of good penalties. Finally, we demonstrate the use of this dual approach in an adaptive inventory control problem with an unknown and changing demand distribution and in valuing options with stochastic volatilities and interest rates. These are complex problems of significant practical interest that are quite difficult to solve to optimality. In these examples, our dual approach requires relatively little additional computation and leads to tight bounds on the optimal values. © 2010 INFORMS.
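In practice the dual bound described here is usually estimated by Monte Carlo: sample a scenario path, solve the penalised inner problem with full hindsight on that path, and average the resulting optimal values. A minimal sketch, where the callbacks are problem-specific assumptions rather than anything taken from the paper:

```python
from statistics import mean

def dual_upper_bound(sample_path, solve_inner, penalty, n_paths=1000):
    """Monte Carlo estimate of the information-relaxation (dual) upper bound.
    sample_path()              -> one realisation of all the uncertainty
    solve_inner(path, penalty) -> optimal penalised value with full hindsight
    With a zero penalty this reduces to the perfect-information bound; a good
    penalty tightens it while keeping weak duality (it remains an upper bound)."""
    values = [solve_inner(sample_path(), penalty) for _ in range(n_paths)]
    return mean(values)
```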
Abstract:
Previous functional neuroimaging studies of temporal-order memory have investigated memory for laboratory stimuli that are causally unrelated and poor in sensory detail. In contrast, the present functional magnetic resonance imaging (fMRI) study investigated temporal-order memory for autobiographical events that were causally interconnected and rich in sensory detail. Participants took photographs at many campus locations over a period of several hours, and the following day they were scanned while making temporal-order judgments on pairs of photographs from different locations. By manipulating the temporal lag between the two locations in each trial, we compared the neural correlates associated with reconstruction processes, which we hypothesized to depend on recollection and to contribute mainly to short lags, and distance processes, which we hypothesized to depend on familiarity and to contribute mainly to longer lags. Consistent with our hypotheses, parametric fMRI analyses linked shorter lags to activations in regions previously associated with recollection (left prefrontal, parahippocampal, precuneus, and visual cortices), and longer lags to activations in regions previously associated with familiarity (right prefrontal cortex). The hemispheric asymmetry in prefrontal cortex activity fits very well with evidence and theories regarding the contributions of the left versus right prefrontal cortex to memory (recollection vs. familiarity processes) and cognition (systematic vs. heuristic processes). In sum, using a novel photo-paradigm, this study provided the first evidence regarding the neural correlates of temporal-order memory for autobiographical events.
Abstract:
FUELCON is an expert system for optimized refueling design in nuclear engineering. This task is crucial for keeping down operating costs at a plant without compromising safety. FUELCON proposes sets of alternative configurations of fuel-assembly allocation, with each assembly positioned in the planar grid of a horizontal section of a reactor core. Results are simulated, and an expert user can also use FUELCON to revise rulesets and improve on his or her heuristics. The successful completion of FUELCON led this research team to undertake a panoply of sequel projects, of which we provide a comparative, meta-architectural formal discussion. In this paper, we demonstrate a novel adaptive technique that learns the optimal allocation heuristic for the various cores. The algorithm is a hybrid of a fine-grained neural network and symbolic computation components. This hybrid architecture is sensitive enough to learn the particular characteristics of the ‘in-core fuel management problem’ at hand, and is powerful enough to use this information fully to revise heuristics automatically, thus improving upon those provided by a human expert.
Abstract:
The paper describes the design of an efficient and robust genetic algorithm for the nuclear fuel loading problem (i.e., refuelling: the in-core fuel management problem), a complex combinatorial, multimodal optimisation problem. Evolutionary computation as performed by FUELGEN replaces heuristic search of the kind performed by the FUELCON expert system (CAI 12/4) to solve the same problem. In contrast to the traditional genetic algorithm, which places strong requirements on the representation used and its parameter settings in order to be efficient, recent research results on new, robust genetic algorithms show that representations unsuitable for the traditional genetic algorithm can still be used to good effect with little parameter adjustment. The representation presented here is a simple symbolic one with no linkage attributes, making the genetic algorithm particularly easy to apply to fuel loading problems with differing core structures and assembly inventories. A nonlinear fitness function has been constructed to direct the search efficiently in the presence of the many local optima that result from the constraint on solutions.
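As a rough illustration of a genetic algorithm over such a symbolic, linkage-free representation (one gene per core position), consider the sketch below; the truncation selection, one-point crossover and fitness callback are generic assumptions of this sketch, not the FUELGEN design.

```python
import random

def evolve_loading(inventory, positions, fitness, pop_size=40, generations=200,
                   crossover_rate=0.8, mutation_rate=0.05):
    """Generic GA over symbolic assignments of assemblies to core positions.
    `fitness` is an assumed evaluation callback (e.g. a penalised simulator score)."""
    def random_individual():
        return [random.choice(inventory) for _ in positions]

    def crossover(a, b):                         # one-point crossover
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def mutate(ind):                             # re-draw a gene with small probability
        return [random.choice(inventory) if random.random() < mutation_rate else g
                for g in ind]

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b) if random.random() < crossover_rate else list(a)
            children.append(mutate(child))
        population = children
    return max(population, key=fitness)
```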
Abstract:
In many practical situations, batching of similar jobs to avoid setups is performed while constructing a schedule. This paper addresses the problem of non-preemptively scheduling independent jobs in a two-machine flow shop with the objective of minimizing the makespan. Jobs are grouped into batches. A sequence-independent batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. Besides its practical interest, this problem is a direct generalization of the classical two-machine flow shop problem with no grouping of jobs, which can be solved optimally by Johnson's well-known algorithm. The problem under investigation is known to be NP-hard. We propose two O(n log n) time heuristic algorithms. The first heuristic, which creates a schedule with minimum total setup time by forcing all jobs in the same batch to be sequenced in adjacent positions, has a worst-case performance ratio of 3/2. By allowing each batch to be split into at most two sub-batches, a second heuristic is developed which has an improved worst-case performance ratio of 4/3. © 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
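For reference, the classical two-machine flow shop that these batching heuristics generalise is solved exactly by Johnson's rule; a minimal sketch of Johnson's algorithm (not of the batching heuristics themselves) follows.

```python
def johnson_order(jobs):
    """Johnson's rule for the two-machine flow shop: jobs with a <= b go first
    in nondecreasing order of a; the remaining jobs go last in nonincreasing
    order of b.  `jobs` is a list of (a, b) processing-time pairs."""
    first = sorted((j for j in jobs if j[0] <= j[1]), key=lambda j: j[0])
    last = sorted((j for j in jobs if j[0] > j[1]), key=lambda j: j[1], reverse=True)
    return first + last

def makespan(sequence):
    """Makespan of a permutation schedule in a two-machine flow shop."""
    c1 = c2 = 0
    for a, b in sequence:
        c1 += a                      # machine 1 finishes this job
        c2 = max(c2, c1) + b         # machine 2 starts once both are ready
    return c2
```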
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule whose makespan is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998
Abstract:
We consider two “minimum” NP-hard job shop scheduling problems to minimize the makespan. In one of the problems every job has to be processed on at most two out of three available machines. In the other problem there are two machines, and a job may visit one of the machines twice. For each problem, we define a class of heuristic schedules in which certain subsets of operations are kept as blocks on the corresponding machines. We show that for each problem the value of the makespan of the best schedule in that class cannot be less than 3/2 times the optimal value, and present algorithms that guarantee a worst-case ratio of 3/2.
Abstract:
This paper examines scheduling problems in which the setup phase of each operation needs to be attended by a single server, common for all jobs and different from the processing machines. The objective in each situation is to minimize the makespan. For the processing system consisting of two parallel dedicated machines we prove that the problem of finding an optimal schedule is NP-hard in the strong sense even if all setup times are equal or if all processing times are equal. For the case of m parallel dedicated machines, a simple greedy algorithm is shown to create a schedule whose makespan is at most twice the optimal value. For the two-machine case, an improved heuristic guarantees a tight worst-case ratio of 3/2. We also describe several polynomially solvable cases of the latter problem. The two-machine flow shop and the open shop problems with a single server are also shown to be NP-hard in the strong sense. However, we reduce the two-machine flow shop no-wait problem with a single server to the Gilmore-Gomory traveling salesman problem and solve it in polynomial time. (c) 2000 John Wiley & Sons, Inc.
Abstract:
This paper studies the problem of scheduling jobs in a two-machine open shop to minimize the makespan. Jobs are grouped into batches and are processed without preemption. A batch setup time on each machine is required before the first job is processed, and when a machine switches from processing a job in some batch to a job of another batch. For this NP-hard problem, we propose a linear-time heuristic algorithm that creates a group technology schedule, in which no batch is split into sub-batches. We demonstrate that our heuristic is a -approximation algorithm. Moreover, we show that no group technology algorithm can guarantee a worst-case performance ratio less than 5/4.
Abstract:
The paper considers a problem of scheduling n jobs in a two-machine open shop to minimise the makespan, provided that preemption is not allowed and the interstage transportation times are involved. In general, this problem is known to be NP-hard. We present a linear time algorithm that finds an optimal schedule if no transportation time exceeds the smallest of the processing times. We also describe an algorithm that creates a heuristic solution to the problem with job-independent transportation times. Our algorithm provides a worst-case performance ratio of 8/5 if the transportation time of a job depends on the assigned processing route. The ratio reduces to 3/2 if all transportation times are equal.
Abstract:
Graph partitioning divides a graph into several pieces by cutting edges. Very effective heuristic partitioning algorithms have been developed which run in real-time, but it is unknown how good the partitions are since the problem is, in general, NP-complete. This paper reports an evolutionary search algorithm for finding benchmark partitions. Distinctive features are the transmission and modification of whole subdomains (the partitioned units) that act as genes, and the use of a multilevel heuristic algorithm to effect the crossover and mutations. Its effectiveness is demonstrated by improvements on previously established benchmarks.
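The quality measure such benchmark partitions are judged by is the edge cut, usually subject to a balance constraint on part sizes; a minimal evaluation sketch (not the evolutionary algorithm itself) is shown below.

```python
def edge_cut(edges, part):
    """Number of edges whose endpoints fall in different parts.
    `edges` is a list of (u, v) pairs; `part` maps each vertex to a part index."""
    return sum(1 for u, v in edges if part[u] != part[v])

def balance(part, num_parts):
    """Ratio of the largest part size to the average part size (1.0 = perfectly balanced)."""
    sizes = [sum(1 for p in part.values() if p == k) for k in range(num_parts)]
    return max(sizes) / (len(part) / num_parts)
```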
Abstract:
We describe a heuristic method for drawing graphs which uses a multilevel technique combined with a force-directed placement algorithm. The multilevel process groups vertices to form clusters, uses the clusters to define a new graph, and is repeated until the graph size falls below some threshold. The coarsest graph is then given an initial layout and the layout is successively refined on all the graphs, starting with the coarsest and ending with the original. In this way the multilevel algorithm both accelerates and gives a more global quality to the force-directed placement. The algorithm can compute both 2- and 3-dimensional layouts and we demonstrate it on a number of examples ranging from 500 to 225,000 vertices. It is also very fast: it can compute a 2D layout of a sparse graph in around 30 seconds for a 10,000-vertex graph and around 10 minutes for the largest graph. This is an order of magnitude faster than recent implementations of force-directed placement algorithms.
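The force-directed component can be illustrated with a compact Fruchterman-Reingold style iteration; the multilevel acceleration described in the abstract (coarsening, initial layout on the coarsest graph, successive refinement) is omitted here, and the constants are illustrative rather than those of the paper.

```python
import math
import random

def force_directed_layout(vertices, edges, iterations=100, width=1.0, height=1.0):
    """Single-level force-directed placement: vertices repel, edges attract,
    and the step size is cooled over the iterations.  `vertices` is a list of
    hashable ids and `edges` a list of (u, v) pairs."""
    k = math.sqrt(width * height / len(vertices))        # ideal edge length
    pos = {v: (random.uniform(0, width), random.uniform(0, height)) for v in vertices}
    temp = width / 10.0
    for _ in range(iterations):
        disp = {v: [0.0, 0.0] for v in vertices}
        for i, u in enumerate(vertices):                  # pairwise repulsion
            for v in vertices[i + 1:]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                d = max(math.hypot(dx, dy), 1e-9)
                f = k * k / d
                disp[u][0] += dx / d * f; disp[u][1] += dy / d * f
                disp[v][0] -= dx / d * f; disp[v][1] -= dy / d * f
        for u, v in edges:                                # attraction along edges
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = max(math.hypot(dx, dy), 1e-9)
            f = d * d / k
            disp[u][0] -= dx / d * f; disp[u][1] -= dy / d * f
            disp[v][0] += dx / d * f; disp[v][1] += dy / d * f
        for v in vertices:                                # move, capped by temperature
            dx, dy = disp[v]
            d = max(math.hypot(dx, dy), 1e-9)
            step = min(d, temp)
            pos[v] = (pos[v][0] + dx / d * step, pos[v][1] + dy / d * step)
        temp *= 0.95                                      # cool
    return pos
```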