Abstract:
The OPIT program is briefly described. OPIT is a basis-set-optimising, self-consistent field, molecular orbital program for calculating properties of closed-shell ground states of atoms and molecules. A file handling technique is then put forward which enables core storage to be used efficiently in large FORTRAN scientific applications programs. Hashing and list processing techniques, of the type frequently used in writing system software and computer operating systems, are here applied to the creation of data files (integral label and value lists etc.). Files consist of a chained series of blocks which may exist in core or on backing store or both. Efficient use of core store is achieved and the processes of file deletion, file re-writing and garbage collection of unused blocks can be easily arranged. The scheme is exemplified with reference to the OPIT program. A subsequent paper will describe a job scheduling scheme for large programs of this sort.
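A minimal sketch may help make the chained-block idea concrete. The block size, names and pool structure below are invented for illustration and do not reflect the actual OPIT layout; the point is that files are chains of fixed-size blocks drawn from a shared pool, so deleting or rewriting a file simply returns its blocks to a free list for later reuse (garbage collection).

```python
# Illustrative sketch of a chained-block file pool; block size and names are
# invented for this example and do not correspond to the actual OPIT layout.
BLOCK_SIZE = 4  # values per block (deliberately tiny for demonstration)

class BlockPool:
    def __init__(self):
        self.blocks = []   # each block: {"data": [...], "next": index or None}
        self.free = []     # indices of garbage-collected blocks available for reuse

    def new_block(self):
        if self.free:                      # reuse a freed block first
            i = self.free.pop()
            self.blocks[i] = {"data": [], "next": None}
            return i
        self.blocks.append({"data": [], "next": None})
        return len(self.blocks) - 1

class ChainedFile:
    def __init__(self, pool):
        self.pool = pool
        self.head = self.tail = pool.new_block()

    def append(self, value):
        block = self.pool.blocks[self.tail]
        if len(block["data"]) == BLOCK_SIZE:   # block full: chain a new one
            nxt = self.pool.new_block()
            block["next"] = nxt
            self.tail = nxt
            block = self.pool.blocks[nxt]
        block["data"].append(value)

    def read(self):
        i = self.head
        while i is not None:                   # walk the chain in order
            block = self.pool.blocks[i]
            yield from block["data"]
            i = block["next"]

    def delete(self):
        """Return every block in the chain to the pool's free list."""
        i = self.head
        while i is not None:
            nxt = self.pool.blocks[i]["next"]
            self.pool.free.append(i)
            i = nxt
```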
Abstract:
In this paper we carry out an investigation of some of the major features of exam timetabling problems with a view to developing a similarity measure. This similarity measure will be used within a case-based reasoning (CBR) system to match a new problem with one from a case base of previously solved problems. The case base will also store the heuristic or meta-heuristic technique(s) applied most successfully to each problem stored. The technique(s) stored with the matched case will be retrieved and applied to the new case. The CBR assumption in our system is that similar problems can be solved equally well by the same technique.
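As a rough illustration of what such a similarity measure might look like (the features and weights below are assumptions for this sketch, not the measure developed in the paper), a new problem can be compared to stored cases by a weighted, normalised feature distance, and the most similar case retrieved:

```python
# Hedged sketch of a feature-based similarity measure between two exam
# timetabling problems; the features and weights are illustrative guesses.
def similarity(p1, p2, weights=None):
    features = ["num_exams", "num_students", "conflict_density", "num_rooms"]
    weights = weights or {f: 1.0 for f in features}
    score = 0.0
    for f in features:
        a, b = p1[f], p2[f]
        dist = abs(a - b) / max(a, b, 1)      # normalised distance in [0, 1]
        score += weights[f] * (1.0 - dist)    # convert to a closeness score
    return score / sum(weights.values())

def retrieve_best_case(new_problem, case_base):
    """Return the stored case (problem plus technique) most similar to new_problem."""
    return max(case_base, key=lambda case: similarity(new_problem, case["problem"]))
```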
Abstract:
A large number of heuristic algorithms have been developed over the years aimed at solving examination timetabling problems. However, many of these algorithms have been developed specifically to solve one particular problem instance or a small subset of instances related to a given real-life problem. Our aim is to develop a more general system which, when given any exam timetabling problem, will produce results comparable to those of a specially designed heuristic for that problem. We are investigating a case-based reasoning (CBR) technique to select from a set of algorithms which have been applied successfully to similar problem instances in the past. The assumption in CBR is that similar problems have similar solutions. For our system, the assumption is that an algorithm used to find a good solution to one problem will also produce a good result for a similar problem. The key to the success of the system will be our definition of similarity between two exam timetabling problems. The study will be carried out by running a series of tests using a simple Simulated Annealing algorithm on a range of problems with differing levels of similarity and examining the data sets in detail. In this paper an initial investigation of the key factors which will be involved in this measure is presented, together with a discussion of how the definition of a 'good' solution impacts upon this.
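For context, the kind of simulated annealing loop used in such tests might look like the sketch below. The cooling schedule, neighbourhood move and cost function are assumptions made for illustration, not the paper's exact configuration.

```python
import math
import random

# Illustrative simulated annealing loop for timetabling; `cost` and `neighbour`
# are problem-specific functions supplied by the caller and are assumptions here.
def simulated_annealing(initial_timetable, cost, neighbour,
                        t_start=100.0, t_end=0.1, alpha=0.95, moves_per_temp=100):
    current = initial_timetable
    best = current
    t = t_start
    while t > t_end:
        for _ in range(moves_per_temp):
            candidate = neighbour(current)
            delta = cost(candidate) - cost(current)
            # accept improving moves always, worsening moves with Boltzmann probability
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha   # geometric cooling
    return best
```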
Abstract:
This paper presents an investigation of a simple generic hyper-heuristic approach upon a set of widely used constructive heuristics (graph coloring heuristics) in timetabling. Within the hyper-heuristic framework, a Tabu Search approach is employed to search for permutations of graph heuristics which are used for constructing timetables in exam and course timetabling problems. This underpins a multi-stage hyper-heuristic where the Tabu Search employs permutations upon a different number of graph heuristics in two stages. We study this graph-based hyper-heuristic approach within the context of exploring fundamental issues concerning the search space of the hyper-heuristic (the heuristic space) and the solution space. Such issues have not been addressed in other hyper-heuristic research. These approaches are tested on both exam and course benchmark timetabling problems and are compared with fine-tuned, bespoke state-of-the-art approaches. The results are within the range of the best results reported in the literature. The approach described here represents a significantly more generally applicable approach than the current state of the art in the literature. Future work will extend this hyper-heuristic framework by employing methodologies which are applicable to a wider range of timetabling and scheduling problems.
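A hedged sketch of the central idea, Tabu Search operating on permutations of graph heuristics rather than on timetables directly, is given below. The heuristic names, tabu tenure and the `build_timetable`/`penalty` functions are placeholders for the problem-specific construction and evaluation and are not taken from the paper.

```python
import random
from collections import deque

# Sketch of a graph-based hyper-heuristic: each candidate is an ordering of
# constructive graph heuristics, scored by the timetable it builds.
GRAPH_HEURISTICS = ["largest_degree", "saturation_degree", "largest_enrolment",
                    "largest_weighted_degree", "random_ordering"]

def tabu_search_over_heuristics(build_timetable, penalty, iterations=100, tabu_len=7):
    current = GRAPH_HEURISTICS[:]                       # initial heuristic permutation
    best, best_cost = current[:], penalty(build_timetable(current))
    tabu = deque(maxlen=tabu_len)                       # recently swapped position pairs

    for _ in range(iterations):
        # examine all pairwise swaps of the current permutation
        neighbours = []
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                cand = current[:]
                cand[i], cand[j] = cand[j], cand[i]
                neighbours.append(((i, j), cand, penalty(build_timetable(cand))))
        neighbours.sort(key=lambda n: n[2])
        # take the best non-tabu move; aspiration allows tabu moves that beat the best
        for move, cand, cost in neighbours:
            if move not in tabu or cost < best_cost:
                tabu.append(move)
                current = cand
                if cost < best_cost:
                    best, best_cost = cand[:], cost
                break
    return best, best_cost
```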
Abstract:
The structured representation of cases by attribute graphs in a Case-Based Reasoning (CBR) system for course timetabling has been the subject of previous research by the authors. In that system, the case base is organised as a decision tree and the retrieval process chooses those cases whose attribute graphs are sub-graph isomorphic to the new case. The drawback of that approach is that it is not suitable for solving large problems. This paper presents a multiple-retrieval approach that partitions a large problem into small solvable sub-problems by recursively inputting the unsolved part of the graph into the decision tree for retrieval. The adaptation combines the retrieved partial solutions of all the partitioned sub-problems and employs a graph heuristic method to construct the whole solution for the new case. We present a methodology which is not dependent upon problem-specific information and which, as such, underpins the goal of building more general timetabling systems. We also explore the question of whether this multiple-retrieval CBR could be an effective initialisation method for local search methods such as Hill Climbing, Tabu Search and Simulated Annealing. Significant results are obtained from a wide range of experiments. An evaluation of the CBR system is presented and the impact of the approach on timetabling research is discussed. We see that the approach does indeed represent an effective initialisation method for these local search methods.
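The multiple-retrieval loop can be sketched roughly as follows; `retrieve_partial`, `combine` and `construct_rest` stand in for the decision-tree retrieval, the adaptation step and the graph-heuristic construction described above, and the data layout is an assumption for this example.

```python
# Hedged sketch of multiple retrieval: repeatedly retrieve a stored case covering
# part of the new problem, remove the covered part, and recurse on what remains.
def multiple_retrieval(unscheduled_events, retrieve_partial, combine, construct_rest):
    """unscheduled_events: set of events (graph vertices) still to be timetabled."""
    partial_solutions = []
    remaining = set(unscheduled_events)
    while remaining:
        case = retrieve_partial(remaining)     # decision-tree retrieval on the unsolved part
        if case is None:                       # no sufficiently similar sub-case left
            break
        partial_solutions.append(case["solution"])
        remaining -= case["covered_events"]    # feed the uncovered part back for retrieval
    # adaptation: merge the retrieved partial solutions and construct the remainder
    # of the timetable with a graph heuristic
    return combine(partial_solutions, construct_rest(remaining))
```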
Abstract:
This paper presents a case-based heuristic selection approach for automated university course and exam timetabling. The method described in this paper is motivated by the goal of developing timetabling systems that are fundamentally more general than the current state of the art. Heuristics that worked well in previous similar situations are memorized in a case base and are retrieved for solving the problem at hand. Knowledge discovery techniques are employed in two distinct scenarios. Firstly, we model the problem and the problem-solving situations along with specific heuristics for those problems. Secondly, we refine the case base and discard cases which prove not to be useful in solving new problems. Experimental results are presented and analyzed. It is shown that case-based reasoning can act effectively as an intelligent approach to learn which heuristics work well for particular timetabling situations. We conclude by outlining and discussing potential research issues in this critical area of knowledge discovery for a variety of difficult timetabling problems.
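A simple sketch of the second knowledge discovery scenario, refining the case base by discarding cases that prove not to be useful, might look like the following; the usage bookkeeping and thresholds are assumptions made for illustration, not the paper's procedure.

```python
# Illustrative case-base refinement: cases whose suggested heuristic repeatedly
# fails to help on new problems are discarded; thresholds are arbitrary here.
def refine_case_base(case_base, min_trials=5, min_success_rate=0.5):
    kept = []
    for case in case_base:
        trials = case.get("times_retrieved", 0)
        wins = case.get("times_helpful", 0)
        if trials < min_trials or wins / trials >= min_success_rate:
            kept.append(case)    # keep cases that are still unproven or clearly useful
        # otherwise the case is dropped as not useful for solving new problems
    return kept
```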
Abstract:
Information concerning the run-time behaviour of programs ("program profiling") can be of the greatest assistance in improving program efficiency. Two software devices have been developed for use on ICL 1900 Series machines to provide such information. DIDYMUS is probabilistic in approach and uses multi-tasking facilities to sample the instruction addresses used by a program at run time. It will work regardless of the source language of the program and matches the detected addresses against a loader map to produce a histogram. SCAMP is restricted to profiling Algol 68-R programs, but provides deterministic information concerning those language constructs that are monitored. Procedure calls to appropriate counting routines are inserted into the source text in a pre-pass prior to compilation. The profile information is printed out at the end of the program run. It has been found that these two approaches complement each other very effectively.
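Although the original tools worked at the machine-code level on ICL 1900 hardware, the sampling idea behind DIDYMUS can be illustrated with a toy profiler: a separate task periodically records where execution currently is and accumulates a histogram. Everything below is a modern, hypothetical analogue, not a reconstruction of DIDYMUS.

```python
import collections
import sys
import threading
import time

# Toy sampling profiler: a background thread samples the main thread's current
# function (rather than an instruction address) and builds a histogram.
class Sampler:
    def __init__(self, interval=0.001):
        self.interval = interval
        self.counts = collections.Counter()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        main_id = threading.main_thread().ident
        while not self._stop.is_set():
            frame = sys._current_frames().get(main_id)
            if frame is not None:
                self.counts[frame.f_code.co_name] += 1   # histogram bucket per function
            time.sleep(self.interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
        return self.counts.most_common()   # functions ranked by sample count
```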
Abstract:
This paper presents our work on analysing the high-level search within a graph-based hyper-heuristic. The graph-based hyper-heuristic solves the problem at a higher level by searching through permutations of graph heuristics rather than the actual solutions. The heuristic permutations are then used to construct the solutions. Variable Neighborhood Search, Steepest Descent, Iterated Local Search and Tabu Search are compared. An analysis of their performance within the high-level search space of heuristics is also carried out. Experimental results on benchmark exam timetabling problems demonstrate the simplicity and efficiency of this hyper-heuristic approach. They also indicate that the choice of the high-level search methodology is not crucial and that the high-level search should explore the heuristic search space as widely as possible within a limited searching time. This simple and general graph-based hyper-heuristic may be applied to a range of timetabling and optimisation problems.
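As an example of one of the compared high-level search methods, a Steepest Descent search in the heuristic space might be sketched as below; `evaluate` is assumed to construct a timetable from the heuristic sequence and return its penalty, and is not part of the original work.

```python
# Hedged sketch of Steepest Descent over sequences of graph heuristics rather
# than over timetables directly; `evaluate` is a placeholder cost function.
def steepest_descent(sequence, evaluate, max_iters=100):
    current, current_cost = sequence[:], evaluate(sequence)
    for _ in range(max_iters):
        best_neighbour, best_cost = None, current_cost
        # consider every pairwise swap in the heuristic sequence
        for i in range(len(current)):
            for j in range(i + 1, len(current)):
                cand = current[:]
                cand[i], cand[j] = cand[j], cand[i]
                cost = evaluate(cand)
                if cost < best_cost:
                    best_neighbour, best_cost = cand, cost
        if best_neighbour is None:          # local optimum in the heuristic space
            break
        current, current_cost = best_neighbour, best_cost
    return current, current_cost
```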
Abstract:
To understand the evolution of bipedalism among the hominoids in an ecological context we need to be able to estimate the energetic cost of locomotion in fossil forms. Ideally such an estimate would be based entirely on morphology since, except for the rare instances where footprints are preserved, this is the only primary source of evidence available. In this paper we use evolutionary robotics techniques (genetic algorithms, pattern generators and mechanical modeling) to produce a biomimetic simulation of bipedalism based on human body dimensions. The mechanical simulation is a seven-segment, two-dimensional model with motive force provided by tension generators representing the major muscle groups acting around the lower-limb joints. Metabolic energy costs are calculated from the muscle model, and bipedal gait is generated using a finite-state pattern generator whose parameters are produced using a genetic algorithm with locomotor economy (maximum distance for a fixed energy cost) as the fitness criterion. The model is validated by comparing the values it generates with those for modern humans. The result (maximum efficiency of 200 J m⁻¹) is within 15% of the experimentally derived value, which is very encouraging and suggests that this is a useful analytic technique for investigating the locomotor behaviour of fossil forms. Initial work suggests that in the future this technique could be used to estimate other locomotor parameters such as top speed. In addition, the animations produced by this technique are qualitatively very convincing, which suggests that this may also be a useful technique for visualizing bipedal locomotion.
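Only the optimisation loop is easy to sketch in isolation; the hedged example below shows a genetic algorithm evolving gait parameters with locomotor economy (distance covered for a fixed energy budget) as the fitness criterion. The parameter encoding and `simulate_gait` are placeholders for the seven-segment mechanical model and muscle energetics, which are not reproduced here.

```python
import random

# Illustrative GA loop only; `simulate_gait(genome, energy_budget)` is assumed to
# run the mechanical simulation and return the distance covered before the energy
# budget is exhausted (or the model falls over).
def evolve_gait(simulate_gait, n_params=16, pop_size=50, generations=200,
                mutation_rate=0.1, energy_budget=1000.0):
    population = [[random.uniform(-1, 1) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # fitness: locomotor economy, i.e. distance travelled for a fixed energy cost
        scored = sorted(population,
                        key=lambda genome: simulate_gait(genome, energy_budget),
                        reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                     for g in child]                # Gaussian mutation
            offspring.append(child)
        population = parents + offspring
    return max(population, key=lambda genome: simulate_gait(genome, energy_budget))
```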
Abstract:
This article analyzes how the selection process for the executive affects the risk of rebellion and insurgencies in sub-Saharan Africa between 1971 and 1995. Four executive recruitment processes are distinguished, which are characteristic of the African context: (1) a process without elections, (2) single-candidate elections, (3) single-party, multiple-candidate elections, and (4) multiparty executive elections. The results suggest that single-candidate elections and multiparty elections substantially reduce the risk of insurgencies compared to systems without any kind of executive elections. They further show that during times of political instability the risk of large-scale violent dissent increases substantially. The article supports findings from the civil war literature that higher levels of income are associated with a lower risk of intrastate violence, while oil-exporting countries are at a higher risk of rebellion. In short, this article further strengthens the case for using more specific measures of elements of political regimes, which also take into account regional particularities, in order to paint a more informative picture of how political structures influence the risk of internal violence.
Abstract:
Network intrusion detection systems are themselves becoming targets of attackers. Alert flood attacks may be used to conceal malicious activity by hiding it among a deluge of false alerts sent by the attacker. Although these types of attacks are very hard to stop completely, our aim is to present techniques that improve alert throughput and capacity to such an extent that the resources required to successfully mount the attack become prohibitive. The key idea presented is to combine a token bucket filter with a real-time correlation algorithm. The proposed algorithm throttles alert output from the IDS when an attack is detected. The attack graph used in the correlation algorithm ensures that alerts crucial to forming attack strategies are not discarded by throttling.
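A token bucket filter of the kind combined with the correlation algorithm can be sketched as follows; the rate, burst size and the idea of letting correlation-critical alerts bypass the throttle are illustrative assumptions rather than the paper's exact mechanism.

```python
import time

# Simple token bucket for alert throttling; rate and burst size are arbitrary.
# Alerts flagged as crucial by the correlation step are assumed to bypass the
# throttle, reflecting the requirement that strategy-forming alerts are kept.
class TokenBucket:
    def __init__(self, rate=100.0, burst=500):
        self.rate = rate               # tokens (alerts) replenished per second
        self.capacity = burst          # maximum burst of alerts allowed through
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, alert, crucial=False):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if crucial:                    # correlation-critical alerts are never dropped
            return True
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                   # throttled: the alert flood exceeds capacity
```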