37 results for Coligny, Gaspard de, seigneur de Châtillon, 1519-1572
in Greenwich Academic Literature Archive - UK
Abstract:
The paper considers a scheduling model that generalizes the well-known open shop, flow shop, and job shop models. For that model, called the super shop, we study the complexity of finding a time-optimal schedule in both preemptive and non-preemptive cases assuming that precedence constraints are imposed over the set of jobs. Two types of precedence relations are considered. Most of the arising problems are proved to be NP-hard, while for some of them polynomial-time algorithms are presented.
Abstract:
The paper presents an improved version of the greedy open shop approximation algorithm with pre-ordering of jobs. It is shown that the algorithm compares favorably with the greedy algorithm with no pre-ordering by reducing either its absolute or relative error. In the case of three machines, the new algorithm creates a schedule with a makespan that is at most 3/2 times the optimal value.
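The abstract does not reproduce the algorithm itself, but the general shape of a greedy open shop list scheduler driven by a job pre-ordering can be sketched as follows. This is a minimal illustration only: the ordering key (non-increasing total processing time), the function name, and the event-driven loop are assumptions made here, and no claim is made that this sketch attains the 3/2 bound reported in the paper.

```python
def greedy_open_shop(p, order=None):
    """Greedy list scheduling for an open shop (illustrative sketch).

    p[j][k] is the processing time of job j on machine k (0 = no operation).
    order is the pre-ordering of jobs used whenever several jobs are waiting
    for the same machine; by default, jobs are taken in non-increasing order
    of total processing time (an assumed, illustrative key).  Whenever a
    machine is idle and some job that still needs it is not being processed
    elsewhere, the first such job in the ordering is started on it.
    Returns the makespan of the resulting schedule.
    """
    n, m = len(p), len(p[0])
    if order is None:
        order = sorted(range(n), key=lambda j: -sum(p[j]))
    remaining = [{k for k in range(m) if p[j][k] > 0} for j in range(n)]
    job_free = [0] * n    # time at which job j finishes its current operation
    mach_free = [0] * m   # time at which machine k next becomes idle
    t, makespan = 0, 0
    while any(remaining):
        started = False
        for k in range(m):
            if mach_free[k] > t:
                continue
            for j in order:
                if k in remaining[j] and job_free[j] <= t:
                    finish = t + p[j][k]
                    mach_free[k] = job_free[j] = finish
                    makespan = max(makespan, finish)
                    remaining[j].discard(k)
                    started = True
                    break
        if not started:
            # nothing can start now: advance to the next completion time
            t = min(x for x in job_free + mach_free if x > t)
    return makespan

# Example: three jobs on three machines,
# e.g. greedy_open_shop([[3, 2, 0], [1, 4, 2], [2, 0, 3]])
```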
Abstract:
This paper considers the problem of sequencing n jobs in a two-machine re-entrant shop with the objective of minimizing the maximum completion time. The shop consists of two machines, M1 and M2, and each job has the processing route (M1, M2, M1). An O(n log n) time heuristic is presented which generates a schedule with length at most 4/3 times that of an optimal schedule, thereby improving the best previously available worst-case performance ratio of 3/2.
Abstract:
This paper considers the problem of minimizing the schedule length of a two-machine shop in which not only can a job be assigned either of the two possible routes, but the processing times also depend on the chosen route. This problem is known to be NP-hard. We describe a simple approximation algorithm that guarantees a worst-case performance ratio of 2. We also present some modifications to this algorithm that improve its performance and guarantee a worst-case performance ratio of 3/2.
Abstract:
The paper considers the three-machine open shop scheduling problem to minimize the makespan. It is assumed that each job consists of at most two operations, one of which is to be processed on the bottleneck machine, the same for all jobs. A new lower bound on the optimal makespan is derived, and a linear-time algorithm for finding an optimal non-preemptive schedule is presented.
Abstract:
The adsorption of a C60 monolayer on a graphite substrate was modelled via molecular dynamics simulation covering a significant period of 160 picoseconds. The final configuration of C60s agrees closely with that observed in a scanning tunnelling microscopy (STM) experiment. Clusters of adsorbed molecules were then selected and their STM-like images were computed via the Keldysh Green function method.
Abstract:
The paper deals with the determination of an optimal schedule for the so-called mixed shop problem when the makespan has to be minimized. In such a problem, some jobs have fixed machine orders (as in the job-shop), while the operations of the other jobs may be processed in arbitrary order (as in the open-shop). We prove binary NP-hardness of the preemptive problem with three machines and three jobs (two jobs have fixed machine orders and one may have an arbitrary machine order). We answer all other remaining open questions on the complexity status of mixed-shop problems with the makespan criterion by presenting different polynomial and pseudopolynomial algorithms.
Abstract:
Lennart Åqvist (1992) proposed a logical theory of legal evidence, based on the Bolding-Ekelöf degrees of evidential strength. This paper reformulates Åqvist's model in terms of the probabilistic version of the kappa calculus. Proving its acceptability in the legal context is beyond the present scope, but the epistemological debate about Bayesian Law is clearly relevant. While the present model is a possible link to that line of inquiry, we offer some considerations about the broader picture of the potential of AI & Law in the evidentiary context. Whereas probabilistic reasoning is well-researched in AI, calculations about the threshold of persuasion in litigation, whatever their value, are just the tip of the iceberg. The bulk of the modeling desiderata is arguably elsewhere, if one is to ideally make the most of AI's distinctive contribution as envisaged for legal evidence research.
Abstract:
Review of: Handbook of Psychology in Legal Contexts. Ray Bull and David Carson (eds.), Wiley-Blackwell, 1999.
Abstract:
This Acknowledgement refers to the special issue "Formal Approaches to Legal Evidence" of the journal Artificial Intelligence and Law, September 2001, Vol. 9, Issue 2-3, which was guest edited by Ephraim Nissan.
Abstract:
This is the special issue "Formal Approaches to Legal Evidence" of the journal Artificial Intelligence and Law, September 2001, Vol. 9, Issue 2-3, which was guest edited by Ephraim Nissan.
Abstract:
This paper presents data relating to occupant pre-evacuation times from university and hospital outpatient facilities. Although the two occupancies are entirely different, they do employ relatively similar procedures: members of staff sweep areas to encourage individuals to evacuate. However, the manner in which the dependent population reacts to these procedures is quite different. In the hospital case, the patients only evacuated once a member of the nursing staff had instructed them to do so, while in the university evacuation, the students were less dependent upon the actions of the staff, with over 50% of them evacuating with no prior prompting. In addition, the student pre-evacuation time was found to be dependent on their level of engagement in various activities.
Abstract:
On 19 June 2001, a Thames passenger/tour boat underwent several evacuation trials. This work was conducted in order to collect data for the validation of marine-based computer models. The trials involved 111 participants who were distributed throughout the vessel. The boat had two decks and two points of exit from the lower deck, placed on either side of the craft, forward and aft. The boat had a twin set of staircases towards the rear of the craft, just forward of the rear exits. maritimeEXODUS was used to simulate the full-scale evacuation trials conducted. The simulation times generated were compared against the original results and categorised according to the exit point availability. The predictions closely approximate the original results, differing by an average of 6.6% across the comparisons, with numerous qualitative similarities between the predictions and experimental results. The maritimeEXODUS evacuation model was then used to examine the evacuation procedure currently employed on the vessel, which was found to have the potential to produce long evacuation times. maritimeEXODUS was used to suggest modifications to the mustering procedures. These theoretical results suggest that it is possible to significantly reduce evacuation times.
Abstract:
This paper considers a variant of the classical problem of minimizing makespan in a two-machine flow shop. In this variant, each job has three operations, where the first operation must be performed on the first machine, the second operation can be performed on either machine but cannot be preempted, and the third operation must be performed on the second machine. The NP-hard nature of the problem motivates the design and analysis of approximation algorithms. It is shown that a schedule in which the operations are sequenced arbitrarily, but without inserted machine idle time, has a worst-case performance ratio of 2. Also, an algorithm that constructs four schedules and selects the best is shown to have a worst-case performance ratio of 3/2. A polynomial time approximation scheme (PTAS) is also presented.
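As a rough illustration of the first result, the sketch below constructs one schedule with no inserted machine idle time (a machine waits only when no operation is ready for it) and pairs it with a simple makespan lower bound. The pre-assignment of every flexible middle operation to the first machine, the particular lower bound, and all names are assumptions made here for illustration; the sketch is not claimed to reproduce the paper's constructions or its 3/2 algorithm.

```python
def dense_schedule_makespan(jobs, flexible_on=0):
    """One arbitrary schedule with no inserted machine idle time (sketch).

    jobs is a list of triples (a, b, c): operation a must run on machine 0,
    c on machine 1, and the middle operation b is flexible; here it is simply
    pre-assigned to machine `flexible_on` (an arbitrary illustrative choice).
    Within a job the order a -> b -> c is respected, and a machine is never
    left idle while some operation is ready for it.  Returns the makespan.
    """
    routes = [[(0, a), (flexible_on, b), (1, c)] for a, b, c in jobs]
    nxt = [0] * len(jobs)        # index of each job's next operation
    job_free = [0] * len(jobs)   # time the job's previous operation finishes
    mach_free = [0, 0]
    t, makespan = 0, 0
    while any(i < 3 for i in nxt):
        started = False
        for m in (0, 1):
            if mach_free[m] > t:
                continue
            for j, ops in enumerate(routes):
                if nxt[j] < 3 and ops[nxt[j]][0] == m and job_free[j] <= t:
                    finish = t + ops[nxt[j]][1]
                    mach_free[m] = job_free[j] = finish
                    makespan = max(makespan, finish)
                    nxt[j] += 1
                    started = True
                    break
        if not started:
            t = min(x for x in job_free + mach_free if x > t)
    return makespan


def makespan_lower_bound(jobs):
    """A simple lower bound: each machine's mandatory load, the longest job,
    and half of the total work (assumed here, not taken from the paper)."""
    total = sum(a + b + c for a, b, c in jobs)
    return max(sum(a for a, _, _ in jobs),
               sum(c for _, _, c in jobs),
               max(a + b + c for a, b, c in jobs),
               total / 2)
```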
Abstract:
We consider the multilevel paradigm and its potential to aid the solution of combinatorial optimisation problems. The multilevel paradigm is a simple one, which involves recursive coarsening to create a hierarchy of approximations to the original problem. An initial solution is found (sometimes for the original problem, sometimes for the coarsest) and then iteratively refined at each level. As a general solution strategy, the multilevel paradigm has been in use for many years and has been applied to many problem areas (most notably in the form of multigrid techniques). However, with the exception of the graph partitioning problem, multilevel techniques have not been widely applied to combinatorial optimisation problems. In this paper we address the issue of multilevel refinement for such problems and, with the aid of examples and results in graph partitioning, graph colouring and the travelling salesman problem, make a case for its use as a metaheuristic. The results provide compelling evidence that, although the multilevel framework cannot be considered a panacea for combinatorial problems, it can provide an extremely useful addition to the combinatorial optimisation toolkit. We also give a possible explanation for the underlying process and extract some generic guidelines for its future use on other combinatorial problems.
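The generic loop described here (recursive coarsening, an initial solution on the coarsest level, then projection and refinement back up the hierarchy) can be sketched roughly as below. The callback names coarsen, solve_coarsest, project and refine are placeholders for problem-specific operators (in graph partitioning, for instance, coarsening would typically contract matched edges and refinement would run a local improvement pass); they are assumptions made here for illustration, not part of any particular library.

```python
def multilevel_solve(problem, coarsen, solve_coarsest, project, refine,
                     min_size, size=len):
    """Generic multilevel refinement loop (illustrative sketch).

    coarsen(problem)        -> a smaller approximation of the problem
    solve_coarsest(problem) -> an initial solution on the coarsest level
    project(solution, fine) -> extend a coarse solution to the next finer level
    refine(solution, level) -> locally improve a solution on a given level
    min_size                -> stop coarsening once the problem is this small
    size                    -> how problem size is measured (defaults to len)
    All of these are problem-specific placeholders.
    """
    # Phase 1: recursive coarsening builds a hierarchy of approximations.
    hierarchy = [problem]
    while size(hierarchy[-1]) > min_size:
        hierarchy.append(coarsen(hierarchy[-1]))

    # Phase 2: find and refine an initial solution on the coarsest problem.
    solution = refine(solve_coarsest(hierarchy[-1]), hierarchy[-1])

    # Phase 3: project the solution level by level back towards the original
    # problem, refining it on each level as it is extended.
    for finer in reversed(hierarchy[:-1]):
        solution = refine(project(solution, finer), finer)
    return solution
```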