7 results for Task Complexity
in Greenwich Academic Literature Archive - UK
Abstract:
We survey recent results on the computational complexity of mixed shop scheduling problems. In a mixed shop, some jobs have fixed machine orders (as in the job shop), while the operations of the other jobs may be processed in arbitrary order (as in the open shop). Particular attention is devoted to establishing the boundary between polynomially solvable and NP-hard problems. When the number of operations per job is unlimited, we focus on problems with a fixed number of jobs.
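To make the mixed shop definition concrete, the following is a minimal Python sketch of an instance containing both kinds of jobs, scheduled by a simple greedy rule. The representation and the scheduler are illustrative assumptions, not algorithms from the survey.

```python
# A minimal sketch of a mixed shop instance, assuming a simple greedy
# list scheduler purely for illustration (not any algorithm from the survey).

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    operations: list  # (machine, processing_time) pairs
    ordered: bool     # True: job-shop job (fixed machine order); False: open-shop job

def greedy_schedule(jobs, n_machines):
    """Schedule operations greedily; returns the resulting makespan."""
    machine_free = [0] * n_machines        # time each machine becomes available
    job_free = {j.name: 0 for j in jobs}   # time each job's last operation finishes
    for job in jobs:
        ops = job.operations
        if not job.ordered:
            # open-shop job: any order is allowed; here, earliest-available machine first
            ops = sorted(ops, key=lambda op: machine_free[op[0]])
        for machine, p in ops:
            start = max(machine_free[machine], job_free[job.name])
            machine_free[machine] = start + p
            job_free[job.name] = start + p
    return max(machine_free)

jobs = [
    Job("J1", [(0, 3), (1, 2)], ordered=True),   # fixed machine order
    Job("J2", [(1, 4), (0, 1)], ordered=False),  # operations in arbitrary order
]
print("makespan:", greedy_schedule(jobs, n_machines=2))
```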
Abstract:
This paper presents work on document retrieval based on first-time participation in the CLEF 2001 monolingual retrieval task using French. The experiment findings indicated that Okapi, the text retrieval system in use, can successfully be used for non-English text retrieval. A substantial amount of internal pre-processing is required in the basic search system for conversion into Okapi access formats. Various shell scripts were written to achieve the conversion in a UNIX environment, the failure of which would have significantly impeded overall performance. Based on the experiment findings using Okapi (originally designed for English), it was clear that, although most European languages share conventional word boundaries and variant word morphemes formed by the addition of suffixes, there are significant differences between French and English retrieval depending on the indexing and search strategies in use. No sophisticated method for higher recall and precision, such as stemming techniques, phrase translation or de-compounding, was employed for the experiment, and our results were correspondingly poor. Future participation would include more refined query translation tools.
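The kind of pre-processing pipeline the paper alludes to can be sketched as follows: normalisation, tokenisation, and the light suffix stripping the authors chose not to employ. The stemmer and normalisation rules below are hypothetical illustrations; Okapi's actual access formats and conversion scripts are not reproduced.

```python
# A minimal sketch of a French pre-processing pipeline, assuming simple
# accent folding and suffix stripping; this is not Okapi's indexing code.

import re
import unicodedata

def normalize(token):
    """Lowercase and strip accents so 'Été' and 'ete' index identically."""
    token = unicodedata.normalize("NFD", token.lower())
    return "".join(c for c in token if unicodedata.category(c) != "Mn")

def light_french_stem(token):
    """Very light suffix stripping (illustrative only, not a real stemmer)."""
    for suffix in ("ations", "aient", "ement", "ment", "ons", "es", "s", "e"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: -len(suffix)]
    return token

def tokenize(text):
    return re.findall(r"[a-zA-ZÀ-ÿ]+", text)

doc = "Les opérations étaient exécutées rapidement."
terms = [light_french_stem(normalize(t)) for t in tokenize(doc)]
print(terms)
```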
Abstract:
This paper presents a proactive approach to load sharing and describes the architecture of a scheme, Concert, based on this approach. A proactive approach is characterized by a shift of emphasis from reacting to load imbalance to avoiding its occurrence. In contrast, in a reactive load sharing scheme, activity is triggered only when a processing node is already overloaded or underloaded; the main drawback of this approach is that a load imbalance is allowed to develop before costly corrective action is taken. Concert is a load sharing scheme for loosely-coupled distributed systems. Under this scheme, load and task behaviour information is collected and cached in advance of when it is needed. Concert uses Linux as its development platform. Implemented partially in kernel space and partially in user space, it achieves transparency to users and applications whilst keeping the extent of kernel modifications to a minimum. Non-preemptive task transfers are used exclusively, motivated by lower complexity, lower overheads and faster transfers. The goal is to minimize the average response time of tasks. Concert is compared with other schemes by considering the level of transparency it provides with respect to users, tasks and the underlying operating system.
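A minimal sketch of the proactive principle: load information is reported and cached ahead of need, so a placement decision never waits on a synchronous probe. The class, staleness policy, and values below are illustrative assumptions, not Concert's implementation.

```python
# A minimal sketch of proactive load sharing: placement decisions read only
# cached, periodically reported load data. Names and thresholds are invented.

import time

class LoadCache:
    """Cache of recently reported per-node load averages."""
    def __init__(self, staleness=5.0):
        self.staleness = staleness
        self.entries = {}  # node -> (load, timestamp)

    def report(self, node, load):
        """Called periodically by each node, in advance of any transfer."""
        self.entries[node] = (load, time.time())

    def best_node(self):
        """Pick the least-loaded node with a fresh enough report."""
        now = time.time()
        fresh = {n: l for n, (l, t) in self.entries.items()
                 if now - t <= self.staleness}
        return min(fresh, key=fresh.get) if fresh else None

cache = LoadCache()
cache.report("nodeA", 0.8)
cache.report("nodeB", 0.2)
# A new (non-preemptive) task is placed using cached data only:
print("place task on:", cache.best_node())  # -> nodeB
```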
Abstract:
This paper presents an investigation into dynamic self-adjustment of task deployment and other aspects of self-management, through the embedding of multiple policies. Non-dedicated loosely-coupled computing environments, such as clusters and grids, are increasingly popular platforms for parallel processing. These systems are highly dynamic environments in which many sources of variability affect the run-time efficiency of tasks, and the dynamism is exacerbated by the incorporation of mobile devices and wireless communication. This paper proposes an adaptive strategy for the flexible run-time deployment of tasks, so as to continuously maintain efficiency despite the environmental variability. The strategy centres on policy-based scheduling which is informed by contextual and environmental inputs, such as the variance in round-trip communication time between a client and its workers and the effective processing performance of each worker. A self-management framework has been implemented for evaluation purposes. The framework integrates several policy-controlled, adaptive services with the application code, enabling the run-time behaviour to be adapted to contextual and environmental conditions. Using this framework, an exemplar self-managing parallel application is implemented and used to investigate the extent of the benefits of the strategy.
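The policy inputs named above, round-trip-time variance and effective worker performance, suggest a simple weighting rule of the kind sketched below. The metric names, window size, and threshold are hypothetical; the paper's actual policies are not reproduced.

```python
# A minimal sketch of policy-based deployment informed by runtime
# measurements. The policy and thresholds are invented for illustration.

import statistics

class WorkerStats:
    def __init__(self):
        self.rtts = []   # recent round-trip times (seconds)
        self.rate = 1.0  # effective tasks completed per second

    def record_rtt(self, rtt):
        self.rtts = (self.rtts + [rtt])[-20:]  # keep a sliding window

    def rtt_variance(self):
        return statistics.pvariance(self.rtts) if len(self.rtts) > 1 else 0.0

def deployment_weights(workers, max_variance=0.01):
    """Weight each worker by throughput, excluding erratic connections."""
    weights = {}
    for name, w in workers.items():
        # Policy: drop workers whose RTT variance signals an unstable link.
        weights[name] = 0.0 if w.rtt_variance() > max_variance else w.rate
    total = sum(weights.values()) or 1.0
    return {n: v / total for n, v in weights.items()}

workers = {"w1": WorkerStats(), "w2": WorkerStats()}
for rtt in (0.10, 0.11, 0.10):
    workers["w1"].record_rtt(rtt)
for rtt in (0.05, 0.60, 0.05):
    workers["w2"].record_rtt(rtt)  # erratic link
workers["w1"].rate, workers["w2"].rate = 2.0, 5.0
print(deployment_weights(workers))  # w2 excluded despite its higher rate
```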
Abstract:
Accurate representation of the coupled effects between turbulent fluid flow with a free surface, heat transfer, solidification, and mold deformation has been shown to be necessary for the realistic prediction of several defects in castings and also for determining the final crystalline structure. A core component of the computational modeling of casting processes involves mold filling, which is the most computationally intensive aspect of casting simulation at the continuum level. The complex geometries involved in shape casting, the evolution of the free surface, gas entrapment, and the entrainment of oxide layers into the casting make this a very challenging task in every respect. Despite well over 30 years of effort in developing algorithms, this is by no means a closed subject. In this article, we review the full range of computational methods used, from unstructured finite-element (FE) and finite-volume (FV) methods through fully structured and block-structured approaches utilizing the cut-cell family of techniques to capture the geometric complexity inherent in shape casting. This discussion includes the challenges of generating rapid solutions on high-performance parallel cluster technology and how mold filling links in with the full spectrum of physics involved in shape casting. Finally, some emerging techniques that can address genuinely arbitrarily complex geometries are briefly outlined, and their advantages and disadvantages are discussed.
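To illustrate the free-surface tracking at the heart of mold filling on a structured grid, here is a toy one-dimensional volume-of-fluid style fill-fraction update with a prescribed velocity. Real filling solvers couple such transport to the Navier-Stokes equations with interface reconstruction; this sketch is an illustrative assumption, not one of the surveyed methods.

```python
# A toy 1D volume-of-fluid style fill update on a structured grid, assuming
# a uniform prescribed velocity. Illustration only, not a production solver.

def fill_step(f, u, dx, dt):
    """Advance fill fractions f (0=empty, 1=full) by upwind flux transfer."""
    c = u * dt / dx                      # CFL number; stability needs c <= 1
    assert 0.0 <= c <= 1.0, "time step too large for stable upwind transport"
    new = f[:]
    for i in range(1, len(f)):
        # fluid leaving cell i-1 enters cell i, limited by what is available
        # upstream and by the empty volume remaining downstream
        flux = min(c * f[i - 1], 1.0 - f[i])
        new[i - 1] -= flux
        new[i] += flux
    return new

f = [1.0, 0.0, 0.0, 0.0, 0.0]            # mold cavity, gate at cell 0
for _ in range(6):
    f = fill_step(f, u=1.0, dx=1.0, dt=0.5)
    f[0] = 1.0                           # inlet gate held full (boundary condition)
print([round(v, 2) for v in f])          # the fill front advances each step
```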
Abstract:
Evaluating ship layout for human factors (HF) issues using simulation software such as maritimeEXODUS can be a long and complex process. The analysis requires the identification of relevant evaluation scenarios, encompassing evacuation and normal operations; the development of appropriate measures which can be used to gauge the performance of crew and vessel; and, finally, the interpretation of considerable simulation data. Currently, the only agreed guidelines for evaluating the HF performance of ship design relate to evacuation, and so conclusions drawn concerning the overall suitability of a ship design by one naval architect can be quite different from those of another. The complexity of the task grows as the size and complexity of the vessel increase and as the number and type of evaluation scenarios considered increase. Equally, it can be extremely difficult for fleet operators to set HF design objectives for new vessel concepts. The challenge for naval architects is to develop a procedure that allows both accurate and rapid assessment of HF issues associated with vessel layout and crew operating procedures. In this paper we present a systematic and transparent methodology for assessing the HF performance of ship design which is both discriminating and diagnostic. The methodology is demonstrated using two variants of a hypothetical naval ship.
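A methodology of this kind implies aggregating normalised per-scenario measures into scores that can discriminate between design variants. The sketch below shows one such aggregation; the scenarios, measures, weights, and values are invented for illustration and are not the paper's metrics.

```python
# A minimal sketch of combining per-scenario HF measures into a single
# comparable score per design variant. All names and numbers are invented.

# measured values per (scenario, measure) for two hypothetical variants
variants = {
    "variant_A": {("evacuation", "time_to_muster"): 420.0,
                  ("normal_ops", "congestion_events"): 12.0},
    "variant_B": {("evacuation", "time_to_muster"): 390.0,
                  ("normal_ops", "congestion_events"): 18.0},
}
weights = {("evacuation", "time_to_muster"): 0.7,
           ("normal_ops", "congestion_events"): 0.3}

def hf_score(variants, weights):
    """Lower is better: weighted sum of measures normalised to the best variant."""
    scores = {}
    for name, measures in variants.items():
        total = 0.0
        for key, w in weights.items():
            best = min(v[key] for v in variants.values())  # best observed value
            total += w * (measures[key] / best)             # 1.0 == best variant
        scores[name] = total
    return scores

for name, score in sorted(hf_score(variants, weights).items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.3f}")
```

Because every measure is normalised against the best observed value, the per-measure terms also show *where* one variant loses to another, which is what makes such a scheme diagnostic as well as discriminating.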