843 results for SCHEDULING OF GRID TASKS
Abstract:
Androgen withdrawal induces hypoxia in androgen-sensitive tissue; this is important because, in the tumour microenvironment, hypoxia is known to drive malignant progression. This study examined the time-dependent effect of androgen deprivation therapy (ADT) on tumour oxygenation and investigated the role of ADT-induced hypoxia in malignant progression in prostate tumours. LNCaP xenografted tumours were treated with anti-androgens and tumour oxygenation was measured. Dorsal skin fold (DSF) chambers were used to image tumour vasculature in vivo. Quantitative PCR (QPCR) identified differential gene expression following treatment with bicalutamide. Bicalutamide- and vehicle-only-treated tumours were re-established in vitro, and invasion and sensitivity to docetaxel were measured. Tumour growth delay was calculated following treatment with bicalutamide combined with the bioreductive drug AQ4N. Tumour oxygenation measurements showed a precipitous decrease following initiation of ADT. A clinically relevant dose of bicalutamide (2 mg/kg/day) decreased tumour oxygenation by 45% within 24 h, reaching a nadir of 0.09% oxygen (0.67±0.06 mmHg) by day 7; this persisted until day 14, after which oxygenation increased up to day 28. Using DSF chambers, LNCaP tumours treated with bicalutamide showed loss of small vessels at days 7 and 14, with revascularization occurring by day 21. QPCR showed changes in gene expression consistent with the vascular changes and malignant progression. Cells from bicalutamide-treated tumours were more malignant than vehicle-treated controls. Combining bicalutamide with AQ4N (50 mg/kg; single dose) caused greater tumour growth delay than bicalutamide alone. This study shows that bicalutamide-induced hypoxia selects for cells that show malignant progression; targeting hypoxic cells may provide greater clinical benefit.
Abstract:
Autonomic management can be used to improve the QoS provided by parallel/distributed applications. We discuss behavioural skeletons introduced in earlier work: rather than relying on the programmer's ability to design efficient autonomic policies “from scratch”, we encapsulate general autonomic controller features into algorithmic skeletons. We then leave to the programmer only the task of specifying the parameters needed to specialise the skeletons to the needs of the particular application at hand. As a result, the programmer can rapidly prototype and tune distributed/parallel applications with non-trivial autonomic management capabilities. We discuss how behavioural skeletons have been implemented in the framework of GCM (the Grid Component Model developed within the CoreGRID NoE and currently being implemented within the GridCOMP STREP project). We present results evaluating the overhead introduced by autonomic management activities as well as the overall behaviour of the skeletons. We also present results achieved with a long-running application subject to autonomic management and dynamically adapting to changing features of the target architecture.
Overall the results demonstrate both the feasibility of implementing autonomic control via behavioural skeletons and the effectiveness of our sample behavioural skeletons in managing the “functional replication” pattern(s).
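To make the behavioural-skeleton idea concrete, the following is a minimal Python sketch of a “functional replication” (task-farm) skeleton with a built-in autonomic manager: the programmer supplies only the worker function and the QoS target, while the monitor/adapt loop is generic. This is an illustration only; it does not reflect the GCM/GridCOMP API, and all names (FarmSkeleton, target_throughput, the adaptation thresholds) are assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

class FarmSkeleton:
    """Task-farm skeleton with a generic autonomic manager (illustrative sketch)."""

    def __init__(self, worker_fn, target_throughput, min_workers=1, max_workers=16):
        self.worker_fn = worker_fn
        self.target = target_throughput            # tasks/second the QoS contract asks for
        self.min_workers, self.max_workers = min_workers, max_workers
        self.n_workers = min_workers
        self.pool = ThreadPoolExecutor(max_workers=self.n_workers)

    def _adapt(self, observed):
        # Autonomic policy: grow the farm while the contract is violated,
        # shrink it when throughput comfortably exceeds the target.
        wanted = self.n_workers
        if observed < self.target and self.n_workers < self.max_workers:
            wanted += 1
        elif observed > 1.5 * self.target and self.n_workers > self.min_workers:
            wanted -= 1
        if wanted != self.n_workers:
            self.pool.shutdown(wait=True)
            self.n_workers = wanted
            self.pool = ThreadPoolExecutor(max_workers=self.n_workers)

    def run(self, tasks, batch=32):
        done, start = 0, time.monotonic()
        for i in range(0, len(tasks), batch):      # re-evaluate QoS every batch
            list(self.pool.map(self.worker_fn, tasks[i:i + batch]))
            done += len(tasks[i:i + batch])
            self._adapt(done / max(time.monotonic() - start, 1e-9))
```

The key design point the abstract makes is visible here: the adaptation policy lives inside the skeleton, so the application programmer never writes the monitoring loop, only the contract parameters.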
Abstract:
Bag of Distributed Tasks (BoDT) applications can benefit from decentralised execution on the Cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of Cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the Cloud can be generated. A heuristic algorithm that considers the user's preference for performance and cost is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
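As an illustration of the kind of cost/performance heuristic the abstract describes, here is a hedged Python sketch that enumerates candidate execution plans (VM type and count) and scores them by a user-weighted combination of normalised makespan and cost. The VM catalogue, hourly billing model and weighting scheme are assumptions for illustration, not the paper's algorithm.

```python
import math

def plan_deployment(n_tasks, vm_types, pref_performance=0.5, max_vms=64, budget=None):
    """Pick a VM type and count minimising a weighted makespan/cost objective.

    vm_types:         list of (name, price_per_hour, tasks_per_hour_per_vm)
    pref_performance: 0.0 = cheapest plan, 1.0 = fastest plan
    """
    candidates = []
    for name, price_per_hour, tasks_per_hour in vm_types:
        for count in range(1, max_vms + 1):
            makespan = n_tasks / (tasks_per_hour * count)          # hours
            cost = price_per_hour * count * math.ceil(makespan)    # billed per full hour
            if budget is None or cost <= budget:
                candidates.append((name, count, makespan, cost))
    if not candidates:
        return None
    best_time = min(c[2] for c in candidates)
    best_cost = min(c[3] for c in candidates)
    # Score each plan against the best achievable time and cost, weighted by preference.
    def score(c):
        return (pref_performance * c[2] / best_time
                + (1 - pref_performance) * c[3] / best_cost)
    return min(candidates, key=score)

# Example: 10,000 tasks, two hypothetical VM types, user leaning towards speed.
plan = plan_deployment(10000, [("small", 0.05, 120), ("large", 0.20, 500)],
                       pref_performance=0.7, budget=20.0)
```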
Abstract:
Cloud computing is increasingly being adopted in different scenarios, such as social networking, business applications, and scientific experiments. Relying on virtualization technology, the construction of these computing environments targets improvements in the infrastructure, such as power efficiency and fulfillment of users’ SLA specifications. The methodology usually applied is to pack all the virtual machines onto the proper physical servers. However, failures in these networked computing systems can have a substantial negative impact on system performance, deviating the system from its initial objectives. In this work, we propose adapted algorithms to dynamically map virtual machines to physical hosts in order to improve cloud infrastructure power efficiency with low impact on users’ required performance. Our decision-making algorithms leverage proactive fault-tolerance techniques to deal with system failures, allied with virtual machine technology to share node resources in an accurate and controlled manner. The results indicate that our algorithms perform better in terms of power efficiency and SLA fulfillment in the face of cloud infrastructure failures.
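The following Python sketch illustrates one plausible reading of such a placement scheme: a first-fit-decreasing consolidation of VMs onto the fewest hosts, proactively skipping hosts whose predicted failure risk is high, so idle hosts can be powered down. The Host fields, the risk threshold and the failure predictor itself are assumptions for illustration, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float                 # normalised CPU capacity
    failure_risk: float             # predicted probability of failing soon
    used: float = 0.0
    vms: list = field(default_factory=list)

def place_vms(vm_demands, hosts, risk_threshold=0.2):
    """Map VM CPU demands onto the fewest low-risk hosts (first-fit decreasing)."""
    # Proactive fault tolerance: only hosts below the risk threshold are eligible,
    # lowest-risk hosts are filled first.
    safe = sorted((h for h in hosts if h.failure_risk < risk_threshold),
                  key=lambda h: h.failure_risk)
    for demand in sorted(vm_demands, reverse=True):
        target = next((h for h in safe if h.used + demand <= h.capacity), None)
        if target is None:
            return None             # no feasible low-risk placement
        target.used += demand
        target.vms.append(demand)
    # Hosts left without VMs can be powered down for energy savings.
    return [h for h in safe if h.vms]
```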
Abstract:
In the proposed model, the independent system operator (ISO) provides the opportunity for maintenance outage rescheduling of generating units before each short-term (ST) time interval. Long-term (LT) scheduling for 1 or 2 years in advance is essential for the ISO and the generation companies (GENCOs) to decide their LT strategies; however, it cannot be followed exactly and requires slight adjustments. The Cournot-Nash equilibrium is used to characterize the decision-making procedure of an individual GENCO for ST intervals, considering effective coordination with LT plans. Random inputs, such as the parameters of the loads' demand function, the hourly demand during the following ST time interval and the expected generation pattern of the rivals, are included as scenarios in the stochastic mixed-integer program defined to model the payoff-maximizing objective of a GENCO. Scenario reduction algorithms are used to deal with the computational burden. Two reliability test systems were chosen to illustrate the effectiveness of the proposed model for the ST decision-making process for future planned outages from the point of view of a GENCO.
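For readers unfamiliar with the setup, a generic scenario-based Cournot payoff of the kind the abstract alludes to can be written as follows. This is an illustrative textbook form, not the paper's exact formulation (maintenance decisions and operational constraints are omitted): pi_s is the probability of scenario s, q^s_{i,t} is GENCO i's output in period t, Q^s_{-i,t} is the rivals' expected output, a^s_t and b^s_t are the scenario-dependent inverse-demand parameters, and C_i is the production cost.

```latex
% Generic scenario-based Cournot payoff (illustrative only, not the paper's model).
\max_{q^{s}_{i,t}} \;
  \sum_{s \in S} \pi_{s} \sum_{t \in T}
  \Big[ \big( a^{s}_{t} - b^{s}_{t}\,( q^{s}_{i,t} + Q^{s}_{-i,t} ) \big)\, q^{s}_{i,t}
        \;-\; C_{i}\big( q^{s}_{i,t} \big) \Big]
```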
Abstract:
Nowadays, many real-time operating systems discretize time using a system time unit. To take this behavior into account, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot be executed for less than one time unit, which implies that they always have to complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, the authors of Zhu et al. (RTSS 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient “boundary fair” algorithm (named BF) and proved its optimality for the scheduling of periodic tasks while achieving full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and we show through experiments conducted on a multicore machine that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²), benefiting from much lower scheduling overhead. Furthermore, these experimental results show that BF² is barely dependent on the length of the system time unit, while PD² (the only other existing solution for the scheduling of sporadic tasks in discrete-time systems) sees its numbers of preemptions and migrations, and the time spent making scheduling decisions, increase linearly as the time resolution of the system improves.
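To illustrate the boundary-fair idea in a discrete-time model, here is a simplified Python sketch of the allocation step performed at each boundary: every task receives the integer part of its fair share for the interval, and the leftover whole time units go to the tasks with the largest fractional remainders, so no task's lag ever reaches a full time unit. The real BF/BF² algorithms also handle task eligibility, deadlines at boundaries and the mapping of units onto processors; this sketch is an assumption-laden simplification.

```python
import math

def allocate_at_boundary(utilisations, interval_length, lags):
    """One boundary-fair allocation step over an interval between two boundaries.

    utilisations:    per-task utilisation u_i (0 < u_i <= 1)
    interval_length: whole time units between consecutive boundaries
    lags:            fractional allocation owed to each task from earlier intervals
    Returns (integer time units granted per task, updated lags).
    """
    fair = [u * interval_length + lag for u, lag in zip(utilisations, lags)]
    grant = [math.floor(f) for f in fair]                 # mandatory whole units
    remainders = [f - g for f, g in zip(fair, grant)]
    # Distribute the leftover whole units to the largest fractional remainders.
    leftover = round(sum(remainders))
    for i in sorted(range(len(grant)), key=lambda i: -remainders[i])[:leftover]:
        grant[i] += 1
    return grant, [f - g for f, g in zip(fair, grant)]

# Example: three tasks, a 3-unit interval, no accumulated lag.
grant, lags = allocate_at_boundary([0.3, 0.3, 0.4], 3, [0.0, 0.0, 0.0])
# grant == [1, 1, 1]: total allocation matches the total fair share of 3 units.
```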
Abstract:
Consider the problem of scheduling a task set τ of implicit-deadline sporadic tasks to meet all deadlines on a t-type heterogeneous multiprocessor platform where tasks may access multiple shared resources. The multiprocessor platform has m_k processors of type-k, where k ∈ {1, 2, …, t}. The execution time of a task depends on the type of processor on which it executes. The set of shared resources is denoted by R. For each task τ_i, there is a resource set R_i ⊆ R such that for each job of τ_i, during one phase of its execution, the job requests to hold the resource set R_i exclusively, with the interpretation that (i) the job makes a single request to hold all the resources in the resource set R_i and (ii) at all times, when a job of τ_i holds R_i, no other job holds any resource in R_i. Each job of task τ_i may request the resource set R_i at most once during its execution. A job is allowed to migrate when it requests a resource set and when it releases the resource set, but a job is not allowed to migrate at other times. Our goal is to design a scheduling algorithm for this problem and prove its performance. We propose an algorithm, LP-EE-vpr, which offers the guarantee that if an implicit-deadline sporadic task set is schedulable on a t-type heterogeneous multiprocessor platform by an optimal scheduling algorithm that allows a job to migrate only when it requests or releases a resource set, then our algorithm also meets the deadlines with the same restriction on job migration, if given processors 4×(1 + MAXP×⌈(|P|×MAXP) / min{m_1, m_2, …, m_t}⌉) times as fast. (Here MAXP and |P| are computed based on the resource sets that tasks request.) For the special case in which each task requests at most one resource, the bound of LP-EE-vpr collapses to 4×(1 + ⌈|R| / min{m_1, m_2, …, m_t}⌉). To the best of our knowledge, LP-EE-vpr is the first algorithm with a proven performance guarantee for real-time scheduling of sporadic tasks with resource sharing on t-type heterogeneous multiprocessors.
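Rendered as display formulas, the two speedup bounds from the abstract read as follows; the placement of the fractions inside the ceilings is reconstructed from the flattened source text.

```latex
% Speedup bound of LP-EE-vpr (general case), and its collapse when each task
% requests at most one resource (fraction placement reconstructed).
4 \times \left( 1 + \mathrm{MAXP} \times
  \left\lceil \frac{|P| \times \mathrm{MAXP}}{\min\{m_1, m_2, \ldots, m_t\}} \right\rceil \right),
\qquad
4 \times \left( 1 + \left\lceil \frac{|R|}{\min\{m_1, m_2, \ldots, m_t\}} \right\rceil \right).
```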
Optimal Methodology for Synchronized Scheduling of Parallel Station Assembly with Air Transportation
Abstract:
We present an optimal methodology for the synchronized scheduling of production assembly with air transportation to achieve accurate delivery at minimized cost in a consumer electronics supply chain (CESC). This problem was motivated by a major PC manufacturer in the consumer electronics industry, which must schedule deliveries to meet customer needs in different parts of South East Asia. The overall problem is decomposed into two sub-problems: an air transportation allocation problem and an assembly scheduling problem. The air transportation allocation problem is formulated as a linear programming problem with earliness and tardiness penalties for job orders. For the assembly scheduling problem, the job orders must be sequenced on the assembly stations to minimize their waiting times before they are shipped by flights to their destinations. Hence, the second sub-problem is modelled as a scheduling problem with earliness penalties. The earliness penalties are assumed to be independent of the job orders.
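As an illustration of the first sub-problem, here is a hedged PuLP sketch that assigns job orders to flights while penalising early and late delivery. The data, penalty rates and the binary assignment formulation are assumptions for illustration; the paper's actual LP may differ. Because flight arrivals and due times are data rather than decision variables, the earliness/tardiness terms reduce to constant coefficients, so the model stays linear.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

orders = {"o1": 10, "o2": 14}                                  # order -> due time (h)
flights = {"f1": (8, 3.0), "f2": (12, 2.0), "f3": (16, 2.5)}   # arrival (h), cost/order
alpha, beta = 1.0, 4.0                                         # earliness / tardiness rates

prob = LpProblem("air_allocation", LpMinimize)
x = {(o, f): LpVariable(f"x_{o}_{f}", cat=LpBinary) for o in orders for f in flights}

# Each order flies on exactly one flight.
for o in orders:
    prob += lpSum(x[o, f] for f in flights) == 1

# Objective: freight cost plus earliness/tardiness penalties (all coefficients constant).
prob += lpSum(
    x[o, f] * (flights[f][1]
               + alpha * max(orders[o] - flights[f][0], 0)     # early arrival
               + beta * max(flights[f][0] - orders[o], 0))     # late arrival
    for o in orders for f in flights)

prob.solve()
assignment = {o: f for (o, f), var in x.items() if var.value() == 1}
```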
Abstract:
Abstract taken from the publication.