947 results for Slot-based task-splitting algorithms


Relevance: 100.00%

Abstract:

Potentially inappropriate prescribing in older people is common in primary care and can result in increased morbidity, adverse drug events, hospitalizations and mortality. In Ireland, 36% of those aged 70 years or over received at least one potentially inappropriate medication, with an associated expenditure of over €45 million. The main objective of this study is to determine the effectiveness and acceptability of a complex, multifaceted intervention in reducing the level of potentially inappropriate prescribing in primary care.

Relevance: 100.00%

Abstract:

One of the crucial problems of fuzzy rule modeling is how to find an optimal, or at least a quasi-optimal, rule base for a certain system. In most applications there is no human expert available, or the result of a human expert's decision is too subjective and not reproducible, so some automatic method to determine the fuzzy rule base must be deployed.

Relevance: 100.00%

Abstract:

Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors — such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) SA is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment where tasks are not allowed to migrate between processors, given a platform in which processors are 1+α times faster, where 0<α≤1. The parameter α is a property of the task set; it is the maximum of all task utilizations that are no greater than 1.
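As a concrete illustration of the parameter α and the two speed-up bounds quoted above, the short Python sketch below computes α for a hypothetical task set of (execution time, period) pairs and prints the resulting 1+α/2 and 1+α factors. The implicit-deadline task model, the utilization definition and all names are assumptions made for this example, not details taken from the paper.

```python
# Minimal sketch (assumed task model): a task's utilization is
# execution_time / period, and alpha is the largest utilization
# that does not exceed 1, as defined in the abstract.

def alpha(task_set):
    """Return the parameter alpha for a list of (wcet, period) pairs."""
    utilizations = [c / t for (c, t) in task_set]
    capped = [u for u in utilizations if u <= 1.0]
    if not capped:
        raise ValueError("every task has utilization > 1; alpha is undefined here")
    return max(capped)

# Hypothetical task set with utilizations 0.25, 0.6 and 0.9.
tasks = [(1, 4), (3, 5), (9, 10)]
a = alpha(tasks)
print(f"alpha        = {a:.2f}")          # 0.90
print(f"SA   speedup = {1 + a / 2:.2f}")  # 1 + alpha/2 = 1.45
print(f"SA-P speedup = {1 + a:.2f}")      # 1 + alpha   = 1.90
```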

Relevance: 100.00%

Abstract:

Consider the problem of assigning implicit-deadline sporadic tasks on a heterogeneous multiprocessor platform comprising two different types of processors—such a platform is referred to as two-type platform. We present two low degree polynomial time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a task set, if there exists a task assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type (intra-migrative), then (i) using SA, it is guaranteed to find such an assignment where the same restriction on task migration applies but given a platform in which processors are 1+α/2 times faster and (ii) SA-P succeeds in finding a task assignment where tasks are not allowed to migrate between processors (non-migrative) but given a platform in which processors are 1+α times faster. The parameter 0<α≤1 is a property of the task set; it is the maximum of all the task utilizations that are no greater than 1. We evaluate average-case performance of both the algorithms by generating task sets randomly and measuring how much faster processors the algorithms need (which is upper bounded by 1+α/2 for SA and 1+α for SA-P) in order to output a feasible task assignment (intra-migrative for SA and non-migrative for SA-P). In our evaluations, for the vast majority of task sets, these algorithms require significantly smaller processor speedup than indicated by their theoretical bounds. Finally, we consider a special case where no task utilization in the given task set can exceed one and for this case, we (re-)prove the performance guarantees of SA and SA-P. We show, for both of the algorithms, that changing the adversary from intra-migrative to a more powerful one, namely fully-migrative, in which tasks can migrate between processors of any type, does not deteriorate the performance guarantees. For this special case, we compare the average-case performance of SA-P and a state-of-the-art algorithm by generating task sets randomly. In our evaluations, SA-P outperforms the state-of-the-art by requiring much smaller processor speedup and by running orders of magnitude faster.
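The evaluation described above measures, for randomly generated task sets, how much faster the processors must be before each algorithm outputs a feasible assignment. The Python sketch below mimics that methodology with a deliberately simple stand-in: a first-fit, non-migrative packing by utilization (it is not SA or SA-P, whose internals the abstract does not give), combined with a bisection on the speedup factor. The task-generation scheme, processor count and all names are assumptions for illustration only.

```python
import random

def first_fit_assign(utils, caps):
    """Stand-in non-migrative check: first-fit decreasing by utilization.
    utils: per-task utilizations; caps: per-processor capacity after the
    speedup has been applied (both processor types treated alike here)."""
    load = [0.0] * len(caps)
    for u in sorted(utils, reverse=True):
        for i, cap in enumerate(caps):
            if load[i] + u <= cap:
                load[i] += u
                break
        else:
            return False
    return True

def min_speedup(utils, n_procs, lo=1.0, hi=2.0, iters=40):
    """Bisect for the smallest speedup at which the stand-in check succeeds."""
    if not first_fit_assign(utils, [hi] * n_procs):
        raise ValueError("infeasible even at the upper speedup bound")
    for _ in range(iters):
        mid = (lo + hi) / 2
        if first_fit_assign(utils, [mid] * n_procs):
            hi = mid
        else:
            lo = mid
    return hi

random.seed(0)
utils = [random.uniform(0.1, 1.0) for _ in range(12)]   # random task set
print(f"required speedup (stand-in heuristic): {min_speedup(utils, n_procs=4):.3f}")
```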

Relevance: 100.00%

Abstract:

Multicore network processors have been playing an increasingly important role in computational processes that emphasize scalability and parallelism in distributed environments, especially in Internet-based, delay-sensitive applications. It remains an important but unsolved issue, however, to efficiently schedule tasks on multicore, multithreaded network processors so as to improve system throughput as much as possible. Profiling can gather runtime environment information and guide the compiler to optimize programs by scheduling tasks based on the runtime context. This paper proposes a profiling-based task scheduling approach aimed at improving the throughput of multicore network processor (Intel IXP) systems in a balanced-pipeline manner. In this work, we investigate a profiling-based task scheduling framework, a task scheduling algorithm, and a set of performance models. Our task allocation scheme maps tasks onto the pipeline architecture and multiple threads of network processors in parallel, incorporating the profiling context and global thread refinement. We evaluate our task scheduling algorithm by implementing representative network applications on the Intel IXP network processor. Experimental results demonstrate that our algorithm is able to schedule tasks in a balanced pipeline fashion and achieve high throughput and data transmission rates. Copyright © 2012 John Wiley & Sons, Ltd.
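To make the balanced-pipeline goal concrete, the Python sketch below partitions a linear chain of profiled task costs into a fixed number of pipeline stages so that the heaviest stage (the pipeline bottleneck) is minimized, via a standard binary search with a greedy feasibility check. The cycle counts, stage count and function names are illustrative assumptions, not the paper's scheduling algorithm.

```python
def min_bottleneck(costs, stages):
    """Split a chain of profiled task costs into `stages` contiguous pipeline
    stages, minimizing the heaviest stage. Binary-search the bottleneck value
    and greedily check whether it is achievable."""
    def feasible(limit):
        used, current = 1, 0
        for c in costs:
            if c > limit:
                return False
            if current + c > limit:
                used, current = used + 1, c
            else:
                current += c
        return used <= stages

    lo, hi = max(costs), sum(costs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Hypothetical per-task cycle counts gathered by profiling.
profiled_costs = [120, 300, 90, 240, 60, 180, 150, 210]
print(min_bottleneck(profiled_costs, stages=4))  # bottleneck cost of the best split
```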

Relevance: 100.00%

Abstract:

Cloud Computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments, while enforcing strict performance and quality-of-service requirements defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems to guarantee performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
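The contrast drawn above between reactive and predictive SLA-driven scaling can be sketched in a few lines: a reactive rule scales out only after the observed response time has crossed the SLA threshold, whereas a simple one-step forecast (a crude stand-in for the autoregressive predictor mentioned in the abstract) scales out when the predicted response time would cross it. The threshold, forecast model and variable names are illustrative assumptions, not the paper's performance models.

```python
def reactive_decision(latencies, sla_ms):
    """Scale out as soon as the latest observed latency violates the SLA."""
    return "scale_out" if latencies[-1] > sla_ms else "hold"

def predictive_decision(latencies, sla_ms, phi=0.9):
    """Scale out if a crude one-step, drift-adjusted forecast of the next
    latency sample is expected to violate the SLA (illustrative model only)."""
    x_t = latencies[-1]
    trend = x_t - latencies[-2] if len(latencies) > 1 else 0.0
    forecast = phi * x_t + trend
    return "scale_out" if forecast > sla_ms else "hold"

observed = [180.0, 220.0, 290.0]           # response times in ms, rising load
SLA = 300.0                                # SLA response-time bound in ms
print(reactive_decision(observed, SLA))    # hold      (no violation observed yet)
print(predictive_decision(observed, SLA))  # scale_out (forecast of about 331 ms)
```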

Relevance: 100.00%

Abstract:

An effective solution to model and apply planning domain knowledge for deliberation and action in probabilistic, agent-oriented control is presented. Specifically, the addition of a task structure planning component and supporting components to an agent-oriented architecture and agent implementation is described. For agent control in risky or uncertain environments, an approach and method of goal reduction to task plan sets and schedules of action is presented. Additionally, some issues related to component-wise, situation-dependent control of a task planning agent that schedules its tasks separately from planning them are motivated and discussed.
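A minimal sketch of the separation described above between reducing a goal to a task plan and scheduling the resulting tasks: planning produces an ordered task plan, and a separate scheduler assigns start times. The goal names, the reduction table and the earliest-start scheduler are invented for illustration and are not the architecture's actual components.

```python
# Hypothetical goal-reduction table: each goal reduces to an ordered list of
# (task_name, duration) pairs. Scheduling happens separately from planning.
REDUCTIONS = {
    "deliver_package": [("plan_route", 2), ("drive", 10), ("hand_over", 1)],
    "recharge":        [("find_station", 3), ("charge", 8)],
}

def reduce_goal(goal):
    """Planning step: reduce a goal to an ordered task plan."""
    return REDUCTIONS[goal]

def schedule(task_plan, start=0):
    """Scheduling step: assign earliest start times to an ordered task plan."""
    t, out = start, []
    for name, duration in task_plan:
        out.append((name, t, t + duration))
        t += duration
    return out

for name, s, e in schedule(reduce_goal("deliver_package")):
    print(f"{name}: [{s}, {e})")
```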

Relevance: 100.00%

Abstract:

A computational study of convergence acceleration for Euler and Navier-Stokes computations with upwind schemes has been conducted in a unified framework. It involves the flux-vector splitting algorithms due to Steger-Warming and Van Leer, the flux-difference splitting algorithms due to Roe and Osher, and the hybrid algorithms AUSM (Advection Upstream Splitting Method) and HUS (Hybrid Upwind Splitting). Implicit time integration with line Gauss-Seidel relaxation and multigrid are among the procedures that have been systematically investigated, both individually and in combination. The upwind schemes have been tested in various implicit-explicit operator combinations so that the optimal combination can be determined, based on extensive computations for two-dimensional flows in the subsonic, transonic, supersonic and hypersonic regimes. The performance of these implicit time-integration procedures has been systematically compared with that of a multigrid-accelerated explicit Runge-Kutta method. It has been demonstrated that a multigrid method employed in conjunction with an implicit time-integration scheme yields distinctly superior convergence compared to either acceleration procedure used on its own, provided that effective smoothers, which have been identified in this investigation, are prescribed in the implicit operator.
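As a much-simplified illustration of the flux-vector-splitting idea surveyed above, the sketch below splits the flux of the scalar linear advection equation u_t + a u_x = 0 into forward- and backward-running parts and advances one explicit first-order upwind step. The real Steger-Warming and Van Leer splittings act on the Euler flux vector; the grid, CFL number and variable names here are assumptions.

```python
import numpy as np

def split_fluxes(u, a):
    """Scalar analogue of flux-vector splitting: f(u) = a*u is split into
    f+ = max(a, 0)*u (right-running waves) and f- = min(a, 0)*u (left-running)."""
    return max(a, 0.0) * u, min(a, 0.0) * u

def upwind_step(u, a, dx, dt):
    """One explicit first-order upwind step using the split fluxes
    (backward difference for f+, forward difference for f-, periodic domain)."""
    f_plus, f_minus = split_fluxes(u, a)
    dfp = f_plus - np.roll(f_plus, 1)
    dfm = np.roll(f_minus, -1) - f_minus
    return u - dt / dx * (dfp + dfm)

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)     # initial Gaussian pulse
dx, a = x[1] - x[0], 1.0
dt = 0.8 * dx / abs(a)                  # CFL-limited time step
for _ in range(50):
    u = upwind_step(u, a, dx, dt)
print(f"pulse peak after 50 steps: {u.max():.3f}")
```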

Relevance: 100.00%

Abstract:

Seismic wave-field numerical modeling and seismic migration imaging based on the wave equation have become useful and indeed indispensable tools for imaging complex geological objects. An important task in numerical modeling is the approximation of the matrix exponential in wave-field extrapolation. For a matrix exponential of small size, we can approximate the square-root operator in the exponential using different splitting algorithms. Splitting algorithms are usually applied to the order or the dimension of the one-way wave equation to reduce the complexity of the problem. In this paper, we obtain an approximate equation for the inversion of the 2-D Helmholtz operator using a multi-way splitting operation. Analysis of the Gauss integral and of the coefficients of the optimized partial fraction shows that dispersion may accumulate in splitting algorithms for steep-dip imaging. High-order symplectic Padé approximation can deal with this problem; however, approximating the square-root operator in the exponential with a splitting algorithm cannot remove the dispersion problem during one-way wave-field migration imaging. We therefore attempt an exact approximation through an eigenfunction expansion of the matrix. The Fast Fourier Transform (FFT) method is selected because of its low computational cost. An 8th-order Laplace matrix splitting is performed to obtain an assemblage of small matrices using the FFT method. With the introduction of Lie-group and symplectic methods into seismic wave-field extrapolation, accurate approximation of the matrix exponential based on Lie-group and symplectic methods has become an active research area. To solve the matrix-exponential approximation problem, the Second-kind Coordinates (SKC) method and the Generalized Polar Decompositions (GPD) method of Lie-group theory are the methods of choice. The SKC method uses a generalized Strang-splitting algorithm, while the GPD method uses polar-type and symmetric polar-type splitting algorithms. Compared with Padé approximation, these two methods require less computation, and both preserve the Lie-group structure. We consider the SKC and GPD methods promising and attractive in research and practice.
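The core operation discussed above, approximating a matrix exponential by a splitting, can be illustrated directly: Strang (symmetric) splitting exp((A+B)t) ≈ exp(At/2) exp(Bt) exp(At/2) is exact when A and B commute and second-order accurate otherwise, while the simpler Lie-Trotter product is only first-order. The random matrices below are purely illustrative; the SKC and GPD splittings mentioned in the abstract are far more structured.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, t = 6, 0.1
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

exact = expm((A + B) * t)
# Strang (symmetric) splitting: exp(A t/2) exp(B t) exp(A t/2), second order in t.
strang = expm(A * t / 2) @ expm(B * t) @ expm(A * t / 2)
# Lie-Trotter splitting: exp(A t) exp(B t), first order in t.
lie_trotter = expm(A * t) @ expm(B * t)

print("Strang error     :", np.linalg.norm(strang - exact))
print("Lie-Trotter error:", np.linalg.norm(lie_trotter - exact))
```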

Relevance: 100.00%

Abstract:

We consider the problem of task assignment in a distributed system (such as a distributed Web server) in which task sizes are drawn from a heavy-tailed distribution. Many task assignment algorithms are based on the heuristic that balancing the load at the server hosts will result in optimal performance. We show that this conventional wisdom is less true when the task size distribution is heavy-tailed (as is the case for Web file sizes). We introduce a new task assignment policy, called Size Interval Task Assignment with Variable Load (SITA-V). SITA-V purposely operates the server hosts at different loads, and directs smaller tasks to the lighter-loaded hosts. The result is that SITA-V provably decreases the mean task slowdown by significant factors (up to 1000 or more); the more heavy-tailed the workload, the greater the improvement factor. We evaluate the tradeoff between improvement in slowdown and increase in waiting time in a system using SITA-V, and show conditions under which SITA-V represents a particularly appealing policy. We conclude with a discussion of the use of SITA-V in a distributed Web server, and show that it is attractive because it has a simple implementation that requires no communication from the server hosts back to the task router.
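The size-interval routing idea behind SITA-V can be sketched with a toy simulation: task sizes are drawn from a heavy-tailed Pareto distribution and routed by a size cutoff, so the host serving the small-size interval receives most of the tasks but a smaller share of the total work. The two-host setup, Pareto parameters and cutoff are illustrative assumptions, not the policy's analytically derived load split.

```python
import random

random.seed(42)
ALPHA, CUTOFF = 1.5, 3.0                    # Pareto shape; size cutoff (illustrative)

loads = {"small_host": 0.0, "large_host": 0.0}
counts = {"small_host": 0, "large_host": 0}

for _ in range(100_000):
    size = random.paretovariate(ALPHA)      # heavy-tailed task size, x_min = 1
    host = "small_host" if size <= CUTOFF else "large_host"   # size-interval routing
    loads[host] += size
    counts[host] += 1

total = sum(loads.values())
for host in loads:
    print(f"{host}: {counts[host]} tasks, {loads[host] / total:.1%} of total work")
```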

Relevance: 100.00%

Abstract:

A new algorithm is proposed for scheduling preemptible arbitrary-deadline sporadic task systems upon multiprocessor platforms, with interprocessor migration permitted. This algorithm is based on a task-splitting approach: while most tasks are entirely assigned to specific processors, a few tasks (fewer than the number of processors) may be split across two processors. The algorithm can be used for two distinct purposes: for actually scheduling specific sporadic task systems, and for feasibility analysis. Simulation-based evaluation indicates that it offers a significant improvement in the ability to schedule arbitrary-deadline sporadic task systems compared to the contemporary state of the art. With regard to feasibility analysis, the new algorithm is proved to offer superior performance guarantees in comparison to prior feasibility tests.
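A minimal sketch of the task-splitting assignment idea described above: tasks are packed onto processors by utilization, and a task that does not fit entirely on the current processor is split between it and the next one, so fewer than m tasks end up split on an m-processor platform. This utilization-based packing illustrates the general approach only; it is not the paper's algorithm or its schedulability analysis.

```python
def split_assign(utils, m):
    """Assign task utilizations to m unit-capacity processors, splitting a task
    across the current and the next processor when it does not fit entirely.
    Returns a list of (task_id, assigned_share) pairs per processor, or None
    if the total utilization exceeds the platform capacity."""
    if sum(utils) > m or any(u > 1 for u in utils):
        return None
    procs = [[] for _ in range(m)]
    p, free = 0, 1.0
    for tid, u in enumerate(utils):
        remaining = u
        while remaining > 1e-12:
            share = min(free, remaining)
            procs[p].append((tid, share))
            remaining -= share
            free -= share
            if free <= 1e-12 and p + 1 < m:
                p, free = p + 1, 1.0   # move on; the split task continues there
    return procs

for i, assignment in enumerate(split_assign([0.6, 0.7, 0.5, 0.8, 0.4], m=3)):
    print(f"processor {i}: {assignment}")
```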