12 results for WORKFLOW SYSTEMS

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

On-time completion is an important temporal QoS (Quality of Service) dimension and one of the fundamental requirements for high-confidence workflow systems. In recent years, a workflow temporal verification framework, which generally consists of temporal constraint setting, temporal checkpoint selection, temporal verification, and temporal violation handling, has been the major approach to assuring high temporal QoS in workflow systems. Among these components, effective temporal checkpoint selection, which aims to detect intermediate temporal violations along workflow execution in a timely fashion, plays a critical role. Therefore, temporal checkpoint selection has been a major topic and has attracted significant research effort. In this paper, we present an overview of workflow temporal checkpoint selection for temporal verification. Specifically, we first introduce the throughput-based and response-time-based temporal consistency models for business and scientific cloud workflow systems, respectively. Then the corresponding benchmarking checkpoint selection strategies that satisfy the property of “necessity and sufficiency” are presented. We also provide experimental results to demonstrate the effectiveness of our checkpoint selection strategies, and finally point out some possible future issues in this research area.
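
As a rough illustration of the general idea only (not the benchmarking strategies from the paper), a minimal response-time-style selection rule can be sketched in Python as follows; the activity names, durations and deadline are hypothetical:

# Minimal sketch (assumed data, not the paper's benchmark strategy):
# select a checkpoint at an activity only when the time actually consumed
# exceeds the time budgeted for the activities completed so far, so that
# checkpoints are taken only where a potential temporal violation exists
# ("necessity") and at every such point ("sufficiency").

def select_checkpoints(activities, deadline):
    """activities: list of (name, estimated_duration, actual_duration)."""
    checkpoints = []
    budget_used = 0.0      # time that should have been consumed so far
    actual_used = 0.0      # time actually consumed at run time
    total_estimate = sum(est for _, est, _ in activities)
    for name, est, actual in activities:
        budget_used += est * deadline / total_estimate  # proportional budget
        actual_used += actual
        if actual_used > budget_used:                   # run-time consistency violated
            checkpoints.append(name)
    return checkpoints

# Hypothetical three-activity workflow with a 10-hour deadline.
workflow = [("a1", 3.0, 3.5), ("a2", 4.0, 4.0), ("a3", 3.0, 2.0)]
print(select_checkpoints(workflow, deadline=10.0))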

Relevance:

100.00%

Publisher:

Abstract:

Scientific processes are usually time constrained with overall deadlines and local milestones. In scientific workflow systems, due to the dynamic nature of the underlying computing infrastructures such as grid and cloud, execution delays often take place and result in a large number of temporal violations. Temporal violation handling executes violation handling strategies which can compensate for the occurring time deficit but impose some additional cost. Generally speaking, the two fundamental requirements for delivering satisfactory temporal QoS in scientific workflow systems are temporal conformance and cost effectiveness. Every task in workflow temporal management incurs some cost. Take a single instance of temporal violation handling as an example: its cost primarily consists of the monetary costs and time overheads of the violation handling strategies, which are normally non-trivial in scientific workflow systems.
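
As a loose sketch of this conformance-versus-cost trade-off (the strategies, compensations and costs below are made up, not taken from the paper), a handler might pick the cheapest strategy whose net time compensation covers the current deficit:

# Illustrative only: choose the cheapest violation handling strategy whose
# time compensation, minus its own overhead, still covers the time deficit.

def choose_handling_strategy(strategies, time_deficit):
    """strategies: list of dicts with 'name', 'compensation',
    'monetary_cost' and 'time_overhead' (all hypothetical units)."""
    feasible = [s for s in strategies
                if s["compensation"] - s["time_overhead"] >= time_deficit]
    if not feasible:
        return None  # no single strategy can compensate the deficit
    return min(feasible, key=lambda s: s["monetary_cost"])

strategies = [
    {"name": "workflow rescheduling", "compensation": 5,  "monetary_cost": 0.0, "time_overhead": 1},
    {"name": "extra VM recruitment",  "compensation": 20, "monetary_cost": 2.5, "time_overhead": 2},
]
print(choose_handling_strategy(strategies, time_deficit=10))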

Relevance:

100.00%

Publisher:

Abstract:

Market-oriented reverse auction is an efficient and cost-effective method for resource allocation in cloud workflow systems since it can dynamically allocate resources depending on the supply-demand relationship of the cloud market. However, during the auction the prices of cloud resources are usually fixed, and the current resource allocation mechanisms cannot adapt properly to a changing market, which results in low resource utilization efficiency. To address this problem, a dynamic pricing reverse auction-based resource allocation mechanism is proposed. During the auction, resource providers can change their prices according to the trading situation, so that our mechanism increases the chances of making a deal and improves the efficiency of resource utilization. In addition, resource providers can improve their competitiveness in the market by lowering prices, and users can thus obtain cheaper resources in a shorter time, which decreases the monetary cost and completion time of workflow execution. Experiments with different situations and problem sizes are conducted to evaluate the dynamic pricing-based allocation mechanism (DPAM) in terms of resource utilization and the Time∗Cost (TC) measurement. The results show that DPAM can outperform its representative counterpart in resource utilization, monetary cost, and completion time, and can also obtain the optimal price reduction rates.
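
A highly simplified sketch of a dynamic pricing reverse auction round is given below (assumed behaviour and prices, not the DPAM algorithm itself): in each round the requester takes the cheapest ask that fits its budget; otherwise every provider may lower its price by its own reduction rate and a new round starts.

def reverse_auction(asks, budget, reduction_rates, max_rounds=10):
    """asks: {provider: price}; reduction_rates: {provider: rate in [0, 1)}."""
    prices = dict(asks)
    for _ in range(max_rounds):
        provider, price = min(prices.items(), key=lambda kv: kv[1])
        if price <= budget:
            return provider, price                    # deal is made
        # providers lower their prices to stay competitive
        prices = {p: v * (1 - reduction_rates[p]) for p, v in prices.items()}
    return None, None                                 # no deal within max_rounds

asks = {"provider_A": 1.20, "provider_B": 1.00}       # hypothetical $/hour asks
rates = {"provider_A": 0.10, "provider_B": 0.05}      # hypothetical reduction rates
print(reverse_auction(asks, budget=0.90, reduction_rates=rates))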

Relevance:

100.00%

Publisher:

Abstract:

A cloud workflow system is a type of platform service which facilitates the automation of distributed applications based on the novel cloud infrastructure. One of the most important aspects which differentiates a cloud workflow system from its counterparts is the market-oriented business model. This is a significant innovation which brings many challenges to conventional workflow scheduling strategies. To investigate this issue, this paper proposes a market-oriented hierarchical scheduling strategy for cloud workflow systems. Specifically, the service-level scheduling deals with the Task-to-Service assignment, where tasks of individual workflow instances are mapped to cloud services in the global cloud markets based on their functional and non-functional QoS requirements; the task-level scheduling deals with the optimisation of the Task-to-VM (virtual machine) assignment in local cloud data centres, where the overall running cost of cloud workflow systems is minimised subject to the satisfaction of QoS constraints for individual tasks. Based on our hierarchical scheduling strategy, a package-based random scheduling algorithm is presented as the candidate service-level scheduling algorithm, and three representative metaheuristic scheduling algorithms, namely genetic algorithm (GA), ant colony optimisation (ACO), and particle swarm optimisation (PSO), are adapted, implemented and analysed as the candidate task-level scheduling algorithms. The hierarchical scheduling strategy is being implemented in our SwinDeW-C cloud workflow system and demonstrates satisfactory performance. Meanwhile, the experimental results show that the ACO-based scheduling algorithm outperforms the others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost, and the CPU time.
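
A minimal two-level sketch of the hierarchical idea follows (all services, VMs and costs are hypothetical): the service level randomly maps task packages to candidate services that satisfy QoS, and the task level greedily maps each task to the cheapest VM, a simple stand-in for the GA/ACO/PSO metaheuristics evaluated in the paper.

import random

def service_level(packages, candidate_services):
    """Package-based random Task-to-Service assignment."""
    return {pkg: random.choice(candidate_services[pkg]) for pkg in packages}

def task_level(tasks, vm_cost_per_hour):
    """Greedy Task-to-VM assignment: pick the cheapest VM for each task."""
    return {t: min(vm_cost_per_hour, key=vm_cost_per_hour.get) for t in tasks}

packages = ["pkg1", "pkg2"]
candidate_services = {"pkg1": ["serviceA", "serviceB"], "pkg2": ["serviceC"]}
vm_costs = {"vm_small": 0.05, "vm_large": 0.20}       # assumed $/hour

print(service_level(packages, candidate_services))
print(task_level(["t1", "t2", "t3"], vm_costs))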

Relevance:

100.00%

Publisher:

Abstract:

Many scientific workflows are data intensive, with large volumes of intermediate data generated during their execution. Some valuable intermediate data need to be stored for sharing or reuse. Traditionally, they are selectively stored according to the system storage capacity, which is determined manually. As doing science in the cloud has become popular, more intermediate data can be stored in scientific cloud workflows based on a pay-for-use model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenance in scientific workflows. With the IDG, deleted intermediate data can be regenerated, and on this basis we develop a novel intermediate data storage strategy that can reduce the cost of scientific cloud workflow systems by automatically storing appropriate intermediate data sets with one cloud service provider. The strategy has significant research merits: it achieves a cost-effective trade-off between computation cost and storage cost and is not strongly impacted by the forecasting inaccuracy of data sets' usages. Meanwhile, the strategy also takes the users' tolerance of data access delay into consideration. We utilize Amazon's cost model and apply the strategy to general (random) as well as specific astrophysics pulsar searching scientific workflows for evaluation. The results show that our strategy can significantly reduce the overall cost of scientific cloud workflow execution.
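
The role of the dependency graph can be illustrated with the following sketch (the data sets, dependencies and costs are hypothetical, not the paper's exact strategy): the cost of regenerating a deleted intermediate data set is its own generation cost plus the cost of regenerating any of its deleted predecessors, since regeneration must start from the nearest stored ancestors.

def regeneration_cost(dataset, predecessors, gen_cost, stored):
    """predecessors: {dataset: [direct predecessor data sets]};
    stored: set of data sets currently kept in cloud storage."""
    cost = gen_cost[dataset]
    for pred in predecessors.get(dataset, []):
        if pred not in stored:                 # stored ancestors cost nothing extra
            cost += regeneration_cost(pred, predecessors, gen_cost, stored)
    return cost

# Hypothetical pulsar-searching style chain: raw -> de_dispersed -> candidates.
predecessors = {"de_dispersed": ["raw"], "candidates": ["de_dispersed"]}
gen_cost = {"raw": 0.0, "de_dispersed": 12.0, "candidates": 3.0}   # assumed $ per regeneration
print(regeneration_cost("candidates", predecessors, gen_cost, stored={"raw"}))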

Relevance:

70.00%

Publisher:

Abstract:

Cloud computing, as the latest computing paradigm, has shown its promising future in business workflow systems facing massive concurrent user requests and complicated computing tasks. With the fast growth of cloud data centers, energy management, especially energy monitoring and saving, in cloud workflow systems has been attracting increasing attention. It is obvious that the energy for running a cloud workflow instance mainly depends on the energy for executing its workflow activities. However, existing energy management strategies mainly monitor the virtual machines instead of the workflow activities running on them, and hence it is difficult to directly monitor and optimize the energy consumption of cloud workflows. To address this issue, in this paper we propose an effective energy testing framework for cloud workflow activities. This framework can help to accurately test and analyze the baseline energy of physical and virtual machines in the cloud environment, and then obtain the energy consumption data of cloud workflow activities. Based on these data, we can further produce energy consumption models and apply energy prediction strategies. Our experiments are conducted in an OpenStack-based cloud computing environment. The effectiveness of our framework has been successfully verified through a detailed case study and a set of energy modelling and prediction experiments based on representative time-series models.
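
A sketch of the basic accounting idea is shown below (the readings and baseline are assumed figures, not measurements from the framework): the energy attributed to a workflow activity is estimated from sampled power readings of the hosting machine minus its idle baseline, accumulated over the activity's execution interval.

def activity_energy(power_samples_watts, baseline_watts, interval_seconds):
    """Each power sample is assumed to cover interval_seconds of execution."""
    return sum(max(p - baseline_watts, 0.0) * interval_seconds
               for p in power_samples_watts)          # energy in Joules

samples = [120.0, 135.0, 128.0, 122.0]                # hypothetical readings, one per 30 s
print(activity_energy(samples, baseline_watts=95.0, interval_seconds=30.0))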

Relevance:

60.00%

Publisher:

Abstract:

The provision of Human Resource (HR) services, especially payroll, is a core function in every organization. Previously, providers of HR/payroll offered their services to their clients via conventional modes of communication, such as telephone, facsimile, and courier services. In recent years, with the advent of the Internet and the emergence of web-based electronic commerce, there has been a rise in the adoption of web-based technology and information systems by service providers, enabling them to interact with their clients through this medium. This development necessitates the use of web-based user interfaces as workspaces between HR/payroll providers and their clients and thus raises certain concerns that determine the effectiveness of web-based workflow systems. These concerns, related to the use of web interfaces, form the basis of the patterns discussed in this paper.

Relevance:

60.00%

Publisher:

Abstract:

Cloud computing has been establishing itself as the latest computing paradigm in recent years. As doing science in the cloud becomes a reality, scientists are now able to access public cloud centers and employ high-performance computing resources to run scientific applications. However, due to the dynamic nature of the cloud environment, the usability of scientific cloud workflow systems can deteriorate significantly without effective service quality assurance strategies. Specifically, workflow temporal verification, as the major approach for workflow temporal QoS (Quality of Service) assurance, plays a critical role in the on-time completion of large-scale scientific workflows. Great effort has been dedicated to the area of workflow temporal verification in recent years, and it is high time to define the key research issues for scientific cloud workflows in order to keep this research on the right track. In this paper, we systematically investigate this problem and present four key research issues based on the introduction of a generic temporal verification framework. Meanwhile, state-of-the-art solutions for each research issue and open challenges are also presented. Finally, SwinDeW-V, an ongoing research project on temporal verification as part of our SwinDeW-C cloud workflow system, is also demonstrated.

Relevance:

60.00%

Publisher:

Abstract:

The massive computation power and storage capacity of cloud computing systems allow scientists to deploy computation- and data-intensive applications without infrastructure investment, where large application data sets can be stored in the cloud. Based on the pay-as-you-go model, storage strategies and benchmarking approaches have been developed for cost-effectively storing large volumes of generated application data sets in the cloud. However, they are either insufficiently cost-effective for storage or impractical for use at runtime. In this paper, toward achieving the minimum cost benchmark, we propose a novel, highly cost-effective and practical storage strategy that can automatically decide at runtime whether a generated data set should be stored in the cloud. The main focus of this strategy is local optimization of the tradeoff between computation and storage, while secondarily taking users' (optional) storage preferences into consideration. Both theoretical analysis and simulations conducted on general (random) data sets as well as specific real-world applications with Amazon's cost model show that the cost-effectiveness of our strategy is close to or even the same as the minimum cost benchmark, and its efficiency is very high for practical runtime use in the cloud.
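
A minimal runtime store-or-not rule might look like the following (assumed figures, not the paper's minimum cost benchmark strategy): keep a newly generated data set only if its monthly storage cost is below the expected monthly cost of regenerating it, unless the user has explicitly asked for it to be stored.

def store_at_runtime(size_gb, storage_price_gb_month, regen_cost,
                     expected_uses_per_month, user_prefers_storage=False):
    if user_prefers_storage:                           # optional user preference overrides
        return True
    storage_cost = size_gb * storage_price_gb_month    # $/month to keep the data set
    regeneration_cost = regen_cost * expected_uses_per_month  # expected $/month to re-derive it
    return storage_cost <= regeneration_cost

# Hypothetical data set priced loosely in the spirit of Amazon's cost model.
print(store_at_runtime(size_gb=200, storage_price_gb_month=0.023,
                       regen_cost=1.5, expected_uses_per_month=2))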

Relevance:

40.00%

Publisher:

Abstract:

Growing evidence shows that well-managed, time-constrained workflow scheduling is needed to obtain high performance. Efficient workflow scheduling is critical for achieving high performance, especially in heterogeneous computing systems. However, it is a great challenge to improve performance and optimize several objectives simultaneously. We propose a workflow scheduling algorithm that minimizes the makespan of a workflow application modeled by a Directed Acyclic Graph (DAG). The newly proposed scheduling algorithm is named the Multi Dependency Joint (MDJ) algorithm. The performance of MDJ is compared with existing algorithms such as Highest Level First with Estimated Time (HLFET), Modified Critical Path (MCP) and Earliest Time First (ETF). The experiments show that our proposed MDJ algorithm outperforms HLFET, MCP, and ETF with a 7% lower overall completion time.
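
For context, a generic DAG list-scheduling sketch is given below (this is not the MDJ algorithm; the task graph and run times are made up): tasks become ready when all their predecessors have finished, and each ready task is placed on the processor that gives it the earliest finish time.

def list_schedule(tasks, deps, runtimes, num_procs):
    """tasks: topologically ordered task names; deps: {task: [predecessors]}."""
    proc_free = [0.0] * num_procs        # earliest time each processor is free
    finish = {}                          # finish time of each scheduled task
    for t in tasks:
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        proc = min(range(num_procs), key=lambda i: max(proc_free[i], ready))
        start = max(proc_free[proc], ready)
        finish[t] = start + runtimes[t]
        proc_free[proc] = finish[t]
    return max(finish.values())          # makespan of the whole workflow

tasks = ["t1", "t2", "t3", "t4"]
deps = {"t3": ["t1", "t2"], "t4": ["t3"]}
runtimes = {"t1": 2.0, "t2": 3.0, "t3": 1.0, "t4": 2.0}
print(list_schedule(tasks, deps, runtimes, num_procs=2))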

Relevance:

30.00%

Publisher:

Abstract:

Workflow applications require workflow processing in which workflow tasks are processed based on their dependencies. With the emergence of complex distributed systems such as grids and clouds, efficient workflow scheduling (WFS) algorithms have become core components of workflow management systems (WfMS). Thus, WFS, which allocates each task in the workflow to a suitable resource with the aim of improving system performance and end-user satisfaction, is fundamentally important. In this paper, we propose a new workflow scheduling algorithm called the Layered Workflow Scheduling Algorithm (LWFS) for scheduling workflow applications. We study the efficacy of LWFS scheduling experimentally and compare its performance with approaches including Improved Critical Path using Descendant Prediction (ICPDP), Highest Level First with Estimated Time (HLFET), Modified Critical Path (MCP) and Earliest Time First (ETF). The results of the experiments show that the proposed approach outperforms the other approaches.
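
The layering idea behind level-based workflow scheduling can be sketched as follows (this is a generic illustration, not necessarily LWFS itself, and the example DAG is hypothetical): tasks are grouped into layers by their depth in the dependency graph, so that all tasks in one layer can be dispatched before the next layer starts.

def build_layers(deps, all_tasks):
    """deps: {task: [predecessor tasks]}; returns tasks grouped by layer."""
    depth = {}
    def task_depth(t):
        if t not in depth:
            preds = deps.get(t, [])
            depth[t] = 0 if not preds else 1 + max(task_depth(p) for p in preds)
        return depth[t]
    layers = {}
    for t in all_tasks:
        layers.setdefault(task_depth(t), []).append(t)
    return [layers[d] for d in sorted(layers)]

deps = {"t3": ["t1"], "t4": ["t1", "t2"], "t5": ["t3", "t4"]}
print(build_layers(deps, ["t1", "t2", "t3", "t4", "t5"]))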

Relevance:

30.00%

Publisher:

Abstract:

Workflow temporal verification is conducted to guarantee on-time completion, which is one of the most important QoS (Quality of Service) dimensions for business processes running in the cloud. However, as today's business systems often need to handle a large number of concurrent customer requests, conventional response-time-based process monitoring strategies conducted in a one-by-one fashion cannot be applied efficiently to a large batch of parallel processes because of significant time overhead. Similar situations may also exist in software companies where multiple software projects are carried out at the same time by software developers. To address this problem, based on a novel runtime throughput consistency model, this paper proposes a QoS-aware throughput-based checkpoint selection strategy which can dynamically select a small number of checkpoints along the system timeline to facilitate the temporal verification of throughput constraints and achieve the target on-time completion rate. Experimental results demonstrate that our strategy achieves the best efficiency and effectiveness compared with the state-of-the-art and other representative response-time-based checkpoint selection strategies.
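
As a rough illustration of a throughput-based trigger (the completion counts, instance total and deadline below are hypothetical, not the paper's strategy): a checkpoint is selected whenever the number of process instances completed so far drops below the throughput needed to finish the whole batch by its deadline, triggering temporal verification for the batch instead of per-instance response-time checks.

def throughput_checkpoints(completed_by_hour, total_instances, deadline_hours):
    """completed_by_hour: cumulative number of instances finished after each hour."""
    required_rate = total_instances / deadline_hours      # instances per hour needed
    checkpoints = []
    for hour, completed in enumerate(completed_by_hour, start=1):
        if completed < required_rate * hour:              # behind the required throughput
            checkpoints.append(hour)
    return checkpoints

# 1000 concurrent instances, 10-hour deadline, assumed hourly completion counts.
print(throughput_checkpoints([90, 210, 290, 405, 520],
                             total_instances=1000, deadline_hours=10))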