132 results for Cloud, TuCSoN, CaaS, Middleware, Coordination


Relevance: 20.00%

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question has become significant and challenging with the availability of a wide variety of Virtual Machines (VMs) with different performance capabilities in the cloud. It is addressed here by proposing a six-step benchmarking methodology in which a user provides a set of four weights indicating how important each of the following attribute groups is to the application to be executed on the cloud: memory, processor, computation, and storage. The weights, along with cloud benchmarking data, are used to generate a ranking of VMs that can maximise the performance of the application. The rankings are validated through an empirical analysis using two case study applications, a financial risk application and a molecular dynamics simulation, both representative of workloads that can benefit from execution on the cloud. Both case studies validate the feasibility of the methodology and highlight that maximum performance can be achieved on the cloud by selecting the top-ranked VMs produced by the methodology.
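
As a rough illustration of the ranking step, the sketch below computes a weighted sum of per-group benchmark scores. The VM names, the normalised scores, and the scoring function itself are hypothetical placeholders, not the paper's exact methodology.

```python
# Illustrative sketch of weight-based VM ranking (not the paper's exact method).
# Benchmark scores are assumed to be normalised to [0, 1] per attribute group.

benchmarks = {
    # hypothetical normalised scores for the four attribute groups
    "m3.xlarge":  {"memory": 0.7, "processor": 0.8, "computation": 0.6, "storage": 0.5},
    "c3.2xlarge": {"memory": 0.6, "processor": 0.9, "computation": 0.9, "storage": 0.4},
    "r3.large":   {"memory": 0.9, "processor": 0.5, "computation": 0.5, "storage": 0.6},
}

def rank_vms(benchmarks, weights):
    """Rank VMs by the weighted sum of their per-group benchmark scores."""
    def score(vm):
        return sum(weights[g] * benchmarks[vm][g] for g in weights)
    return sorted(benchmarks, key=score, reverse=True)

# User-supplied weights: this application is processor- and computation-bound.
weights = {"memory": 1, "processor": 4, "computation": 4, "storage": 1}
print(rank_vms(benchmarks, weights))  # top-ranked VM first
```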

Relevance: 20.00%

Abstract:

With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that must benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.

In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite - Docker Container-based Lightweight Benchmarking. DocLite is built on the Docker container technology, which allows a user-defined portion of the VM (such as memory size and the number of CPU cores) to be benchmarked. DocLite operates in two modes: in the first, containers are used to benchmark a small portion of the VM to generate performance ranks; in the second, historic benchmark data is used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique that benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for the sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
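
The hybrid mode can be pictured as blending fresh container-level scores with historic whole-VM scores. The sketch below assumes hypothetical score tables and an illustrative 50/50 blend weight; it is not DocLite's actual implementation.

```python
# Sketch of the hybrid idea: blend current container-level scores with historic
# whole-VM scores before ranking. All scores and the blend weight are assumed.

container_scores = {"vm_a": 0.82, "vm_b": 0.61, "vm_c": 0.74}  # from small containers
historic_scores  = {"vm_a": 0.78, "vm_b": 0.70, "vm_c": 0.69}  # past full-VM benchmarks

def hybrid_rank(container, historic, alpha=0.5):
    """Rank VMs by a weighted blend of current and historic scores."""
    blended = {vm: alpha * container[vm] + (1 - alpha) * historic[vm]
               for vm in container}
    return sorted(blended, key=blended.get, reverse=True)

print(hybrid_rank(container_scores, historic_scores))
```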

Relevance: 20.00%

Abstract:

Bag of Distributed Tasks (BoDT) applications can benefit from decentralised execution on the cloud. However, there is a trade-off between the performance that can be achieved by employing a large number of cloud VMs for the tasks and the monetary constraints that are often placed by a user. The research reported in this paper investigates this trade-off so that an optimal plan for deploying BoDT applications on the cloud can be generated. A heuristic algorithm that considers the user's preferences for performance and cost is proposed and implemented. The feasibility of the algorithm is demonstrated by generating execution plans for a sample application. The key result is that the algorithm generates optimal execution plans for the application over 91% of the time.
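
One way to picture such a heuristic is as a search over VM counts that minimises a user-weighted blend of makespan and cost. The sketch below is a minimal stand-in, assuming a uniform task time, hourly billing, perfect speed-up, and an illustrative price; it is not the paper's algorithm.

```python
# Illustrative performance/cost trade-off heuristic: choose the number of VMs
# minimising a user-weighted blend of makespan and cost, each normalised
# against the single-VM baseline. All parameters below are assumptions.
import math

TASKS = 1000               # tasks in the bag
TASK_TIME = 30.0           # seconds per task (assumed uniform)
PRICE_PER_VM_HOUR = 0.10   # assumed hourly VM price

def plan(max_vms, perf_weight):
    """Return the VM count with the lowest weighted objective."""
    base_time = TASKS * TASK_TIME
    base_cost = math.ceil(base_time / 3600) * PRICE_PER_VM_HOUR
    best_n, best_obj = 1, float("inf")
    for n in range(1, max_vms + 1):
        makespan = base_time / n                                   # perfect speed-up
        cost = n * math.ceil(makespan / 3600) * PRICE_PER_VM_HOUR  # hourly billing
        obj = (perf_weight * (makespan / base_time)
               + (1 - perf_weight) * (cost / base_cost))
        if obj < best_obj:
            best_n, best_obj = n, obj
    return best_n

print(plan(max_vms=64, perf_weight=0.7))  # performance-leaning user
```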

Relevance: 20.00%

Abstract:

When orchestrating Web service workflows, the geographical placement of the orchestration engine(s) can greatly affect workflow performance. Data may have to be transferred across long geographical distances, which in turn increases execution time and degrades the overall performance of a workflow. In this paper, we present a framework that, given a DAG-based workflow specification, computes the optimal Amazon EC2 cloud regions in which to deploy the orchestration engines and execute a workflow. The framework incorporates a constraint model, generated using an automated constraint modelling system, that solves the workflow deployment problem. The feasibility of the framework is evaluated by executing different sample workflows representative of scientific workloads. The experimental results indicate that the framework reduces workflow execution time and provides a speed-up of 1.3x-2.5x over centralised approaches.
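
The deployment problem can be illustrated with a brute-force search over region assignments, although the framework itself relies on an automated constraint modelling system. All region pins, data sizes, and transfer times below are hypothetical.

```python
# Brute-force stand-in for the region deployment decision. The regions, pinned
# tasks, edge data sizes, and the transfer-time matrix are all hypothetical.
from itertools import product

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]
# assumed seconds to move 1 GB between two regions (symmetric)
XFER = {("us-east-1", "eu-west-1"): 8,
        ("us-east-1", "ap-southeast-1"): 14,
        ("eu-west-1", "ap-southeast-1"): 12}

def transfer(a, b):
    return 0 if a == b else XFER.get((a, b), XFER.get((b, a)))

# DAG edges: (producer task, consumer task, GB transferred along the edge)
EDGES = [("fetch", "filter", 4.0), ("filter", "analyse", 2.0),
         ("fetch", "analyse", 1.0)]
PINNED = {"fetch": "us-east-1", "analyse": "eu-west-1"}  # data-residency pins
FREE = ["filter"]

def best_placement():
    """Exhaustively place the unpinned tasks, minimising total transfer time."""
    def total(assign):
        return sum(gb * transfer(assign[u], assign[v]) for u, v, gb in EDGES)
    candidates = ({**PINNED, **dict(zip(FREE, combo))}
                  for combo in product(REGIONS, repeat=len(FREE)))
    return min(candidates, key=total)

print(best_placement())  # places "filter" to minimise cross-region data movement
```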

Relevance: 20.00%

Abstract:

Bail-in is quickly becoming a predominant approach to banking resolution. The EU Bank Recovery and Resolution Directive and the US Federal Deposit Insurance Corporation's single point of entry strategy envisage recapitalisation by creditors to resolve a failing financial institution. However, this legislation focuses on the domestic aspects of bail-in, leaving open the question of how it applies to a cross-border banking group. Cross-border banking resolution has historically been subject to coordination failures, which have resulted in disorderly resolutions with dangerous systemic effects. The goal of this article is to assess whether bail-in is subject to the same coordination problems that affect other resolution tools, and to discuss the logic of international legal cooperation in bail-in policies. We demonstrate that, in spite of the evident benefit in terms of fiscal sustainability, bail-in suffers from complex coordination problems which, if not addressed, might lead to regulatory arbitrage and lengthy court battles and, ultimately, may disrupt resolutions. We argue that only a binding legal regime can address those problems. In doing so, we discuss the recent Financial Stability Board proposal on the cross-border recognition of resolution actions, and the role of international law in promoting cooperation in banking resolution.

Relevance: 20.00%

Abstract:

In recent years much attention has been given to systemic risk and maintaining financial stability. Much of the focus, rightly, has been on market failures and the role of regulation in addressing them. This article looks at the role of domestic policies and government actions as sources of global instability. The global financial system is built upon global markets controlled by national financial and macroeconomic policies. In this context, regulatory asymmetries, diverging policy preferences, and government failures add a further dimension to global systemic risk not present at the national level.
Systemic risk is a result of the interplay between two independent variables: an underlying trigger event, in this analysis a domestic policy measure, and a transmission channel. The solution to systemic risk requires tackling one of these variables. In a domestic setting, the centralization of regulatory power into one single authority makes it easier to balance the delicate equilibrium between enhancing efficiency and reducing instability. However, in a global financial system in which national financial policies serve to maximize economic welfare, regulators will be confronted with difficult policy and legal tradeoffs.
We investigate the role that financial regulation plays in addressing domestic policy failures and in controlling the danger of global financial interdependence. To do so we analyse global financial interconnectedness, and explain its role in transmitting instability; we investigate the political economy dynamics at the origin of regulatory asymmetries and government failures; and we discuss the limits of regulation.

Relevance: 20.00%

Abstract:

Cloud data centres are implemented as large-scale clusters with demanding requirements for service performance, availability, and cost of operation. As a result of their scale and complexity, data centres typically exhibit large numbers of system anomalies resulting from operator error, resource over- or under-provisioning, hardware or software failures, and security issues. Such anomalies are inherently difficult to identify and resolve promptly via human inspection. Therefore, it is vital in a cloud system to have automatic system monitoring that detects potential anomalies and identifies their source. In this paper we present a lightweight anomaly detection tool (LADT) for cloud data centres which combines extended log analysis and rigorous correlation of system metrics, implemented by an efficient correlation algorithm which does not require training or complex infrastructure setup. The LADT algorithm is based on the premise that there is a strong correlation between node-level and VM-level metrics in a cloud system. This correlation will drop significantly in the event of any performance anomaly at the node level, and a continuous drop in the correlation can indicate the presence of a true anomaly in the node. The log analysis of LADT assists in determining whether the correlation drop could be caused by naturally occurring cloud management activity such as VM migration, creation, suspension, termination, or resizing. In this way, any potential anomaly alerts are reasoned about to prevent false positives that could be caused by the cloud operator's activity. We demonstrate LADT with log analysis in a cloud environment to show how the log analysis is combined with the correlation of system metrics to achieve accurate anomaly detection.
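
The correlation premise can be sketched as a sliding-window Pearson correlation between node-level and VM-level metrics, with an alert whenever the correlation collapses. The traces, window size, and 0.5 threshold below are hypothetical; LADT's actual algorithm and log analysis are more involved.

```python
# Minimal sketch of the correlation premise: flag windows where node-level and
# VM-level metrics stop correlating. Streams and threshold are hypothetical.
import numpy as np

def correlation_alerts(node_metric, vm_metric, window=30, threshold=0.5):
    """Yield (start index, r) for windows where Pearson r drops below threshold."""
    for start in range(0, len(node_metric) - window + 1):
        n = node_metric[start:start + window]
        v = vm_metric[start:start + window]
        r = np.corrcoef(n, v)[0, 1]
        if r < threshold:
            yield start, r  # candidate anomaly; logs would rule out VM management events

# Hypothetical CPU-utilisation traces: correlated at first, diverging later.
rng = np.random.default_rng(0)
vm = rng.uniform(20, 80, 200)
node = vm * 0.9 + rng.normal(0, 2, 200)
node[150:] = rng.uniform(20, 80, 50)  # injected node-level anomaly
for start, r in correlation_alerts(node, vm):
    print(f"possible anomaly in window starting at t={start} (r={r:.2f})")
```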

Relevance: 20.00%

Abstract:

Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite, a Docker container-based lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real time. DocLite is built on the Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for the sequential and parallel execution of an application, respectively, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.
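
The native method's core idea, benchmarking only a user-defined slice of the VM, can be approximated by launching a resource-constrained container. In the sketch below the image name and benchmark command are placeholders; --memory and --cpus are standard Docker flags.

```python
# Sketch of benchmarking a user-defined slice of the VM via a constrained
# container. The image name and benchmark command are placeholders.
import subprocess

def run_constrained_benchmark(image, command, memory="1g", cpus="2"):
    """Run a benchmark inside a container limited to a portion of the VM."""
    result = subprocess.run(
        ["docker", "run", "--rm",
         f"--memory={memory}",  # cap the container's memory
         f"--cpus={cpus}",      # cap the number of CPU cores
         image] + command,
        capture_output=True, text=True, check=True)
    return result.stdout        # raw benchmark output, to be parsed into scores

# e.g. benchmark a quarter of a 4-core, 4 GB VM with a placeholder binary
print(run_constrained_benchmark("benchmark-image", ["./run-benchmark"]))
```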

Relevance: 20.00%

Abstract:

Scheduling jobs with deadlines, each of which defines the latest time by which a job must be completed, can be challenging on the cloud due to incurred costs and unpredictable performance. This problem is further complicated when there is not enough information to effectively schedule a job such that its deadline is satisfied and the cost is minimised. In this paper, we present an approach to scheduling jobs whose performance is unknown before execution, with deadlines, on the cloud. By performing a sampling phase to collect the necessary information about those jobs, our approach delivers scheduling decisions within 10% of the cost and 16% of the violation rate of an ideal setting that has complete knowledge of each job from the beginning. It is noted that our proposed algorithm outperforms existing approaches that use a fixed amount of resources, reducing the violation cost by at least a factor of two.
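
The sampling idea can be sketched as estimating the unknown per-task runtime from a few sampled runs and then picking the cheapest VM count whose projected makespan meets the deadline. The price, per-second billing, and even-spread assumption below are illustrative, not the paper's algorithm.

```python
# Illustrative sampling-then-schedule sketch: estimate the unknown task runtime
# from a sample, then choose the smallest VM count meeting the deadline.
import statistics

def schedule(sample_runtimes, remaining_tasks, deadline, price_per_vm_second,
             max_vms=100):
    """Pick the smallest VM count whose projected makespan meets the deadline."""
    est_task_time = statistics.mean(sample_runtimes)
    for n in range(1, max_vms + 1):
        makespan = est_task_time * remaining_tasks / n  # assumes even spread
        if makespan <= deadline:
            cost = n * makespan * price_per_vm_second   # per-second billing
            return n, makespan, cost
    raise RuntimeError("deadline unreachable within the VM budget")

# 5 sampled tasks took ~30 s each; 200 tasks remain; deadline is 20 minutes.
print(schedule([29.0, 31.5, 30.2, 28.8, 30.5], 200, deadline=1200,
               price_per_vm_second=0.0001))
```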

Relevance: 20.00%

Abstract:

In the 21st century, information has become the most valuable resource available to modern societies. Thus, great efforts have been made to develop new information processing and storage techniques. Chemistry can offer a wide variety of computing paradigms that are closely related to the natural processes found in living organisms (e.g., in the nervous systems of animals). Moreover, these phenomena cannot easily be reproduced using silicon-based technology alone. Other great advantages of molecular-scale systems include their simplicity and the diversity of interactions that occur among them. Thus, devices constructed using chemical entities may be programmed to deal with different information carriers (photons, electrons, ions, and molecules), possibly surpassing the capabilities of classic electronic circuits.