14 results for user-controlled cloud computing
in WestminsterResearch - UK
Abstract:
Cloud computing is attracting significant interest in Modeling & Simulation (M&S). The underlying concept of using computing power as a utility is very attractive to users, who can access state-of-the-art hardware and software without capital investment. Moreover, the cloud computing characteristics of rapid elasticity and the ability to scale up or down according to workload make it very attractive to numerous applications, including M&S. Research and development work typically focuses on the implementation of cloud-based systems supporting M&S as a Service (MSaaS). Such systems are typically composed of a supply chain of technology services. How is payment collected from the end-user and distributed to the stakeholders in the supply chain? We discuss the business aspects of developing a cloud platform for various M&S applications. Business models from the perspectives of the stakeholders involved in providing and using MSaaS and cloud computing are investigated and presented.
Abstract:
In this paper we present a concept for an agent-based strategy to allocate services on a cloud system without overloading nodes, while maintaining system stability at minimum cost. To provide a base for our research, we specify an abstract model of cloud resource utilization, including multiple types of resources as well as considerations for service migration costs. We also present an early version of a simulation environment and a prototype of an agent-based load balancer implemented in the functional language Scala using the Akka framework.
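The abstract gives no implementation detail, so the following is only a minimal illustrative sketch, in plain Scala with the Akka actor layer omitted, of a greedy, migration-cost-aware allocation loop of the kind such a load balancer might use; the Node/Service fields and the unit migration cost are assumptions for illustration, not the authors' model.

object AllocationSketch {

  // Assumed resource model: CPU and RAM capacities per node, with mutable
  // usage counters; a real model would cover more resource types.
  case class Node(id: String, cpuCapacity: Double, ramCapacity: Double,
                  var cpuUsed: Double = 0.0, var ramUsed: Double = 0.0)

  case class Service(id: String, cpu: Double, ram: Double, currentNode: Option[String])

  // Assumed cost model: moving a service off its current node costs 1.0,
  // leaving it in place costs nothing.
  def migrationCost(s: Service, target: Node): Double =
    if (s.currentNode.contains(target.id)) 0.0 else 1.0

  // A node may host a service only if doing so does not overload it.
  def fits(s: Service, n: Node): Boolean =
    n.cpuUsed + s.cpu <= n.cpuCapacity && n.ramUsed + s.ram <= n.ramCapacity

  // Greedy pass: each service goes to the feasible node with the lowest
  // migration cost, breaking ties towards the least-loaded node.
  def allocate(services: Seq[Service], nodes: Seq[Node]): Map[String, String] =
    services.flatMap { s =>
      val candidates = nodes.filter(fits(s, _))
      candidates.sortBy(n => migrationCost(s, n) * 100 + n.cpuUsed / n.cpuCapacity)
        .headOption.map { best =>
          best.cpuUsed += s.cpu
          best.ramUsed += s.ram
          s.id -> best.id
        }
    }.toMap

  def main(args: Array[String]): Unit = {
    val nodes = Seq(Node("n1", 8.0, 16.0), Node("n2", 4.0, 8.0))
    val services = Seq(Service("svcA", 2.0, 4.0, Some("n2")),
                       Service("svcB", 3.0, 6.0, None))
    allocate(services, nodes).foreach { case (svc, node) => println(s"$svc -> $node") }
  }
}

Ranking candidate nodes by migration cost first and current load second is just one possible way to trade placement stability against movement cost.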
Abstract:
This paper introduces a strategy to allocate services on a cloud system without overloading the nodes, while maintaining system stability at minimum cost. We specify an abstract model of cloud resource utilization, including multiple types of resources as well as considerations for service migration costs. A prototype meta-heuristic load balancer is demonstrated, and experimental results are presented and discussed. We also propose a novel genetic algorithm in which the population is seeded with the outputs of other meta-heuristic algorithms.
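As a rough illustration of the seeding idea described above (not the authors' algorithm), the Scala sketch below encodes an allocation as a vector of node indices, injects solutions produced by other heuristics into the initial population, and fills the remainder randomly; the fitness function, operators and parameters are all assumed.

import scala.util.Random

object SeededGaSketch {

  type Genome = Vector[Int] // genome(i) = index of the node hosting service i

  // Single-point crossover between two parent allocations.
  def crossover(a: Genome, b: Genome, rnd: Random): Genome = {
    val cut = rnd.nextInt(a.length)
    a.take(cut) ++ b.drop(cut)
  }

  // Randomly reassign a few services to other nodes.
  def mutate(g: Genome, nodes: Int, rate: Double, rnd: Random): Genome =
    g.map(x => if (rnd.nextDouble() < rate) rnd.nextInt(nodes) else x)

  // `seeds` are allocations produced by other meta-heuristics; the rest of the
  // initial population is random. `fitness` is assumed (lower is better).
  def evolve(seeds: Seq[Genome], fitness: Genome => Double,
             services: Int, nodes: Int,
             popSize: Int = 50, generations: Int = 100,
             rnd: Random = new Random(42)): Genome = {
    val randomFill = Seq.fill(popSize - seeds.size)(Vector.fill(services)(rnd.nextInt(nodes)))
    var population = (seeds ++ randomFill).toVector
    for (_ <- 1 to generations) {
      val parents = population.sortBy(fitness).take(popSize / 2) // elitist selection
      val children = Vector.fill(popSize - parents.size) {
        val child = crossover(parents(rnd.nextInt(parents.size)),
                              parents(rnd.nextInt(parents.size)), rnd)
        mutate(child, nodes, 0.05, rnd)
      }
      population = parents ++ children
    }
    population.minBy(fitness)
  }

  def main(args: Array[String]): Unit = {
    // Toy fitness: spread 6 services evenly over 3 nodes.
    val fitness = (g: Genome) => g.groupBy(identity).values.map(v => v.size * v.size).sum.toDouble
    val seed = Vector(0, 0, 1, 1, 2, 2) // pretend this came from a greedy heuristic
    println(evolve(Seq(seed), fitness, services = 6, nodes = 3))
  }
}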
Abstract:
This paper describes the impact of cloud computing and the use of GPUs on the performance of Autodock and Gromacs, respectively. Cloud computing was applicable to reducing the "tail" seen in running Autodock on desktop grids, and the GPU version of Gromacs showed significant improvement over the CPU version. A large library of small molecules (200,000 compounds), seven sialic acid analogues of the putative substrate, and 8000 sugar molecules were converted into pdbqt format and used to interrogate the Trichomonas vaginalis neuraminidase using Autodock Vina. Good binding energy was noted for some of the small molecules (~ -9 kcal/mol), but the sugars bound with an affinity of less than -7.6 kcal/mol. The screening of the sugar library resulted in a "top hit" with α-2,3-sialyllacto-N-fucopentaose III, a derivative of the sialyl Lewis x structure and a known substrate of the enzyme. Indeed, 8 of the top 100 hits were related to this structure. A comparison of Autodock Vina and Autodock 4.2 was made for the high-affinity small molecules; in some cases the results were superimposable, whereas in others the match was less good. The validation of this work will require extensive "wet lab" work to determine the utility of the workflow in the prediction of potential enzyme inhibitors.
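As a hedged sketch of how such a screen can be fanned out - one docking job per ligand, which is what removes the desktop-grid "tail" once the jobs are spread over cloud workers - the Scala snippet below shells out to an assumed "vina" binary. The receptor file, ligand directory, output layout and configuration file are hypothetical; consult the Autodock Vina documentation for the exact command-line options.

import java.io.File
import scala.sys.process._

object DockingFanOutSketch {
  def main(args: Array[String]): Unit = {
    val receptor = "TvNA_receptor.pdbqt"   // hypothetical receptor file
    val ligandDir = new File("ligands")    // hypothetical directory of .pdbqt ligands
    val ligands = Option(ligandDir.listFiles()).getOrElse(Array.empty[File])
      .filter(_.getName.endsWith(".pdbqt"))

    val results = ligands.map { lig =>
      val out = s"out/${lig.getName.stripSuffix(".pdbqt")}_docked.pdbqt"
      // One docking job per ligand; spreading these jobs over cloud worker
      // nodes is what shortens the long "tail" of a desktop-grid run.
      val exitCode = Seq("vina",
        "--receptor", receptor,
        "--ligand", lig.getPath,
        "--out", out,
        "--config", "search_box.cfg").!   // flags assumed; see the Vina docs
      lig.getName -> exitCode
    }

    results.filter(_._2 != 0).foreach { case (name, code) =>
      println(s"docking of $name exited with code $code")
    }
  }
}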
Abstract:
The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability, where tenants - insulated from the minutiae of hardware maintenance - rent computing resources to deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data and operation security in IaaS, consisting of protocols for trusted launch of virtual machines and domain-based storage protection. We continue with an extensive theoretical analysis, with proofs of protocol resistance against attacks in the defined threat model. The protocols allow trust to be established by remotely attesting the host platform configuration prior to launching guest virtual machines, and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the IaaS domain. The presented experimental results demonstrate the validity and efficiency of the proposed protocols. The framework prototype was implemented on a test bed operating a public electronic health record system, showing that the proposed protocols can be integrated into existing cloud environments.
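Without reproducing the paper's protocols, the conceptual Scala sketch below shows the tenant-side decision they enable: a guest virtual machine is launched only if the remotely attested host measurement matches a known-good value. The quote structure and the sample digest are assumptions for illustration only, not the paper's message formats.

object TrustedLaunchSketch {

  // A simplified, assumed attestation quote: the host's identity plus a
  // digest of its measured platform configuration, e.g. from a TPM.
  case class AttestationQuote(hostId: String, measurement: String)

  // Known-good platform measurements the tenant is willing to trust;
  // the digest below is a made-up example value.
  val trustedMeasurements: Set[String] =
    Set("9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")

  def verify(quote: AttestationQuote): Boolean =
    trustedMeasurements.contains(quote.measurement)

  // Launch is refused unless the host attested to a trusted configuration.
  def launchVm(quote: AttestationQuote, image: String): Either[String, String] =
    if (verify(quote)) Right(s"launching $image on ${quote.hostId}")
    else Left(s"host ${quote.hostId} failed attestation; refusing to launch")

  def main(args: Array[String]): Unit = {
    val quote = AttestationQuote("host-1",
      "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
    println(launchVm(quote, "ehr-system-image"))
  }
}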
Abstract:
The broad capabilities of current mobile devices have paved the way for Mobile Crowd Sensing (MCS) applications. The success of this emerging paradigm strongly depends on the quality of received data which, in turn, is contingent on mass user participation; the broader the participation, the more useful these systems become. However, there is an ongoing trend that tries to integrate MCS applications with emerging computing paradigms such as cloud computing. The intuition is that such a transition can significantly improve overall efficiency while at the same time offering stronger security and privacy-preserving mechanisms for the end-user. In this position paper, we dwell on the underpinnings of incorporating cloud computing techniques to handle the vast amount of data collected in MCS applications. That is, we present a list of core system, security and privacy requirements that must be met if such a transition is to be successful. To this end, we first address several competing challenges not previously considered in the literature, such as the scarce energy resources of battery-powered mobile devices as well as their limited computational resources, which often prevent the use of computationally heavy cryptographic operations and thus limit the security services offered to the end-user. Finally, we present a use case scenario as a comprehensive example. Based on our findings, we posit open issues and challenges, and discuss possible ways to address them, so that security and privacy do not hinder the migration of MCS systems to the cloud.
Abstract:
Cloud computing offers the massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available for end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure-aware workflows is suggested, and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution based on the MapReduce paradigm in the cloud. The provided analysis demonstrates that the methods described to integrate Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
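As a reminder of the MapReduce model that such an infrastructure-aware workflow node would delegate to Hadoop, here is a minimal word-count expressed with plain Scala collections; it is purely illustrative and does not use the Hadoop API, which a real workflow node would invoke on the cloud cluster.

object MapReduceSketch {

  // map phase: emit (word, 1) pairs; shuffle: group by word; reduce: sum counts.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))
      .filter(_.nonEmpty)
      .map(word => (word.toLowerCase, 1))
      .groupBy(_._1)
      .map { case (word, pairs) => word -> pairs.map(_._2).sum }

  def main(args: Array[String]): Unit =
    wordCount(Seq("big data on the cloud", "big workflows on the cloud"))
      .foreach { case (word, count) => println(s"$word\t$count") }
}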
Abstract:
Simulating the efficiency of business processes could reveal crucial bottlenecks for manufacturing companies and could lead to significant optimizations resulting in decreased time to market, more efficient resource utilization, and larger profit. While such business optimization software is widely utilized by larger companies, SMEs typically do not have the required expertise and resources to efficiently exploit these advantages. The aim of this work is to explore how simulation software vendors and consultancies can extend their portfolio to SMEs by providing business process optimization based on a cloud computing platform. By executing simulation runs on the cloud, software vendors and associated business consultancies can get access to large computing power and data storage capacity on demand, run large simulation scenarios on behalf of their clients, analyze simulation results, and advise their clients regarding process optimization. The solution is mutually beneficial for both vendor/consultant and the end-user SME. End-user companies will only pay for the service without requiring large upfront costs for software licenses and expensive hardware. Software vendors can extend their business towards the SME market with potentially huge benefits.
Abstract:
The physical location of data in cloud storage is a problem that is gaining a lot of attention, not only from the actual cloud providers but also from end users, who have lately raised many concerns regarding the privacy of their data. It is common practice for cloud service providers to replicate users' data across multiple physical locations. However, moving data to different countries means that access rights are effectively governed by the local laws of the corresponding country. In other words, when a cloud service provider stores users' data in a different country, the transferred data is subject to the data protection laws of the country where the servers are located. In this paper, we propose LocLess, a protocol based on a symmetric searchable encryption scheme for protecting users' data from unauthorized access even if the data is transferred to different locations. The idea behind LocLess is that "once data is placed on the cloud in an unencrypted form, or encrypted with a key that is known to the cloud service provider, data privacy becomes an illusion". Hence, the proposed solution is based solely on encrypting data with a key that is only known to the data owner.
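The searchable-index part of LocLess is not reproduced here, but its core premise - encrypting data with a key known only to the owner before it ever reaches the cloud - can be illustrated with standard AES-GCM from the JDK. The Scala sketch below shows only that owner-held-key step; the plaintext, key handling and file layout are assumptions for illustration.

import java.security.SecureRandom
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.{Cipher, KeyGenerator, SecretKey}

object OwnerKeyEncryptionSketch {
  def main(args: Array[String]): Unit = {
    // The data owner generates and keeps the key; it is never shared with the
    // cloud service provider.
    val keyGen = KeyGenerator.getInstance("AES")
    keyGen.init(256)
    val ownerKey: SecretKey = keyGen.generateKey()

    // Fresh random nonce per object (96-bit IV for GCM).
    val iv = new Array[Byte](12)
    new SecureRandom().nextBytes(iv)

    val enc = Cipher.getInstance("AES/GCM/NoPadding")
    enc.init(Cipher.ENCRYPT_MODE, ownerKey, new GCMParameterSpec(128, iv))
    val ciphertext = enc.doFinal("confidential document".getBytes("UTF-8"))
    // Only the IV and ciphertext would be uploaded; wherever the provider
    // replicates them, the data remains unreadable without the owner's key.

    val dec = Cipher.getInstance("AES/GCM/NoPadding")
    dec.init(Cipher.DECRYPT_MODE, ownerKey, new GCMParameterSpec(128, iv))
    println(new String(dec.doFinal(ciphertext), "UTF-8"))
  }
}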
Abstract:
Although originally an academic and research product, the WS-PGRADE/gUSE framework is increasingly applied by commercial institutions too. Within the SCI-BUS project, several commercial gateways have been developed by various companies. WS-PGRADE/gUSE is also intensively used within another European research project, CloudSME (Cloud-based Simulation Platform for Manufacturing and Engineering). This chapter provides an overview and describes in detail some commercial WS-PGRADE/gUSE-based gateway implementations. Two representative case studies from the SCI-BUS project, the Build and Test portal and the eDOX Archiver Gateway, are introduced. An overview of WS-PGRADE/gUSE-based gateways for running simulation applications in the cloud within the CloudSME project is also provided.
Abstract:
This paper presents the Accurate Google Cloud Simulator (AGOCS), a novel high-fidelity cloud workload simulator based on parsing real workload traces, which can be conveniently used on a desktop machine for day-to-day research. Our simulation is based on real-world workload traces from a Google cluster with 12.5K nodes over a period of a calendar month. The framework is able to reveal very precise and detailed parameters of the executed jobs, tasks and nodes, as well as to provide actual resource usage statistics. The system has been implemented in the Scala language with a focus on parallel execution and an easy-to-extend design concept. The paper presents the detailed structural framework of AGOCS and discusses our main design decisions, whilst also suggesting alternative and possibly performance-enhancing future approaches. The framework is available via an open-source GitHub repository.
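AGOCS itself parses the official Google cluster trace format; the Scala sketch below is only an illustrative stand-in showing the general shape of such trace parsing and per-job aggregation, with an assumed CSV column layout rather than the real trace schema, and without the parallel-execution machinery the paper emphasises.

import scala.io.Source

object TraceParsingSketch {

  // Assumed record layout (NOT the real Google trace schema):
  // jobId,taskIndex,cpuRequest,memRequest
  case class TaskEvent(jobId: Long, taskIndex: Int, cpuRequest: Double, memRequest: Double)

  def parseLine(line: String): Option[TaskEvent] = {
    val cols = line.split(",", -1)
    if (cols.length < 4) None
    else
      try Some(TaskEvent(cols(0).toLong, cols(1).toInt, cols(2).toDouble, cols(3).toDouble))
      catch { case _: NumberFormatException => None }
  }

  def main(args: Array[String]): Unit = {
    val source = Source.fromFile("task_events.csv") // hypothetical trace file
    try {
      val cpuPerJob = source.getLines()
        .flatMap(parseLine)           // skip malformed records
        .toSeq
        .groupBy(_.jobId)
        .map { case (job, events) => job -> events.map(_.cpuRequest).sum }
      cpuPerJob.take(10).foreach { case (job, cpu) =>
        println(s"job $job requested $cpu CPU units in total")
      }
    } finally source.close()
  }
}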
Abstract:
Cloud storage has rapidly become a cornerstone of many businesses and has moved from an early-adopters stage to an early majority, where we typically see explosive deployments. As companies rush to join the cloud revolution, it has become vital to create the necessary tools that will effectively protect users' data from unauthorized access. Nevertheless, sharing data between multiple users under the same domain in a secure and efficient way is not trivial. In this paper, we propose Sharing in the Rain, a protocol that allows cloud users to securely share their data based on predefined policies. The proposed protocol is based on Attribute-Based Encryption (ABE) and allows users to encrypt data based on certain policies and attributes. Moreover, we use a Key-Policy Attribute-Based Encryption technique through which access revocation is optimized. More precisely, we show how to securely and efficiently remove access to a file for a certain user who is misbehaving or is no longer part of a user group, without having to decrypt and re-encrypt the original data with a new key or a new policy.
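For readers unfamiliar with Key-Policy ABE, the conceptual Scala sketch below (no cryptography involved) shows the check the scheme enforces: a user key carries an access policy over attributes, a ciphertext carries an attribute set, and decryption is possible only when the attributes satisfy the policy. The attribute names and policy are invented examples, and the paper's revocation optimization is not shown.

object KpAbePolicySketch {

  // A key policy is a boolean formula over attribute names.
  sealed trait Policy
  case class Attr(name: String)               extends Policy
  case class And(left: Policy, right: Policy) extends Policy
  case class Or(left: Policy, right: Policy)  extends Policy

  // Decryption in KP-ABE is possible only when the ciphertext's attributes
  // satisfy the policy embedded in the user's key; here that check is a plain
  // recursive evaluation, whereas the real scheme enforces it cryptographically.
  def satisfies(attributes: Set[String], policy: Policy): Boolean = policy match {
    case Attr(a)   => attributes.contains(a)
    case And(l, r) => satisfies(attributes, l) && satisfies(attributes, r)
    case Or(l, r)  => satisfies(attributes, l) || satisfies(attributes, r)
  }

  def main(args: Array[String]): Unit = {
    val ciphertextAttrs = Set("project:rain", "role:member")     // invented attributes
    val keyPolicy = And(Attr("project:rain"),
                        Or(Attr("role:member"), Attr("role:admin")))
    println(satisfies(ciphertextAttrs, keyPolicy)) // true: this key could decrypt
  }
}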
Abstract:
Energy-efficient computing remains a critical challenge across the wide range of future data-processing engines, from ultra-low-power embedded systems to servers, mainframes, and supercomputers. In addition, the advent of cloud and mobile computing as well as the explosion of IoT technologies have created new research challenges in the already complex, multidimensional space of modern and future computer systems. These new research challenges led to the establishment of the IEEE Rebooting Computing Initiative, which specifically addresses novel low-power solutions and technologies as one of its main areas of concern. With this in mind, we thought it timely to survey the state of the art of energy-efficient computing.