146 results for cloud computing accountability


Relevance: 90.00%

Abstract:

RFID and cloud computing are widely used in the Internet of Things (IoT), yet few research works combine RFID ownership transfer schemes with cloud computing. This paper first points out weaknesses in two protocols proposed by Xie et al. (2013) [3] and Doss et al. (2013) [9]. To solve the security issues of these protocols, we present a provably secure RFID ownership transfer protocol that achieves the security and privacy requirements of cloud-based applications. Specifically, the communication channels among the tags, the mobile readers and the cloud database are assumed to be insecure, and an encrypted hash table is used in the cloud database. The presented protocol not only meets backward untraceability and the proposed strong forward untraceability, but also resists replay attacks, tracing attacks, inner-reader malicious impersonation attacks, tag impersonation attacks and desynchronization attacks. Comparisons of security and performance properties show that the proposed protocol offers stronger security, higher efficiency and better scalability than other schemes.
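The abstract does not spell out the protocol's message flows, but the encrypted-hash-table idea can be illustrated compactly. The Python sketch below uses hypothetical names; the keyed-hash indexing and the simplistic transfer step are assumptions for illustration, not the paper's construction. It shows how a cloud database can index tag records by a keyed hash so plain tag identifiers never cross the insecure channel, and how re-keying on ownership transfer hides the record from the previous owner:

```python
import hashlib
import hmac
import secrets

def index(key: bytes, tag_id: bytes) -> bytes:
    """Keyed hash used as the lookup index, so plain tag IDs never reach the cloud."""
    return hmac.new(key, tag_id, hashlib.sha256).digest()

class EncryptedHashTable:
    """Cloud-side table keyed by H(owner_key, tag_id); the stored values
    would themselves be encrypted records in a real deployment."""
    def __init__(self):
        self._table = {}

    def put(self, key, tag_id, record):
        self._table[index(key, tag_id)] = record

    def get(self, key, tag_id):
        return self._table.get(index(key, tag_id))

    def pop(self, key, tag_id):
        return self._table.pop(index(key, tag_id))

def transfer_ownership(table, old_key, new_key, tag_id):
    """Re-index the tag's record under the new owner's key so the old owner
    can no longer locate it -- the untraceability idea, greatly simplified."""
    table.put(new_key, tag_id, table.pop(old_key, tag_id))

table = EncryptedHashTable()
owner_a, owner_b = secrets.token_bytes(32), secrets.token_bytes(32)
table.put(owner_a, b"TAG-001", b"<encrypted record>")
transfer_ownership(table, owner_a, owner_b, b"TAG-001")
assert table.get(owner_a, b"TAG-001") is None
assert table.get(owner_b, b"TAG-001") == b"<encrypted record>"
```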

Relevance: 90.00%

Abstract:

Scientific workflows are complicated, data-intensive applications. How to achieve an effective data placement scheme in a hybrid cloud environment has become a crucial issue, especially with the new challenges brought by security requirements. Traditional data placement strategies usually adopt a load-balancing-based partition model to allocate datasets. Although such schemes perform well in load balancing, their data transfer time may not be optimal. In contrast, this paper focuses on the hybrid cloud environment and proposes a data-dependency-destruction-based partition model that achieves the partition with minimal data dependency destruction. In addition, it presents a novel datacenter-oriented data placement strategy that allocates highly dependent datasets to the same datacenter according to the new partition model, and thus significantly reduces data transfer time between datacenters. Experimental results show that the proposed strategy can effectively reduce data transfer time during workflow execution.
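As a rough illustration of the partition model, the following sketch treats datasets as nodes of a weighted dependency graph and defines dependency destruction as the total weight of dependencies cut by a partition. The graph, the weights and the brute-force search are illustrative assumptions, not the paper's algorithm:

```python
from itertools import combinations

# Pairwise dependency weights between datasets (how strongly two datasets are
# used together); the numbers are illustrative, not from the paper.
dependency = {
    ("d1", "d2"): 9, ("d1", "d3"): 8, ("d2", "d3"): 7,
    ("d3", "d4"): 1, ("d4", "d5"): 6, ("d1", "d5"): 1,
}
datasets = {"d1", "d2", "d3", "d4", "d5"}

def destruction(part_a):
    """Dependency destruction of a two-way partition: total weight of the
    dependencies cut by separating part_a from the remaining datasets."""
    return sum(w for (u, v), w in dependency.items() if (u in part_a) != (v in part_a))

# Exhaustive search for the minimal-destruction partition; at realistic scale
# a heuristic would replace this, since the problem is NP-hard in general.
best = min(
    (set(c) for k in range(1, len(datasets)) for c in combinations(sorted(datasets), k)),
    key=destruction,
)
print("datacenter 1:", sorted(best))
print("datacenter 2:", sorted(datasets - best))
print("dependency destruction:", destruction(best))
```

Here the highly dependent trio d1, d2, d3 ends up in one datacenter, cutting only the two weak cross-dependencies.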

Relevance: 90.00%

Abstract:

The linkage between healthcare services and cloud computing techniques has drawn much attention lately. To date, most work has focused on IT system migration and the management of distributed healthcare data rather than on exploiting the information hidden in the data. In this paper, we propose to explore healthcare data via cloud-based healthcare data mining services. Specifically, we propose a cloud-based healthcare data mining framework for developing such services, and under this framework we build a cloud-based data mining service that predicts patients' future length of stay in hospital.
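The abstract does not name the mining technique, so the following sketch stands in with a generic regressor (scikit-learn's RandomForestRegressor) over fabricated features, just to make a length-of-stay prediction service concrete:

```python
from sklearn.ensemble import RandomForestRegressor

# Toy feature vectors: [age, emergency admission (0/1), prior admissions].
# All values, including the length-of-stay labels, are fabricated for illustration.
X = [[65, 1, 2], [34, 0, 0], [71, 1, 4], [50, 0, 1], [82, 1, 3], [29, 0, 0]]
y = [9.0, 2.0, 12.0, 3.5, 14.0, 1.5]  # length of stay in days

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# A cloud-based mining service would expose this behind a web API; locally
# the prediction is just a call to the fitted model.
print("predicted LOS (days):", model.predict([[60, 1, 1]])[0])
```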

Relevance: 90.00%

Abstract:

The crucial role of networking in cloud computing calls for federated management of both computing and networking resources for end-to-end service provisioning. Applying the Service-Oriented Architecture (SOA) in both cloud computing and networking enables a convergence of network and cloud service provisioning. One of the key challenges to high-performance converged network-cloud service provisioning lies in composing network and cloud services with an end-to-end performance guarantee. In this paper, we propose a QoS-aware service composition approach to tackle this issue. We first present a system model for network-cloud service composition and formulate the service composition problem as a variant of the Multi-Constrained Optimal Path (MCOP) problem. We then propose an approximation algorithm to solve the problem and give a theoretical analysis of its properties to show its effectiveness and efficiency for QoS-aware network-cloud service composition. The performance of the proposed algorithm is evaluated through extensive experiments, and the obtained results indicate that the proposed method achieves better service composition performance than the best current MCOP approaches.
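A common way to approximate MCOP (the paper's exact method is not given in the abstract) is to run a shortest-path search on a single aggregated weight formed from the constraint-normalised QoS metrics, then check the returned path against each individual constraint. A minimal sketch with an assumed two-metric service graph:

```python
import heapq

# Directed service graph; each edge carries additive (delay, cost) QoS values.
# Topology, numbers and constraints are illustrative, not from the paper.
edges = {
    "s": [("a", 2, 5), ("b", 4, 1)],
    "a": [("t", 3, 4)],
    "b": [("t", 2, 2)],
    "t": [],
}
D_MAX, C_MAX = 8, 8  # end-to-end delay and cost constraints

def shortest(weight):
    """Dijkstra under a single aggregated edge weight; also tracks the real
    delay and cost of the chosen path so feasibility can be checked."""
    best = {"s": (0.0, 0, 0, ["s"])}
    queue = [(0.0, "s")]
    while queue:
        w, u = heapq.heappop(queue)
        if w > best[u][0]:
            continue
        for v, d, c in edges[u]:
            cand = w + weight(d, c)
            if v not in best or cand < best[v][0]:
                best[v] = (cand, best[u][1] + d, best[u][2] + c, best[u][3] + [v])
                heapq.heappush(queue, (cand, v))
    return best.get("t")

# MCOP heuristic: aggregate the metrics normalised by their constraints, then
# verify the returned path against each individual constraint.
_, delay, cost, path = shortest(lambda d, c: d / D_MAX + c / C_MAX)
print(path, "feasible:", delay <= D_MAX and cost <= C_MAX)
```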

Relevance: 90.00%

Abstract:

The massive computation power and storage capacity of cloud computing systems allow scientists to deploy computation- and data-intensive applications without infrastructure investment, with large application data sets stored in the cloud. Based on the pay-as-you-go model, storage strategies and benchmarking approaches have been developed for cost-effectively storing large volumes of generated application data sets in the cloud. However, they are either insufficiently cost-effective for storage or impractical to use at runtime. In this paper, toward achieving the minimum cost benchmark, we propose a novel, highly cost-effective and practical storage strategy that automatically decides at runtime whether a generated data set should be stored in the cloud. The strategy's main focus is local optimisation of the tradeoff between computation and storage, while secondarily taking users' (optional) storage preferences into consideration. Both theoretical analysis and simulations conducted on general (random) data sets as well as specific real-world applications with Amazon's cost model show that the cost-effectiveness of our strategy is close to, or even matches, the minimum cost benchmark, and that its efficiency is high enough for practical runtime use in the cloud.
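The core runtime decision the abstract describes, trading storage cost against regeneration cost, can be sketched as below; the pricing constants and usage figures are placeholders loosely styled on Amazon's pay-as-you-go rates, not the paper's benchmark:

```python
# Runtime store-or-delete rule sketched from the abstract's idea: keep a
# generated data set only if storing it is cheaper than regenerating it on
# demand. The rates are placeholders, not actual Amazon prices.
STORAGE_PER_GB_MONTH = 0.023  # USD per GB-month
COMPUTE_PER_HOUR = 0.10       # USD per instance-hour

def should_store(size_gb, regen_hours, uses_per_month):
    """Compare monthly storage cost with expected monthly regeneration cost."""
    storage_cost = size_gb * STORAGE_PER_GB_MONTH
    regeneration_cost = regen_hours * COMPUTE_PER_HOUR * uses_per_month
    return storage_cost <= regeneration_cost

# A large, rarely used data set is cheaper to delete and regenerate on demand;
# a small, frequently reused one is cheaper to keep.
print(should_store(size_gb=500, regen_hours=2, uses_per_month=0.1))  # False
print(should_store(size_gb=5, regen_hours=10, uses_per_month=3))     # True
```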

Relevance: 90.00%

Abstract:

A cloud workflow system is a type of platform service that facilitates the automation of distributed applications based on the novel cloud infrastructure. One of the most important aspects differentiating a cloud workflow system from its counterparts is the market-oriented business model, a significant innovation that brings many challenges to conventional workflow scheduling strategies. To investigate this issue, this paper proposes a market-oriented hierarchical scheduling strategy for cloud workflow systems. Specifically, service-level scheduling deals with the Task-to-Service assignment, where tasks of individual workflow instances are mapped to cloud services in the global cloud markets based on their functional and non-functional QoS requirements; task-level scheduling deals with optimising the Task-to-VM (virtual machine) assignment in local cloud data centres, where the overall running cost of the cloud workflow system is minimised subject to the QoS constraints of individual tasks. Based on our hierarchical scheduling strategy, a package-based random scheduling algorithm is presented as the candidate service-level scheduling algorithm, and three representative metaheuristic-based scheduling algorithms, namely genetic algorithm (GA), ant colony optimisation (ACO) and particle swarm optimisation (PSO), are adapted, implemented and analysed as candidate task-level scheduling algorithms. The hierarchical scheduling strategy has been implemented in our SwinDeW-C cloud workflow system and demonstrates satisfactory performance. The experimental results also show that the ACO-based scheduling algorithm outperforms the others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost, and the CPU time.
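A minimal sketch of the two levels follows: a random pick among QoS-feasible services at the service level, and a cost-minimising Task-to-VM assignment at the task level. The catalogue, prices and the greedy task-level rule are illustrative assumptions; the paper's metaheuristic task-level algorithms (GA, ACO, PSO) are not reproduced here:

```python
import random

# Illustrative service catalogue: (price per task, latency in ms) per service.
services = {"svcA": (0.05, 120), "svcB": (0.03, 300), "svcC": (0.08, 60)}

def service_level(tasks, max_latency):
    """Service-level scheduling sketch: randomly pick, per task, one of the
    services whose advertised QoS satisfies the task's latency constraint
    (standing in for the paper's package-based random algorithm)."""
    feasible = [s for s, (_, latency) in services.items() if latency <= max_latency]
    return {task: random.choice(feasible) for task in tasks}

def task_level(task_hours, vm_prices, vm_speedups):
    """Task-level scheduling sketch: per task, pick the VM minimising running
    cost = hourly price x (base hours / VM speedup). A greedy stand-in for
    the paper's metaheuristics."""
    return {
        task: min(vm_prices, key=lambda vm: vm_prices[vm] * hours / vm_speedups[vm])
        for task, hours in task_hours.items()
    }

print(service_level(["t1", "t2"], max_latency=200))
print(task_level({"t1": 4.0, "t2": 1.0},
                 vm_prices={"small": 0.05, "large": 0.20},
                 vm_speedups={"small": 1.0, "large": 5.0}))
```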

Relevance: 90.00%

Abstract:

Many scientific workflows are data-intensive: large volumes of intermediate data are generated during their execution, and some valuable intermediate data need to be stored for sharing or reuse. Traditionally, such data are selectively stored according to the system's storage capacity, determined manually. As doing science in the cloud has become popular, more intermediate data can be stored in scientific cloud workflows based on a pay-for-use model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenance in scientific workflows. With the IDG, deleted intermediate data can be regenerated, and on this basis we develop a novel intermediate data storage strategy that reduces the cost of scientific cloud workflow systems by automatically storing appropriate intermediate data sets with one cloud service provider. The strategy has significant research merits: it achieves a cost-effective trade-off between computation cost and storage cost, and it is not strongly impacted by inaccurate forecasts of data set usage. It also takes users' tolerance of data access delay into consideration. We utilise Amazon's cost model and apply the strategy to general (random) workflows as well as a specific astrophysics pulsar searching workflow for evaluation. The results show that our strategy can significantly reduce the overall cost of scientific cloud workflow execution.
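The IDG-based reasoning can be sketched as follows: each intermediate data set records its predecessors and generation cost, the regeneration cost of a deleted set accumulates along the graph, and a set is stored only if its usage-weighted regeneration cost exceeds its storage cost. The graph, the costs and the exact form of the decision rule are illustrative assumptions:

```python
# Sketch of an intermediate data dependency graph (IDG): each data set records
# its direct predecessors, the compute cost to generate it from them, and
# whether it is currently stored. All values are illustrative.
idg = {
    "raw":   {"deps": [],        "gen_cost": 0.0, "stored": True},
    "clean": {"deps": ["raw"],   "gen_cost": 2.0, "stored": False},
    "feat":  {"deps": ["clean"], "gen_cost": 5.0, "stored": False},
}

def regeneration_cost(name):
    """Cost to rebuild a data set on demand: its own generation cost plus the
    cost of first regenerating every deleted predecessor along the IDG."""
    node = idg[name]
    if node["stored"]:
        return 0.0
    return node["gen_cost"] + sum(regeneration_cost(d) for d in node["deps"])

def should_store(name, storage_cost, expected_uses):
    """Store a data set iff expected regeneration spending over the period
    exceeds the cost of simply keeping it stored."""
    return regeneration_cost(name) * expected_uses > storage_cost

print(regeneration_cost("feat"))                                # 5 + 2 = 7, since "clean" is deleted too
print(should_store("feat", storage_cost=3.0, expected_uses=1))  # True
```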

Relevance: 90.00%

Abstract:

Cloud computing systems and services have become major targets for cyberattackers. To provide strong protection of cloud platforms, infrastructure, hosted applications, and data stored in the cloud, we need to address the security issue from a range of perspectives: from secure data and application outsourcing, to anonymous communication, to secure multiparty computation. This special issue on cloud security aims to address the importance of protecting and securing cloud platforms, infrastructures, hosted applications, and data storage.

Relevance: 80.00%

Abstract:

With the emergence of cloud computing, the need for flexible, detailed publication and selection of the services that expose cloud resources has become pressing. While dynamic attributes have improved the publication and selection of resources in distributed systems, their use is yet to be tried in Web services, a key element that makes cloud computing possible. We propose a new approach to Web service publication and selection using dynamic attributes exposed in Web service WSDL documents, the most commonly accessed and used elements of Web services.
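WSDL itself has no standard dynamic-attribute element, so the sketch below invents one (the urn:example:dynamic namespace and the <dyn:attribute> element are hypothetical) purely to illustrate publishing current resource state in a WSDL document and selecting a service against it:

```python
import xml.etree.ElementTree as ET

# Hypothetical WSDL fragment: the urn:example:dynamic namespace and the
# <dyn:attribute> extension element are invented for illustration.
wsdl = """<definitions xmlns:dyn="urn:example:dynamic">
  <service name="RenderFarm">
    <dyn:attribute name="freeCores" value="128"/>
    <dyn:attribute name="queueLength" value="3"/>
  </service>
</definitions>"""

def dynamic_attributes(wsdl_text):
    """Extract the dynamic attributes published in a service's WSDL document."""
    root = ET.fromstring(wsdl_text)
    return {a.get("name"): int(a.get("value"))
            for a in root.iter("{urn:example:dynamic}attribute")}

# Selection: accept the service only if its current state meets the request.
attrs = dynamic_attributes(wsdl)
print(attrs, "selected:", attrs["freeCores"] >= 64 and attrs["queueLength"] < 10)
```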

Relevance: 80.00%

Abstract:

This thesis presents a new framework that allows cloud services to be stateful, cloud resource state and characteristics to be published, and brokering to be offered for easy cloud resource discovery and selection. The new technology developed using this framework significantly simplifies the discovery, selection and use of clusters on the Internet.

Relevance: 80.00%

Abstract:

When it comes to grid and cloud computing, there is much debate over how the two relate to each other. A common feature is that both grids and clouds are attempts at utility computing; however, they realize utility computing in different ways. The purpose of this paper is to characterize grid and cloud computing, present a side-by-side comparison of the two, and identify the open areas of research that remain.

Relevance: 80.00%

Abstract:

As an interesting application of cloud computing, content-based image retrieval (CBIR) has attracted a lot of attention, but previous research has focused mainly on improving retrieval performance rather than addressing security issues such as copyright and user privacy. With the increase of security attacks on computer networks, these issues have become critical for CBIR systems. In this paper, we propose a novel two-party watermarking protocol that resolves the issues of user rights and privacy. Unlike previously published protocols, ours does not require the existence of a trusted party. It exhibits three useful features: security against partial watermark removal, security in watermark verification, and non-repudiation. In addition, we report an empirical study of CBIR with the security mechanism in place. The experimental results show that the proposed protocol is practicable and that retrieval performance is not affected by watermarking query images.
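The two-party protocol itself involves cryptographic machinery the abstract does not detail, but the underlying watermarking primitive can be illustrated. The sketch below is generic additive spread-spectrum watermarking on an image feature vector with correlation-based verification; the embedding strength and threshold are assumed values, and this is not the paper's protocol:

```python
import random

ALPHA = 0.2  # embedding strength; an assumed value

def watermark_sequence(seed, n):
    """Pseudorandom +/-1 sequence derived from the watermark key (the seed)."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(features, seed):
    """Additive spread-spectrum embedding into an image feature vector."""
    w = watermark_sequence(seed, len(features))
    return [f + ALPHA * wi for f, wi in zip(features, w)]

def verify(features, seed, threshold=ALPHA / 2):
    """Correlation detector: correlation near ALPHA means the mark is
    present, correlation near 0 means it is absent."""
    w = watermark_sequence(seed, len(features))
    corr = sum(f * wi for f, wi in zip(features, w)) / len(features)
    return corr > threshold

original = [random.gauss(0.0, 1.0) for _ in range(1024)]
marked = embed(original, seed=42)
print(verify(marked, seed=42), verify(original, seed=42))  # True False (with high probability)
```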

Relevance: 80.00%

Abstract:

Nowadays cloud computing has become a major trend that enterprises and research organizations are pursuing with increasing zest. A potentially important application area for clouds is data analytics. In a previous publication, we introduced a novel cloud infrastructure, CloudMiner, which facilitates data mining on massive scientific data. By providing a cloud platform that hosts data mining cloud services following the Software as a Service (SaaS) paradigm, CloudMiner offers the capability to realize cloud-based data mining tasks over traditional distributed databases and other dataset types. However, little attention has been paid so far to the issue of data stream management on the cloud, even though some features of the cloud meet the requirements of data stream management very well. Consequently, we have developed an innovative software framework, called StreamMiner, which is introduced in this paper. It extends CloudMiner to facilitate, in particular, real-world data stream management and analysis using cloud services. We also describe our tentative implementation of the framework, and present and discuss the first experimental performance results achieved with the first StreamMiner prototype.

Relevance: 80.00%

Abstract:

The increasing amount of data collected in the fields of physics and bioinformatics allows researchers to build realistic, and therefore accurate, models and simulations and to gain a deeper understanding of complex systems, often at the cost of greatly increased processing requirements. Cloud computing, which provides on-demand resources, can offset these increased analysis requirements. While beneficial to researchers, adoption of clouds has been slow due to network and performance uncertainties. We compare the performance of cloud computers to clusters to make clear the advantages and limitations of clouds, focusing on understanding how virtualization and the underlying network affect the performance of High Performance Computing (HPC) applications. The collected results indicate that performance comparable to high-performance clusters is achievable on cloud computers, depending on the type of application run.