37 results for cloud environment

in Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

Distributed Denial-of-Service (DDoS) attacks are a major threat to cloud environments. Traditional defence approaches cannot easily be applied to cloud security because of, among other drawbacks, their relatively low efficiency and large storage requirements. In view of this challenge, this paper investigates a Confidence-Based Filtering method, named CBF, for the cloud computing environment. The method operates over two periods: a non-attack period and an attack period. During the non-attack period, legitimate packets are collected and attribute pairs are extracted from them to generate a nominal profile. During the attack period, CBF uses this nominal profile to calculate a score for each incoming packet and decides whether to discard it. Finally, extensive simulations are conducted to evaluate the feasibility of CBF. The results show that CBF has a high scoring speed, a small storage requirement, and acceptable filtering accuracy, making it suitable for real-time filtering in cloud environments.
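
Below is a minimal sketch of the scoring idea described in this abstract, assuming attribute pairs are simple tuples of packet header fields (here source IP, TTL, and destination port) and that the discard threshold is operator-chosen; the helper names are illustrative, not taken from the paper.

```python
from collections import Counter
from itertools import combinations

FIELDS = ("src_ip", "ttl", "dst_port")   # assumed header attributes for the sketch

def build_nominal_profile(legitimate_packets):
    """Count attribute-pair frequencies over packets seen in the non-attack period."""
    counts = Counter()
    total = 0
    for pkt in legitimate_packets:
        for f1, f2 in combinations(FIELDS, 2):
            counts[(f1, pkt[f1], f2, pkt[f2])] += 1
            total += 1
    # Confidence of a pair = its relative frequency in legitimate traffic.
    return {pair: c / total for pair, c in counts.items()}

def cbf_score(packet, profile):
    """Score a packet during the attack period as the mean confidence of its attribute pairs."""
    pairs = [(f1, packet[f1], f2, packet[f2]) for f1, f2 in combinations(FIELDS, 2)]
    return sum(profile.get(p, 0.0) for p in pairs) / len(pairs)

# Usage: packets whose score falls below a chosen threshold are discarded.
legit = [{"src_ip": "10.0.0.1", "ttl": 64, "dst_port": 80}] * 50
profile = build_nominal_profile(legit)
suspect = {"src_ip": "192.0.2.9", "ttl": 250, "dst_port": 6667}
print("discard" if cbf_score(suspect, profile) < 0.01 else "forward")
```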

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is an emerging computing model that provides highly scalable services over the high-speed Internet on a pay-per-use basis. However, cloud-based solutions have still not been widely deployed in some sensitive areas, such as banking and healthcare. This lack of widespread adoption stems from users' concern that their confidential data or privacy could leak in the cloud's outsourced environment. To address this problem, we propose a novel active data-centric framework to improve the transparency and accountability of the actual usage of users' data in the cloud. Our framework emphasises an "active" feature, which packages the raw data with active properties that enforce data usage with defence and protection capabilities. To realise this active scheme, we devise the Triggerable Data File Structure (TDFS). Moreover, we employ a zero-knowledge proof scheme to verify the requester's identity without revealing any vital information. Our experimental outcomes demonstrate the efficiency, dependability, and scalability of our framework.
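
A rough, heavily simplified illustration of the "active data" idea, assuming the triggerable package simply bundles the payload with a usage policy and an access log; the real TDFS and the zero-knowledge identity verification are not reproduced here, and the hash commitment below is only a placeholder for that proof step.

```python
import hashlib
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class TriggerableDataFile:
    """Toy stand-in for a TDFS-style package: the payload is bundled with a usage
    policy triggered on every read, plus an audit log for accountability."""
    payload: bytes
    policy: Callable[[str], bool]               # decides whether a requester may read
    identity_commitment: str                    # hash commitment the requester must match
    access_log: List[Tuple[str, bool]] = field(default_factory=list)

    def read(self, requester_id: str, secret: str) -> bytes:
        # The paper verifies identity with a zero-knowledge proof; a plain hash
        # commitment is used here purely as a placeholder for that step.
        verified = hashlib.sha256(secret.encode()).hexdigest() == self.identity_commitment
        allowed = verified and self.policy(requester_id)
        self.access_log.append((requester_id, allowed))    # transparency record
        if not allowed:
            raise PermissionError(f"access by {requester_id} denied by active policy")
        return self.payload

# Usage: only the holder of the committed secret who satisfies the policy gets the data.
tdf = TriggerableDataFile(
    payload=b"patient record",
    policy=lambda uid: uid.endswith("@hospital.example"),
    identity_commitment=hashlib.sha256(b"alice-secret").hexdigest(),
)
print(tdf.read("alice@hospital.example", "alice-secret"))
print(tdf.access_log)
```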

Relevance:

100.00%

Publisher:

Relevance:

70.00%

Publisher:

Abstract:

Recent developments in sensor networks and cloud computing have seen the emergence of a new platform called sensor-clouds. While the proposition of such a platform is to virtualise the management of physical sensor devices, we are seeing novel applications being created based on a new class of social sensors. Social sensors are effectively a human-device combination that sends torrents of data as a result of social interactions and social events. The data generated appear in different formats, such as photographs, videos, and short text messages. Unlike other sensor devices, social sensors operate under the control of individuals via their mobile devices, such as a phone or a laptop. And unlike other sensors that generate data at a constant rate or in a fixed format, social sensors generate data that are sporadic and varied, often in response to events as individual as a dinner outing or a news announcement of interest to the public. This collective presence of social data creates opportunities for novel applications never experienced before. This paper discusses such applications that result from utilising social sensors within a sensor-cloud environment. The associated research problems are also presented.

Relevance:

70.00%

Publisher:

Abstract:

In the past few years, cloud computing has emerged as one of the most influential paradigms in the IT industry. As promising as it is, this paradigm brings many new challenges for data security, because users have to outsource sensitive data to untrusted cloud servers for sharing. In this paper, to guarantee the confidentiality and security of data sharing in the cloud environment, we propose a Flexible and Efficient Access Control Scheme (FEACS) based on Attribute-Based Encryption, which is suitable for fine-grained access control. Compared with existing state-of-the-art schemes, FEACS is more practical in the following respects. First, since user membership may change frequently in a cloud environment, FEACS can cope with dynamic membership efficiently. Second, full logic expressions are supported, so access policies can be described accurately and efficiently. In addition, we prove in the standard model that FEACS is secure under the Decisional Bilinear Diffie-Hellman assumption. To evaluate the practicality of FEACS, we provide a detailed theoretical performance analysis and a simulation comparison with existing schemes. Both the theoretical analysis and the experimental results show that our scheme is efficient and effective for the cloud environment.
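
A minimal sketch of the access-policy side of such a scheme, assuming policies are nested AND/OR/NOT expressions over attribute names; the bilinear-pairing cryptography on which FEACS actually relies is not shown.

```python
def satisfies(policy, attributes):
    """policy is a nested tuple: ("ATTR", name), ("NOT", p), ("AND", p1, p2, ...), ("OR", ...)."""
    op = policy[0]
    if op == "ATTR":
        return policy[1] in attributes
    if op == "NOT":
        return not satisfies(policy[1], attributes)
    children = policy[1:]
    if op == "AND":
        return all(satisfies(c, attributes) for c in children)
    if op == "OR":
        return any(satisfies(c, attributes) for c in children)
    raise ValueError(f"unknown operator {op}")

# Usage: the policy "(doctor OR nurse) AND cardiology AND NOT revoked".
policy = ("AND",
          ("OR", ("ATTR", "doctor"), ("ATTR", "nurse")),
          ("ATTR", "cardiology"),
          ("NOT", ("ATTR", "revoked")))
print(satisfies(policy, {"nurse", "cardiology"}))              # True
print(satisfies(policy, {"doctor", "cardiology", "revoked"}))  # False
```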

Relevance:

70.00%

Publisher:

Abstract:

Cloud and service computing has started to change the way research in science, in particular biology and medicine, is carried out. Researchers who take advantage of this technology (making use of public and private cloud compute resources) can process large amounts of data (big data) and speed up discovery. However, this requires researchers to acquire solid knowledge and skills in sequential and high-performance computing (HPC) development, as well as a background in cloud development and deployment. In response, a technology that exposes HPC applications as services through the development and deployment of a SaaS cloud, together with its proof of concept in the form of a cloud environment called Uncinus, has been developed and implemented to give researchers easy access to cloud computing resources. Uncinus supports the development of applications as services and the sharing of compute resources to speed up application execution. Users access these cloud resources and services through web interfaces. Using the Uncinus platform, a bioinformatics workflow was executed on private (HPC) cloud, server, and public cloud (Amazon EC2) resources; the performance results show a three-fold improvement over local resources. Biology and medicine specialists with no background in programming or in deploying applications on clouds could run the case study applications with ease.
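
A rough sketch of the "HPC application as a service" idea, assuming a web endpoint that accepts input and submits a batch job to a SLURM-style scheduler; the endpoint, script name, and scheduler are assumptions, as the abstract does not describe Uncinus's actual interfaces.

```python
import subprocess
import uuid
from flask import Flask, request, jsonify

app = Flask(__name__)
jobs = {}   # job_id -> scheduler reply (in-memory registry for the sketch)

@app.route("/run/<application>", methods=["POST"])
def run_application(application):
    """Accept uploaded input data and submit the named HPC application as a batch job."""
    job_id = str(uuid.uuid4())
    input_path = f"/tmp/{job_id}.in"
    with open(input_path, "wb") as f:
        f.write(request.data)                       # store the uploaded input data
    # Submit via a SLURM-style scheduler; stdout holds the scheduler's own job id.
    result = subprocess.run(
        ["sbatch", f"--job-name={application}", "run_app.sh", application, input_path],
        capture_output=True, text=True)
    jobs[job_id] = result.stdout.strip()
    return jsonify({"job_id": job_id, "scheduler_reply": jobs[job_id]})

if __name__ == "__main__":
    app.run(port=8080)
```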

Relevance:

70.00%

Publisher:

Abstract:

Cloud service selection in a multi-cloud computing environment is receiving more and more attention. The abundance of emerging cloud service resources makes it hard for users to select the most suitable services for their applications in a changing multi-cloud environment, especially for online real-time applications. To help users efficiently select their preferred cloud services, a cloud service selection model that adopts cloud service brokers is given, and based on this model, a dynamic cloud service selection strategy named DCS is put forward. In the selection process, each cloud service broker manages a number of clustered cloud services and performs the DCS strategy, whose core is an adaptive learning mechanism comprising incentive, forgetting, and degenerate functions. The mechanism is devised to dynamically optimise cloud service selection and return the best service result to the user. Correspondingly, a set of dynamic cloud service selection algorithms is presented in this paper to implement the mechanism. Simulation results show that our strategy achieves better overall performance and efficiency in acquiring high-quality service solutions at a lower computing cost than existing approaches.
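
A minimal sketch of the adaptive-learning idea behind a DCS-style broker, assuming simple additive and multiplicative updates; the paper's actual incentive, forgetting, and degenerate functions are not given in the abstract, so the formulas below are illustrative only.

```python
import random

class ServiceBroker:
    """Keeps a preference weight per managed service: good outcomes are rewarded
    ("incentive"), all weights slowly decay ("forgetting"), poor outcomes are
    penalised ("degenerate")."""
    def __init__(self, services, reward=0.2, penalty=0.3, decay=0.99):
        self.weights = {s: 1.0 for s in services}
        self.reward, self.penalty, self.decay = reward, penalty, decay

    def select(self):
        # Pick a service with probability proportional to its current weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        acc, chosen = 0.0, None
        for service, w in self.weights.items():
            chosen = service
            acc += w
            if r <= acc:
                break
        return chosen

    def feedback(self, service, satisfied):
        for s in self.weights:                       # forgetting: decay toward indifference
            self.weights[s] *= self.decay
        if satisfied:                                # incentive: reward the good outcome
            self.weights[service] += self.reward
        else:                                        # degenerate: penalise the poor outcome
            self.weights[service] = max(0.01, self.weights[service] - self.penalty)

# Usage: the broker gradually prefers the service that keeps meeting QoS targets.
broker = ServiceBroker(["storage-A", "storage-B", "storage-C"])
for _ in range(100):
    chosen = broker.select()
    broker.feedback(chosen, satisfied=(chosen == "storage-B"))
print(max(broker.weights, key=broker.weights.get))   # most likely "storage-B"
```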

Relevance:

70.00%

Publisher:

Abstract:

Due to the increasing energy consumption of cloud data centres, energy saving has become a vital objective in designing the underlying cloud infrastructure. A precise energy consumption model is the foundation of many energy-saving strategies. This paper explores the energy consumption of virtual machines running various CPU-intensive activities on a cloud server using two types of models: traditional time-series models, such as ARMA and ES, and time-series segmentation models, such as the sliding-window and bottom-up models. We built a cloud environment using OpenStack and conducted extensive experiments to analyse and compare the prediction accuracy of these strategies. The results indicate that the ES model performs better than the ARMA model in predicting the energy consumption of known activities. When predicting the energy consumption of unknown activities, both the sliding-window and bottom-up segmentation models perform satisfactorily, with the former slightly better than the latter.
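
The sketch below illustrates, under simplifying assumptions, the two kinds of model compared above: simple exponential smoothing (ES) for one-step-ahead prediction, and a sliding-window segmentation that splits the series wherever a straight-line fit no longer holds; parameters and thresholds are illustrative, not from the paper.

```python
def exponential_smoothing_forecast(series, alpha=0.5):
    """Return the one-step-ahead ES forecast for the next point."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def sliding_window_segments(series, max_error=1.0):
    """Grow a window until a straight line no longer fits it, then start a new segment."""
    segments, start = [], 0
    for end in range(2, len(series) + 1):
        window = series[start:end]
        slope = (window[-1] - window[0]) / (len(window) - 1)
        error = max(abs(window[i] - (window[0] + slope * i)) for i in range(len(window)))
        if error > max_error:
            segments.append((start, end - 1))
            start = end - 1
    segments.append((start, len(series)))
    return segments

watts = [50, 51, 52, 80, 81, 82, 83, 55, 54]   # synthetic per-interval power readings (W)
print(exponential_smoothing_forecast(watts))
print(sliding_window_segments(watts, max_error=2.0))
```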

Relevance:

70.00%

Publisher:

Abstract:

Cloud computing has been establishing itself as the latest computing paradigm in recent years. As doing science in the cloud becomes a reality, scientists can now access public cloud centres and employ high-performance computing resources to run scientific applications. However, due to the dynamic nature of the cloud environment, the usability of scientific cloud workflow systems can deteriorate significantly without effective service quality assurance strategies. Specifically, workflow temporal verification, as the major approach to workflow temporal QoS (Quality of Service) assurance, plays a critical role in the on-time completion of large-scale scientific workflows. Great effort has been dedicated to workflow temporal verification in recent years, and it is high time to define the key research issues for scientific cloud workflows in order to keep research on the right track. In this paper, we systematically investigate this problem and present four key research issues based on the introduction of a generic temporal verification framework. State-of-the-art solutions for each research issue and open challenges are also presented. Finally, SwinDeW-V, an ongoing research project on temporal verification and part of our SwinDeW-C cloud workflow system, is demonstrated.
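
A minimal sketch of the checkpoint idea underlying workflow temporal verification, with assumed inputs: at a checkpoint, the elapsed time plus the estimated remaining duration is compared against the deadline so that a temporal violation can be flagged early; the names and slack margin are illustrative, not the paper's framework.

```python
def verify_checkpoint(elapsed, remaining_estimates, deadline, margin=0.05):
    """Return 'consistent' if the workflow is still expected to finish on time."""
    expected_finish = elapsed + sum(remaining_estimates)
    slack = deadline - expected_finish
    if slack >= margin * deadline:
        return "consistent"
    if slack >= 0:
        return "at risk"        # still feasible, but little slack remains
    return "violation"          # on-time completion no longer expected

# Usage: 40 time units used, three activities estimated at 10/15/20 left, deadline 100.
print(verify_checkpoint(elapsed=40, remaining_estimates=[10, 15, 20], deadline=100))
```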

Relevance:

70.00%

Publisher:

Abstract:

Cloud computing, as the latest computing paradigm, has shown a promising future for business workflow systems facing massive concurrent user requests and complicated computing tasks. With the fast growth of cloud data centres, energy management, especially energy monitoring and saving, in cloud workflow systems has been attracting increasing attention. The energy for running a cloud workflow instance obviously depends mainly on the energy for executing its workflow activities. However, existing energy management strategies mainly monitor the virtual machines rather than the workflow activities running on them, and hence it is difficult to directly monitor and optimise the energy consumption of cloud workflows. To address this issue, we propose an effective energy testing framework for cloud workflow activities. The framework can accurately test and analyse the baseline energy of physical and virtual machines in the cloud environment and then obtain energy consumption data for cloud workflow activities. Based on these data, we can further build an energy consumption model and apply energy prediction strategies. Our experiments are conducted in an OpenStack-based cloud computing environment. The effectiveness of our framework has been verified through a detailed case study and a set of energy modelling and prediction experiments based on representative time-series models.
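
A small sketch of the attribution idea described above, under assumed inputs: the idle (baseline) power of a machine is measured first, and a workflow activity is then charged for the energy drawn above that baseline while it runs; the sampling format and field names are assumptions.

```python
def activity_energy(power_samples, baseline_watts, interval_s):
    """power_samples: machine power (W) sampled every interval_s seconds while the
    activity runs; the activity is charged only for power above the idle baseline."""
    extra = [max(0.0, p - baseline_watts) for p in power_samples]
    return sum(extra) * interval_s          # joules attributable to the activity

# Usage: a 5-sample run on a VM whose idle baseline is 40 W, sampled every 2 s.
samples = [41.0, 78.5, 90.2, 88.7, 45.3]
print(f"{activity_energy(samples, baseline_watts=40.0, interval_s=2):.1f} J")
```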

Relevance:

70.00%

Publisher:

Abstract:

Scientific workflows are complicated, data-intensive applications. How to achieve an effective data placement scheme in a hybrid cloud environment has become a crucial issue, especially given the new challenges brought by security requirements. Traditional data placement strategies usually adopt a load-balancing-based partition model to allocate datasets. Although such placement schemes perform well in load balancing, their data transfer time may not be optimal. In contrast, this paper focuses on the hybrid cloud environment and proposes a data-dependency-destruction-based partition model that minimises data dependency destruction in the partition. In addition, it presents a novel datacenter-oriented data placement strategy that allocates highly dependent datasets to the same datacenter according to the new partition model, and thus significantly reduces data transfer time between datacenters. Experimental results show that the proposed strategy can effectively reduce data transfer time during workflow execution.
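
As a rough illustration (not the paper's partition model), the greedy sketch below tries to keep strongly dependent datasets in the same datacenter while honouring fixed placements for security-constrained datasets; all names and the capacity limit are assumptions.

```python
def place_datasets(dependency, capacity, fixed=None):
    """dependency: {(a, b): weight}; capacity: max datasets per datacenter;
    fixed: {dataset: datacenter} for security-constrained datasets."""
    placement = dict(fixed or {})
    centers = {}
    for ds, dc in placement.items():
        centers.setdefault(dc, set()).add(ds)
    # Consider the strongest dependencies first and co-locate their endpoints.
    for (a, b), _w in sorted(dependency.items(), key=lambda kv: -kv[1]):
        for x, y in ((a, b), (b, a)):
            if x in placement and y not in placement:
                dc = placement[x]
                if len(centers[dc]) < capacity:
                    placement[y] = dc
                    centers[dc].add(y)
    # Any dataset still unplaced goes to the least-loaded (or a new) datacenter.
    all_datasets = {d for pair in dependency for d in pair}
    for ds in sorted(all_datasets - placement.keys()):
        dc = min(centers, default="dc0", key=lambda c: len(centers[c]))
        centers.setdefault(dc, set())
        if len(centers[dc]) >= capacity:
            dc = f"dc{len(centers)}"
            centers[dc] = set()
        placement[ds] = dc
        centers[dc].add(ds)
    return placement

# Usage: d1 is pinned to a private datacenter; highly dependent datasets follow it.
deps = {("d1", "d2"): 9, ("d2", "d3"): 7, ("d3", "d4"): 2}
print(place_datasets(deps, capacity=2, fixed={"d1": "private-dc"}))
```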

Relevance:

60.00%

Publisher:

Abstract:

Recent developments in sensor networks and cloud computing have seen the emergence of a new platform called sensor-clouds. While the proposition of such a platform is to virtualise the management of physical sensor devices, we foresee novel applications being created based on a new class of social sensors. Social sensors are effectively a human-device combination that sends torrents of data as a result of social interactions. The data generated appear in different formats, such as photographs, videos, and short texts. Unlike other sensor devices, social sensors operate under the control of individuals via their mobile devices, such as smartphones, tablets, or laptops. Further, they do not generate data at a constant rate or in a fixed format as other sensors do. Instead, data from social sensors are sporadic and varied, often in response to social events or a news announcement of interest to the public. This collective presence of social data creates opportunities for novel applications never experienced before. This paper discusses three such applications utilising social sensors within a sensor-cloud environment. The associated research problems are also presented.

Relevance:

60.00%

Publisher:

Abstract:

With the advent of cloud computing, IDS as a service (IDSaaS) has been proposed as an alternative for protecting a network (e.g., a financial organisation) from a wide range of network attacks by offloading expensive operations, such as signature matching, to the cloud. IDSaaS can be roughly classified into two types: signature-based detection and anomaly-based detection. During packet inspection, no party wants to disclose its own data, especially sensitive information, to others, even to the cloud provider, due to privacy concerns. However, current IDSaaS solutions have not discussed this issue in depth. In this work, focusing on signature-based IDSaaS, we begin by designing a privacy-preserving intrusion detection mechanism whose main feature is that the process of signature matching, performed by means of a fingerprint-based comparison, does not reveal any specific content of network packets. We further evaluate this mechanism under a cloud scenario and identify several open problems and issues in designing such a privacy-preserving mechanism for IDSaaS in a practical environment.
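
A minimal sketch in the spirit of the fingerprint-based comparison mentioned above: both the signatures and the packet payload are reduced to hashes of fixed-length windows, so the matching party compares fingerprints rather than raw content; the window size and hashing choices are assumptions, not the paper's construction.

```python
import hashlib

WINDOW = 8   # bytes per fingerprinted window (an assumption for the sketch)

def fingerprint(data: bytes):
    """Hash every WINDOW-byte sliding window of the data."""
    return {hashlib.sha256(data[i:i + WINDOW]).hexdigest()
            for i in range(len(data) - WINDOW + 1)}

def matches_signature(packet_payload: bytes, signature_fingerprints: set) -> bool:
    """The matcher only ever compares hashes, never the plaintext payload."""
    return not fingerprint(packet_payload).isdisjoint(signature_fingerprints)

# Usage: the tenant pre-computes fingerprints of its signatures and ships only those.
sig_fp = fingerprint(b"GET /admin.php?cmd=")
print(matches_signature(b"xxGET /admin.php?cmd=rm -rf /", sig_fp))   # True
print(matches_signature(b"GET /index.html HTTP/1.1", sig_fp))        # False
```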

Relevance:

60.00%

Publisher:

Abstract:

The cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can beat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defence is resource competition between defenders and attackers. A cloud usually possesses abundant resources and has full control over, and dynamic allocation capability for, those resources. Therefore, the cloud offers the potential to overcome DDoS attacks. However, individual cloud-hosted servers are still vulnerable to DDoS attacks if they run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim, in order to quickly filter out attack packets while simultaneously guaranteeing the quality of service for benign users. We establish a mathematical model based on queueing theory to approximate the required resource investment. Through careful system analysis and real-world data set experiments, we conclude that DDoS attacks can be defeated in a cloud environment.
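
A small sketch of sizing the pool of cloned intrusion prevention servers with a queueing model, assuming an M/M/c system (the abstract names queueing theory but not a specific model): we find the smallest number of filtering servers that keeps the probability of a packet having to queue below a target.

```python
from math import factorial

def erlang_c(c, lam, mu):
    """Probability an arriving packet has to wait in an M/M/c queue."""
    a = lam / mu                     # offered load in Erlangs
    rho = a / c
    if rho >= 1:
        return 1.0                   # unstable: the queue grows without bound
    top = (a ** c) / (factorial(c) * (1 - rho))
    bottom = sum((a ** k) / factorial(k) for k in range(c)) + top
    return top / bottom

def servers_needed(lam, mu, max_wait_prob=0.01):
    """Smallest number of cloned filtering servers meeting the waiting-probability target."""
    c = 1
    while erlang_c(c, lam, mu) > max_wait_prob:
        c += 1
    return c

# Usage: a 90,000 pkt/s flood against servers that each filter 10,000 pkt/s.
print(servers_needed(lam=90_000, mu=10_000))
```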

Relevance:

60.00%

Publisher:

Abstract:

Mobile cloud computing has emerged as a key enabling technology for overcoming the physical limitations of mobile devices and delivering scalable and flexible mobile services. In the mobile cloud environment, searchable encryption, which enables searching directly over encrypted data, is a key technique for maintaining both the privacy and the usability of outsourced data in the cloud. To address this issue, many research efforts resort to searchable symmetric encryption (SSE) and searchable public-key encryption (SPE). In this paper, we improve on existing work by developing a more practical searchable encryption technique that supports dynamic update operations in mobile cloud applications. Specifically, we take advantage of both SSE and SPE techniques and propose PSU, a Personalized Search scheme over encrypted data with efficient and secure Updates in the mobile cloud. Through a thorough security analysis, we demonstrate that PSU achieves a high security level. Using extensive experiments in a real-world mobile environment, we show that PSU is more efficient than existing proposals.
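
A toy sketch of the searchable-encryption workflow described above, where the client derives a deterministic keyword trapdoor (an HMAC here) and the server stores an index that can be searched and updated without seeing plaintext keywords; this illustrates the interface only and is neither the PSU scheme nor a secure construction.

```python
import hmac, hashlib

class ToyEncryptedIndex:
    """Server-side index mapping keyword trapdoors to document identifiers."""
    def __init__(self):
        self.index = {}                           # trapdoor -> set of document ids

    def _trapdoor(self, key: bytes, keyword: str) -> str:
        # Deterministic per-keyword token; the server never sees the keyword itself.
        return hmac.new(key, keyword.encode(), hashlib.sha256).hexdigest()

    def add(self, key, keyword, doc_id):          # dynamic update: insert
        self.index.setdefault(self._trapdoor(key, keyword), set()).add(doc_id)

    def delete(self, key, keyword, doc_id):       # dynamic update: remove
        self.index.get(self._trapdoor(key, keyword), set()).discard(doc_id)

    def search(self, key, keyword):
        return self.index.get(self._trapdoor(key, keyword), set())

# Usage from the (mobile) client side, which alone holds the secret key.
k = b"client secret key"
idx = ToyEncryptedIndex()
idx.add(k, "invoice", "doc-17")
idx.add(k, "invoice", "doc-42")
idx.delete(k, "invoice", "doc-17")
print(idx.search(k, "invoice"))                   # {'doc-42'}
```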