174 results for Cloud OS, cloud operating system, cloud computing


Relevance:

100.00%

Publisher:

Abstract:

One of the primary issues in the efficient and effective utilization of distributed computing is resource management and scheduling. As resource failure is a common occurrence in distributed computing, deploying support for integrated scheduling and fault tolerance becomes of paramount importance. To this end, we propose a fault-tolerant dynamic scheduling policy that loosely couples dynamic job scheduling with a job replication scheme so that jobs are executed efficiently and reliably. The novelty of the proposed algorithm is that it uses a passive replication approach under high system load and an active replication approach under low system load. The switch between these two replication methods is also performed dynamically and transparently. A performance evaluation of the proposed fault-tolerant scheduler, including a comparison with a similar fault-tolerant scheduling policy, is presented and shows that the proposed policy performs better than the existing approach.
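
The sketch below is only an illustration of the load-based switch described above, not the paper's algorithm; the threshold, the Job structure and the load monitor are assumptions introduced for the example.

    # Illustrative sketch: switch between passive (primary plus standby backup)
    # and active (all replicas in parallel) replication based on an assumed
    # system-load threshold.
    from dataclasses import dataclass

    LOAD_THRESHOLD = 0.7  # assumed cut-off between "low" and "high" load

    @dataclass
    class Job:
        job_id: int
        replicas: int = 2

    def system_load(busy_nodes: int, total_nodes: int) -> float:
        """Fraction of busy nodes; stands in for a real load monitor."""
        return busy_nodes / total_nodes

    def schedule(job: Job, busy_nodes: int, total_nodes: int) -> str:
        if system_load(busy_nodes, total_nodes) >= LOAD_THRESHOLD:
            # Passive replication: run the primary only and keep cold backups
            # that are activated if the primary's node fails.
            return f"job {job.job_id}: primary dispatched, {job.replicas - 1} backup(s) on standby"
        # Active replication: dispatch all replicas in parallel; the first
        # successful completion masks any node failure.
        return f"job {job.job_id}: {job.replicas} replicas dispatched in parallel"

    print(schedule(Job(1), busy_nodes=9, total_nodes=10))  # high load -> passive
    print(schedule(Job(2), busy_nodes=2, total_nodes=10))  # low load  -> active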

Relevance:

100.00%

Publisher:

Abstract:

The deployment of applications and scientific workflows that require resources from multiple distributed platforms is fuelling the federation of autonomous clouds to create cyber-infrastructure environments. As the scope of federated cloud computing expands to ubiquitous and pervasive computing, there will be a need to assess and maintain the trustworthiness of the cloud computing entities. In this paper, we present a fully distributed framework that enables interested parties to determine the trustworthiness of federated cloud computing entities.
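
As a rough illustration of trust assessment (this is not the framework proposed in the paper), the sketch below uses the well-known beta-reputation estimate to turn per-entity feedback counts into a trust score; the entity names and counts are hypothetical.

    # Beta-reputation estimate: expected probability of good behaviour given
    # `positive` good and `negative` bad interactions reported by peers.
    def trust_score(positive: int, negative: int) -> float:
        return (positive + 1) / (positive + negative + 2)

    def rank_entities(feedback: dict) -> list:
        """feedback maps entity -> (positive, negative); returns entities by trust."""
        scores = {e: trust_score(p, n) for e, (p, n) in feedback.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical feedback aggregated from federation members.
    print(rank_entities({"cloud-A": (40, 2), "cloud-B": (5, 5)}))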

Relevance:

100.00%

Publisher:

Abstract:

The increasing amount of data collected in the fields of physics and bio-informatics allows researchers to build realistic, and therefore accurate, models and simulations and gain a deeper understanding of complex systems. This analysis often comes at the cost of greatly increased processing requirements. Cloud computing, which provides on-demand resources, can offset the increased analysis requirements. While beneficial to researchers, adoption of clouds has been slow due to network and performance uncertainties. We compare the performance of cloud computers to clusters to make clear the advantages and limitations of clouds. The focus is on understanding how virtualization and the underlying network affect the performance of High Performance Computing (HPC) applications. The collected results indicate that, depending on the type of application run, performance comparable to that of high-performance clusters is achievable on cloud computers.

Relevance:

100.00%

Publisher:

Abstract:

VMD and NAMD are two major molecular dynamics simulation software packages, which can work together to mine structural information from bio-molecules. Carrying out such molecular dynamics simulations helps researchers understand the roles and functions of various bio-molecules in life science research. Recently, clouds have provided HPC clusters on demand, allowing users to benefit from their flexibility, elasticity, and lower costs. Although cloud computing promises seamless access to HPC clusters through the abstraction of services, which hides the details of the underlying software and hardware infrastructure, users without in-depth computing knowledge are still forced to cope with many low-level system and programming details. Therefore, we have designed and developed a software plugin for VMD that provides an integrated framework for NAMD to be executed on Amazon EC2. The proposed Amazon EC2 Plugin for VMD frees users from performing many tedious computing tasks, such as launching, connecting to and terminating Amazon EC2 compute instances; configuring an HPC cluster; and installing middleware and software applications, before the system is ready for scientific investigation. This allows VMD/NAMD users to spend less time getting applications to work on HPC clusters and more time on bio-research.
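
For readers unfamiliar with what such lifecycle automation involves, the sketch below shows the kind of launch/wait/terminate chores handled behind the scenes, expressed with the boto3 library; it is not the plugin's code, and the AMI ID, key pair name and instance type are placeholders.

    # Minimal boto3 sketch of EC2 lifecycle chores (launch, wait, terminate).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def launch_cluster(ami_id: str, count: int, instance_type: str = "c5.large"):
        """Start `count` compute instances and wait until they are running."""
        resp = ec2.run_instances(
            ImageId=ami_id,            # placeholder: an image with NAMD pre-installed
            InstanceType=instance_type,
            MinCount=count,
            MaxCount=count,
            KeyName="my-keypair",      # placeholder key pair
        )
        ids = [i["InstanceId"] for i in resp["Instances"]]
        ec2.get_waiter("instance_running").wait(InstanceIds=ids)
        return ids

    def terminate_cluster(instance_ids):
        """Release the instances once the simulation has finished."""
        ec2.terminate_instances(InstanceIds=instance_ids)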

Relevance:

100.00%

Publisher:

Abstract:

Traffic classification is an essential tool for network and system security in complex environments such as cloud computing. State-of-the-art traffic classification methods aim to take advantage of flow statistical features and machine learning techniques; however, their classification performance is severely affected by limited supervised information and unknown applications. To achieve effective network traffic classification, we propose a new method that tackles the problem of unknown applications in the crucial situation of a small supervised training set. The proposed method possesses a superior capability for detecting unknown flows generated by unknown applications, and it utilizes the correlation information among real-world network traffic to boost classification performance. A theoretical analysis is provided to confirm the performance benefit of the proposed method. Moreover, a comprehensive performance evaluation conducted on two real-world network traffic datasets shows that the proposed scheme outperforms existing methods in this critical network environment.
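
The sketch below is a generic illustration of the unknown-flow idea only, not the method proposed in the paper: a classifier trained on a small labelled set of flow statistics flags low-confidence predictions as coming from an unknown application. The features, threshold and data are synthetic assumptions.

    # Flag low-confidence flows as "unknown" (-1) using a standard classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(60, 3))        # synthetic flow features, e.g.
    y_train = rng.integers(0, 2, size=60)     # [mean pkt size, duration, pkts/s]
    X_test = rng.normal(size=(10, 3))

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    proba = clf.predict_proba(X_test)
    confidence = proba.max(axis=1)
    predicted = clf.classes_[proba.argmax(axis=1)]
    labels = np.where(confidence >= 0.7, predicted, -1)  # -1 = unknown application
    print(labels)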

Relevance:

100.00%

Publisher:

Abstract:

In modern computing paradigms, most computing systems, e.g., cluster computing, grid computing, cloud computing, the Internet, telecommunication networks, Cyber-Physical Systems (CPS), and Machine-to-Machine communication networks (M2M), are parallel and distributed systems. While providing improved expandability, manageability, efficiency, and reliability, parallel and distributed systems expose security weaknesses on an unprecedented scale. As the system devices are widely connected, their vulnerabilities are shared by the entire system. Because tasks are allocated to, and information is exchanged among, system devices that may belong to different users, trust, security, and privacy issues have yet to be resolved. This special issue of the IEEE Transactions on Parallel and Distributed Systems (TPDS) highlights recent advances in trust, security, and privacy for emerging parallel and distributed systems. This special issue was initiated by Dr. Xu Li, Dr. Patrick McDaniel, Dr. Radha Poovendran, and Dr. Guojun Wang. Due to the large number of submissions, Dr. Zhenfu Cao, Dr. Keqiu Li, and Dr. Yang Xiang were later invited to join the editorial team. Dr. Xu Li was responsible for coordinating the paper review process. In response to the call for papers, we received 150 effective submissions, of which 24 are included in this special issue after rigorous review and careful revision, representing an acceptance ratio of 16 percent. The accepted papers are divided into three groups, covering issues related to trust, security, and privacy, respectively.

Relevance:

100.00%

Publisher:

Abstract:

The cloud is becoming a dominant computing platform. Naturally, a question that arises is whether we can defeat notorious DDoS attacks in a cloud environment. Researchers have demonstrated that the essential issue of DDoS attack and defense is resource competition between defenders and attackers. A cloud usually possesses abundant resources and has full control of, and dynamic allocation capability over, those resources. Therefore, the cloud offers the potential to overcome DDoS attacks. However, individual cloud-hosted servers are still vulnerable to DDoS attacks if they continue to run in the traditional way. In this paper, we propose a dynamic resource allocation strategy to counter DDoS attacks against individual cloud customers. When a DDoS attack occurs, we employ the idle resources of the cloud to clone sufficient intrusion prevention servers for the victim, in order to quickly filter out attack packets and simultaneously guarantee quality of service for benign users. We establish a mathematical model, based on queueing theory, to approximate the required resource investment. Through careful system analysis and real-world data set experiments, we conclude that we can defeat DDoS attacks in a cloud environment. © 2013 IEEE.
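
The paper's exact queueing model is not reproduced here, but the sketch below shows the flavour of such a calculation: treating the filtering tier as an M/M/m queue and finding the smallest number of cloned intrusion prevention servers that keeps the Erlang-C waiting probability below an assumed target. The arrival and service rates are hypothetical.

    # Size the filtering tier with the Erlang-C formula for an M/M/m queue.
    from math import factorial

    def erlang_c(m: int, a: float) -> float:
        """Probability that an arriving request has to wait (offered load a = lam/mu)."""
        rho = a / m
        if rho >= 1.0:
            return 1.0
        top = a**m / factorial(m)
        bottom = (1 - rho) * sum(a**k / factorial(k) for k in range(m)) + top
        return top / bottom

    def servers_needed(lam: float, mu: float, max_wait_prob: float = 0.05) -> int:
        a = lam / mu
        m = max(1, int(a) + 1)          # start just above the stability bound
        while erlang_c(m, a) > max_wait_prob:
            m += 1
        return m

    # e.g. 9,000 requests/s during an attack, 500 requests/s per cloned server
    print(servers_needed(lam=9000.0, mu=500.0))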

Relevance:

100.00%

Publisher:

Abstract:

Transparent computing is an emerging computing paradigm in which users can enjoy any kind of service over networks on demand, with any device, without caring about the underlying deployment details. In transparent computing, all software resources (even the OS) are stored on remote servers, from which clients can request resources for local execution in a block-streaming way. This paradigm has many benefits, including a cross-platform experience, user orientation, and platform independence. However, due to its fundamental features, e.g., the separation of computation and storage in clients and servers, respectively, and block-streaming-based scheduling and execution, transparent computing faces many new security challenges that may become its biggest obstacle. In this paper, we propose a Transparent Computing Security Architecture (TCSA), which builds user-controlled security for transparent computing by allowing users to configure the desired security environments on demand. We envision that TCSA, which allows users to take the initiative in protecting their own data, is a promising solution for data security in transparent computing. © 2014 IEEE.
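
Purely as a hypothetical illustration of user-controlled security (the field names below are assumptions, not TCSA's actual interface), a client-side security profile could be expressed and turned into an on-demand request like this:

    # Hypothetical user-configured security profile for a transparent-computing client.
    from dataclasses import dataclass, field

    @dataclass
    class SecurityProfile:
        user: str
        os_image: str                        # remotely stored OS the user wants to run
        verify_block_integrity: bool = True  # check streamed blocks before execution
        encrypt_transport: bool = True       # protect blocks in transit
        allowed_servers: list = field(default_factory=list)

    def request_environment(profile: SecurityProfile) -> dict:
        """Turn the user's profile into a (mock) request for the server side."""
        return {
            "user": profile.user,
            "os_image": profile.os_image,
            "integrity": profile.verify_block_integrity,
            "tls": profile.encrypt_transport,
            "servers": profile.allowed_servers or ["any"],
        }

    print(request_environment(SecurityProfile("alice", "linux-minimal",
                                              allowed_servers=["srv-01"])))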

Relevance:

100.00%

Publisher:

Abstract:

Multi-tenancy is a cloud computing phenomenon. Multiple instances of an application occupy and share resources from a large pool, allowing different users to have their own version of the same application running and coexisting on the same hardware but in isolated virtual spaces. In this position paper we survey the current landscape of multi-tenancy, laying out the challenges and complexity of software engineering where multi-tenancy issues are involved. Multi-tenancy allows cloud service providers to better utilise computing resources, supporting the development of more flexible services for customers based on economies of scale, and reducing overheads and infrastructural costs. Nevertheless, there are major challenges in migrating from single-tenant applications to multi-tenancy, and these have not been fully explored in research or practice to date. In particular, the reengineering effort for multi-tenancy in Software-as-a-Service cloud applications involves many complex and important aspects that should be taken into consideration, such as security, scalability, scheduling, and data isolation. Our study emphasizes scheduling policies and cloud provisioning and deployment with regard to multi-tenancy issues. We employ CloudSim and MapReduce in our experiments to simulate and analyse multi-tenancy models, scenarios, performance, scalability, scheduling and reliability on cloud platforms.
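
As a toy illustration of the kind of multi-tenant scheduling scenario such simulations explore (this is plain Python, not CloudSim or MapReduce), tasks from several tenants can be placed on a shared VM pool with a least-loaded policy:

    # Least-loaded placement of tenant tasks onto a shared pool of VMs.
    from collections import defaultdict

    def schedule_tasks(tasks, vm_count):
        """tasks: list of (tenant, runtime). Returns (placement per VM, VM loads)."""
        vm_load = [0.0] * vm_count
        placement = defaultdict(list)
        for tenant, runtime in tasks:
            vm = min(range(vm_count), key=lambda i: vm_load[i])  # least-loaded VM
            vm_load[vm] += runtime
            placement[vm].append((tenant, runtime))
        return dict(placement), vm_load

    tasks = [("tenant-A", 4.0), ("tenant-B", 2.0), ("tenant-A", 3.0),
             ("tenant-C", 5.0), ("tenant-B", 1.0)]
    placement, load = schedule_tasks(tasks, vm_count=2)
    print(placement)
    print("makespan:", max(load))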

Relevance:

100.00%

Publisher:

Abstract:

The success of cloud computing has led an increasing number of real-time applications, such as signal processing and weather forecasting, to run in the cloud. Meanwhile, scheduling real-time tasks plays an essential role in allowing a cloud provider to maintain its quality of service and enhance system performance. In this paper, we devise a novel agent-based scheduling mechanism for cloud computing environments to allocate real-time tasks and dynamically provision resources. In contrast to traditional contract net protocols, we employ a bidirectional announcement-bidding mechanism whose collaborative process consists of three phases, i.e., a basic matching phase, a forward announcement-bidding phase and a backward announcement-bidding phase. Moreover, elasticity is explicitly considered during scheduling by dynamically adding virtual machines to improve schedulability. Furthermore, we design calculation rules for the bidding values in both the forward and backward announcement-bidding phases, and two heuristics for selecting contractors. On the basis of the bidirectional announcement-bidding mechanism, we propose an agent-based dynamic scheduling algorithm named ANGEL for real-time, independent and aperiodic tasks in clouds. Extensive experiments are conducted on the CloudSim platform by injecting random synthetic workloads and workloads from the latest version of the Google cloud tracelogs to evaluate the performance of ANGEL. The experimental results indicate that ANGEL can efficiently solve the real-time task scheduling problem in virtualized clouds.
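
The sketch below is a highly simplified illustration of a single forward announcement-bidding round (a task is announced with a deadline, resource agents bid with estimated finish times, and the best feasible bid wins); ANGEL's actual bidding rules, backward phase and VM provisioning are not reproduced, and all numbers are hypothetical.

    # One forward announcement-bidding round between a task agent and resource agents.
    from dataclasses import dataclass

    @dataclass
    class ResourceAgent:
        name: str
        available_at: float   # when the VM becomes free
        speed: float          # work units per second

        def bid(self, task_size: float) -> float:
            """Estimated finish time if this agent accepts the task."""
            return self.available_at + task_size / self.speed

    def announce(task_size, deadline, agents):
        bids = {a.name: a.bid(task_size) for a in agents}
        feasible = {name: t for name, t in bids.items() if t <= deadline}
        if not feasible:
            return None  # would trigger dynamic VM provisioning in ANGEL
        return min(feasible, key=feasible.get)

    agents = [ResourceAgent("vm-1", 0.0, 2.0), ResourceAgent("vm-2", 1.0, 4.0)]
    print(announce(task_size=8.0, deadline=3.5, agents=agents))  # -> vm-2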

Relevance:

100.00%

Publisher:

Abstract:

Data sharing has never been easier with the advances of cloud computing, and an accurate analysis of the shared data provides an array of benefits to both society and individuals. Data sharing with a large number of participants must take into account several issues, including efficiency, data integrity and the privacy of the data owner. Ring signatures are a promising candidate for constructing an anonymous and authentic data sharing system, as they allow a data owner to anonymously authenticate data that is put into the cloud for storage or analysis. Yet the costly certificate verification in the traditional public key infrastructure (PKI) setting becomes a bottleneck for this solution to be scalable. Identity-based (ID-based) ring signatures, which eliminate the process of certificate verification, can be used instead. In this paper, we further enhance the security of ID-based ring signatures by providing forward security: if the secret key of any user has been compromised, all previously generated signatures that include this user still remain valid. This property is especially important for any large-scale data sharing system, as it is impossible to ask all data owners to re-authenticate their data even if the secret key of one single user has been compromised. We provide a concrete and efficient instantiation of our scheme, prove its security and provide an implementation to show its practicality.
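
To illustrate only the forward-security principle (this is not the ID-based ring signature scheme of the paper; HMAC merely stands in for the signing operation), a signing key can evolve through a one-way hash each time period so that compromising the current key reveals nothing about keys used in earlier periods:

    # Forward security via one-way key evolution; old keys are erased after use.
    import hashlib
    import hmac

    def evolve(key: bytes) -> bytes:
        """One-way key update; the previous key cannot be recovered from the new one."""
        return hashlib.sha256(b"key-update" + key).digest()

    def sign(key: bytes, message: bytes) -> bytes:
        return hmac.new(key, message, hashlib.sha256).digest()

    key_t0 = hashlib.sha256(b"initial secret").digest()
    sig_t0 = sign(key_t0, b"data shared in period 0")

    key_t1 = evolve(key_t0)   # period advances; key_t0 is deleted in practice
    # Even if key_t1 later leaks, key_t0 cannot be derived from it, so the
    # attacker cannot forge signatures for period 0.
    print(sig_t0.hex())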

Relevance:

100.00%

Publisher:

Abstract:

Smartphone technology has become more popular and innovative over the last few years, and technology companies are now introducing wearable devices into the market. As personal sensor devices emerge and converge with technologies such as cloud computing, the Internet of Things (IoT) and virtualization, the requirements placed on them are immense, and they are essential to supporting existing networks, e.g. mobile health (mHealth), as well as IoT users. Traditional physiological and biological medical sensors in mHealth provide health data either periodically or on demand. Both situations can cause rapid battery consumption, consume significant bandwidth, and raise privacy issues, because these sensors do not consider or understand their own status when converged together. The aim of this research is to provide a novel approach and solution for managing and controlling personal sensors that can be used in areas such as health, the military, aged care, IoT and sport. This paper presents an inference system that transfers health data collected by personal sensors to other networks efficiently and securely, without burdening the sensor devices with additional workload.
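
A hypothetical status-aware transmission rule of the kind such an inference system might apply (the thresholds and field names are assumptions, not the paper's design) could look like this:

    # Send a reading immediately only when it changed significantly and the
    # battery allows it; otherwise buffer it for a later batched upload.
    def decide(reading: float, last_sent: float, battery: float,
               change_threshold: float = 0.05, low_battery: float = 0.2) -> str:
        changed = abs(reading - last_sent) / max(abs(last_sent), 1e-9) > change_threshold
        if battery < low_battery:
            return "buffer"      # preserve the remaining battery
        if changed:
            return "send_now"    # significant change, push immediately
        return "buffer"          # routine value, include in the next batch

    print(decide(reading=78.0, last_sent=72.0, battery=0.9))  # send_now
    print(decide(reading=72.5, last_sent=72.0, battery=0.9))  # buffer
    print(decide(reading=90.0, last_sent=72.0, battery=0.1))  # buffer (low battery)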

Relevance:

100.00%

Publisher:

Abstract:

For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. Multicore processor technology provides new levels of performance and energy efficiency. This paper aims to develop power- and performance-constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power-constrained performance optimization and performance-constrained power optimization. These are well-defined multivariable optimization problems that explore the power-performance tradeoff by fixing one factor and minimizing the other, from the perspective of optimal load distribution. Such power and performance optimization is clearly important for a cloud computing provider seeking to efficiently utilize all the available resources. We model a multicore server processor as a queueing system with multiple servers. Our optimization problems are solved for two different models of core speed, where one model assumes that a core runs at zero speed when it is idle, and the other assumes that a core runs at a constant speed. Our results provide new theoretical insights into power management and performance optimization in data centers.
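
Schematically, and with assumed notation (server i receives load share \lambda_i, runs its cores at speed s_i, consumes power P_i(s_i, \lambda_i), and T denotes the overall mean task response time), the two dual problems can be stated in LaTeX form as:

    % power-constrained performance optimization
    \min_{\{s_i,\lambda_i\}} \; T(\lambda_1,\dots,\lambda_n,\, s_1,\dots,s_n)
      \quad \text{s.t.} \quad \sum_{i=1}^{n} P_i(s_i,\lambda_i) \le \tilde{P},
      \qquad \sum_{i=1}^{n} \lambda_i = \lambda

    % performance-constrained power optimization
    \min_{\{s_i,\lambda_i\}} \; \sum_{i=1}^{n} P_i(s_i,\lambda_i)
      \quad \text{s.t.} \quad T(\lambda_1,\dots,\lambda_n,\, s_1,\dots,s_n) \le \tilde{T},
      \qquad \sum_{i=1}^{n} \lambda_i = \lambda

Here \tilde{P} and \tilde{T} are a given power budget and response-time target; under the two core-speed models mentioned above, P_i counts dynamic core power only while a core is busy (idle-speed model) or at all times (constant-speed model).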

Relevance:

100.00%

Publisher:

Abstract:

Fog computing is a paradigm that extends cloud computing and services to the edge of the network. Similar to the cloud, Fog provides data, compute, storage, and application services to end users. In this article, we elaborate on the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as the Smart Grid, smart traffic lights in vehicular networks, and software-defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Security and privacy issues are further discussed in the context of the current Fog computing paradigm. As an example, we study a typical attack, the man-in-the-middle attack, to frame the discussion of security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on a Fog device.
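
As a minimal sketch of such a measurement (not the article's experimental setup; the process ID is a placeholder and the psutil library is assumed to be available on the Fog device), the CPU and memory footprint of a suspect process can be sampled like this:

    # Sample CPU and resident memory of a suspect process with psutil.
    import psutil

    def sample(pid: int, samples: int = 10, interval: float = 1.0):
        proc = psutil.Process(pid)          # assumes the suspect PID is known
        readings = []
        for _ in range(samples):
            cpu = proc.cpu_percent(interval=interval)      # % of one core
            rss = proc.memory_info().rss / (1024 * 1024)   # resident set size, MiB
            readings.append((cpu, rss))
        return readings

    for cpu, rss in sample(pid=1234):       # placeholder PID
        print(f"cpu={cpu:5.1f}%  rss={rss:7.1f} MiB")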