174 results for Cloud OS, cloud operating system, cloud computing


Relevance:

100.00%

Publisher:

Abstract:

Big Data technologies are exciting cutting-edge technologies that generate, collect, store and analyse tremendous amounts of data. Like any other IT revolution, Big Data technologies face big challenges that are obstructing their adoption by the wider community and impeding the extraction of value from Big Data at the pace and accuracy they promise. In this paper we first offer an alternative view of the «Big Data Cloud», with the main aim of making this complex technology easy for new researchers to understand and of identifying gaps efficiently. In our lab experiment, we successfully implemented cyber-attacks on Apache Hadoop's management interface «Ambari». Working from the premise that «attackers only need one way in», we attacked the Apache Hadoop management interface, successfully shut down all communication between Ambari and Hadoop's ecosystem, and collected performance data from the Ambari Virtual Machine (VM) and the Big Data Cloud hypervisor. We also detected these cyber-attacks with 94.0187% accuracy using modern machine learning algorithms. To the best of our knowledge, no existing research has attempted similar experimentation in the detection of cyber-attacks on Hadoop using performance data.
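The detection step described above is, in essence, supervised classification over performance samples. A minimal sketch of that idea, assuming scikit-learn and entirely hypothetical feature names and data files (the abstract does not disclose the paper's feature set or algorithm):

```python
# Hypothetical sketch: classifying Ambari VM performance samples as
# normal vs. attack intervals, in the spirit of the abstract. Feature
# names and the data file are illustrative, not from the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Each row: one sampling interval of VM/hypervisor metrics.
df = pd.read_csv("ambari_perf_samples.csv")  # hypothetical file
features = ["cpu_util", "mem_util", "disk_io", "net_rx", "net_tx"]
X, y = df[features], df["is_attack"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.4%}")
```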

Relevance:

100.00%

Publisher:

Abstract:

Because of big data's strong demands on physical resources, storing and processing big data in clouds is an effective and efficient approach, as cloud computing allows on-demand resource provisioning. With the increasing requirements on the resources provisioned by cloud platforms, the Quality of Service (QoS) of cloud services for big data management is becoming significantly important. Big data is characteristically sparse, which leads to frequent data accessing and processing and thereby causes a huge amount of energy consumption. Energy cost plays a key role in determining the price of a service and should be treated as a first-class citizen alongside other QoS metrics, because energy-saving services can achieve cheaper service prices and environmentally friendly solutions. However, it remains a challenge to efficiently schedule Virtual Machines (VMs) for service QoS enhancement in an energy-aware manner. In this paper, we propose an energy-aware dynamic VM scheduling method for QoS enhancement in clouds over big data to address this challenge. Specifically, the method consists of two main VM migration phases in which computation tasks are migrated to servers with lower energy consumption or higher performance to reduce service prices and execution time. Extensive experimental evaluation demonstrates the effectiveness and efficiency of our method.
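As a rough illustration of the energy/time trade-off such a scheduler weighs when picking a migration target (the abstract does not give the actual algorithm; all names and weights below are placeholders):

```python
# Minimal sketch (not the paper's method): greedily migrate work to the
# server that minimizes a weighted cost of energy and execution time.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    watts_per_unit: float   # energy per unit of work (illustrative)
    speed: float            # work units processed per second

def migration_cost(server: Server, work: float, alpha: float = 0.5) -> float:
    """Weighted energy/time cost; alpha trades energy against speed."""
    energy = work * server.watts_per_unit
    time = work / server.speed
    return alpha * energy + (1 - alpha) * time

def choose_target(servers: list[Server], work: float) -> Server:
    return min(servers, key=lambda s: migration_cost(s, work))

servers = [Server("s1", 1.2, 1.0), Server("s2", 0.8, 0.9), Server("s3", 1.0, 1.5)]
print(choose_target(servers, work=100.0).name)  # picks the cheapest target
```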

Relevance:

100.00%

Publisher:

Abstract:

The telecommunication industry is entering a new era. The increased traffic demands imposed by the huge number of always-on connections require a quantum leap in the field of enabling techniques. Furthermore, subscribers expect ever-increasing quality of experience, while network operators and service providers aim for cost-efficient networks. These demands call for a revolutionary change in the telecommunications industry, mirroring the success of virtualization in the IT industry, which is now driving the deployment and expansion of cloud computing. Telecommunications providers are currently rethinking their network architecture from one consisting of a multitude of black boxes with specialized network hardware and software to a new architecture consisting of "white box" hardware running a multitude of specialized network software. This network software may be data plane software providing network functions virtualization (NFV) or control plane software providing centralized network management, i.e. software defined networking (SDN). It is expected that these architectural changes will permeate networks ranging in size from Internet core networks to metro networks to enterprise networks, and ranging in functionality from converged packet-optical networks to wireless core networks to wireless radio access networks.

Relevance:

100.00%

Publisher:

Abstract:

The proliferation of cloud computing allows users to flexibly store, re-compute or transfer large generated datasets with multiple cloud service providers. However, due to the pay-as-you-go model, the total cost of using cloud services depends on the consumption of storage, computation and bandwidth resources, which are the three key factors in the cost of IaaS-based cloud resources. In order to reduce the total cost of data, given cloud service providers with different pricing models for their resources, users can flexibly choose a cloud service to store a generated dataset, or delete it and choose a cloud service to regenerate it whenever it is reused. However, finding the minimum cost is a complicated and as yet unsolved problem. In this paper, we propose a novel algorithm that calculates the minimum cost of storing and regenerating datasets in clouds, i.e. whether datasets should be stored or deleted, and furthermore where to store or regenerate them whenever they are reused. This minimum cost also achieves the best trade-off among computation, storage and bandwidth costs in multiple clouds. Comprehensive analysis and rigorous theorems guarantee the theoretical soundness of the paper, and general (random) simulations conducted with popular cloud service providers' pricing models demonstrate the excellent performance of our approach.
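The core comparison the abstract describes can be illustrated with a back-of-the-envelope calculation: keep a dataset stored, or delete it and pay compute plus transfer on each reuse. The prices below are made-up placeholders, not any provider's real rates:

```python
# Illustrative store-vs-regenerate comparison for a single dataset
# (not the paper's algorithm, which also decides *where* to place data).
def storage_cost(gb: float, price_per_gb_month: float, months: float) -> float:
    return gb * price_per_gb_month * months

def regeneration_cost(compute_hours: float, price_per_hour: float,
                      transfer_gb: float, price_per_gb: float,
                      reuses: int) -> float:
    per_use = compute_hours * price_per_hour + transfer_gb * price_per_gb
    return per_use * reuses

store = storage_cost(gb=500, price_per_gb_month=0.023, months=12)
regen = regeneration_cost(compute_hours=4, price_per_hour=0.10,
                          transfer_gb=500, price_per_gb=0.09, reuses=3)
print("store" if store < regen else "regenerate",
      round(store, 2), round(regen, 2))
```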

Relevance:

100.00%

Publisher:

Abstract:

Mobile cloud computing has emerged as a key enabling technology for overcoming the physical limitations of mobile devices towards scalable and flexible mobile services. In the mobile cloud environment, searchable encryption, which enables search directly over encrypted data, is a key technique for maintaining both the privacy and usability of outsourced data in the cloud. To address the issue, many research efforts have turned to searchable symmetric encryption (SSE) and searchable public-key encryption (SPE). In this paper, we improve on existing work by developing a more practical searchable encryption technique that supports dynamic update operations in mobile cloud applications. Specifically, we combine the advantages of both SSE and SPE techniques and propose PSU, a Personalized Search scheme over encrypted data with efficient and secure Updates in the mobile cloud. Through thorough security analysis, we demonstrate that PSU achieves a high security level. Using extensive experiments in a real-world mobile environment, we show that PSU is more efficient than existing proposals.
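For readers new to searchable encryption, here is a toy sketch of the basic SSE idea that schemes like PSU build on: index keywords under keyed trapdoors so a server can match queries without learning the plaintext keywords. This is the textbook pattern, not the PSU construction:

```python
# Toy searchable-symmetric-encryption sketch (illustrative only): the
# server stores documents under HMAC trapdoors instead of keywords.
import hashlib
import hmac
from collections import defaultdict

KEY = b"client-secret-key"  # held only by the client

def trapdoor(keyword: str) -> bytes:
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

# Encrypted index; the client can update it incrementally per document.
index: dict[bytes, set[str]] = defaultdict(set)

def add_document(doc_id: str, keywords: list[str]) -> None:
    for kw in keywords:
        index[trapdoor(kw)].add(doc_id)

def search(keyword: str) -> set[str]:
    return index.get(trapdoor(keyword), set())

add_document("doc1", ["cloud", "privacy"])
add_document("doc2", ["cloud", "mobile"])
print(search("cloud"))   # {'doc1', 'doc2'}
```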

Relevance:

100.00%

Publisher:

Abstract:

Software-based Distributed Shared Memory (DSM) systems have been the focus of considerable research effort, primarily in improving performance and consistency protocols. Unfortunately, computer clusters present a number of challenges for any DSM system that are not solvable through consistency protocols alone. These challenges relate to the ability of DSM systems to adjust to load fluctuations and to computers being added to or removed from the cluster, to deal with faults, and to use DSM objects larger than the available physical memory. We present here a proposal for the Synergy Distributed Shared Memory System and its integration with the virtual memory, group communication and process migration services of the Genesis Cluster Operating System.

Relevance:

100.00%

Publisher:

Abstract:

Considerable research and development has been invested in software Distributed Shared Memory (DSM). The primary focus of this work has traditionally been on high performance and consistency protocols. Unfortunately, clusters present a number of challenges for any DSM system that are not solvable through consistency protocols alone. These challenges relate to the ability of DSM systems to adjust to load fluctuations and to computers being added to or removed from the cluster, to deal with faults, and to use DSM objects larger than the available physical memory. This paper introduces the Synergy DSM System and its integration with the virtual memory, group communication and process migration services of the Genesis Cluster Operating System.
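As background to the consistency protocols both of these abstracts mention, here is a toy single-process simulation of write-invalidate sharing (purely illustrative; unrelated to the Synergy implementation):

```python
# Toy write-invalidate simulation: nodes cache a shared page; a write
# invalidates every other node's copy before committing.
class DSMPage:
    def __init__(self, data: bytes = b""):
        self.data = data
        self.holders: set[str] = set()   # nodes holding a valid copy

class Node:
    def __init__(self, name: str, page: DSMPage):
        self.name, self.page = name, page
        self.cache: bytes | None = None

    def read(self) -> bytes:
        if self.cache is None:           # miss: fetch and register copy
            self.cache = self.page.data
            self.page.holders.add(self.name)
        return self.cache

    def write(self, data: bytes) -> None:
        # Invalidate all other cached copies, then commit locally.
        for holder in self.page.holders - {self.name}:
            nodes[holder].cache = None
        self.page.data, self.cache = data, data
        self.page.holders = {self.name}

page = DSMPage(b"v0")
nodes = {n: Node(n, page) for n in ("a", "b")}
nodes["a"].read(); nodes["b"].read()
nodes["a"].write(b"v1")
print(nodes["b"].read())  # b'v1' -- b's stale copy was invalidated
```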

Relevance:

100.00%

Publisher:

Abstract:

The human body has been used to illustrate an Autonomic Computing system that possesses self-knowledge, self-configuration, self-optimization, self-healing and self-protection, knowledge of its environment, and user-friendliness. Autonomic Computing was identified by IBM as one of the Grand Challenges. Many researchers and research groups have responded positively to the challenge by initiating research around one or two of the characteristics identified by IBM as the requirements for Autonomic Computing. One of the areas that could benefit from the comprehensive approach created by the Autonomic Computing vision is parallel processing on non-dedicated clusters. This paper shows a general design of services and an initial implementation of a system that moves parallel processing on clusters into the computing mainstream using the Autonomic Computing vision.
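The self-* properties listed above are conventionally realised as a monitor-analyze-plan-execute (MAPE) control loop. A minimal sketch of that standard pattern follows, with placeholder thresholds and actions rather than the paper's design:

```python
# Minimal MAPE control-loop sketch; sensors, thresholds and actions
# are illustrative placeholders, not the paper's services.
def monitor() -> dict:
    # A real system would sample cluster sensors here.
    return {"load": 0.92, "failed_nodes": 0}

def analyze(metrics: dict) -> list[str]:
    symptoms = []
    if metrics["load"] > 0.8:
        symptoms.append("overload")    # self-optimization trigger
    if metrics["failed_nodes"] > 0:
        symptoms.append("failure")     # self-healing trigger
    return symptoms

def plan(symptoms: list[str]) -> list[str]:
    return ["migrate_processes"] if "overload" in symptoms else []

def execute(actions: list[str]) -> None:
    for action in actions:
        print(f"executing: {action}")

# One illustrative iteration of the autonomic loop.
execute(plan(analyze(monitor())))
```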

Relevance:

100.00%

Publisher:

Abstract:

Dynamic deployment of Web services is a term frequently used to describe the selection and deployment of a service to a grid host. Although current grid systems (such as Globus) provide dynamic deployment, the requirements of the service being deployed are not considered. Truly dynamic deployment therefore cannot be achieved, as the services deployed are restricted to the grid system used. We present a dynamic deployment mechanism as part of self-configuration in a service-oriented grid environment. The mechanism takes the requirements of the service into consideration, including parameters such as the operating system required to execute the service, the required software libraries, any additional required software packages, price, and Quality of Service (QoS) parameters.
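A hedged sketch of what such requirement-aware host selection might look like; the field names and scoring rule are illustrative, not the paper's mechanism:

```python
# Hypothetical requirement-aware host selection: filter hosts on hard
# constraints (OS, libraries, price), then rank on a QoS score.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    os: str
    libraries: set[str] = field(default_factory=set)
    price: float = 0.0
    qos_score: float = 0.0   # higher is better (illustrative metric)

@dataclass
class ServiceRequirements:
    os: str
    libraries: set[str]
    max_price: float

def eligible(host: Host, req: ServiceRequirements) -> bool:
    return (host.os == req.os
            and req.libraries <= host.libraries
            and host.price <= req.max_price)

def select_host(hosts: list[Host], req: ServiceRequirements) -> Host | None:
    candidates = [h for h in hosts if eligible(h, req)]
    return max(candidates, key=lambda h: h.qos_score, default=None)

hosts = [Host("h1", "linux", {"libxml2"}, 0.05, 0.9),
         Host("h2", "linux", {"libxml2", "openssl"}, 0.04, 0.7)]
req = ServiceRequirements("linux", {"libxml2", "openssl"}, max_price=0.06)
print(select_host(hosts, req).name)   # h2: only host meeting all constraints
```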

Relevance:

100.00%

Publisher:

Abstract:

The future of computing lies with distributed systems, i.e. a network of workstations controlled by a modern distributed operating system. By supporting load balancing and parallel execution, the overall performance of a distributed system can be improved dramatically. Process migration, the act of moving a running process from a highly loaded machine to a lightly loaded machine, can be used to support load balancing, parallel execution, reliability, etc. This thesis identifies the problems past process migration facilities have had and determines the possible strategies that can be used to resolve these problems. The result of this analysis is a new design philosophy: the design of a process migration facility and the design of an operating system should be conducted in parallel.

Modern distributed operating systems follow the microkernel and client/server paradigms. Applying these design paradigms, in conjunction with the requirements of both process migration and a distributed operating system, results in a system where each resource is controlled by a separate server process. However, a process is a complex resource composed of simple resources such as data structures, an address space and communication state. For this reason, a process migration facility does not directly migrate the resources of a process. Instead, it requests the appropriate servers to transfer the resources. This novel solution yields a modular, high-performance facility that is easy to create, debug and maintain. Furthermore, the design easily accommodates multiple migration strategies.

In order to verify the validity of this design, a process migration facility was developed and tested within RHODOS (ResearcH Oriented Distributed Operating System), a modern microkernel and client/server based distributed operating system. In RHODOS, a process is composed of at least three separate resources: process state, maintained by a process manager; address space, maintained by a memory manager; and communication state, maintained by an InterProcess Communication Manager (IPCM). The RHODOS multiple-strategy migration manager utilises the services of the process, memory and IPC managers to migrate the resources of a process. Performance testing of this facility indicates that this design is as fast as or faster than existing systems which use faster hardware. Furthermore, by studying the results of the performance testing, the conditions under which a particular strategy should be employed have been identified.

This thesis also addresses heterogeneous process migration. The current trend is to have islands of homogeneous workstations amid a sea of heterogeneity. From this situation and the current literature on the topic, heterogeneous process migration can be seen as too inefficient for general use; instead, only homogeneous workstations should be used for process migration. This implies a need to locate homogeneous workstations. Entities called traders, which store and disseminate knowledge about the resources of several workstations, should be used to provide resource discovery. Resource discovery will enable the detection of homogeneous workstations to which processes can be migrated.
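A small sketch of the modular principle described above, in which a migration manager asks each resource server to transfer the part of the process it controls (an illustrative Python analogue, not RHODOS code):

```python
# Illustrative sketch: the migration manager never moves process state
# itself; each resource server transfers the resource it owns.
from typing import Protocol

class ResourceServer(Protocol):
    def transfer(self, pid: int, dest: str) -> None: ...

class ProcessManager:
    def transfer(self, pid: int, dest: str) -> None:
        print(f"process state of {pid} -> {dest}")

class MemoryManager:
    def transfer(self, pid: int, dest: str) -> None:
        print(f"address space of {pid} -> {dest}")

class IPCManager:
    def transfer(self, pid: int, dest: str) -> None:
        print(f"communication state of {pid} -> {dest}")

class MigrationManager:
    def __init__(self, servers: list[ResourceServer]):
        self.servers = servers

    def migrate(self, pid: int, dest: str) -> None:
        # Delegate: each server transfers only the resource it controls.
        for server in self.servers:
            server.transfer(pid, dest)

MigrationManager([ProcessManager(), MemoryManager(), IPCManager()]).migrate(42, "nodeB")
```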

Relevance:

100.00%

Publisher:

Abstract:

Current attempts to manage parallel applications on Clusters of Workstations (COWs) have generally either followed the parallel execution environment approach or been extensions to existing network operating systems, neither of which provides a complete or satisfactory solution. The efficient and transparent management of parallelism within the COW environment requires enhanced methods of process instantiation, mapping of parallel processes to workstations, maintenance of process relationships, process communication facilities, and process coordination mechanisms.

The aim of this research is to synthesise, design, develop and experimentally study a system capable of efficiently and transparently managing SPMD parallelism on a COW. This system should both improve the performance of SPMD-based parallel programs and relieve programmers of involvement in parallelism management, allowing them to concentrate on application programming. It is also the aim of this research to show that such a system is best achieved by adding new special services to, and exploiting the existing services of, a client/server and microkernel based distributed operating system. To achieve these goals, the research methods of experimental computer science were employed.

In order to specify the scope of this project, this work investigated the issues related to parallel processing on COWs and surveyed a number of relevant systems including PVM, NOW and MOSIX. It was shown that although the MOSIX system provides a number of good services related to parallelism management, none of these systems forms a complete solution. The problems identified with these systems include: instantiation services that are not suited to parallel processing; duplication of services between the parallelism management environment and the operating system; and poor levels of transparency.

A high-performance and transparent system capable of managing the execution of SPMD parallel applications was synthesised, and the specific services of process instantiation, process mapping and process interaction were detailed. The process instantiation service designed here provides the capability to instantiate parallel processes using either creation or duplication methods, and also supports multiple and group-based instantiation specifically designed for SPMD parallel processing; a generic sketch of this style of instantiation is shown after this abstract. The process mapping service combines process allocation and dynamic load balancing to ensure the load of a COW remains balanced, not only when a parallel program is initialised but also during its execution. The process interaction service transparently maintains process relationships, communications and coordination services between parallel processes regardless of their location within the COW. The combination of these services provides an original architecture and organisation of a system that is capable of fully managing the execution of SPMD parallel applications on a COW.

A logical design of a parallelism management system was developed from the synthesised system, and it was shown that it should ideally be based on a distributed operating system employing the client/server model, which provides the level of transparency, modularity and flexibility necessary for a complete parallelism management system. The services identified in the synthesised system were mapped to a set of server processes: a Process Instantiation Server providing advanced multiple and group-based process creation and duplication; a Process Mapping Server combining load collection, process allocation and dynamic load balancing services; and a Process Interaction Server providing transparent interprocess communication and coordination. A Process Migration Server was also identified as vital to support both the instantiation and mapping servers.

The RHODOS client/server and microkernel based distributed operating system was selected for the detailed design and implementation of this parallelism management system. RHODOS was enhanced to provide the required servers, resulting in the development of the REX Manager, Global Scheduler and Process Migration Manager to provide the services of process instantiation, mapping and migration, respectively. The process interaction services were already provided within RHODOS and required only some extensions to the existing Process Manager and IPC Managers.

A variety of experiments showed that when this system was used to support the execution of SPMD parallel applications, overall execution times were improved, especially when multiple and group-based instantiation services were employed. The RHODOS PMS was also shown to greatly reduce the programming burden experienced by users when writing SPMD parallel applications, by providing a small set of powerful primitives specially designed to support parallel processing. The system was also shown to be applicable to, and has been used in, a variety of other research areas such as Distributed Shared Memory, parallelising compilers and assisting the port of PVM to the RHODOS system. The RHODOS Parallelism Management System (PMS) provides a unique and creative solution to the problem of transparently and efficiently controlling the execution of SPMD parallel applications on COWs. Combining advanced services such as multiple and group-based process creation and duplication, combined process allocation and dynamic load balancing, and complete COW-wide transparency produces a totally new system that addresses many of the problems not addressed by other systems.
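Group-based SPMD instantiation of the kind described above can be illustrated with a minimal example: spawn N copies of one program body, each parameterised by a rank. This is a generic sketch, not the RHODOS primitives:

```python
# Generic SPMD group instantiation: one program body, N processes,
# behavior differentiated only by rank (illustrative, not RHODOS).
from multiprocessing import Process

def spmd_body(rank: int, size: int) -> None:
    # Same Program, Multiple Data: each rank works on its own partition.
    print(f"worker {rank}/{size} processing its data partition")

def instantiate_group(size: int) -> None:
    procs = [Process(target=spmd_body, args=(r, size)) for r in range(size)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    instantiate_group(4)
```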

Relevance:

100.00%

Publisher:

Abstract:

The penetration of social networking platforms such as Facebook is becoming pervasive in education, along with mobile applications (apps) and mobile devices. Students are using these technologies and apps to organise their learning material. Social media via apps is the most popular activity among college students. In this paper we discuss how teachers could take advantage of the Facebook social media platform to promote a community-based learning environment that is flexible, portable and challenging. We describe how this could be achieved without restriction to any particular mobile device brand or operating system, and how students would simply bring their own device (BYOD).

Relevance:

100.00%

Publisher:

Abstract:

The evolution of mobile technologies and application integration in education across the world has brought a shift to a new learning environment via various mobile platforms. Educational institutions globally are failing to identify specific mobile technology initiatives and strategies as a method to evaluate these technologies and to expose both students and teachers to the potential they engender. This panel will undertake a cross-country comparison among culturally diverse countries: Turkey, UAE, USA, Lebanon, Iceland, Israel, Japan and Germany. Questions will be raised such as: Why are some countries branding mobile learning, and why has their integration of these technologies been made device specific, app specific and operating system specific? Is this the right approach? The digital gap between countries will be discussed, and availability, access, barriers and limitations for some countries are described. We will try to identify similarities, differences and challenges among these countries.

Relevance:

100.00%

Publisher:

Abstract:

Mobile computing is taking educational institutions into a new era of instruction. Educational institutions globally are opting for new mobile devices to integrate, and it seems that the vast majority are integrating the iPad without even looking at other options; they are unintentionally branding mobile learning. We believe that mobile learning should not be branded, should not be restricted, and should not be made device specific, operating system specific, controlled or brand specific. This paper is based on a global panel discussion entitled: Is this an iPad Revolution or a Mobile Learning Revolution? The paper also presents an argument as to why the iPad is dominating in education, with a focus on the current iPad initiatives in the UAE; a few possible assumptions on why educational institutions are opting for the iPad are discussed, along with some recommendations on what educational institutions should know before making the decision about this integration.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, the hardware and software design for using a TF card in debugging an embedded system is described. The hardware platform is based on a PXA310 application processor, and the open-source Android operating system is used as the software platform. The design of the connection circuit between the application processor and the TF card is introduced first. Next, the design of the TF card driver program and the method for the Android system to mount the TF card are described. The driver program uses an SPI operation mode and the FAT32 file system, and the port of the FAT32 file system is presented in more detail. Finally, the paper describes the system debugging, and test results are given for the TF card used in the video data acquisition unit of a video monitoring system. It is shown that high-speed data exchange and good portability can be obtained by using a TF card to download a system image during development and debugging. The TF card used in debugging can also serve as mass storage in the embedded product without changing the design used for debugging, and it is convenient for users to upgrade the operating system.
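As an illustration of the SPI operation mode mentioned above, here is a hypothetical sketch of the first step of a TF (microSD) card's SPI-mode handshake using the Linux spidev API; the bus number, chip-select and timing are board-specific assumptions, not the paper's PXA310 driver code:

```python
# Hypothetical SPI-mode handshake sketch for a TF/microSD card using
# the Linux spidev userspace API (illustrative; a real driver runs in
# the kernel and manages chip-select explicitly).
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)              # bus 0, chip-select 0 (board-specific)
spi.max_speed_hz = 400_000  # card init must run at a low clock rate

# The card needs >= 74 clock cycles to enter SPI mode; a real driver
# would keep chip-select deasserted while sending these dummy bytes.
spi.xfer2([0xFF] * 10)

# CMD0 (GO_IDLE_STATE) with its fixed CRC 0x95; R1 == 0x01 means idle.
resp = spi.xfer2([0x40, 0x00, 0x00, 0x00, 0x00, 0x95] + [0xFF] * 8)
print("card idle" if 0x01 in resp else "no response")
spi.close()
```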