899 results for hybrid cloud computing


Relevance:

100.00%

Publisher:

Abstract:

Establishing trust for resource sharing and collaboration has become an important issue in distributed computing environments. In this paper, we investigate the problem of establishing trust in hybrid cloud computing environments. As the scope of federated cloud computing enlarges to ubiquitous and pervasive computing, there will be a need to assess and maintain the trustworthiness of cloud computing entities. We present a fully distributed framework that enables trust-based interactions between cloud customers and cloud service providers. The framework aids a service consumer in assigning an appropriate weight to the feedback of different raters regarding a prospective service provider. Based on the framework, we developed a mechanism that prevents falsified feedback ratings from iteratively contaminating the trust level. The experimental analysis shows that the proposed framework successfully dilutes the effects of falsified feedback ratings, thereby facilitating accurate and fair assessment of service reputations.
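
To make the idea concrete, here is a minimal sketch of rater-weighted feedback aggregation under stated assumptions: each rater carries a credibility score in [0, 1], and the `update_credibility` shrinkage rule below is a hypothetical illustration, not the paper's actual mechanism.

```python
# Sketch: rater-weighted reputation aggregation (illustrative only).
# Assumption: a rater's credibility shrinks when their past feedback
# deviated from the consensus rating.

def update_credibility(credibility: float, feedback: float, consensus: float) -> float:
    """Shrink a rater's credibility in proportion to how far their
    feedback deviated from the consensus rating (all values in [0, 1])."""
    deviation = abs(feedback - consensus)
    return max(0.0, credibility * (1.0 - deviation))

def reputation(feedbacks: dict[str, float], credibility: dict[str, float]) -> float:
    """Credibility-weighted mean of feedback ratings."""
    total_weight = sum(credibility[r] for r in feedbacks)
    if total_weight == 0:
        return 0.0
    return sum(credibility[r] * f for r, f in feedbacks.items()) / total_weight

# Example: one rater inflates ratings; after a credibility update,
# their influence on the provider's reputation is diluted.
feedbacks = {"alice": 0.8, "bob": 0.75, "mallory": 1.0}
cred = {"alice": 1.0, "bob": 1.0, "mallory": 1.0}
consensus = reputation(feedbacks, cred)
cred = {r: update_credibility(cred[r], f, consensus) for r, f in feedbacks.items()}
print(reputation(feedbacks, cred))
```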

Relevance:

100.00%

Publisher:

Abstract:

Resource provisioning is an important and challenging problem in large-scale distributed systems such as Cloud computing environments. Resource management issues such as Quality of Service (QoS) further exacerbate the resource provisioning problem. Furthermore, with the increasing functionality and complexity of Cloud computing, resource failures are inevitable. Therefore, the question we address in this paper is how to provision resources to applications in the presence of resource failures in a hybrid Cloud computing environment. To this end, we propose three Cloud resource provisioning policies, using workflow applications to drive the system workload. The proposed strategies take the workload model and failure correlations into account to redirect requests to appropriate Cloud providers. Using real failure traces and workload models, we evaluated the performance and monetary cost of the proposed policies. The results of our experiments show that we can decrease the deadline violation rate of users' requests to as low as 20% with a limited cost on the Amazon public Cloud.
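
As a rough illustration of what such a redirection policy might look like, the sketch below routes each request to the cheapest provider whose observed failure rate fits the request's deadline slack; the viability threshold and scoring are assumptions for illustration, not the paper's actual policies.

```python
# Sketch: failure-aware request redirection between a private and a
# public Cloud (illustrative; the viability threshold is an assumption).
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    failure_rate: float   # observed failures per hour, from failure traces
    cost_per_hour: float  # 0.0 for the free private cloud

def pick_provider(providers: list[Provider], deadline_slack_h: float) -> Provider:
    """Prefer the cheapest provider whose expected failure exposure over
    the request's runtime stays below one failure; otherwise fall back
    to the most reliable provider."""
    viable = [p for p in providers if p.failure_rate * deadline_slack_h < 1.0]
    if viable:
        return min(viable, key=lambda p: p.cost_per_hour)
    return min(providers, key=lambda p: p.failure_rate)

private = Provider("private", failure_rate=0.4, cost_per_hour=0.0)
public = Provider("ec2", failure_rate=0.05, cost_per_hour=0.10)
print(pick_provider([private, public], deadline_slack_h=3.0).name)  # -> ec2
```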

Relevance:

100.00%

Publisher:

Abstract:

A Software-as-a-Service, or SaaS, can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled (replicated or deleted) to accommodate the user load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. On the other hand, when the load is low, some of the instances may need to be deleted to avoid resource underutilisation. Thus, it is important to determine which components to scale such that the performance of the SaaS is still maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed that utilises problem-specific knowledge and explores the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
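
As a hedged sketch of how a scaling plan could be encoded and evaluated in such an algorithm (the penalty weights and load threshold below are illustrative assumptions; a full GA would add crossover and mutation over these chromosomes):

```python
# Sketch: a per-component scaling plan encoded as a chromosome
# (illustrative; the penalty weights are assumptions).
import random

N_COMPONENTS = 5
ACTIONS = (-1, 0, 1)  # delete an instance, keep as-is, replicate

def random_plan() -> list[int]:
    return [random.choice(ACTIONS) for _ in range(N_COMPONENTS)]

def fitness(plan: list[int], load: list[float]) -> float:
    """Penalise failing to replicate heavily loaded components, plus a
    small resource cost for every extra replica."""
    under_provisioned = sum(l for a, l in zip(plan, load) if a < 1 and l > 0.8)
    replica_cost = sum(1 for a in plan if a == 1)
    return -(10.0 * under_provisioned + replica_cost)

load = [0.9, 0.2, 0.5, 0.95, 0.1]  # current per-component utilisation
best = max((random_plan() for _ in range(200)), key=lambda p: fitness(p, load))
print(best, fitness(best, load))
```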

Relevance:

100.00%

Publisher:

Abstract:

In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud, where some free resources may be available from private clouds alongside fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points in time. In addition, Quality-of-Service issues, such as execution time and running costs, must be considered. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
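
For readers unfamiliar with the random-key representation, here is a minimal sketch of the decoding step (the task names are made up for illustration):

```python
# Sketch: random-key decoding of a chromosome into a task priority order
# (illustrative; the tasks and keys are made up).
import random

tasks = ["t1", "t2", "t3", "t4"]

def decode(keys: list[float]) -> list[str]:
    """A random-key chromosome is a vector of floats in [0, 1); sorting
    tasks by their key yields a valid scheduling priority order, so
    standard real-valued crossover and mutation never produce invalid plans."""
    return [t for _, t in sorted(zip(keys, tasks))]

chromosome = [random.random() for _ in tasks]
print(decode(chromosome))
```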


Relevance:

100.00%

Publisher:

Abstract:

In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud, where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points in time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
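
A hedged sketch of the cooperative-coevolution pattern itself, assuming two subpopulations (assignments and schedules) each evaluated jointly with the best partner from the other subpopulation; the fitness function is a stand-in placeholder, not the paper's QoS model:

```python
# Sketch: cooperative coevolution with two subpopulations, one evolving
# resource assignments and one evolving schedules (illustrative only).
import random

def evaluate(assignment: list[int], schedule: list[int]) -> float:
    """Joint fitness of an (assignment, schedule) pair; a placeholder
    that simply rewards agreement between the two parts."""
    return -sum(abs(a - s) for a, s in zip(assignment, schedule))

def evolve(pop: list[list[int]], partner_best: list[int], which: str) -> list[list[int]]:
    """Score each individual against the best partner from the other
    subpopulation, keep the better half, and refill with mutated copies."""
    if which == "assign":
        scored = sorted(pop, key=lambda ind: evaluate(ind, partner_best), reverse=True)
    else:
        scored = sorted(pop, key=lambda ind: evaluate(partner_best, ind), reverse=True)
    survivors = scored[: len(pop) // 2]
    children = [[g if random.random() > 0.1 else random.randint(0, 4) for g in ind]
                for ind in survivors]
    return survivors + children

assigns = [[random.randint(0, 4) for _ in range(6)] for _ in range(10)]
scheds = [[random.randint(0, 4) for _ in range(6)] for _ in range(10)]
for _ in range(20):
    assigns = evolve(assigns, scheds[0], "assign")
    scheds = evolve(scheds, assigns[0], "sched")
print(evaluate(assigns[0], scheds[0]))
```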

Relevance:

100.00%

Publisher:

Abstract:

Despite the compelling case for moving towards cloud computing, the upstream oil & gas industry faces several technical challenges—most notably, a pronounced emphasis on data security, a reliance on extremely large data sets, and significant legacy investments in information technology infrastructure—that make a full migration to the public cloud difficult at present. Private and hybrid cloud solutions have consequently emerged within the industry to yield as much benefit from cloud-based technologies as possible while working within these constraints. This paper argues, however, that the move to private and hybrid clouds will very likely prove only to be a temporary stepping stone in the industry's technological evolution. By presenting evidence from other market sectors that have faced similar challenges in their journey to the cloud, we propose that enabling technologies and conditions will probably fall into place in a way that makes the public cloud a far more attractive option for the upstream oil & gas industry in the years ahead. The paper concludes with a discussion about the implications of this projected shift towards the public cloud, and calls for more of the industry's services to be offered through cloud-based “apps.”

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict positive future markets for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres like the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS to users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic Cloud environment, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used while maintaining SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that the solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud and has developed several types of evolutionary algorithms to address them, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
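
As a minimal illustration of the two constraint-handling styles mentioned for the GGAs, repair-based versus penalty-based, under assumed VM capacities and component demands (all numbers hypothetical):

```python
# Sketch: two constraint-handling styles for a grouping GA that packs
# SaaS components onto VMs (illustrative; capacities are assumptions).
VM_CAPACITY = 10
demands = [4, 3, 5, 2, 6]  # resource demand per component

def vm_loads(grouping: list[int]) -> dict[int, int]:
    loads: dict[int, int] = {}
    for comp, vm in enumerate(grouping):
        loads[vm] = loads.get(vm, 0) + demands[comp]
    return loads

def repair(grouping: list[int]) -> list[int]:
    """Repair-based: move components off overloaded VMs onto fresh VMs
    until every VM fits, so all individuals stay feasible."""
    grouping = list(grouping)
    next_vm = max(grouping) + 1
    for comp, vm in enumerate(grouping):
        if vm_loads(grouping)[vm] > VM_CAPACITY:
            grouping[comp] = next_vm
            next_vm += 1
    return grouping

def penalised_fitness(grouping: list[int]) -> float:
    """Penalty-based: infeasible individuals survive but pay a cost
    proportional to the total capacity violation."""
    loads = vm_loads(grouping)
    violation = sum(max(0, l - VM_CAPACITY) for l in loads.values())
    return -(len(set(grouping)) + 5.0 * violation)  # fewer VMs is better

overloaded = [0, 0, 0, 1, 1]  # VM0 holds 4+3+5 = 12 > 10
print(repair(overloaded), penalised_fitness(overloaded))
```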

Relevance:

100.00%

Publisher:

Abstract:

Adopting a multi-theoretical approach, I examine external auditors’ perceptions of the reasons why organizations do or do not adopt cloud computing. I interview forensic accountants and IT experts about the adoption, acceptance, institutional motives, and risks of cloud computing. Although the medium to large accounting firms where the external auditors worked almost exclusively used private clouds, both private and public cloud services were gaining a foothold among many of their clients. Despite the advantages of cloud computing, data confidentiality and the involvement of foreign jurisdictions remain a concern, particularly if the data are moved outside Australia. Additionally, some organizations seem to understand neither the technology itself nor their own requirements, which may lead to poorly negotiated contracts and service agreements. To minimize the risks associated with cloud computing, many organizations turn to hybrid solutions or private clouds that include national or dedicated data centers. To the best of my knowledge, this is the first empirical study that reports on cloud computing adoption from the perspectives of external auditors.

Relevance:

100.00%

Publisher:

Abstract:

Multi-user videoconferencing systems offer communication between more than two users, who are able to interact through their webcams, microphones and other components. The use of these systems has increased recently due, on the one hand, to improvements in Internet access in companies, universities and homes, where available bandwidth has increased while the delay in sending and receiving packets has decreased. On the other hand, the advent of Rich Internet Applications (RIA) means that a large part of web application logic and control has started to be implemented in web browsers. This has allowed developers to create web applications with a level of complexity comparable to traditional desktop applications running on top of operating systems. More recently, the use of Cloud Computing systems has improved application scalability and reduced the price of backend systems, offering the possibility of implementing web services on the Internet without large upfront spending on infrastructure and resources, both hardware and software. Nevertheless, there are not many initiatives that aim to implement videoconferencing systems by taking advantage of Cloud systems. This dissertation proposes a set of techniques, interfaces and algorithms for the implementation of videoconferencing systems on public and private Cloud Computing infrastructures. The mechanisms proposed here are based on the implementation of a basic videoconferencing system that runs in the web browser without any prior installation requirements. To this end, the development of this thesis starts from an RIA application built with current technologies that allow users to access their webcams and microphones from the browser and to send the captured data over their Internet connections. Furthermore, interfaces have been implemented to allow end users to participate in videoconferencing rooms that are managed on different Cloud providers' servers. To do so, this dissertation builds on the results obtained with the previous techniques and implements the backend resources in the Cloud. A traditional videoconferencing service previously implemented in the department was modified to meet typical Cloud Computing infrastructure requirements. This allowed us to validate whether public Cloud Computing infrastructures are suitable for the traffic generated by this kind of system; the analysis focused on the network level and on the processing capacity and stability of the Cloud Computing systems. To strengthen this validation, several more general cases were also considered, such as multimedia data processing in the Cloud, an area in which research activity has increased in recent years. The last stage of this dissertation is the design of a new methodology for implementing these kinds of applications in hybrid clouds, reducing the cost of videoconferencing systems. Finally, this dissertation opens up a discussion about the conclusions obtained throughout this study, providing useful information on the different stages of implementing videoconferencing systems on Cloud Computing systems.

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents the formal definition of a novel Mobile Cloud Computing (MCC) extension of the Networked Autonomic Machine (NAM) framework, a general-purpose conceptual tool for describing large-scale distributed autonomic systems. The introduction of autonomic policies into the MCC paradigm has proved to be an effective technique for increasing the robustness and flexibility of MCC systems. In particular, autonomic policies based on continuous resource and connectivity monitoring help automate context-aware decisions for computation offloading. We have also provided NAM with a formalization in terms of a transformational operational semantics in order to fill the gap between its existing Java implementation, NAM4J, and its conceptual definition. Moreover, we have extended NAM4J by adding several components for managing large-scale autonomic distributed environments. In particular, the middleware allows for the implementation of peer-to-peer (P2P) networks of NAM nodes, and NAM mobility actions have been implemented to enable the migration of code, execution state and data. Within NAM4J, we have designed and developed a component, denoted the context bus, which is particularly useful in collaborative applications: if replicated on each peer, it instantiates a virtual shared channel allowing nodes to notify and be notified about context events. Regarding the management of autonomic policies, we have provided NAM4J with a rule engine whose purpose is to allow a system to autonomously determine when offloading is convenient. We have also provided NAM4J with trust and reputation management mechanisms to make the middleware suitable for applications in which such aspects are of great interest. To this end, we have designed and implemented a distributed framework, denoted DARTSense, in which no central server is required, as reputation values are stored and updated by participants in a subjective fashion. We have also surveyed the literature on MCC systems. The analysis pointed out that all MCC models focus on mobile devices and consider the Cloud as a system with unlimited resources. To contribute to filling this gap, we defined a modeling and simulation framework for the design and analysis of MCC systems that encompasses both the mobile and the Cloud side, and we implemented a modular and reusable simulator of the model. We have applied the NAM principles to two different application scenarios. First, we defined a hybrid P2P/cloud approach in which components and protocols are autonomically configured according to specific target goals, such as cost-effectiveness, reliability and availability. Merging the P2P and cloud paradigms brings together the advantages of both: high availability, provided by the Cloud presence, and low cost, obtained by exploiting inexpensive peer resources. As an example, we have shown how the proposed approach can be used to design NAM-based collaborative storage systems based on an autonomic policy that decides how to distribute data chunks among peers and the Cloud, according to cost-minimization and data-availability goals. As a second application, we defined an autonomic architecture for decentralized urban participatory sensing (UPS) which bridges sensor networks and mobile systems to improve effectiveness and efficiency. The developed application allows users to retrieve and publish different types of sensed information by using the features provided by NAM4J's context bus. Trust and reputation are managed through the application of the DARTSense mechanisms. The application also includes an autonomic policy that detects areas characterized by few contributors and tries to recruit new providers by migrating the code necessary for sensing through NAM mobility actions.
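
A minimal sketch of the kind of offloading rule such a rule engine might evaluate, assuming monitored battery, CPU and bandwidth metrics; the thresholds are illustrative assumptions, not NAM4J's actual policy:

```python
# Sketch: an autonomic offloading rule driven by continuous resource and
# connectivity monitoring (thresholds are illustrative assumptions).
from dataclasses import dataclass

@dataclass
class Context:
    battery_pct: float      # remaining battery on the mobile node
    cpu_load: float         # local CPU utilisation in [0, 1]
    bandwidth_mbps: float   # current uplink bandwidth
    payload_mb: float       # data to ship if we offload

def should_offload(ctx: Context) -> bool:
    """Offload when the device is constrained (low battery or high CPU
    load) and the network is good enough that shipping the payload is
    cheaper than computing locally."""
    constrained = ctx.battery_pct < 30.0 or ctx.cpu_load > 0.8
    transfer_time_s = ctx.payload_mb * 8.0 / max(ctx.bandwidth_mbps, 0.1)
    return constrained and transfer_time_s < 5.0

print(should_offload(Context(battery_pct=20, cpu_load=0.5,
                             bandwidth_mbps=40, payload_mb=12)))  # -> True
```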

Relevance:

100.00%

Publisher:

Abstract:

The evolution and maturation of Cloud Computing have created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new business consumer, taking advantage of what the Cloud offers and leaving behind expensive datacenter management and difficult grid development. Now in an advanced maturing phase, today's Cloud has shed many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to mass adoption, and customisable services on demand have drawn heightened attention from other markets. HPC, despite being a very well-established field, has traditionally had a narrow deployment frontier, running on dedicated datacenters or large grids. The main problems with this conventional placement are the initial cost, which not all research labs can afford, and the inability to fully use the resources. The main objective of this work was to investigate new technical solutions that allow the deployment of HPC applications in the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture and several application migrations. The final application integrates both public and private Cloud resources in a simplified way, as well as HPC application scheduling, deployment and management. It uses a well-defined user-role strategy based on federated authentication, and a seamless procedure for daily usage that balances low cost and performance.

Relevance:

100.00%

Publisher:

Abstract:

Scientific workflows are complicated data-intensive applications. How to achieve an effective data placement schema in a hybrid cloud environment has become a crucial issue nowadays, especially with the new challenges brought by security concerns. Traditional data placement strategies usually adopt a load-balancing-based partition model to allocate datasets. Although these schemas can perform well in load balancing, their data transfer time may not be optimal. In contrast to traditional strategies, this paper focuses on the hybrid cloud environment and proposes a data-dependency-destruction-based partition model that minimises data dependency destruction during partitioning. In addition, it presents a novel datacenter-oriented data placement strategy. This strategy allocates datasets with high mutual dependency to one datacenter according to the new partition model and thus significantly reduces data transfer time between datacenters. Experimental results show that the proposed strategy can effectively reduce data transfer time during a workflow's execution.
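
To illustrate the flavour of dependency-aware partitioning, here is a hedged sketch in which the dependency between two datasets is taken to be the number of workflow tasks that use both (an assumed definition; the paper's partition model is more elaborate):

```python
# Sketch: greedy dataset partitioning that tries to keep strongly
# dependent datasets in the same datacenter (illustrative; dependency
# here is the number of tasks that read both datasets, an assumption).
tasks = [{"d1", "d2"}, {"d2", "d3"}, {"d1", "d2"}, {"d4"}]

def dependency(a: str, b: str) -> int:
    return sum(1 for t in tasks if a in t and b in t)

def partition(datasets: list[str], k: int) -> list[set[str]]:
    """Seed k datacenters, then place each remaining dataset where its
    total dependency on already-placed datasets is highest, i.e. where
    moving it elsewhere would destroy the most dependency."""
    centers: list[set[str]] = [set() for _ in range(k)]
    for i, d in enumerate(sorted(datasets)):
        if i < k:
            centers[i].add(d)
            continue
        best = max(centers, key=lambda c: sum(dependency(d, e) for e in c))
        best.add(d)
    return centers

print(partition(["d1", "d2", "d3", "d4"], k=2))  # d2, d3 end up together
```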

Relevance:

100.00%

Publisher:

Abstract:

With the emergence of the big data age, the issue of how to obtain valuable knowledge from a dataset efficiently and accurately has attracted increasing attention from both academia and industry. This paper presents a Parallel Random Forest (PRF) algorithm for big data on the Apache Spark platform. The PRF algorithm is optimized based on a hybrid approach combining data-parallel and task-parallel optimization. From the perspective of data-parallel optimization, a vertical data-partitioning method is performed to reduce the data communication cost effectively, and a data-multiplexing method is performed to allow the training dataset to be reused and to diminish the volume of data. From the perspective of task-parallel optimization, a dual parallel approach is carried out in the training process of RF, and a task Directed Acyclic Graph (DAG) is created according to the parallel training process of PRF and the dependencies of the Resilient Distributed Dataset (RDD) objects. Then, different task schedulers are invoked for the tasks in the DAG. Moreover, to improve the algorithm's accuracy on large, high-dimensional, and noisy data, we perform a dimension-reduction approach in the training process and a weighted voting approach in the prediction process prior to parallelization. Extensive experimental results indicate the superiority and notable advantages of the PRF algorithm over the relevant algorithms implemented by Spark MLlib and other studies in terms of classification accuracy, performance, and scalability.
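
As a small illustration of weighted voting in the prediction stage, the sketch below weights each tree's vote by a per-tree accuracy estimate; the exact weighting scheme used by PRF may differ:

```python
# Sketch: weighted voting across trees of a random forest, with each
# tree weighted by its validation accuracy (an illustrative choice).
from collections import defaultdict

def weighted_vote(tree_predictions: list[str], tree_accuracies: list[float]) -> str:
    """Each tree votes for a class with weight equal to its accuracy;
    the class with the largest total weight wins."""
    totals: dict[str, float] = defaultdict(float)
    for label, acc in zip(tree_predictions, tree_accuracies):
        totals[label] += acc
    return max(totals, key=totals.get)

# Three weaker trees vote "spam"; the single strongest tree says "ham"
# but loses: 0.9 < 0.6 + 0.55 + 0.58.
print(weighted_vote(["spam", "spam", "ham", "spam"], [0.6, 0.55, 0.9, 0.58]))
```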