758 results for cloud computing.
Abstract:
Content Distribution Networks (CDNs) are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite their maturity, new paradigms and architecture models are still being developed in this area. Cloud Computing, on the other hand, is a more recent concept that has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and development of an open-source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN-as-a-Service (CDNaaS) paradigm. We describe our experience with the integration of the CDNaaS framework into a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking (MCN) EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project and is tasked with delivering different types of multimedia content to several thousands of geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
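As a rough illustration of the on-demand provisioning model described above, the following sketch builds a hypothetical per-tenant CDN-instance request carrying a personalized caching policy and a choice of Points of Presence. The endpoint name, field names and schema are assumptions made for illustration only; they are not the actual CDNaaS/MCN API.

import json
from dataclasses import dataclass, asdict, field
from typing import List

# Hypothetical request model for provisioning one CDN instance on demand.
# Field names are illustrative assumptions, not the actual CDNaaS/MCN schema.
@dataclass
class CachingPolicy:
    ttl_seconds: int = 3600          # how long objects stay cached at the edge
    max_object_mb: int = 500         # largest cacheable object
    content_types: List[str] = field(default_factory=lambda: ["video/mp4", "image/jpeg"])

@dataclass
class CdnInstanceRequest:
    tenant: str
    pops: List[str]                  # Points of Presence chosen per customer requirements
    policy: CachingPolicy
    billing_model: str = "pay-as-you-go"

def build_provisioning_request(tenant: str, pops: List[str]) -> str:
    """Serialize a per-tenant CDN instance request as it might be POSTed
    to a (hypothetical) /cdn/instances endpoint of the orchestrator."""
    req = CdnInstanceRequest(tenant=tenant, pops=pops, policy=CachingPolicy())
    return json.dumps(asdict(req), indent=2)

if __name__ == "__main__":
    print(build_provisioning_request("enterprise-42", ["eu-west", "us-east", "ap-south"]))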
Abstract:
The Mobile Cloud Networking project develops, among others, several virtualized services and applications, in particular: (1) IP Multimedia Subsystem as a Service, which makes it possible to deploy a virtualized, on-demand instance of the IP Multimedia Subsystem platform; (2) Digital Signage Service as a Service, which is based on a re-designed Digital Signage Service architecture adopting cloud computing principles; and (3) Information Centric Networking/Content Delivery Network as a Service, which is used for distributing, caching and migrating content from other services. Possible designs for these virtualized services and applications have been identified and are being implemented. In particular, the architectures of the mentioned services were specified, adopting cloud computing principles such as infrastructure sharing, elasticity, on-demand provisioning and pay-as-you-go. The benefits of the Reactive Programming paradigm are presented in the context of interactive cloudified Digital Signage services in a Mobile Cloud Platform, as well as the benefits of interworking between different Mobile Cloud Networking services, such as the Digital Signage Service and the Content Delivery Network Service, for better performance of Video on Demand content delivery. Finally, the management of Service Level Agreements and the support of rating, charging and billing have also been considered and defined.
Abstract:
Cloud Computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality of service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
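The contrast between reactive and predictive SLA-driven scaling can be sketched as below, assuming a response-time SLA bound and a sliding window of load samples. The AR(1) forecast and the thresholds are simplified stand-ins for the paper's performance models, not the actual algorithms.

from typing import List

SLA_RESPONSE_MS = 200.0   # assumed SLA bound on response time
SCALE_OUT_MARGIN = 0.9    # act before the bound is actually violated

def reactive_decision(current_response_ms: float) -> str:
    """Reactive policy: scale out only once the observed metric crosses the SLA bound."""
    return "scale_out" if current_response_ms > SLA_RESPONSE_MS else "hold"

def ar1_forecast(samples: List[float]) -> float:
    """One-step-ahead AR(1) forecast fitted by least squares on a sliding window."""
    x, y = samples[:-1], samples[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x) or 1e-9
    phi = cov / var
    return my + phi * (samples[-1] - mx)

def predictive_decision(window: List[float]) -> str:
    """Predictive policy: scale out when the forecast approaches the SLA bound."""
    if len(window) < 3:
        return "hold"
    return "scale_out" if ar1_forecast(window) > SCALE_OUT_MARGIN * SLA_RESPONSE_MS else "hold"

if __name__ == "__main__":
    window = [120, 130, 145, 160, 175]          # rising response times (ms)
    print(reactive_decision(window[-1]))        # hold: bound not yet violated
    print(predictive_decision(window))          # scale_out: trend predicts a violation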
Abstract:
The number of online real-time streaming services deployed over network topologies such as P2P or centralized ones has increased remarkably in recent years. This has revealed the lack of networks that are well prepared to respond to this kind of traffic. A hybrid distribution network can be an efficient solution for real-time streaming services. This paper contains the experimental results of streaming distribution in a hybrid architecture that consists of mixed connections among P2P and Cloud nodes that can interoperate together. We have chosen to represent the P2P nodes as PlanetLab machines around the world and the cloud nodes using a Cloud provider's network. First we present an experimental validation of the Cloud infrastructure's ability to distribute streaming sessions with respect to some key streaming QoS parameters: jitter, throughput and packet loss. Next we show the results obtained from different test scenarios when a hybrid distribution network is used. The scenarios measure the improvement of the multimedia QoS parameters when nodes in the streaming distribution network (located in different continents) are gradually moved into the Cloud provider infrastructure. The overall conclusion is that the QoS of a streaming service can be efficiently improved, unlike in traditional P2P systems and CDNs, by deploying a hybrid streaming architecture. This enhancement can be obtained by strategically placing certain distribution network nodes into the Cloud provider infrastructure, taking advantage of the reduced packet loss and low latency that exist among its datacenters.
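The QoS parameters named above (jitter, throughput, packet loss) can be computed from per-packet traces roughly as in the generic sketch below, which uses an RFC 3550-style interarrival-jitter estimator. This is an illustrative assumption about how such metrics are typically derived, not the paper's measurement toolchain.

from typing import List, Tuple

# Each trace entry: (sequence_number, send_time_s, recv_time_s, size_bytes)
Packet = Tuple[int, float, float, int]

def qos_metrics(trace: List[Packet], expected_packets: int):
    """Compute packet loss ratio, mean throughput (bit/s) and an RFC 3550-style
    interarrival jitter from a received-packet trace (illustrative only)."""
    loss = 1.0 - len(trace) / expected_packets
    duration = (trace[-1][2] - trace[0][2]) or 1e-9
    throughput_bps = 8 * sum(size for *_, size in trace) / duration
    jitter = 0.0
    for prev, cur in zip(trace, trace[1:]):
        # transit-time difference between consecutive packets
        d = abs((cur[2] - cur[1]) - (prev[2] - prev[1]))
        jitter += (d - jitter) / 16.0          # exponential smoothing, gain 1/16
    return loss, throughput_bps, jitter

if __name__ == "__main__":
    trace = [(1, 0.00, 0.050, 1200), (2, 0.02, 0.072, 1200), (4, 0.06, 0.115, 1200)]
    print(qos_metrics(trace, expected_packets=4))   # one packet (seq 3) was lost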
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, both Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But large-scale distributed systems complexity could only be a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, allowing it to elastically scale up or down the number of replicas involved in read operations to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
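The adaptive idea can be illustrated with a toy estimator: given the write rate and the replica propagation delay, estimate the chance that a read touching N replicas misses the latest write, and raise N until that estimate falls under the tolerated stale-read fraction. The probability model below is a deliberately simplified stand-in, not Harmony's actual estimation model.

import math

def stale_read_probability(read_replicas: int, total_replicas: int,
                           write_rate_per_s: float, propagation_delay_s: float) -> float:
    """Toy model: a replica is 'behind' if a write arrived within the last
    propagation delay; a read is stale only if every contacted replica is behind."""
    p_replica_behind = 1.0 - math.exp(-write_rate_per_s * propagation_delay_s)
    return p_replica_behind ** read_replicas if read_replicas < total_replicas else 0.0

def choose_read_consistency(total_replicas: int, write_rate_per_s: float,
                            propagation_delay_s: float, tolerated_stale: float) -> int:
    """Smallest number of replicas per read that keeps the estimated
    stale-read fraction under the application's tolerance."""
    for n in range(1, total_replicas + 1):
        if stale_read_probability(n, total_replicas, write_rate_per_s,
                                  propagation_delay_s) <= tolerated_stale:
            return n
    return total_replicas

if __name__ == "__main__":
    # e.g. 3 replicas, 20 writes/s, 5 ms propagation delay, 5% tolerated stale reads
    print(choose_read_consistency(3, 20.0, 0.005, 0.05))   # prints the chosen read level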
Abstract:
Cloud computing has seen an impressive growth in recent years, with virtualization technologies being massively adopted to create IaaS (Infrastructure as a Service) public and private solutions. Today, the interest is shifting towards the PaaS (Platform as a Service) model, which allows developers to abstract from the execution platform and focus only on the functionality. There are several public PaaS offerings available, but currently no private PaaS solution is ready for production environments. To fill this gap a new solution must be developed. In this paper we present a key element for enabling this model: a cloud repository based on the OSGi component model. The repository stores, manages, provisions and resolves the dependencies of PaaS software components and services. This repository can federate with other repositories located in the same or different clouds, both private and public. This way, dependencies can be fulfilled collaboratively, and new business models can be implemented.
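A compact sketch of the collaborative dependency resolution such a repository enables: if a component's dependency is not stored locally, federated repositories are queried in turn. Repository contents and names below are made up for illustration; the actual system resolves OSGi bundle metadata rather than plain dictionaries.

from typing import Dict, List, Optional, Set

# Each repository maps a component name to the list of components it depends on.
Repo = Dict[str, List[str]]

def resolve(component: str, local: Repo, federated: List[Repo],
            resolved: Optional[Set[str]] = None) -> Set[str]:
    """Return the transitive closure of dependencies, looking in the local
    repository first and falling back to federated ones (illustrative sketch)."""
    resolved = resolved if resolved is not None else set()
    if component in resolved:
        return resolved
    for repo in [local] + federated:
        if component in repo:
            resolved.add(component)
            for dep in repo[component]:
                resolve(dep, local, federated, resolved)
            return resolved
    raise LookupError(f"{component} not found in local or federated repositories")

if __name__ == "__main__":
    local = {"app": ["web-framework", "db-driver"], "db-driver": []}
    partner_cloud = {"web-framework": ["http-lib"], "http-lib": []}
    print(resolve("app", local, [partner_cloud]))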
Abstract:
The technological world is changing towards optimized resource management thanks to the powerful influence of technologies such as virtualization and cloud computing. This document takes a closer look at both, from the causes that motivated them to their latest trends, covering their main features, advantages and disadvantages. In addition, the Digital Home is already a reality for most people. It provides access to multiple types of telecommunication networks (3G, 4G, WI-FI, ADSL...) with varying capacity, allowing Internet connections from anywhere, at any time, and with virtually any device (personal computers, smartphones, tablets, televisions...). Companies take advantage of this to offer all kinds of services. Some of these services are based on cloud computing, above all offering cloud storage to devices with limited capacity, such as smartphones and tablets. That storage space normally resides on servers under the control of large companies. Storing private documents, videos and photos without any certainty that they are not viewed by someone without consent can raise suspicion in users. For users who want control over their privacy, there is the possibility of setting up their own servers and their own cloud service, sharing their private information only with family and friends or with anyone they grant permission to. During the project, several solutions were compared, most of them open source and freely distributed, that allow deploying at least a storage service accessible through the Internet. Some of them complement it with streaming services for music and video, sharing and synchronization of documents across multiple devices, calendars, backups, desktop virtualization, file versioning, chats, etc. The project ends with a demonstration of how digital home devices interact with a cloud server on which one of the compared solutions has previously been installed and configured. This server is packaged in a virtual machine so that it is easily transportable and usable.
Abstract:
With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from one cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded when some of the objects they store are frequently accessed, while other cache instances are less frequently used. In this paper we propose a multi-resource load balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and memory resources among cache instances by redistributing stored data. Considering the possible conflict of balancing multiple resources at the same time, we give CPU and memory resources weighted priorities based on the runtime load distributions: a scarcer resource is given a higher weight than a less scarce resource when load balancing. The system imbalance degree is evaluated based on monitoring information and on the utility load of a node, a unit for resource consumption. In addition, since continuous rebalancing of the system may affect the QoS of applications utilizing the cache system, our data selection policy ensures that each data migration minimizes the system imbalance degree and hence that the total reconfiguration cost can be minimized. An extensive simulation is conducted to compare our policy with other policies. Our policy shows a significant improvement in time efficiency and a decrease in reconfiguration cost.
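A toy version of the weighted multi-resource idea: the scarcer resource gets the higher weight, the imbalance degree is a weighted dispersion of per-instance utilization, and a candidate migration is accepted only if it lowers that degree. The formulas below are illustrative stand-ins, not the paper's exact metrics.

import statistics
from typing import Dict, List

# Per-instance utilization, e.g. {"cpu": 0.8, "mem": 0.4}, values in [0, 1].
Load = Dict[str, float]

def resource_weights(instances: List[Load]) -> Dict[str, float]:
    """Scarcer resource (higher mean utilization) gets the higher weight."""
    means = {r: statistics.mean(i[r] for i in instances) for r in instances[0]}
    total = sum(means.values()) or 1e-9
    return {r: m / total for r, m in means.items()}

def imbalance_degree(instances: List[Load], weights: Dict[str, float]) -> float:
    """Weighted sum of the standard deviations of each resource's utilization."""
    return sum(w * statistics.pstdev(i[r] for i in instances) for r, w in weights.items())

if __name__ == "__main__":
    cache_nodes = [{"cpu": 0.9, "mem": 0.5}, {"cpu": 0.3, "mem": 0.4}, {"cpu": 0.4, "mem": 0.6}]
    w = resource_weights(cache_nodes)
    print(w, imbalance_degree(cache_nodes, w))
    # A candidate data migration would be accepted only if recomputing the
    # imbalance degree on the post-migration loads yields a lower value.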
Abstract:
New Information and Communication Technologies (ICT) break new ground and redefine the way we understand business, and Cloud Computing is one of them. On-demand resource provisioning and the pay-per-use scheme are now commonplace and allow companies to save on their ICT investments. Despite the importance of this issue, we still lack methodologies that help companies develop applications oriented towards their exploitation in the Cloud. In this study we aim to fill this gap and propose a methodology for the development of ICT applications that are directed towards a business model and their subsequent outsourcing to the Cloud. For the former part, the development of SOA applications, we take as a baseline scenario a business model from which to obtain a business process model; to this end, we use software engineering tools. For the latter part, the outsourcing, we propose a guide that facilitates uploading business models into the Cloud; to this end we describe a SOA governance model, which controls the SOA. Additionally, we propose a Cloud governance model that integrates Service Level Agreements (SLAs), SOA governance and the Cloud architecture. Finally, we apply our methodology to an example illustrating our proposal. We believe that our proposal can be used as a guide/pattern for the development of business applications.
Abstract:
One of the key factors for a given application to take advantage of cloud computing is the ability to scale in an efficient, fast and reliable way. In centralized multi-party video conferencing, dynamically scaling a running conversation is a complex problem. In this paper we propose a methodology to divide the Multipoint Control Unit (the video conferencing server) into more simple units, broadcasters. Each broadcaster receives the media from a participant, processes it and forwards it to the rest. These broadcasters can be distributed among a group of CPUs. By using this methodology, video conferencing systems can scale in a more granular way, improving the deployment.
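The broadcaster decomposition can be sketched as follows: one broadcaster per participant, forwarding that participant's stream to everyone else, with broadcasters assigned to CPUs by current load. The names and the least-loaded assignment rule are illustrative assumptions, not the paper's scheduler.

from typing import Dict, List

def assign_broadcasters(participants: List[str], cpus: List[str]) -> Dict[str, str]:
    """One broadcaster per participant; each is placed on the currently
    least-loaded CPU (simple stand-in for a real scheduler)."""
    load = {cpu: 0 for cpu in cpus}
    placement = {}
    for p in participants:
        target = min(load, key=load.get)
        placement[f"broadcaster-{p}"] = target
        load[target] += 1
    return placement

def forwarding_plan(participants: List[str]) -> Dict[str, List[str]]:
    """Each broadcaster receives media from its participant and forwards
    it to all the other participants."""
    return {f"broadcaster-{p}": [q for q in participants if q != p] for p in participants}

if __name__ == "__main__":
    people = ["alice", "bob", "carol", "dave"]
    print(assign_broadcasters(people, ["cpu-1", "cpu-2"]))
    print(forwarding_plan(people))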
Abstract:
Real-world experimentation facilities accelerate the development of Future Internet technologies and services, advance the market for smart infrastructures, and increase the effectiveness of business processes through the Internet. The federation of facilities fosters experimentation and innovation with a larger and more powerful environment, increases the number and variety of the offered services, and brings forth possibilities for new experimentation scenarios. This paper introduces a management solution for cloud federation that automates service provisioning to the largest possible extent, relieves developers from time-consuming configuration settings, and caters for real-time information related to the whole lifecycle of the provisioned services. This is achieved by proposing solutions for the seamless deployment of services across the federation and for the ability of services to span different infrastructures of the federation, as well as for the monitoring of resources and data, which can be aggregated with a common structure and offered as an open ecosystem for innovation at the developers' disposal. This solution consists of several federation management tools and components that are part of the work on cloud federation conducted within the XIFI project to build the federation of cloud infrastructures for the Future Internet Lab (FIWARE Lab). We present the design and implementation of the FIWARE Lab management tools and components concerned, which are deployed within a federation of 17 cloud infrastructures distributed across Europe.
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from developing adequately, and a myriad of different options has induced the fear of technology lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on studying idealized scenarios dissimilar from real-world situations, and the latter developing solutions without taking care of how they fit with common standards, or even not disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that is focused on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that serve to separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions which determine the operations that can be performed over the environment and which is used to realize the cloud environment's scalability. I also describe a management engine tasked with resource placement, working from the environment's state and using the aforementioned set of actions. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) by a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former collects the necessary information from the environment, and the latter is modular in design and capable of interfacing with several technologies and offering several access interfaces.
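To make the two-phase engine concrete, the sketch below pairs a simple online first-fit placement with a consolidation pass that tries to empty the least-used host. The thesis uses an integer-programming solver for consolidation and its own online heuristic; both functions here are greedy, made-up stand-ins for illustration only.

from typing import Dict, Optional

Hosts = Dict[str, float]          # host -> free capacity (abstract units)
Placement = Dict[str, str]        # vm -> host

def place_online(vm: str, demand: float, free: Hosts, placement: Placement) -> Optional[str]:
    """Online phase: first host with enough free capacity (heuristic stand-in)."""
    for host, cap in free.items():
        if cap >= demand:
            free[host] -= demand
            placement[vm] = host
            return host
    return None

def consolidate(demands: Dict[str, float], free: Hosts, placement: Placement) -> None:
    """Consolidation phase (greedy stand-in for the ILP): try to empty the host
    holding the least load by migrating its VMs onto the remaining hosts."""
    used = {h: sum(demands[v] for v, ph in placement.items() if ph == h) for h in free}
    victim = min(used, key=used.get)
    for vm in [v for v, h in placement.items() if h == victim]:
        for host, cap in free.items():
            if host != victim and cap >= demands[vm]:
                free[host] -= demands[vm]
                free[victim] += demands[vm]
                placement[vm] = host
                break

if __name__ == "__main__":
    free, placement = {"h1": 6.0, "h2": 6.0}, {}
    demands = {"vm-a": 3.0, "vm-b": 3.0, "vm-c": 3.0}
    for vm, d in demands.items():
        place_online(vm, d, free, placement)
    # vm-b terminates, leaving the cluster fragmented across both hosts
    free[placement.pop("vm-b")] += demands.pop("vm-b")
    consolidate(demands, free, placement)
    print(placement)   # the remaining VMs end up co-located on a single host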
Abstract:
Emerging technologies such as cloud computing and mobile devices are creating an unprecedented opportunity for enhancing the educational system, letting educators customize and improve the learning experience, and students acquire knowledge regardless of where they are. Moreover, through gamification techniques it is possible to encourage and motivate students to learn arduous subjects by making the experience more engaging. Mobile games can be a perfect vehicle to support this enhanced learning experience. This project covers the design and development of a highly scalable and performant cloud architecture, as well as the iOS client that uses it, in order to support a new version of Temporis, a mobile multiplayer game focused on ordering time-based events (e.g. history, art, sports, entertainment and literature) in a timeline, which is currently available on Google Play. This work describes the development of the new Temporis version (Temporis v2.0), providing details about the improvements and the adaptation of the original Temporis. In particular, it describes the new backend, written in Go on Google App Engine and created to support thousands of users, as well as other features such as how to deliver push notifications from this platform. Finally, the iOS client of Temporis v2.0, developed using the latest and most relevant technologies, is explained, paying special attention to Swift (Apple's new programming language, which is safe and fast), the Functional Reactive Programming paradigm (which helps build highly interactive apps while minimizing bugs) and the VIPER architecture (an architecture that follows the SOLID principles, focuses on separation of concerns and makes it easy to reuse code on other platforms).
Abstract:
Cloud Agile Manufacturing is a new paradigm proposed in this article. Its main objective is to offer industrial production systems as a service, so that users can access any functionality available in the manufacturing cloud (process design, production, management, business integration, factory virtualization, etc.) without being experts in managing the required resources. The proposal takes advantage of many of the benefits offered by technologies and models such as Business Process Management (BPM), Cloud Computing, Service Oriented Architectures (SOA) and ontologies. It takes as its starting point the Semantic Industrial Machinery as a Service (SIMaaS) approach introduced in previous work, which facilitates the effective integration of industrial machinery into a computing environment, offering it as a network service. The work also includes an analysis of the benefits and disadvantages of the proposal.