828 results for cloud computing resources


Relevance:

90.00%

Publisher:

Abstract:

We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive-increase/multiplicative-decrease (AIMD) algorithms (well known as the resource-management mechanism of the transmission control protocol) to control the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to processing more than 116 GB of compressed data) for less than $1 in EC2 billing cost. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to reduce EC2 spot-instance cost by more than 27% against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction in billing cost against the current state of the art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
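
For illustration, the following minimal Python sketch shows how an AIMD rule of this kind could govern the number of compute units, taking a backlog prediction (such as a Kalman-filter estimate) as input. All names and parameter values are assumptions for exposition, not Dithen's actual implementation.

```python
# Illustrative AIMD-style control of compute units; parameters are
# assumptions, not Dithen's actual code.

def aimd_step(units, predicted_backlog, capacity_per_unit,
              additive_step=1, multiplicative_factor=0.5, min_units=1):
    """Return the number of compute units for the next control interval.

    predicted_backlog: estimated pending work (e.g., a Kalman-filter output).
    capacity_per_unit: work one unit can finish before its deadline.
    """
    required = predicted_backlog / capacity_per_unit
    if required > units:
        # Deadlines at risk: additive increase by a fixed step.
        return units + additive_step
    # Overprovisioned: multiplicative decrease to cut billing cost.
    return max(min_units, int(units * multiplicative_factor))

units = 4
for backlog in [100.0, 140.0, 90.0, 30.0]:  # hypothetical predictions
    units = aimd_step(units, backlog, capacity_per_unit=20.0)
    print(units)  # 5, 6, 3, 1
```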

Relevance:

90.00%

Publisher:

Abstract:

Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds. In this approach, maximum and minimum limits drive resource allocation and deallocation actions, leading to the following questions: How can cloud users set threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it over a private cloud to execute a numerical integration application. We present an analysis of best practices and possible optimizations regarding the elasticity-HPC pairing. Considering the results, we observed that the maximum threshold influences application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration in situations where earlier activation could reduce the application runtime.
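
As a hedged illustration of the threshold mechanism discussed above, the Python sketch below shows a reactive scaling decision driven by upper and lower CPU-load thresholds. The threshold values and the monitoring interface are assumptions for exposition, not AutoElastic's code.

```python
# Reactive, threshold-driven elasticity sketch; values are illustrative.

UPPER = 0.80   # CPU-load fraction that triggers allocation
LOWER = 0.30   # CPU-load fraction that triggers deallocation

def elasticity_action(cpu_load, vms, min_vms=1, max_vms=8):
    """Decide a scaling action from the observed average CPU load."""
    if cpu_load > UPPER and vms < max_vms:
        return vms + 1          # scale out
    if cpu_load < LOWER and vms > min_vms:
        return vms - 1          # scale in
    return vms                  # no change

# An upper threshold close to 1.0 reacts late: the load must nearly
# saturate the VMs before new resources are requested, which matches
# the weaker reactivity observed in the article.
for load in [0.55, 0.85, 0.92, 0.40, 0.20]:
    print(elasticity_action(load, vms=2))
```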

Relevance:

90.00%

Publisher:

Abstract:

The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms "merge" identical pages detected in multiple virtual machines into the same physical memory, using a copy-on-write mechanism in a manner that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors depending on the guest system workload and execution time.
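
The following toy Python sketch illustrates the general idea of transparent page sharing with copy-on-write. It is a deliberate simplification (real mechanisms such as KVM's KSM compare full page contents and operate inside the hypervisor), not the implementation evaluated in this study.

```python
# Toy model of transparent page sharing: identical pages are detected by
# hashing and mapped to one physical copy; a write re-privatizes the page
# (copy-on-write). Real hypervisor mechanisms also compare full contents
# to rule out hash collisions.

import hashlib

physical = {}      # content hash -> single shared copy
page_table = {}    # (vm, virtual page number) -> content hash

def map_page(vm, vpn, content: bytes):
    h = hashlib.sha256(content).hexdigest()
    physical.setdefault(h, content)      # merge identical pages
    page_table[(vm, vpn)] = h

def write_page(vm, vpn, new_content: bytes):
    # Copy-on-write: the writing VM gets its own private copy.
    map_page(vm, vpn, new_content)

map_page("vm1", 0, b"\x00" * 4096)
map_page("vm2", 0, b"\x00" * 4096)       # shares the same physical copy
print(len(physical))                     # 1: both zero pages merged
write_page("vm2", 0, b"\x01" + b"\x00" * 4095)
print(len(physical))                     # 2: the write broke the sharing
```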

Relevance:

90.00%

Publisher:

Abstract:

The use of cloud computing is currently growing, and many providers offer services based on this technology. One of them is Amazon Web Services, which, through its Amazon EC2 service, offers different instance types that can be used according to our needs. The AWS business model is pay-per-use: we pay only for the time the instances are in use. In this work we implement on Amazon EC2 an application whose goal is to extract, from different information sources, sales data from publishers and bookstores in Spain. These data are processed, loaded into a database, and used to generate statistical reports that help customers make better decisions. Because the application processes a large amount of data, we propose the development and validation of a model that allows us to obtain an optimal execution on Amazon EC2. This model takes into account execution time, usage cost, and a cost/performance metric. Additionally, Docker container technology is used to carry out a specific case of the application deployment.
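
As a rough illustration of such a cost/performance comparison, the Python sketch below scores hypothetical instance choices. The instance names are real EC2 types, but the prices and runtimes are made-up placeholders, not measurements from this work.

```python
# Illustrative cost/performance comparison of EC2 instance types;
# all figures below are placeholders, not real measurements.

import math

candidates = {
    # instance type: (hourly price in USD, measured runtime in hours)
    "t2.medium": (0.046, 5.0),
    "m4.large":  (0.100, 2.6),
    "c4.xlarge": (0.199, 1.2),
}

def billed_cost(price_per_hour, runtime_hours):
    # Assumes per-started-hour billing; per-second billing would use
    # runtime_hours directly.
    return price_per_hour * math.ceil(runtime_hours)

for name, (price, runtime) in candidates.items():
    cost = billed_cost(price, runtime)
    # One possible cost/performance metric: cost * runtime (lower is better),
    # which penalizes instances that are both slow and expensive.
    print(f"{name}: runtime={runtime}h cost=${cost:.2f} "
          f"cost*runtime={cost * runtime:.2f}")
```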

Relevance:

90.00%

Publisher:

Abstract:

This talk, based on our newest findings and experiences from research and industrial projects, addresses one of the most relevant challenges for the decade to come: how to integrate the Internet of Things with software, people, and processes, considering modern Cloud Computing and Elasticity principles. Elasticity is seen as one of the main characteristics of Cloud Computing today. Is elasticity simply scalability on steroids? This talk addresses the main principles of elasticity, presents a fresh look at the problem, and examines how to integrate people, software services, and things into one composite system, which can be modeled, programmed, and deployed on a large scale in an elastic way. This novel paradigm has major consequences for how we view, build, design, and deploy ultra-large-scale distributed systems.

Relevance:

90.00%

Publisher:

Abstract:

Part 18: Optimization in Collaborative Networks

Relevance:

90.00%

Publisher:

Abstract:

Part 12: Collaboration Platforms

Relevance:

90.00%

Publisher:

Abstract:

The diversity in the way cloud providers offer their services, give their SLAs, present their QoS, or support different technologies makes the portability and interoperability of cloud applications very difficult and favours the well-known vendor lock-in problem. We propose a model to describe cloud applications and the required resources in an agnostic way, independent of providers and resources, in which individual application modules, and entire applications, may be re-deployed using different services without modification. To support this model, and following the proposal of a variety of cross-cloud application management tools by different authors, we propose going one step further in the unification of cloud services with a management approach in which IaaS and PaaS services are integrated into a unified interface. We provide support for deploying applications whose components are distributed over different cloud providers, indistinctly using IaaS and PaaS services.
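
A minimal sketch of what such a provider-agnostic description could look like, with deployment bindings kept separate from the application modules so that modules can be re-bound to other services without modification. All names and fields are illustrative assumptions, not the authors' actual model.

```python
# Provider-agnostic application description sketch: modules declare only
# the resources they need; a separate binding maps each module to a
# concrete IaaS or PaaS service behind one unified interface.

from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    runtime: str                 # e.g. "java11", "python3"
    min_ram_gb: float
    depends_on: list = field(default_factory=list)

@dataclass
class Binding:
    module: str
    provider: str                # e.g. "aws", "azure"
    service_kind: str            # "IaaS" or "PaaS"

app = [
    Module("frontend", "python3", 1.0, ["backend"]),
    Module("backend", "java11", 4.0),
]

# The same modules can be re-bound to other providers without changes.
plan = [
    Binding("frontend", "aws", "PaaS"),
    Binding("backend", "azure", "IaaS"),
]
for b in plan:
    print(f"deploy {b.module} on {b.provider} ({b.service_kind})")
```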

Relevance:

90.00%

Publisher:

Abstract:

Current infrastructure-as-a-service (IaaS) cloud systems allow users to load their own virtual machines. However, most of these systems do not provide users with an automatic mechanism for loading a network topology of virtual machines. In order to specify and implement the network topology, we use software switches and routers as network elements. Before running a group of virtual machines, the user sets up the system once to specify the network topology. Then, given the user's request to run a specific topology, our system loads the appropriate virtual machines (VMs) and also runs separate VMs as software switches and routers. Furthermore, we have developed a manager that handles physical hardware failures. The system is designed so that users can operate it without knowing all of the internal technical details.
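
The following Python sketch illustrates the idea of declaring a topology in which software switches and routers are themselves booted as separate VMs before the user's machines attach to them. The format and names are assumptions for exposition, not the system's actual interface.

```python
# Declarative VM network topology sketch: network elements (software
# switches and routers) are ordinary VMs that must boot first.

topology = {
    "nodes": [
        {"name": "web1", "kind": "vm",     "image": "ubuntu"},
        {"name": "web2", "kind": "vm",     "image": "ubuntu"},
        {"name": "sw1",  "kind": "switch"},   # software switch VM
        {"name": "gw",   "kind": "router"},   # software router VM
    ],
    "links": [("web1", "sw1"), ("web2", "sw1"), ("sw1", "gw")],
}

def boot_order(topo):
    """Network elements come up before the user's VMs attach to them."""
    elements = [n["name"] for n in topo["nodes"] if n["kind"] != "vm"]
    vms = [n["name"] for n in topo["nodes"] if n["kind"] == "vm"]
    return elements + vms

print(boot_order(topology))   # ['sw1', 'gw', 'web1', 'web2']
```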

Relevance:

90.00%

Publisher:

Abstract:

The increasing need for computational power in areas such as weather simulation, genomics and Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
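
As an illustration of the third point, the Python sketch below picks an execution site by weighing expected slowdown against monetary cost. The sites, figures, and weighting scheme are illustrative assumptions, not the policies implemented in this work.

```python
# Site-selection sketch: score each site by a weighted mix of expected
# completion time (queue wait + runtime) and monetary cost.

sites = [
    # name, cost per CPU-hour (USD), expected queue wait (h), relative speed
    ("local_grid", 0.00, 2.0, 1.0),
    ("partner",    0.02, 0.5, 1.2),
    ("cloud_spot", 0.05, 0.0, 1.5),
]

def pick_site(job_cpu_hours, alpha=0.5):
    """alpha weighs time against money; alpha=1.0 ignores cost entirely."""
    def score(site):
        name, cost, wait, speed = site
        runtime = job_cpu_hours / speed
        money = cost * job_cpu_hours
        return alpha * (wait + runtime) + (1 - alpha) * money
    return min(sites, key=score)[0]

print(pick_site(100.0, alpha=0.9))   # time-critical job
print(pick_site(100.0, alpha=0.1))   # cost-sensitive job
```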

Relevance:

90.00%

Publisher:

Abstract:

Surgical interventions are usually performed in an operating room; however, access to information by the medical team during the intervention is limited. In conversations with medical staff, we observed that they attach significant importance to improving direct access to information and communication through real-time queries during the procedure, because the current process is rather slow and there is a lack of interaction with the systems in the operating room. These systems can be integrated in the Cloud, adding new functionalities to the existing systems in which medical records are processed. Such a communication system therefore needs to be built upon information and interaction access specifically designed and developed to aid medical specialists. Copyright 2014 ACM.

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a study and experimental tests assessing the viability of using multiple wireless technologies in urban traffic light controllers in a Smart City environment. Communication drivers, different types of antennas, data acquisition methods and data processing for monitoring the network are presented. The sensor and actuator modules are connected in a local area network through two distinct low-power wireless networks using the 868 MHz and 2.4 GHz frequency bands. All data communications using 868 MHz go through a Moteino. Various tests were performed to assess the most advantageous features of each communication type. The experimental results show better range for the 868 MHz solutions, whereas 2.4 GHz offers the advantage of self-healing mesh networking. The pros and cons of both communication methods are presented.

Relevance:

90.00%

Publisher:

Abstract:

The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental program for the coming years, which includes large investments and improvements both in the hardware of the detectors and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years, the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction, identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project concerning an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been updated with new features in the data preprocessing phase, giving the user more flexibility. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
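
Below is a minimal sketch of such a model-agnostic pipeline over ROOT files. It assumes the uproot library for ROOT I/O and a scikit-learn-style model exposing partial_fit; the file, tree, and branch names are placeholders, and this is not the MLaaS4HEP API itself.

```python
# Model-agnostic ML pipeline sketch in the spirit of MLaaS4HEP:
# read ROOT data in chunks, preprocess, and train incrementally,
# independent of the model type.

import numpy as np
import uproot  # ROOT I/O (pip install uproot)

def iterate_features(files, tree, branches, step="100 MB"):
    """Stream a ROOT tree chunk by chunk so files of arbitrary size fit in memory."""
    for chunk in uproot.iterate([f"{f}:{tree}" for f in files], branches,
                                step_size=step, library="np"):
        yield np.column_stack([chunk[b] for b in branches])

def train(model, files, tree, feature_branches, label_branch):
    """Read -> preprocess -> incremental training, agnostic to the model."""
    for block in iterate_features(files, tree, feature_branches + [label_branch]):
        X, y = block[:, :-1], block[:, -1]
        # Works with any estimator exposing partial_fit,
        # e.g. sklearn.linear_model.SGDClassifier.
        model.partial_fit(X, y, classes=[0, 1])
    return model
```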

Relevance:

90.00%

Publisher:

Abstract:

Data are an invaluable resource for every organization. On the one hand, this information must be managed through classic operational systems; on the other, it must be analysed to obtain insights that can guide business decisions. One of the fundamental tools supporting business decisions is the data warehouse. This thesis is the result of an internship carried out with the company Injenia S.r.l. The internship focused on optimizing a data warehouse that the company sells as an add-on module of a software product named Interacta. This data warehouse, Interacta Analytics, has shown considerable architectural and performance issues over time. The architecture currently used to create and manage the data within Interacta Analytics follows a batch approach; therefore, the central goal of the study is to find alternative batch solutions that save both money and time, while also exploring the possibility of a transition to a streaming architecture. The tools used in this research also had to remain in line with the technologies used for Interacta, namely the services of the Google Cloud Platform. After a brief discussion of the theoretical background of this area, the thesis focuses on the operation of the main software and on the logical structure of the analytics module. Finally, the experimental work is presented: first, an analysis of the main issues of the as-is system; then, the formulation and evaluation of four batch and two streaming improvement hypotheses. As stated in the conclusions of the research, these greatly improve the performance of the analytics system in terms of processing time, total cost, and architectural simplicity, in particular thanks to the use of the serverless container and FaaS services of Google's cloud platform.
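
A minimal sketch of the batch-versus-streaming contrast, assuming Apache Beam (which runs on Google Cloud Dataflow) as the unifying framework: the same parsing logic can back both the current batch loads and a streaming variant. The topic, bucket, and window size are illustrative placeholders, not Interacta Analytics' actual configuration.

```python
# Batch vs. streaming ingestion sketch with Apache Beam; names are
# placeholders, not the real Interacta Analytics setup.

import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

def build_pipeline(streaming: bool) -> beam.Pipeline:
    p = beam.Pipeline(options=PipelineOptions(streaming=streaming))
    if streaming:
        # Events are consumed continuously and grouped into one-minute
        # windows, keeping the warehouse fresh without periodic reloads.
        raw = (p | "ReadPubSub" >> beam.io.ReadFromPubSub(
                       topic="projects/my-project/topics/interacta-events")
                 | "Window" >> beam.WindowInto(FixedWindows(60)))
    else:
        # Batch variant: periodic full reads of exported files.
        raw = p | "ReadFiles" >> beam.io.ReadFromText(
            "gs://my-bucket/exports/*.json")
    raw | "Parse" >> beam.Map(json.loads)  # downstream transforms go here
    return p

# build_pipeline(streaming=True).run() would submit the streaming variant.
```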