724 results for cloud computing, accountability, SLA, responsibility, security, privacy, trust
Abstract:
Since the era of Cloud Computing began, many things have changed: it is now possible to obtain a server in real time and use automated tools to install applications on it. This thesis describes MODDE (Model-Driven Deployment Engine), a tool for automatic deployment starting from the ABS language. ABS is an object-oriented language that allows classes to be described in an abstract manner. Every component declared in this language has values and dependencies. The thesis then describes the specification language DDLang, in which all the constraints and final configurations are expressed. Next, the architecture of MODDE is explained. It uses scripts that integrate the tools Zephyrus and Metis and creates an ABS main from the three input files, which is used to allocate machines in a Cloud. The two sub-tools used by MODDE, Zephyrus and Metis, are also introduced. The former chooses which services to install, taking all of their dependencies into account and trying to optimize the result. The latter manages the order in which to install them, taking into account their internal states and dependencies. Together, these components yield a rather effective automatic installation. Finally, after explaining how MODDE works, the thesis shows how to integrate it into a web service to make it available to users; it is installed on an Apache HTTP server inside a Docker container.
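As an illustration of the kind of ordering problem Metis solves, here is a minimal Python sketch (the service names and dependency graph are invented for the example, not taken from the thesis): a topological sort yields an installation order in which every service is deployed only after its dependencies.

```python
# Hypothetical sketch of dependency-ordered deployment, the kind of
# ordering Metis computes; service names here are illustrative only.
from graphlib import TopologicalSorter

# Each service is mapped to the set of services it depends on.
dependencies = {
    "load_balancer": {"webapp"},
    "webapp": {"database"},
    "database": set(),
}

# static_order() yields a valid installation order: every service
# appears only after all of its dependencies.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['database', 'webapp', 'load_balancer']
```

Metis additionally accounts for the components' internal states during installation, which this sketch omits.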
Abstract:
This thesis project is the development of a distributed system for data acquisition and interactive visualization. The system is used at CERN (the European Organization for Nuclear Research) to collect data on the operation of the LHC (Large Hadron Collider, the infrastructure where most of the experiments conducted at CERN take place) and to make them available to the public in real time through a user-friendly web dashboard. The infrastructure developed is based on a prototype designed and implemented at CERN in 2013. That prototype was born because, as CERN has become increasingly popular with the general public in recent years, the need was felt to make the data about the experiments and the status of the LHC available in real time to a growing number of users outside the technical and scientific staff. The challenges in achieving this concern both the data producers, i.e., the LHC devices, and the data consumers, i.e., the clients that want to access the data. On one side, the devices whose data we want to expose are critical systems that must not be overloaded with requests, reside in a protected network with restricted access, and use heterogeneous communication protocols and data formats. On the other side, users must be able to access the data through a web interface (or web dashboard) that is rich and interactive, yet at the same time simple and lightweight, usable from mobile devices as well. The system we developed brings significant improvements over the solutions previously proposed for these problems. In particular, it presents a user interface made of several configurable, reusable widgets that allow the data to be exported both in graphical form and in a machine-readable format. Another novelty is the architecture of the infrastructure we developed: being based on Hazelcast, it is a distributed, modular, horizontally scalable infrastructure. It is in fact possible to add or remove agents that interface with the LHC devices, and web servers that interface with the users, in a way that is completely transparent to the system. Beyond these new features and possibilities, our system, as discussed in the thesis, offers many starting points for interesting future work.
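To make the data flow concrete, here is a minimal sketch of the shared-map pattern such a Hazelcast-based infrastructure could use, written with the Hazelcast Python client for brevity (the cluster address, map name, key, and value are illustrative assumptions, not taken from the thesis):

```python
# Minimal sketch of the publish/consume pattern the abstract describes,
# using the Hazelcast Python client; all names and values are invented.
import hazelcast

client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])

# An acquisition agent writes the latest device reading into a shared map...
readings = client.get_map("lhc-readings").blocking()
readings.put("beam-energy-tev", 6.5)

# ...and any web server node can read it back for the dashboard,
# without knowing which agent produced it.
print(readings.get("beam-energy-tev"))

client.shutdown()
```

Because the map lives in the Hazelcast cluster rather than in any single process, agents and web servers can join or leave independently, which is what makes the architecture horizontally scalable.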
Abstract:
The analysis of big data can yield enormous benefits in a wide range of application domains. One of the main factors contributing to the richness of big data is the use, not planned in advance, of previously stored data, possibly in conjunction with other heterogeneous datasets: this makes it possible to find significant and unexpected correlations in the data. Precisely for this reason, the Value that data potentially carries pushes organizations to collect and store ever more data and to search for innovative and original approaches to analyzing it. The strongly innovative use of big data in this sense, and the technological requirements for managing it, have opened up important security and privacy problems, to the point of making the security tools used so far in traditional systems inadequate or hard to manage. This thesis analyzes multiple aspects of security in the big data domain and offers a possible approach to data security. First, the thesis investigates the main privacy threats introduced by big data, assessing the feasibility of the countermeasures available in the current state of the art. Among these, access control has also faced considerable challenges caused by big data requirements: this work analyzes the strengths and weaknesses of attribute-based access control (ABAC), a model currently under discussion in the debate on security and privacy in big data. To make ABAC workable in a big data context, a supporting mechanism is needed to assign visibility attributes to the information to be protected. The objective of this thesis is to evaluate the feasibility, significant characteristics, and limits of machine learning as a possible approach for this task.
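As a minimal illustration of the ABAC model under discussion (all attribute names and the policy rule are assumptions made for the example), an access decision combines user, resource, and environment attributes:

```python
# Illustrative sketch of an ABAC decision: access is granted only when
# user, resource, and environment attributes jointly satisfy a rule.
def abac_decision(user, resource, environment):
    # Example rule: analysts may read internal data only from the intranet.
    return (
        user["role"] == "analyst"
        and resource["visibility"] in {"public", "internal"}
        and environment["network"] == "intranet"
    )

user = {"role": "analyst"}
resource = {"visibility": "internal"}
environment = {"network": "intranet"}
print(abac_decision(user, resource, environment))  # True
```

The thesis's question is whether machine learning can reliably assign attributes such as resource["visibility"] automatically, when data volumes make manual labeling impractical.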
Abstract:
The 5th generation of mobile networking introduces the concept of "network slicing": the network is "sliced" horizontally, and each slice complies with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources and relies on the virtual network as the core abstraction for obtaining a logical resource. Network Function Virtualisation (NFV) provides logical resources for virtual network functions, enabling the virtual network concept; it relies on Software Defined Networking (SDN) as the main technology for realizing the virtual network as a resource, and it also defines the virtual network infrastructure, with all the components needed to meet network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV likewise uses virtual computing resources to deploy virtual network functions instead of requiring custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which depends on the ability to manage QoS in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability, and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right tradeoff between the QoS levels they can offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
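As a sketch of what slice differentiation means in practice (the slice names and all numeric targets below are illustrative assumptions), each slice carries its own QoS parameter set:

```python
# Sketch of per-slice QoS targets; values are invented for illustration.
from dataclasses import dataclass

@dataclass
class SliceQoS:
    name: str
    bandwidth_mbps: float   # guaranteed throughput
    latency_ms: float       # maximum end-to-end latency
    availability: float     # fraction of time the slice must be up

slices = [
    SliceQoS("massive-iot", bandwidth_mbps=1.0, latency_ms=500.0, availability=0.99),
    SliceQoS("urllc", bandwidth_mbps=10.0, latency_ms=1.0, availability=0.99999),
    SliceQoS("embb", bandwidth_mbps=1000.0, latency_ms=20.0, availability=0.999),
]
for s in slices:
    print(s)
```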
Abstract:
Simulation is defined as the representation of the behavior of a system or process by means of the operation of another or, following the etymology of the verb "to simulate", as the reproduction of something fictitious and unreal as if it were real. Simulation lets us model reality, explore different solutions, evaluate systems that cannot be built for various reasons, and carry out different, dynamic evaluations with respect to varying conditions. Simulation models can reach an extremely high degree of expressiveness, and a single computer can rarely deliver the expected results in acceptable time. One possible solution, given current technological trends, is to increase computational capacity through a distributed architecture (exploiting, for example, the possibilities offered by cloud computing). This thesis focuses on this area, relating it to another topic that is gaining relevance day by day: online anonymity. Recent news events have shown how a public, intrinsically insecure network such as today's Internet is not suited to preserving the confidentiality, integrity and, in some cases, availability of the assets we use: when distributing computational resources that interact with each other, we cannot ignore the many concrete risks; in some sensitive simulation contexts (e.g., military simulation, scientific research, etc.) we cannot afford the uncontrolled disclosure of our data or, worse still, an attack on the availability of the resources involved. Being anonymous implies an extremely relevant property: being less exposed to attack, because not identifiable.
Abstract:
The past decade has seen energy consumption in servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of both reducing the energy consumption of server systems and reducing the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DVFS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve and manage their own electricity costs and lower their carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
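To illustrate the intuition behind combining VOVF and DVFS (this is a simplified sketch with invented numbers, not the dissertation's actual models), one can power on just enough servers for the current load and then run them at the lowest frequency that still sustains it:

```python
# Hedged sketch of VOVF + DVFS: all capacities and frequencies invented.
import math

FREQS_GHZ = [1.2, 1.8, 2.4]        # available DVFS operating points
CAPACITY_PER_GHZ = 100.0           # requests/s one server handles per GHz

def allocate(load_rps):
    f_max = FREQS_GHZ[-1]
    # VOVF: fewest servers that can carry the load at maximum frequency.
    n = max(1, math.ceil(load_rps / (CAPACITY_PER_GHZ * f_max)))
    # DVFS: lowest frequency at which those n servers still keep up.
    per_server = load_rps / n
    f = next(f for f in FREQS_GHZ if CAPACITY_PER_GHZ * f >= per_server)
    return n, f

print(allocate(500.0))  # (3, 1.8): 3 servers at 1.8 GHz
```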
Abstract:
Cloud computing services have emerged as an essential component of the Enterprise IT infrastructure. Migration towards a full-range, large-scale convergence of Cloud and network services has become the current trend for addressing the requirements of the Cloud environment. Our approach takes the infrastructure-as-a-service paradigm to build converged virtual infrastructures, which allow offering tailored performance and enable multi-tenancy over a common physical infrastructure. Thanks to virtualization, new ways of exploiting the physical infrastructure may arise, for both transport network and Data Centre services. This approach makes the network and Data Centre resources dedicated to Cloud Computing converge on the same flexible and scalable level. The work presented here is based on the automation of the virtual infrastructure provisioning service. On top of the virtual infrastructures, a coordinated operation and control of the different resources is performed with the objective of automatically tailoring connectivity services to the Cloud service dynamics. Furthermore, in order to support elasticity of the Cloud services through the optical network, dynamic re-planning features have been added to the virtual infrastructure service, allowing existing virtual infrastructures to be scaled up or down to optimize resource utilisation and dynamically adapt to users' demands. Thus, the dynamic re-planning of the service becomes a key component for coordinating Cloud and optical network resources optimally in terms of resource utilisation. The presented work is complemented with a use case of the virtual infrastructure service being adopted in a distributed Enterprise Information System that scales up and down as a function of application requests.
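A minimal sketch of the re-planning trigger idea (the thresholds and scaling policy are assumptions for illustration; the actual service coordinates optical network and Data Centre resources in a far richer way):

```python
# Toy re-planning rule: scale the virtual infrastructure when
# utilisation leaves a band; thresholds are invented for the example.
def replan(utilisation, vms, scale_up=0.8, scale_down=0.3):
    if utilisation > scale_up:
        return vms + 1   # request an additional resource
    if utilisation < scale_down and vms > 1:
        return vms - 1   # release a resource to optimise utilisation
    return vms

print(replan(0.9, 4))  # 5
```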
Abstract:
Long Term Evolution (LTE) represents the fourth-generation (4G) technology capable of providing high data rates as well as support for high-speed mobility. The EU FP7 Mobile Cloud Networking (MCN) project integrates cloud computing concepts into LTE mobile networks in order to increase LTE's performance. In this way, a shared, distributed, virtualized LTE mobile network is built that can optimize the utilization of virtualized computing, storage, and network resources and minimize communication delays. Two important features that can be used in such a virtualized system to improve its performance are user mobility and bandwidth prediction. This paper introduces the architecture and challenges associated with user mobility and bandwidth prediction approaches in virtualized LTE systems.
Abstract:
Mobile network usage has increased rapidly over the years, with significant consequences for performance requirements. In this paper, we propose mechanisms that use Information-Centric Networking to perform load balancing in mobile networks, providing content delivery over multiple radio technologies at the same time, thus using resources efficiently and improving the overall performance of content transfer. Meaningful results were obtained by comparing content transfer over single radio links using typical strategies with content transfer over multiple radio links using Information-Centric Networking load balancing. The results demonstrate that Information-Centric Networking load balancing increases the performance and efficiency of 3GPP Long Term Evolution mobile networks while greatly improving the perceived network quality for end users.
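A toy sketch of the load-balancing idea (the link names and the 70/30 capacity split are assumptions): content requests are spread across radio links in proportion to each link's estimated capacity.

```python
# Illustrative sketch: weighted spreading of content requests across
# two radio links; the capacity shares are invented for the example.
import random

links = ["lte", "wifi"]
capacity_share = [0.7, 0.3]

# Assign each of 10 content chunks to a link, proportionally to capacity.
assignment = random.choices(links, weights=capacity_share, k=10)
print(assignment)
```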
Abstract:
The telecommunications industry has recently been benefiting from infrastructure sharing, one of the most fundamental enablers of cloud computing, leading to the emergence of the Mobile Virtual Network Operator (MVNO) concept. The main aims of this approach are to support on-demand provisioning and elasticity of virtualized mobile network components, based on data traffic load. To realize this, during operation and management procedures, the virtualized services need to be triggered in order to scale an instance up/down or out/in. In this paper we propose an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service), comprising two algorithms that predict user mobility and network link bandwidth availability; it can be implemented in a cloud-based mobile network structure and used as a support service by any other virtualized mobile network service. MOBaaS can provide prediction information to generate the triggers required for on-demand deployment, provisioning, and disposal of virtualized network components. This information can also be used for self-adaptation procedures and optimal network function configuration at run time. Through preliminary experiments with a prototype implementation on the OpenStack platform, we evaluated and confirmed the feasibility and effectiveness of the prediction algorithms and the proposed architecture.
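As one simple way to realize the mobility side of such a prediction service (the cell names and movement history are invented for the example; MOBaaS's actual algorithms are more sophisticated), a first-order Markov predictor returns the most likely next cell:

```python
# Illustrative first-order Markov predictor of a user's next cell.
from collections import Counter, defaultdict

history = ["cellA", "cellB", "cellA", "cellB", "cellC", "cellA", "cellB"]

# Count observed transitions: current cell -> next cell.
transitions = defaultdict(Counter)
for cur, nxt in zip(history, history[1:]):
    transitions[cur][nxt] += 1

def predict_next(cell):
    # Most frequently observed successor of the current cell.
    return transitions[cell].most_common(1)[0][0]

print(predict_next("cellA"))  # 'cellB'
```

A prediction like this is what would feed the scaling triggers: if many users are predicted to move into one cell, the virtualized components serving it can be provisioned ahead of time.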
Abstract:
The evolution of wireless access technologies and mobile devices, together with the constant demand for video services, has created new Human-Centric Multimedia Networking (HCMN) scenarios. However, HCMN poses several challenges for content creators and network providers in delivering multimedia data at a quality level acceptable to the user. Moreover, human experience and context, as well as network information, play an important role in adapting and optimizing video dissemination. In this paper, we discuss trends for providing video dissemination with Quality of Experience (QoE) support by integrating HCMN with cloud computing approaches. We identify five trends arising from such integration, namely Participatory Sensor Networks, Mobile Cloud Computing formation, QoE assessment, QoE management, and video or network adaptation.
Abstract:
Complexity has always been one of the most important issues in distributed computing. From the first clusters to grids and now cloud computing, dealing correctly and efficiently with system complexity is the key to taking the technology a step further. In this sense, global behavior modeling is an innovative methodology aimed at understanding grid behavior. The main objective of this methodology is to synthesize the grid's vast, heterogeneous nature into a simple but powerful behavior model, represented in the form of a single, abstract entity with a global state. Global behavior modeling has proved very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It generates a descriptive model that could be greatly improved if extended not only to explain behavior but also to predict it. In this paper we present a prediction methodology whose objective is to define the techniques needed to create global behavior prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate the approach.
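A minimal sketch of the step from a descriptive to a predictive global model (the metric and its values are assumptions): extrapolating the next value of a single global-state metric from its recent trend.

```python
# Toy predictor: least-squares extrapolation of a global-state metric
# over a sliding window; the series values are invented for the example.
def predict_next(series, window=4):
    recent = series[-window:]
    n = len(recent)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(recent) / n
    # Least-squares slope of the recent samples.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, recent)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return recent[-1] + slope

grid_load = [0.42, 0.45, 0.50, 0.56, 0.61]  # fraction of busy nodes
print(round(predict_next(grid_load), 3))     # 0.664
```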
Abstract:
This paper proposes a new methodology focused on implementing cost-effective architectures on Cloud Computing systems. Alongside this methodology, the paper presents some disadvantages of systems based on single-Cloud architectures and offers advice to take into account when developing hybrid systems. The work also includes a validation of these ideas implemented in a complete videoconference service developed by our research group. This service supports a large number of users per conference, multiple simultaneous conferences, and different client software (requiring transcoding of audio and video flows), and provides features such as automatic recording. Furthermore, it offers different kinds of connectivity, including SIP clients and a client based on Web 2.0. The ideas proposed in this article are intended to be a useful resource for any researcher or developer who wants to implement cost-effective systems on several Clouds.
Abstract:
Cloud computing is one of the most relevant computing paradigms available nowadays. Its adoption has increased in recent years thanks to large investment and research from business enterprises and academic institutions. Among all the services cloud providers usually offer, Infrastructure as a Service (IaaS) has gained momentum for solving HPC problems in a more dynamic way, without the need for expensive investments. The integration of a large number of providers is a major goal, as it enables improving the quality of the selected resources in terms of pricing, speed, redundancy, etc. In this paper, we propose a system architecture, based on semantic solutions, to build an interoperable scheduler for federated clouds that works with several IaaS providers in a uniform way. Based on this architecture, we implement a proof-of-concept prototype and test it with two different cloud solutions to provide some experimental results on the viability of our approach.
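A toy sketch of the uniform selection such a scheduler enables (the provider names, prices, and scoring policy are assumptions): once heterogeneous IaaS offers are normalized to a common schema, a single scoring function can rank them.

```python
# Illustrative ranking of normalized IaaS offers; all data is invented.
providers = [
    {"name": "cloud-a", "price_per_hour": 0.12, "cores": 4, "region": "eu"},
    {"name": "cloud-b", "price_per_hour": 0.09, "cores": 2, "region": "us"},
    {"name": "cloud-c", "price_per_hour": 0.20, "cores": 8, "region": "eu"},
]

def score(offer, needed_cores=4):
    # Filter out offers that cannot satisfy the request, then prefer
    # the cheapest one (a deliberately simple policy for illustration).
    if offer["cores"] < needed_cores:
        return float("inf")
    return offer["price_per_hour"]

best = min(providers, key=score)
print(best["name"])  # 'cloud-a'
```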
Abstract:
European public administrations must manage citizens' digital identities, particularly considering interoperability among different countries. Owing to the diversity of electronic identity management (eIDM) systems, when users of one such system seek to communicate with governments using a different system, both systems must be linked and understand each other. To achieve this, the European Union is working on an interoperability framework. This article provides an overview of eIDM systems' current state at a pan-European level. It identifies and analyzes issues on which agreement exists, as well as those that aren't yet resolved and are preventing the adoption of a large-scale model.