968 results for Cloud Storage Google
Abstract:
Cloud storage is a model of networked data storage in which the data are held on multiple servers, physical and/or virtual, generally hosted by third parties or on dedicated servers. Through this model, personal or business information (videos, photographs, music, databases or files) can be accessed in a "dematerialised" way, without knowing the physical location of the data, from anywhere in the world and with any suitable device. The advantages of this approach are many: virtually unlimited storage capacity, payment only for the storage actually used, files accessible from anywhere in the world, drastically reduced maintenance, and greater safety, since files are protected from the theft, fire or damage that could affect local computers. Google Cloud Storage falls into this category: it is a service for developers, provided by Google, that allows data to be saved and manipulated directly on Google's infrastructure. In more detail, Google Cloud Storage provides a programming interface that uses simple HTTP requests to perform operations on that infrastructure. Examples of supported operations are: uploading a file, downloading a file, deleting a file, listing files, or obtaining the size of a given file. Each of these HTTP requests encapsulates the method used (the request type, such as GET, PUT, ...) and a "scope" (the resource on which the request operates). It follows that it becomes possible to build an application that, using these HTTP requests, offers a cloud storage service (in which applications save data remotely, generally through third-party servers). In this thesis, after analysing the Google Cloud Storage service in detail, an application called iHD was implemented that uses this service to save, manipulate and share data remotely (in the "cloud"). Typical operations of this application allow folders to be shared among several users subscribed to the service, files to be uploaded and downloaded, folders or files to be deleted, and folders to be created. The need for an application of this kind arose from the strong growth, in the mobile phone market, of devices whose technologies and features are increasingly tied to the Internet and the connectivity it offers. The thesis also describes the design and implementation phases of the iHD application. In the design phase, all the functional and non-functional requirements of the application were analysed, together with the modules of which it is composed. Finally, for the implementation phase, the thesis presents all the classes and their methods for each module and, in some cases, how these classes were actually implemented in the chosen programming language.
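To make the request style concrete, here is a minimal sketch of the kind of HTTP calls described above, against Google Cloud Storage's XML-style endpoint (https://storage.googleapis.com). The bucket name, object paths and OAuth2 token are hypothetical placeholders, and error handling is reduced to the bare minimum.

```python
# Minimal sketch: the HTTP method (GET, PUT, ...) plus the resource path
# identify the operation and its scope. Bucket, paths and token are
# hypothetical placeholders.
import requests

ACCESS_TOKEN = "ya29.example-oauth2-token"  # hypothetical OAuth2 token
BUCKET = "ihd-example-bucket"               # hypothetical bucket name
AUTH = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Download an object with GET.
resp = requests.get(
    f"https://storage.googleapis.com/{BUCKET}/photos/holiday.jpg",
    headers=AUTH,
)
resp.raise_for_status()
with open("holiday.jpg", "wb") as out:
    out.write(resp.content)

# Upload a file to the same bucket with PUT.
with open("holiday.jpg", "rb") as f:
    requests.put(
        f"https://storage.googleapis.com/{BUCKET}/photos/holiday-copy.jpg",
        headers=AUTH,
        data=f,
    ).raise_for_status()
```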
Abstract:
We investigate existing cloud storage schemes and identify limitations in each one based on the security services that they provide. We then propose a new cloud storage architecture that extends the CloudProof scheme of Popa et al. to provide availability assurance. This is accomplished by incorporating a proof of storage protocol. As a result, we obtain the first secure cloud storage scheme that furnishes all three properties of availability, fairness and freshness.
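For illustration, the following toy sketch shows the general shape of a challenge-response proof of storage of the kind such an architecture incorporates: the client keeps HMAC tags for a random sample of blocks before outsourcing, then challenges the server to reproduce those blocks. This is a deliberately simplified stand-in, not the protocol proposed in the paper.

```python
# Toy proof of storage: the client remembers HMACs of randomly chosen
# blocks; a correct answer to a later challenge is evidence the data is
# still held. Simplified illustration, not the paper's scheme.
import hashlib, hmac, os, random

BLOCK = 4096
key = os.urandom(32)
data = os.urandom(BLOCK * 64)          # stand-in for the outsourced file
blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

# Client side, before upload: tag a random sample of blocks.
sample = random.sample(range(len(blocks)), 8)
tags = {i: hmac.new(key, blocks[i], hashlib.sha256).digest() for i in sample}

# Server side: answers a challenge by returning the requested blocks.
def server_answer(indices):
    return {i: blocks[i] for i in indices}   # honest server

# Client side: verify; a mismatch indicates lost or corrupted data.
answer = server_answer(list(tags))
ok = all(
    hmac.compare_digest(tags[i],
                        hmac.new(key, answer[i], hashlib.sha256).digest())
    for i in tags
)
print("availability check passed:", ok)
```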
Abstract:
Many applications, such as software for processing customer records in telecoms or patient records in hospitals, or email software accessing a single message in a mailbox, need to access a single record in a database consisting of millions of records. A basic feature of these applications is that they need to access data sets that are very large but simple. Cloud computing provides the computing resources for this new generation of applications, whose very large data sets cannot be handled efficiently with traditional computing infrastructure. In this paper, we describe the storage services provided by three well-known cloud service providers and compare their features, with a view to characterising the storage requirements of very large data sets; we hope this will act as a catalyst for the design of storage services for very large data sets in the future. We also give a brief overview of other kinds of storage that have emerged recently for cloud computing.
Abstract:
Physical location of data in cloud storage is an increasingly urgent problem. In a short time, it has evolved from the concern of a few regulated businesses to an important consideration for many cloud storage users. One of the characteristics of cloud storage is the fluid transfer of data both within and among the data centres of a cloud provider. However, this has weakened the guarantees with respect to control over data replicas, protection of data in transit and physical location of data. This paper addresses the lack of reliable solutions for data placement control in cloud storage systems. We analyse the currently available solutions and identify their shortcomings. Furthermore, we describe a high-level architecture for a trusted, geolocation-based mechanism for data placement control in distributed cloud storage systems, which forms the basis of ongoing work to define the detailed protocol and a prototype of such a solution. This mechanism aims to provide granular control over the capabilities of tenants to access data placed on the geographically dispersed storage units comprising the cloud storage.
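The placement-control idea can be sketched as a simple policy check: each tenant declares the regions its data may occupy, and the control layer refuses placements elsewhere. The names and structure below are illustrative only, not the paper's protocol.

```python
# Toy placement control: tenants declare allowed regions; the control
# layer rejects replica placements outside them. Illustrative names only.
ALLOWED_REGIONS = {
    "tenant-eu-bank": {"eu-west", "eu-central"},          # regulated: EU only
    "tenant-startup": {"eu-west", "us-east", "ap-south"},
}

def may_place(tenant: str, storage_unit_region: str) -> bool:
    return storage_unit_region in ALLOWED_REGIONS.get(tenant, set())

def place_replica(tenant: str, region: str) -> None:
    if not may_place(tenant, region):
        raise PermissionError(f"{tenant} may not store data in {region}")
    print(f"replica for {tenant} placed in {region}")

place_replica("tenant-eu-bank", "eu-west")    # allowed
# place_replica("tenant-eu-bank", "us-east")  # would raise PermissionError
```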
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in a read may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, which elastically scales up or down the number of replicas involved in read operations to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45%, while maintaining the desired consistency requirements of the applications, when compared to the strong consistency model in Cassandra.
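The core loop of such an approach can be sketched as follows: estimate the probability that a single replica is stale, then contact just enough replicas per read to keep the estimated stale-read rate under the application's tolerance. The staleness model below is deliberately naive (independent replicas), unlike Harmony's estimator.

```python
# Toy adaptive consistency: choose the smallest read-replica count whose
# estimated stale-read rate is under the application's tolerance.
def replica_stale_prob(write_rate_hz: float, propagation_delay_s: float) -> float:
    """Naive chance a single replica has not yet applied the latest write."""
    return min(1.0, write_rate_hz * propagation_delay_s)

def choose_read_replicas(n_replicas: int, write_rate_hz: float,
                         propagation_delay_s: float, tolerance: float) -> int:
    p = replica_stale_prob(write_rate_hz, propagation_delay_s)
    for r in range(1, n_replicas + 1):
        # A read contacting r replicas is stale only if all r are stale
        # (assuming independence, which real replicas violate).
        if p ** r <= tolerance:
            return r
    return n_replicas   # fall back to reading every replica

# Example: 10 writes/s, 10 ms propagation delay, at most 1% stale reads.
print(choose_read_replicas(5, 10, 0.01, 0.01))   # -> 2
```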
Abstract:
In a general-purpose cloud system, efficiencies have yet to be gained from supporting diverse applications and their requirements within the storage system used for a private cloud. Supporting such diverse requirements poses a significant challenge in a storage system that allows fine-grained configuration of a variety of parameters. This paper uses the Ceph distributed file system, and in particular its global parameters, to show how a single changed parameter can affect performance across a range of access patterns when tested with an OpenStack cloud system.
Abstract:
The main theme of this thesis is to allow the users of cloud services to outsource their data without the need to trust the cloud provider. The method is based on combining existing proof-of-storage schemes with distance-bounding protocols. Specifically, cloud customers will be able to verify the confidentiality, integrity, availability, fairness (or mutual non-repudiation), data freshness, geographic assurance and replication of their stored data directly, without having to rely on the word of the cloud provider.
Abstract:
Project submitted to obtain the degree of Master in Computer Science and Computer Engineering (Engenharia Informática e de Computadores)
Abstract:
A patient-centric DRM approach is proposed for protecting the privacy of health records stored in cloud storage, based on the patient's preferences and without the need to trust the service provider. Contrary to current server-side access control solutions, this approach protects the privacy of records from the service provider itself, and also controls the usage of data after it is released to an authorized user.
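The trust model can be illustrated with a few lines of client-side encryption: the record is encrypted before it ever reaches the cloud, so the provider only sees ciphertext, and the patient releases the key only under their own policy. This is a sketch of the idea using the Python cryptography package, not the paper's DRM construction.

```python
# Toy client-side protection: the cloud provider stores opaque bytes; only
# users the patient authorizes receive the key. Illustration only, not the
# paper's DRM scheme.
from cryptography.fernet import Fernet

record_key = Fernet.generate_key()      # held by the patient, not the cloud
f = Fernet(record_key)

ciphertext = f.encrypt(b'{"patient": "...", "diagnosis": "..."}')
# upload_to_cloud(ciphertext)           # hypothetical upload call

# Later, a doctor authorized under the patient's policy, having received
# record_key, recovers the record:
plaintext = Fernet(record_key).decrypt(ciphertext)
print(plaintext)
```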
Abstract:
The geographic location of cloud data storage centres is an important issue for many organisations and individuals, due to various regulations that require data and operations to reside in specific geographic locations. Thus, cloud users may want to be sure that their stored data have not been relocated into unknown geographic regions that may compromise its security. Albeshri et al. (2012) combined proof of storage (POS) protocols with distance-bounding protocols to address this problem. However, their scheme introduces unnecessary delay when typical POS schemes are used, owing to computational overhead at the server side. The aim of this paper is to improve the basic GeoProof protocol by reducing the computational overhead at the server side. We show how this can maintain the same level of security while achieving more accurate geographic assurance.
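The distance-bounding half of GeoProof can be sketched as a timed nonce exchange: the verifier measures the round-trip time and converts it into an upper bound on the prover's distance, which is why shaving computation off the server side tightens the bound. The echo server below is hypothetical.

```python
# Toy distance bounding: time a nonce round trip, then bound the prover's
# distance by c * RTT / 2. Any server-side computation inflates the RTT and
# loosens the bound, motivating the paper's optimisation.
import os, socket, time

SPEED_OF_LIGHT_KM_S = 299_792.458

def distance_upper_bound_km(host: str, port: int) -> float:
    nonce = os.urandom(16)
    with socket.create_connection((host, port), timeout=5) as sock:
        start = time.perf_counter()
        sock.sendall(nonce)
        echoed = sock.recv(16)   # prover must echo the nonce back
        rtt = time.perf_counter() - start
    assert echoed == nonce, "wrong response"
    # The signal travels out and back, so one-way distance <= c * RTT / 2.
    return SPEED_OF_LIGHT_KM_S * rtt / 2

# e.g. distance_upper_bound_km("storage.example.com", 7000)
```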
Abstract:
Virtualization is one of the key enabling technologies for Cloud computing. Although it facilitates improved utilization of resources, virtualization can lead to performance degradation due to the sharing of physical resources like CPU, memory, network interfaces and disk controllers. Multi-tenancy can cause highly unpredictable performance for concurrent I/O applications running inside virtual machines that share local disk storage in the Cloud. Disk I/O requests in a typical Cloud setup may have varied requirements in terms of latency and throughput, as they arise from a range of heterogeneous applications with diverse performance goals. This necessitates providing differentiated performance services to different I/O applications. In this paper, we present PriDyn, a novel scheduling framework designed to consider I/O performance metrics of applications, such as acceptable latency, and convert them into an appropriate priority value for disk access based on the current system state. The framework aims to provide differentiated I/O service to various applications and to ensure predictable performance for critical applications in multi-tenant Cloud environments. We demonstrate through experimental validation on real-world I/O traces that the framework achieves appreciable enhancements in I/O performance, indicating that this approach is a promising step towards enabling QoS guarantees for Cloud storage.
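The central step, mapping an application's acceptable latency to a disk-access priority given the latency the system currently delivers, might look roughly like the following; the mapping formula is a naive placeholder, not PriDyn's actual computation.

```python
# Toy latency-to-priority mapping: priority grows as an application's
# delivered latency approaches its acceptable latency. Placeholder formula,
# not the framework's real computation.
from dataclasses import dataclass

@dataclass
class IoApp:
    name: str
    acceptable_latency_ms: float   # the application's stated requirement

def priority(app: IoApp, current_latency_ms: float) -> int:
    """Higher value = more urgent; 100 means the target is being violated."""
    if current_latency_ms >= app.acceptable_latency_ms:
        return 100
    return max(1, int(100 * current_latency_ms / app.acceptable_latency_ms))

apps = [IoApp("oltp-db", 5.0), IoApp("batch-backup", 200.0)]
for app in apps:
    print(app.name, priority(app, current_latency_ms=4.0))
# The latency-critical database (80) far outranks the backup job (2).
```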
Abstract:
Cloud services are becoming ever more important in everyone's life. Cloud storage? Web mail? We don't need to work in big IT companies to be surrounded by cloud services. Another thing that is growing in importance, or at least that should be considered ever more important, is the concept of privacy. The more we rely on services about which we know close to nothing, the more we should be worried about our privacy. In this work, I will analyze a prototype software system based on a peer-to-peer architecture for offering cloud services, to see whether it is possible to make it completely anonymous: not only will its users be anonymous, but the Peers composing it will not know each other's real identities either. To make this possible, I will use anonymizing networks such as Tor. I will start by studying the state of the art of Cloud Computing, looking at some real examples, and then analyze the architecture of the prototype, trying to expose the differences between its distributed nature and the somewhat centralized solutions offered by the famous vendors. After that, I will get as deep as possible into the working principles of anonymizing networks, because they are not something that can just be 'applied' mindlessly: some de-anonymization techniques are very subtle, so things must be studied carefully. I will then implement the required changes and test the new anonymized prototype to see how its performance differs from that of the standard one. The prototype will be run on many machines, orchestrated by a tester script that will automatically start, stop and make all the required API calls. As for where to find all these machines, I will use Amazon EC2 cloud services and their on-demand instances.
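Routing a prototype's HTTP API calls through Tor typically amounts to pointing the HTTP client at the local Tor client's SOCKS5 proxy (port 9050 by default). The sketch below assumes a running Tor daemon and the requests[socks] extra installed; the socks5h scheme makes Tor resolve DNS as well, closing off DNS-based de-anonymization.

```python
# Minimal sketch: send all of a tester script's API calls through a local
# Tor client. Assumes Tor is listening on its default SOCKS5 port (9050)
# and that requests was installed with SOCKS support (requests[socks]).
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS resolved via Tor
    "https": "socks5h://127.0.0.1:9050",
}

session = requests.Session()
session.proxies.update(TOR_PROXY)

# Every request on this session now leaves through the Tor network.
resp = session.get("https://check.torproject.org/api/ip")
print(resp.json())   # reports whether the exit node is a Tor relay
```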
Abstract:
This thesis aims to analyse some critical aspects of security in the cloud, in particular the problems related to privacy, from the terms of use to the security of personal data of varying sensitivity. The exponential growth of data stored in cloud storage systems (e.g. Dropbox, Amazon S3) makes data sensitivity a far from trivial problem, compounded by unclear data-usage policies, both in terms of data being handed over to third-party companies and with regard to legal responsibilities. This thesis seeks to examine the most worrying of these shortcomings in depth. Besides analysing the main weaknesses and critical points of cloud services, the goal of this work is to clarify the steps and infrastructures that some companies (e.g. Amazon) have put in place to approach the idea of 'safeness' in the cloud. Finally, the last goal is to identify criteria for evaluating and measuring the degree of trust that users can place in this context, distinguishing different criteria for different classes of users. The thesis is structured in four chapters: the first presents a taxonomy of the problems found in cloud systems, together with some recent events in which these issues surfaced. The second chapter covers the 'safeness' strategies adopted by some companies in the cloud domain, along with some possible solutions from an architectural point of view; the role of the user will prove to be extremely important. The third chapter focuses on tools and evaluation methods that a user, or group of users, can apply to these systems. Finally, the fourth chapter contains some concluding remarks on the work done and on possible developments of this thesis.