877 results for Cloud Computing Modelli di Business


Relevance:

100.00%

Publisher:

Abstract:

The growing demand for large-scale virtualization environments, such as those used in cloud computing, has led to a need for efficient management of computing resources. RAM is one of the most heavily demanded resources in these environments and is usually the main factor limiting the number of virtual machines that can run on a physical host. Recently, hypervisors have introduced mechanisms for transparent memory sharing between virtual machines in order to reduce the total demand for system memory. These mechanisms “merge” similar pages detected in multiple virtual machines into the same physical memory page, using a copy-on-write mechanism that is transparent to the guest systems. The objective of this study is to present an overview of these mechanisms and to evaluate their performance and effectiveness. Results for two popular hypervisors (VMware and KVM) using different guest operating systems (Linux and Windows) and different workloads (synthetic and real) are presented herein. The results show significant performance differences between the hypervisors depending on the guest systems' workloads and execution time.
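The page-merging mechanism the abstract describes (as in KVM's KSM or VMware's transparent page sharing) can be sketched in a few lines. The data structures and hashing below are illustrative, not the hypervisors' actual implementation:

```python
import hashlib

def merge_pages(vm_pages):
    """Deduplicate identical pages across VMs, as a KSM-style scanner might.

    vm_pages: dict mapping VM name -> list of page contents (bytes).
    Returns (page_store, page_tables): identical pages share one physical
    copy, and each VM's table maps its page index to a shared-copy key.
    """
    page_store = {}   # content hash -> single shared physical copy
    page_tables = {}  # VM -> list of content-hash references
    for vm, pages in vm_pages.items():
        table = []
        for page in pages:
            key = hashlib.sha256(page).hexdigest()
            page_store.setdefault(key, page)  # store only the first copy
            table.append(key)
        page_tables[vm] = table
    return page_store, page_tables

def write_page(page_store, page_tables, vm, index, new_content):
    """Copy-on-write: a guest write breaks sharing by storing a private copy."""
    key = hashlib.sha256(new_content).hexdigest()
    page_store.setdefault(key, new_content)
    page_tables[vm][index] = key
```

Two VMs holding an identical zero page end up referencing a single stored copy, and a write by one VM splits the sharing without disturbing the other, which is exactly the transparency property the abstract refers to.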

Relevance:

100.00%

Publisher:

Abstract:

The air-sea flux of greenhouse gases (e.g. carbon dioxide, CO2) is a critical part of the climate system and a major factor in the biogeochemical development of the oceans. More accurate and higher-resolution calculations of these gas fluxes are required if we are to fully understand and predict our future climate. Satellite Earth observation is able to provide the large-spatial-scale datasets needed to study gas fluxes. However, the large storage requirements needed to host such data can restrict its use by the scientific community. Fortunately, the development of cloud computing can provide a solution. Here we describe an open-source air-sea CO2 flux processing toolbox called ‘FluxEngine’, designed for use on a cloud-computing infrastructure. The toolbox allows users to easily generate global and regional air-sea CO2 flux data from model, in situ and Earth observation data, and its air-sea gas flux calculation is user-configurable. Its current installation on the Nephalae cloud allows users to easily exploit more than 8 terabytes of climate-quality Earth observation data for the derivation of gas fluxes. The resultant NetCDF output files contain more than 20 data layers covering the various stages of the flux calculation, along with process-indicator layers to aid interpretation of the data. This paper describes the toolbox design, verifies the air-sea CO2 flux calculations, demonstrates the use of the tools for studying global and shelf-sea air-sea fluxes, and outlines future developments.
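Bulk air-sea gas flux calculations of the kind the toolbox performs generally take the form F = k · s · (pCO2_water − pCO2_air), with k the gas transfer velocity and s the gas solubility. The sketch below uses that standard bulk formula with illustrative unit choices; it is not FluxEngine's actual configurable pipeline:

```python
def air_sea_co2_flux(k_cm_per_hr, solubility_mol_m3_uatm, pco2_water, pco2_air):
    """Bulk air-sea CO2 flux: F = k * s * (pCO2w - pCO2a).

    k_cm_per_hr: gas transfer velocity in cm/hr.
    solubility_mol_m3_uatm: gas solubility in mol m^-3 uatm^-1.
    pCO2 values in uatm.
    Returns flux in mol m^-2 day^-1 (positive = outgassing to the atmosphere).
    """
    k_m_per_day = k_cm_per_hr * 24.0 / 100.0  # cm/hr -> m/day
    return k_m_per_day * solubility_mol_m3_uatm * (pco2_water - pco2_air)
```

For example, a transfer velocity of 10 cm/hr, a solubility of 0.03 mol m^-3 uatm^-1 and a 20 uatm oversaturation of the surface water yields an upward flux of 1.44 mol m^-2 day^-1.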

Relevance:

100.00%

Publisher:

Abstract:

The big data era has dramatically transformed our lives; however, security incidents such as data breaches can put sensitive data (e.g. photos, identities, genomes) at risk. To protect users' data privacy, there is a growing interest in building secure cloud computing systems, which keep sensitive data inputs hidden, even from computation providers. Conceptually, secure cloud computing systems leverage cryptographic techniques (e.g. secure multiparty computation) and trusted hardware (e.g. secure processors) to instantiate a “secure” abstract machine consisting of a CPU and encrypted memory, so that an adversary cannot learn information through either the computation within the CPU or the data in the memory. Unfortunately, evidence has shown that side channels (e.g. memory accesses, timing, and termination) in such a “secure” abstract machine may leak highly sensitive information, including the cryptographic keys that form the root of trust for the secure systems. This thesis broadly expands the investigation of a research direction called trace-oblivious computation, in which programming language techniques are employed to prevent side-channel information leakage. We demonstrate the feasibility of trace-oblivious computation by formalizing and building several systems: GhostRider, a hardware-software co-design that provides a hardware-based trace-oblivious computing solution; SCVM, an automatic RAM-model secure computation system; and ObliVM, a programming framework that helps programmers develop such applications. All of these systems enjoy formal security guarantees while outperforming prior systems by one to several orders of magnitude.
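The simplest trace-oblivious primitive, reading an element at a secret index by touching every element so the memory-access trace reveals nothing about the index, illustrates the idea behind the side-channel defenses described above. The systems in the thesis use far more efficient ORAM-based techniques; this is only a didactic sketch:

```python
def oblivious_read(arr, secret_index):
    """Read arr[secret_index] while accessing every element, so the sequence
    of memory locations touched is independent of the secret index."""
    result = 0
    for i, v in enumerate(arr):
        # Select by multiplication rather than branching on the secret,
        # keeping the access pattern data-independent.
        result += v * int(i == secret_index)
    return result
```

The cost is a full linear scan per access; ORAM schemes bring that overhead down to polylogarithmic, which is what makes systems like SCVM and ObliVM practical.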

Relevance:

100.00%

Publisher:

Abstract:

It was in the first decade of the twentieth century that the first white china factory was established in Brazil. The fruit of a partnership between the Sao Paulo aristocracy and the Italian Romeo Ranzini, this factory was responsible for producing significant amounts of crockery in industrial moulds in Sao Paulo, Brazil. It was also the first factory to produce the decorative tiles that would become part of the architecture of the public buildings erected between 1919 and 1922 in commemoration of the centennial of Brazilian independence. Known as the Santa Catharina Factory, it was inaugurated in 1913 with the participation of Italian immigrants and German technology for its first manufacturing activities. As a result of a number of economic, political and social developments that had begun in the previous century in the city of Sao Paulo, the Santa Catharina Factory played an important role in industrial development as regards the production of national white china and served as a model for the construction of new ceramic factories in Sao Paulo. After being acquired by the Matarazzo industries in 1927, it closed its activities in 1937. This research is based on the identification and analysis of the first tiles produced in Brazil by the Santa Catharina Factory, which formed part of the architectural decoration of the buildings built in Sao Paulo for the celebration of the centennial of Brazilian independence. Designed by Victor Dubugras, the Largo da Memoria (located in the city of Sao Paulo) and the buildings along the "Paths of the Sea" road marked the beginning of Brazilian industrialization and the emergence of the Neocolonial Movement in Sao Paulo architecture.
Studies of the first national patterns of decorative tiles address a subject little researched by specialists in tile studies in Brazil, even though these tiles not only represent an important milestone in national industrialization but also mark significant changes in the country's architectural and decorative practices in the early twentieth century. Keywords: ceramics; decorative tiles; Santa Catharina Factory; Brazil; tile production.

Relevance:

100.00%

Publisher:

Abstract:

We are pleased to present a new issue of Revista Bibliotecas. It includes research of interest on the following topics: librarian archetypes, information literacy, and cloud computing. In the article devoted to archetypes, the author argues that the library profession, like other disciplines, is exposed to a series of stereotypes. These impressions and beliefs lead people, motivated by prejudice, not to venture into a fascinating discipline and, as a consequence, affect first-year student enrolment. The article describes the situation of the Escuela de Bibliotecología, Documentación e Información at the Universidad Nacional and puts forward a teaching proposal aimed at improving the perception of library science among secondary-school students.

Relevance:

100.00%

Publisher:

Abstract:

This document describes the efforts taken to create a generic computing solution for the most recurrent problems found in the production of two-dimensional, sprite-based video games running on mobile platforms. The developed system is a web application that fits within the recent cloud-computing paradigm and therefore enjoys all of its advantages in terms of data safety, accessibility and maintainability. Beyond the functional issues, the system is also studied in terms of its internal software architecture, since it was planned and implemented with the aim of producing an easy-to-maintain application that is both scalable and adaptable. Furthermore, an algorithm is proposed that finds an optimized spatial distribution of several rectangular areas, with no overlapping and no dimensional restrictions on either the final arrangement or the arranged areas.
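The rectangle-arrangement problem described above is the classic sprite-atlas packing problem. The thesis' own algorithm is not reproduced here; as an illustration, a common shelf-packing baseline places rectangles on rows of decreasing height:

```python
def shelf_pack(rects, sheet_width):
    """Place (w, h) rectangles on shelves left-to-right without overlaps.

    A standard baseline for sprite atlases: sort tallest-first, fill a shelf
    until the next rectangle no longer fits, then open a new shelf below.
    Returns a list of (x, y, w, h) placements. (The thesis' algorithm has no
    width constraint and presumably packs more tightly; sheet_width here is
    an assumption of this sketch.)
    """
    placements = []
    x = y = shelf_h = 0
    for w, h in sorted(rects, key=lambda r: -r[1]):  # tallest first
        if x + w > sheet_width:  # current shelf full: start a new one
            y += shelf_h
            x = shelf_h = 0
        placements.append((x, y, w, h))
        x += w
        shelf_h = max(shelf_h, h)
    return placements
```

Sorting by height keeps shelves dense, since each shelf's height is set by its first (tallest) rectangle and later, shorter rectangles waste less vertical space.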

Relevance:

100.00%

Publisher:

Abstract:

117 p.

Relevance:

100.00%

Publisher:

Abstract:

For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. The multicore processor technology provides new levels of performance and energy efficiency. The present paper aims to develop power and performance constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power constrained performance optimization and performance constrained power optimization. Our research problems in large-scale data centers are well-defined multivariable optimization problems, which explore the power-performance tradeoff by fixing one factor and minimizing the other, from the perspective of optimal load distribution. It is clear that such power and performance optimization is important for a cloud computing provider to efficiently utilize all the available resources. We model a multicore server processor as a queuing system with multiple servers. Our optimization problems are solved for two different models of core speed, where one model assumes that a core runs at zero speed when it is idle, and the other model assumes that a core runs at a constant speed. Our results in this paper provide new theoretical insights into power management and performance optimization in data centers.

Relevance:

100.00%

Publisher:

Abstract:

Since the development of the computer, user-oriented innovations such as graphical operating systems, mice, and mobile devices have made computing ubiquitous in modern society. The cloud is the next step in this process. Through the cloud, computing has undergone commodification and has been made available as a utility. However, in comparison to other commodities such as water and electricity, clouds (in particular IaaS and PaaS) have not reached the same penetration into the global market. We propose that through further abstraction, future clouds will be ubiquitous and transparent, made accessible to ordinary users and integrated into all aspects of society. This paper presents a concept of, and a path to, this ubiquitous and transparent cloud, accessible by the masses.

Relevance:

100.00%

Publisher:

Abstract:

Outsourcing heavy computational tasks to remote cloud servers, which significantly reduces the computational burden on end hosts, represents an effective and practical approach to extensive and scalable mobile applications and has drawn increasing attention in recent years. However, given the limited processing power of end hosts and the keen privacy concerns over outsourced data, it is vital to ensure both the efficiency and the security of outsourced computation in the cloud. In this paper, we address this issue by developing a publicly verifiable outsourcing computation proposal. In particular, given the many applications of matrix multiplication in large datasets and image processing, we propose a publicly verifiable outsourcing computation scheme for matrix multiplication in the amortized model. Security analysis demonstrates that the proposed scheme is provably secure, blinding input and output in a simple way. By comparing the developed scheme with existing proposals, we show that ours is more efficient in terms of functionality as well as computation, communication and storage overhead.
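The paper's blinding-based scheme is not reproduced here, but the classic primitive for verifying outsourced matrix multiplication, Freivalds' probabilistic check, conveys the flavor: a client verifies a claimed product C = A·B in O(n²) time per round instead of recomputing the O(n³) multiplication:

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C == A*B using only matrix-vector
    products. A wrong C escapes detection with probability <= 2^-rounds.
    (A classic verification primitive; the paper's publicly verifiable,
    privacy-preserving scheme uses different blinding techniques.)"""
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Check A*(B*r) == C*r: three O(n^2) products instead of one O(n^3).
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False
    return True
```

The asymmetry between verification cost and computation cost is exactly what makes outsourcing worthwhile for a weak end host.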

Relevance:

100.00%

Publisher:

Abstract:

Big Data technologies are exciting cutting-edge technologies that generate, collect, store and analyse tremendous amounts of data. Like any other IT revolution, Big Data technologies face big challenges that prevent their adoption by the wider community, or perhaps impede extracting value from Big Data with the pace and accuracy they promise. In this paper we first offer an alternative view of the «Big Data Cloud», with the main aim of making this complex technology easier to understand for new researchers and of identifying gaps efficiently. In our lab experiment, we successfully implemented cyber-attacks on Apache Hadoop's management interface «Ambari». Following the observation that «attackers only need one way in», we attacked the management interface, successfully shut down all communication between Ambari and Hadoop's ecosystem, and collected performance data from the Ambari Virtual Machine (VM) and the Big Data Cloud hypervisor. We also detected these cyber-attacks with 94.0187% accuracy using modern machine-learning algorithms. To our knowledge, no existing research has attempted a similar experiment in detecting cyber-attacks on Hadoop using performance data.
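As a much simpler stand-in for the machine-learning detectors used in the experiment, a per-metric z-score test over performance data already captures the idea of flagging anomalous samples against a healthy baseline. Metric layout and threshold here are illustrative, not the paper's models:

```python
import statistics

def zscore_detector(baseline, threshold=3.0):
    """Fit a per-metric z-score anomaly detector on normal samples.

    baseline: list of equal-length performance-metric vectors collected
    during healthy operation (e.g. CPU, I/O counters from the VM).
    Returns a predicate flagging any sample in which some metric deviates
    more than `threshold` standard deviations from the baseline mean.
    """
    n = len(baseline[0])
    means = [statistics.mean(s[i] for s in baseline) for i in range(n)]
    # Guard against zero variance on constant metrics.
    stds = [statistics.stdev(s[i] for s in baseline) or 1.0 for i in range(n)]

    def is_attack(sample):
        return any(abs(sample[i] - means[i]) / stds[i] > threshold
                   for i in range(n))

    return is_attack
```

Real detectors trained on labeled attack traces, as in the paper, can separate far subtler deviations than this univariate rule, but the pipeline shape (baseline, features, decision) is the same.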

Relevance:

100.00%

Publisher:

Abstract:

It is almost impossible to prove that a given software system achieves an absolute security level, and this becomes more complicated when addressing multi-tenant cloud-based SaaS applications. Developing practical security properties and metrics to monitor, verify, and assess the behavior of such software systems is a feasible alternative. However, existing efforts focus on verifying either security properties or security metrics, but not both. Moreover, they are either hard to adopt in terms of usability, or require design-time preparation to support monitoring of such security metrics and properties, which is not feasible for SaaS applications. In this paper we introduce, to the best of our knowledge, the first unified monitoring platform that enables SaaS application tenants to specify security metrics and properties at run time without design-time preparation, and hence increases tenants' trust in the security of their cloud assets. The platform automatically converts security metric and property specifications into security probes and integrates them with the target SaaS application at run time. Probe-generated measurements are fed into an analysis component that verifies the specified properties and calculates security metrics' values using aggregation functions. The results are then reported to SaaS tenants and cloud-platform security engineers. We evaluated our platform's expressiveness, usability, soundness, and performance overhead.
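The probes-to-aggregation pipeline described above can be pictured as probes pushing measurements into tenant-defined aggregation functions. This minimal sketch uses invented names and is not the paper's actual platform API:

```python
class MetricMonitor:
    """Minimal sketch of run-time security-metric aggregation: probes push
    measurements; tenants attach an aggregation function per metric."""

    def __init__(self):
        self.measurements = {}  # probe source -> list of raw values
        self.metrics = {}       # metric name -> (source, aggregation fn)

    def define_metric(self, name, source, aggregate):
        """Tenant-specified at run time: no design-time preparation needed."""
        self.metrics[name] = (source, aggregate)

    def probe(self, source, value):
        """Called by a probe woven into the SaaS application."""
        self.measurements.setdefault(source, []).append(value)

    def evaluate(self, name):
        source, aggregate = self.metrics[name]
        return aggregate(self.measurements.get(source, []))
```

A tenant could, for instance, register a `failed_login_rate` metric over an authentication probe and have it evaluated continuously, with property verification layered on top of the same measurement stream.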

Relevance:

100.00%

Publisher:

Abstract:

Because of big data's heavy demands on physical resources, storing and processing big data in clouds is an effective and efficient approach, as cloud computing allows on-demand resource provisioning. With increasing requirements for the resources provisioned by cloud platforms, the Quality of Service (QoS) of cloud services for big data management is becoming significantly important. Big data is characteristically sparse, which leads to frequent data access and processing, and thereby causes large energy consumption. Energy cost plays a key role in determining the price of a service and should be treated as a first-class citizen alongside other QoS metrics, because energy-saving services can achieve cheaper service prices and environmentally friendly solutions. However, it is still a challenge to efficiently schedule Virtual Machines (VMs) for service QoS enhancement in an energy-aware manner. In this paper, we propose an energy-aware dynamic VM scheduling method for QoS enhancement in clouds over big data to address this challenge. Specifically, the method consists of two main VM migration phases in which computation tasks are migrated to servers with lower energy consumption or higher performance, to reduce service prices and execution time. Extensive experimental evaluation demonstrates the effectiveness and efficiency of our method.
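The migration idea, moving work onto servers with lower energy cost when capacity allows, can be sketched greedily. The cost model and names below are illustrative, not the paper's scheduling method:

```python
def plan_migrations(vms, servers):
    """Greedy sketch of energy-aware placement: assign each task to the
    feasible server with the lowest energy cost per unit of load.

    vms: list of (name, load).
    servers: list of (name, capacity, power_per_load).
    Returns {vm_name: server_name}. Assumes total capacity suffices; the
    paper's method additionally weighs execution time in a second phase.
    """
    free = {name: cap for name, cap, _ in servers}
    cost = {name: p for name, _, p in servers}
    plan = {}
    for vm, load in sorted(vms, key=lambda v: -v[1]):  # place big tasks first
        candidates = [s for s in free if free[s] >= load]
        target = min(candidates, key=lambda s: cost[s])
        free[target] -= load
        plan[vm] = target
    return plan
```

Placing the largest tasks first reduces the chance that a cheap server's remaining capacity is fragmented by small tasks, a standard bin-packing heuristic.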

Relevance:

100.00%

Publisher:

Abstract:

The increasing need for computational power in areas such as weather simulation, genomics and Internet applications has led to the sharing of geographically distributed, heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has produced methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies for efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
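Site selection under on-demand provisioning can be illustrated with a toy turnaround estimate that charges each site its queue wait plus its provisioning overhead; the model is a sketch, not the thesis' actual policy:

```python
def select_site(job_len, sites):
    """Pick the execution site minimizing estimated turnaround.

    job_len: job length in abstract work units.
    sites: list of (name, queue_wait, provision_time, speed), where a busy
    grid site has a queue wait but zero provisioning cost, while an
    on-demand cloud site provisions fresh VMs but starts immediately.
    """
    def turnaround(site):
        _, wait, prov, speed = site
        return wait + prov + job_len / speed

    return min(sites, key=turnaround)[0]
```

The interesting regime is exactly the one the thesis studies: when queue waits at shared sites exceed the cost of provisioning fresh virtual resources, on-demand allocation reduces job slowdown even at a higher monetary cost.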

Relevance:

100.00%

Publisher:

Abstract:

The popularity of cloud computing has led to a dramatic increase in the number of data centers in the world. Ever-increasing computational demands, along with the slowdown in technology scaling, have ushered in an era of power-limited servers. Techniques such as near-threshold computing (NTC) can be used to improve energy efficiency in the post-Dennard-scaling era. This paper describes an architecture based on the FD-SOI process technology for near-threshold operation in servers. Our work explores the trade-offs in energy and performance when running a wide range of applications found in private and public clouds, ranging from traditional scale-out applications, such as web search or media streaming, to virtualized banking applications. Our study demonstrates the benefits of near-threshold operation and proposes several directions to synergistically increase the energy proportionality of a near-threshold server.
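The energy-performance trade-off behind near-threshold operation can be illustrated with a toy alpha-power model; the constants and scaling laws below are textbook approximations, not measurements from the paper:

```python
def ntc_tradeoff(v_dd, v_t=0.3, c=1.0):
    """Toy model of voltage scaling toward the threshold voltage v_t.

    Frequency scales roughly with (Vdd - Vt)^2 / Vdd and dynamic energy
    per operation with C * Vdd^2 (alpha-power approximation, alpha = 2).
    Returns (relative_frequency, relative_energy_per_op).
    """
    freq = (v_dd - v_t) ** 2 / v_dd
    energy = c * v_dd ** 2
    return freq, energy
```

In this model, halving Vdd from 0.9 V to 0.45 V cuts energy per operation by about 4x while frequency drops by about 8x, which is why near-threshold operation favors the throughput-oriented, highly parallel workloads that dominate scale-out servers.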