970 results for cloud service pricing


Relevance:

90.00%

Abstract:

An extensive investigative survey of Cloud Computing, focusing on the gaps that are slowing down Cloud adoption and reviewing the challenges of threat remediation. It also offers experimentally supported thoughts on novel approaches to addressing some of the most widely discussed types of cyber-attack using machine learning techniques. These approaches are constructed so that Cloud customers can detect cyber-attacks in their VMs without much help from the Cloud service provider.
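As a concrete illustration of the customer-side idea, the sketch below flags anomalous VM behaviour from metrics a tenant can already observe inside the VM; a simple statistical baseline stands in for the machine-learning detectors discussed in the survey, and the metric, trace and threshold are assumptions made for illustration only.

```python
# Minimal sketch: flag suspicious VM activity from tenant-visible metrics,
# without involving the cloud service provider. Values are hypothetical.
import statistics

# per-minute outbound packet counts observed inside the VM (hypothetical trace)
baseline = [120, 130, 110, 125, 118, 122, 128, 115, 121, 119]
mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)

def is_suspicious(sample, k=3.0):
    """Flag samples more than k standard deviations above the learned baseline."""
    return sample > mean + k * stdev

print(is_suspicious(124))    # False: within the normal range
print(is_suspicious(5000))   # True: possible flooding or exfiltration burst
```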

Relevance:

90.00%

Abstract:

Many scientific workflows are data intensive: large volumes of intermediate data are generated during their execution. Some valuable intermediate data need to be stored for sharing or reuse. Traditionally, which data to store is decided manually, according to the system's storage capacity. As doing science in the cloud has become popular, more intermediate data can be stored in scientific cloud workflows based on a pay-for-use model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenance in scientific workflows. With the IDG, deleted intermediate data can be regenerated, and on that basis we develop a novel intermediate data storage strategy that reduces the cost of scientific cloud workflow systems by automatically storing appropriate intermediate data sets with one cloud service provider. The strategy has significant research merits: it achieves a cost-effective trade-off between computation cost and storage cost, and it is not strongly affected by inaccurate forecasts of data set usage. The strategy also takes users' tolerance of data access delay into consideration. We use Amazon's cost model and apply the strategy both to general random workflows and to a specific astrophysics pulsar-searching workflow for evaluation. The results show that our strategy can significantly reduce the overall cost of scientific cloud workflow execution.
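To make the trade-off concrete, the following minimal sketch decides whether to keep or delete an intermediate data set by comparing its storage cost with the cost of regenerating it for its expected future uses; the Amazon-style rates, sizes and usage frequencies are illustrative assumptions, not figures from the paper.

```python
# Keep an intermediate data set only if storing it is cheaper than regenerating
# it for its expected future uses. All rates and numbers are illustrative.
STORAGE_PER_GB_MONTH = 0.023      # $/GB/month (assumed S3-like rate)
COMPUTE_PER_HOUR = 0.10           # $/hour (assumed EC2-like rate)

def keep_dataset(size_gb, regen_hours, uses_per_month, months=12):
    storage_cost = size_gb * STORAGE_PER_GB_MONTH * months
    # if deleted, the data set must be regenerated (via its IDG ancestors)
    # every time it is needed again
    regeneration_cost = regen_hours * COMPUTE_PER_HOUR * uses_per_month * months
    return storage_cost <= regeneration_cost

print(keep_dataset(size_gb=50, regen_hours=6, uses_per_month=2))    # True: store it
print(keep_dataset(size_gb=500, regen_hours=0.5, uses_per_month=1)) # False: delete and regenerate
```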

Relevance:

80.00%

Abstract:

This thesis describes the design and implementation of a situation awareness application. The application gathers data from sensors, including accelerometers for monitoring earthquakes, carbon monoxide sensors for monitoring fires, radiation detectors, and dust sensors. It also gathers data from Internet sources, including traffic congestion on daily commute routes, information about hazards, news relevant to the user of the application, and weather. The application sends the data to a Cloud computing service, which aggregates data streams from multiple sites and detects anomalies. Information from the Cloud service is then displayed by the application on a tablet, computer monitor, or television screen. The situation awareness application enables almost all members of a community to remain aware of critical changes in their environment.
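A minimal sketch of that data path is given below: readings are checked locally against alert thresholds and posted to a cloud endpoint that aggregates streams from many sites. The endpoint URL, field names and thresholds are all hypothetical, not taken from the thesis.

```python
# Sketch of the local side of the data path: check readings against thresholds
# and forward them to a (hypothetical) cloud ingestion endpoint.
import json
from urllib import request

THRESHOLDS = {"co_ppm": 50, "radiation_usv_h": 0.5, "pm10_ugm3": 150}  # assumed limits

def check_locally(reading: dict) -> list:
    """Return the sensors whose values exceed their alert thresholds."""
    return [k for k, limit in THRESHOLDS.items() if reading.get(k, 0) > limit]

def send_to_cloud(site_id: str, reading: dict) -> None:
    body = json.dumps({"site": site_id, "reading": reading}).encode()
    req = request.Request("https://example-cloud-service/ingest",  # hypothetical endpoint
                          data=body, headers={"Content-Type": "application/json"})
    request.urlopen(req)  # the cloud side aggregates and correlates many sites

reading = {"co_ppm": 3, "radiation_usv_h": 0.1, "pm10_ugm3": 210}
print(check_locally(reading))   # ['pm10_ugm3'] -> shown on the local display
```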

Relevance:

80.00%

Abstract:

Cloud computing technology has evolved rapidly over the last decade, offering an alternative way to store and work with large amounts of data. However, data security remains an important issue, particularly when using a public cloud service provider. The emerging area of homomorphic cryptography allows computation on encrypted data, which would let users ensure data privacy on the cloud and increase the potential market for cloud computing. A significant amount of research on homomorphic cryptography has appeared in the literature over the last few years, yet the performance of existing implementations of encryption schemes remains unsuitable for real-time applications. One way this limitation is being addressed is through the use of graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) to implement homomorphic encryption schemes. This review presents the current state of the art in this promising new area of research and highlights the interesting open problems that remain.
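To illustrate what "computation on encrypted data" means in the simplest case, here is a toy additively homomorphic scheme (textbook Paillier) in which multiplying two ciphertexts yields an encryption of the sum of the plaintexts. It is a sketch only, with insecurely small parameters, and is not drawn from any of the reviewed GPU/FPGA implementations.

```python
# Toy additively homomorphic encryption (textbook Paillier), illustrative only:
# real deployments use ~2048-bit primes and vetted libraries. Requires Python 3.9+.
import math
import random

p, q = 11, 13                      # toy primes (insecure, for illustration)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)       # Carmichael's lambda for n = p*q
g = n + 1                          # standard simplification for the generator
mu = pow(lam, -1, n)               # valid decryption helper because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2             # multiply ciphertexts ...
assert decrypt(c_sum) == 42        # ... to add the underlying plaintexts
```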

Relevance:

80.00%

Abstract:

With recent technological developments, we have been witnessing a progressive loss of control over our personal information. Whether it is the speed at which it spreads over the internet or the permanent storage of information on cloud services, the means by which our personal information escapes our control are vast. Inevitably, this situation has allowed serious violations of personal rights. A need is emerging to reform European policy on the protection of personal information, in order to adapt it to the technological era we live in. Granting individuals the ability to delete their personal information, mainly the information available on the Internet, is the best solution for those whose rights have been violated. However, even once information is supposedly deleted from a website, it still appears in search engines. In this context, "the right to be forgotten on the internet" is invoked. Its implementation would give any person the possibility to delete their personal information and stop it from being spread across the internet in any way, especially through search engine indexes. In this way we would have more comprehensive control over our personal information on two fronts: firstly, by allowing individuals to completely delete their information from any website and cloud service, and secondly, by limiting search engines' access to the information. Thus, it could be said that a new and catchier term has been found for an "old" right.

Relevance:

80.00%

Abstract:

The starting point was the creation of an innovative tool that takes advantage of digital interconnectivity between shipping agencies and husbandry service suppliers; the main purpose of this paper is to determine whether that tool represents a business opportunity. It describes the preliminary stages undertaken, such as establishing connections with the main potential providers of husbandry services. The work was carried out as qualitative research, based on interviews with shipping agencies, which served both as a source of data about their activities and as a way to survey their acceptance of a concept that could change the way business is done in this area. At the same time, inquiries were made to build financial scenarios showing the costs and revenue streams allocated to the project. Considering the data collected from the main players in husbandry services and the different outcomes, the feasibility of the project is assessed. Even though the paradigm was well received by all the firms contacted, the development costs turn out to be the main threat to the project, so further steps are advised.

Relevance:

80.00%

Abstract:

A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The aim is to see what the overheads of para-virtualization are, and also to look at the overheads of turning on monitoring and logging. The knowledge gained from assessing these benchmarks on the different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important because users are interested in the Service Level Agreements (SLAs) used by Cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run; no modifications are needed in the guest OS or application, i.e. the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware and not on bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overhead. The use of para-virtualization [0] is intended to avoid the bottleneck associated with slow hardware interrupts that exists when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high-performance computing, and this in turn has implications for the use of cloud computing to host HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and second to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for Cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
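A minimal sketch of how such overheads can be reported from the measured runtimes is shown below; the hypervisor names mirror those above, but every timing is a placeholder rather than a measured result.

```python
# Sketch: quantify virtualization and logging overheads from benchmark timings.
# Run the same Netlib kernel on bare metal and under each hypervisor, with and
# without monitoring/logging. All runtimes below are placeholders.
runtimes_s = {
    "bare-metal":   100.0,
    "xen":          104.5,
    "xen+logging":  109.0,
    "kvm":          106.2,
    "kvm+logging":  111.8,
}
baseline = runtimes_s["bare-metal"]
for system, t in runtimes_s.items():
    overhead_pct = 100.0 * (t - baseline) / baseline
    print(f"{system:>12}: {overhead_pct:5.1f}% overhead vs bare metal")
```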

Relevance:

80.00%

Abstract:

The use of cloud services has made forensic investigations more complicated. However, there are good prospects if the cloud providers create services for extracting all the information; that would make investigations simpler and more reliable. The information to be extracted from cloud services is difficult to obtain in a correct manner. The examination is not performed on a write-protected copy, but in an environment that may change. It is therefore possible that changes are made while the data is being extracted, which is not always visible. Nor is it possible to compare differences by computing hash sums of the files, as is done in forensic examinations of computers. It is therefore important to document how the information was extracted, preferably by filming the computer screen while the information is being retrieved. When the cloud services Office 365 and Google Apps are used, the information is saved in several places: both in the cloud and on the computer or computers that have been used to connect to the cloud service. Web browsers save a great deal of information about what has been done. It is therefore important to be able to find out which computers have been used to connect to the cloud service, which is not possible today. If the computers that were used can be examined, evidence that no longer exists in the cloud may be found. The best outcome from a forensic point of view would be if the cloud service providers offered a service that extracts all data concerning a user, including all relevant logs. That would be much more secure, since it would not be possible to alter the information while it is being extracted.

Relevance:

80.00%

Abstract:

Attribute-based signature (ABS) enables users to sign messages over attributes without revealing any information other than the fact that they have attested to the messages. However, existing ABS schemes require heavy computation during signing, which grows linearly with the size of the predicate formula. As a result, it is a significant challenge for resource-constrained devices (such as mobile devices or RFID tags) to perform such heavy computations independently. To tackle this challenge, we first propose and formalize a new paradigm called Outsourced ABS (OABS), in which the computational overhead at the user side is greatly reduced by outsourcing intensive computations to an untrusted signing-cloud service provider (S-CSP). Furthermore, we apply this novel paradigm to existing ABS schemes to reduce their complexity. As a result, we present two concrete OABS schemes: i) in the first OABS scheme, the number of exponentiations involved in signing is reduced from O(d) to O(1) (nearly three), where d is the upper bound of the threshold value defined in the predicate; ii) our second scheme is built on Herranz et al.'s construction with constant-size signatures, where the number of exponentiations in signing is reduced from O(d^2) to O(d) and the communication overhead is O(1). Security analysis demonstrates that both OABS schemes are secure in terms of the unforgeability and attribute-signer privacy definitions specified in the proposed security model. Finally, to allow for high efficiency and flexibility, we discuss extensions of OABS and show how to achieve accountability as well.

Relevance:

80.00%

Abstract:

With the explosion of big data, processing large numbers of continuous data streams, i.e., big data stream processing (BDSP), has become a crucial requirement for many scientific and industrial applications in recent years. By offering a pool of computation, communication and storage resources, public clouds, like Amazon's EC2, are undoubtedly the most efficient platforms to meet the ever-growing needs of BDSP. Public cloud service providers usually operate a number of geo-distributed datacenters across the globe, and different datacenter pairs incur different inter-datacenter network costs, charged by Internet Service Providers (ISPs). Meanwhile, inter-datacenter traffic in BDSP constitutes a large portion of a cloud provider's traffic demand over the Internet and incurs substantial communication cost, which may even become the dominant factor in operational expenditure. As datacenter resources are provided in a virtualized way, the virtual machines (VMs) for stream processing tasks can be freely deployed onto any datacenter, provided that the Service Level Agreement (SLA, e.g., quality-of-information) is obeyed. This raises the opportunity, but also a challenge, to exploit the diversity of inter-datacenter network costs to optimize both VM placement and load balancing towards network cost minimization with a guaranteed SLA. In this paper, we first propose a general modeling framework that describes all representative inter-task relationship semantics in BDSP. Based on this framework, we then formulate the communication cost minimization problem for BDSP as a mixed-integer linear programming (MILP) problem and prove it to be NP-hard. We then propose a computation-efficient solution based on the MILP formulation. The high efficiency of our proposal is validated by extensive simulation-based studies.
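The optimization at the heart of the formulation can be illustrated on a tiny instance: given inter-datacenter transfer prices and the traffic exchanged between stream-processing tasks, choose a datacenter for each task so that total communication cost is minimized. The sketch below brute-forces a toy instance instead of solving the paper's MILP, and all datacenter names, prices and traffic volumes are made up for illustration.

```python
# Toy VM placement: assign each stream-processing task to a datacenter so that
# inter-datacenter traffic cost is minimized. Brute force replaces the MILP here.
from itertools import product

datacenters = ["us-east", "eu-west", "ap-south"]
# $ per GB between datacenter pairs (0 within a datacenter) -- made-up prices
link_cost = {("us-east", "eu-west"): 0.02, ("us-east", "ap-south"): 0.08,
             ("eu-west", "ap-south"): 0.06}

def cost_between(a, b):
    if a == b:
        return 0.0
    return link_cost.get((a, b)) or link_cost[(b, a)]

# GB/hour exchanged between task pairs in the streaming topology (hypothetical)
traffic = {("ingest", "filter"): 50, ("filter", "aggregate"): 20,
           ("aggregate", "store"): 5}
tasks = ["ingest", "filter", "aggregate", "store"]
pinned = {"ingest": "us-east", "store": "eu-west"}   # data source and sink are fixed

best_cost, best_placement = float("inf"), None
for placement in product(datacenters, repeat=len(tasks)):
    loc = dict(zip(tasks, placement))
    if any(loc[t] != dc for t, dc in pinned.items()):
        continue                                     # respect pinned tasks
    c = sum(gb * cost_between(loc[a], loc[b]) for (a, b), gb in traffic.items())
    if c < best_cost:
        best_cost, best_placement = c, loc

print(best_placement, round(best_cost, 2))
```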

Relevance:

80.00%

Abstract:

Multi-tenancy is a cloud computing phenomenon. Multiple instances of an application occupy and share resources from a large pool, allowing different users to have their own version of the same application running and coexisting on the same hardware but in isolated virtual spaces. In this position paper we survey the current landscape of multi-tenancy, laying out the challenges and complexity of software engineering where multi-tenancy issues are involved. Multi-tenancy allows cloud service providers to utilise computing resources better, supporting the development of more flexible services to customers based on economies of scale and reducing overheads and infrastructural costs. Nevertheless, there are major challenges in migrating from single-tenant applications to multi-tenancy, and these have not been fully explored in research or practice to date. In particular, re-engineering Software-as-a-Service cloud applications for multi-tenancy involves many complex and important aspects that should be taken into consideration, such as security, scalability, scheduling, and data isolation. Our study emphasizes scheduling policies and cloud provisioning and deployment with regard to multi-tenancy issues. We employ CloudSim and MapReduce in our experiments to simulate and analyse multi-tenancy models, scenarios, performance, scalability, scheduling and reliability on cloud platforms.
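As a small illustration of one scheduling policy relevant to multi-tenancy (not the paper's CloudSim experiments), the sketch below round-robins across tenants so that one tenant's burst of jobs cannot starve the others sharing the same pooled resources; the tenants and jobs are hypothetical.

```python
# Round-robin scheduling across tenants sharing a resource pool.
from collections import deque

queues = {                       # per-tenant job queues (hypothetical workload)
    "tenant-A": deque(["a1", "a2", "a3", "a4"]),
    "tenant-B": deque(["b1"]),
    "tenant-C": deque(["c1", "c2"]),
}

schedule_order = []
while any(queues.values()):
    for tenant, q in queues.items():     # visit tenants in turn
        if q:
            schedule_order.append((tenant, q.popleft()))

print(schedule_order)
# tenant-A cannot monopolise the pool: B and C get served between A's jobs
```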

Relevance:

80.00%

Abstract:

As clouds have been deployed widely in various fields, their reliability and availability have become major concerns of cloud service providers and users. Fault tolerance in clouds therefore receives a great deal of attention in both industry and academia, especially for real-time applications, due to their safety-critical nature. A large amount of research has been conducted to realize fault tolerance in distributed systems, among which fault-tolerant scheduling plays a significant role. However, few studies of fault-tolerant scheduling sufficiently address virtualization and elasticity, two key features of clouds. To address this issue, this paper presents a fault-tolerant mechanism that extends the primary-backup model to incorporate the features of clouds. Meanwhile, for the first time, we propose an elastic resource provisioning mechanism in the fault-tolerant context to improve resource utilization. On the basis of the fault-tolerant mechanism and the elastic resource provisioning mechanism, we design novel fault-tolerant elastic scheduling algorithms for real-time tasks in clouds, named FESTAL, aiming to achieve both fault tolerance and high resource utilization. Extensive experiments, injected with random synthetic workloads as well as the workload from the latest version of the Google cloud tracelogs, are conducted in CloudSim to compare FESTAL with three baseline algorithms, i.e., Non-Migration-FESTAL (NMFESTAL), Non-Overlapping-FESTAL (NOFESTAL), and Elastic First Fit (EFF). The experimental results demonstrate that FESTAL effectively enhances the performance of virtualized clouds.
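The primary-backup idea that FESTAL extends can be sketched in a few lines: each task gets a primary and a backup copy on two different hosts, and the backup is promoted when the primary's host fails. The sketch below is a simplification under assumed host capacities and is not the FESTAL algorithm itself.

```python
# Minimal primary-backup placement sketch (illustrative names and capacities).
hosts = {"h1": 1.0, "h2": 1.0, "h3": 1.0}       # remaining CPU capacity per host

def schedule(task_id, demand, placements):
    """Place a primary and a backup copy of a task on two different hosts."""
    candidates = sorted(hosts, key=hosts.get, reverse=True)   # most free capacity first
    primary, backup = candidates[0], candidates[1]
    if hosts[primary] < demand or hosts[backup] < demand:
        raise RuntimeError("not enough capacity for fault-tolerant placement")
    hosts[primary] -= demand
    hosts[backup] -= demand        # a real scheduler would reserve backups more lazily
    placements[task_id] = {"primary": primary, "backup": backup}

def host_failed(failed_host, placements):
    """Promote the backup of every task whose primary ran on the failed host."""
    for task, p in placements.items():
        if p["primary"] == failed_host:
            p["primary"], p["backup"] = p["backup"], None     # backup takes over

placements = {}
schedule("t1", 0.4, placements)
schedule("t2", 0.3, placements)
host_failed("h1", placements)
print(placements)
```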

Relevance:

80.00%

Abstract:

The Brazilian Telecommunications and Information Technology (ICT) market is of significant importance for Brazil's development, as shown by the evolution of the mobile telephony market, which has grown 600% over the last ten years. The telecommunications industry, which accounts for 4.7% of Brazilian GDP (TELEBRASIL, 2013), acquired a new dynamic with the drafting of the General Telecommunications Law in 1997 and, later, with the privatization of the sector. This rapid transformation of the sector's value chain was also driven by the evolution of technologies and new network architectures. Moreover, the use of digital technologies, such as applications/APPs and the internet itself, has made the telecommunications chain more complex, enabling the emergence of new players and the development of new services, business models and pricing (SCHAPIRO and VARIAN, 2003). This study aims to analyze the drivers of and barriers to the adoption of new service pricing models in the Brazilian telecommunications market, considering the transformation and evolution of the sector. The study was carried out using a qualitative-exploratory and constructivist research strategy based on the Multilevel approach (POZZEBON and DINIZ, 2012), which works with the context, the process and the interactions between the relevant social groups. From this analysis, it was possible to understand the criteria, drivers and barriers in the process of adopting new pricing models. User demands, intense competition and the need to increase return on investment stand out as the most relevant drivers, while network quality, the lack of systems, the operators' financial situation, the complexity of regulation and the emergence of distinct social groups within the company are identified as the most critical barriers in this process. In this context, the emerging pricing models include service bundling, limited-time offers and sponsorship/free models, together with the exploration of new business areas. This study makes a practical and academic contribution in that it allows a better understanding of the market's dynamism and supports the strategic and tactical marketing areas of the operators, as well as the formulation of policies and regulation for the sector.

Relevance:

80.00%

Abstract:

Agile methodologies have become the standard approach to software development. The most popular and widely used one is Scrum. Scrum is a very simple and flexible framework that responds to unpredictability in a very effective way. However, its implementation must be correct, and since Scrum tells you what to do but not how to do it, this is not trivial. In this thesis I describe the Scrum framework, how to implement it, and a tool that can help to do so. The thesis is divided into three parts. The first part is called Scrum. Here I introduce the framework itself, its key concepts and its components. In Scrum there are three components: roles, meetings and artifacts. Each of these is meant to accomplish a series of specific tasks. After describing the "what to do", in the second part, Best Practices, I focus on the "how to do it": for example, how to decide which items should be included in the next sprint, how to estimate tasks, and how the team workspace should be organized. Finally, in the third part, called Tools, I introduce Visual Studio Online, a cloud service from Microsoft that offers Git and TFVC repositories and the ability to manage projects with Scrum.

Relevance:

80.00%

Abstract:

Virtualized infrastructures are a promising way of providing flexible and dynamic computing solutions for resource-consuming tasks. Scientific workflows are one such kind of task, as they need a large amount of computational resources during certain periods of time. To provide the best infrastructure configuration for a workflow it is necessary to explore as many providers as possible, taking into account different criteria such as Quality of Service, pricing, response time and network latency. Moreover, each of these new resources must be tuned to provide the tools and dependencies required by each step of the workflow. Working with different infrastructure providers, either public or private, each using its own concepts and terms, and with a set of heterogeneous applications, requires a framework for integrating all the information about these elements. This work proposes semantic technologies for describing and integrating all the information about the different components of the overall system, together with a set of policies created by the user. Based on this information, a scheduling process generates an infrastructure configuration defining the set of virtual machines that must be run and the tools that must be deployed on them.
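A minimal sketch of the policy-driven selection step might look like the following: candidate offers from different providers are filtered and scored against user-defined criteria (price, latency, QoS), and the best one is chosen for deployment. The offers, weights and normalization are illustrative assumptions, not taken from the paper.

```python
# Score candidate infrastructure offers against a user policy and pick the best.
offers = [
    {"provider": "public-A",  "price_per_hour": 0.12, "latency_ms": 80,  "qos": 0.95},
    {"provider": "public-B",  "price_per_hour": 0.09, "latency_ms": 120, "qos": 0.90},
    {"provider": "private-1", "price_per_hour": 0.20, "latency_ms": 10,  "qos": 0.99},
]
policy = {"max_price": 0.15, "weights": {"price": 0.5, "latency": 0.3, "qos": 0.2}}

def score(offer):
    w = policy["weights"]
    # lower price and latency are better, higher QoS is better (simple normalization)
    return (w["price"] * (1 - offer["price_per_hour"] / policy["max_price"])
            + w["latency"] * (1 - offer["latency_ms"] / 200)
            + w["qos"] * offer["qos"])

feasible = [o for o in offers if o["price_per_hour"] <= policy["max_price"]]
best = max(feasible, key=score)
print(best["provider"])   # the VMs for each workflow step would be launched here
```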