904 results for Cloud Services
Abstract:
Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long suffered from high operational costs, especially the costs associated with the skyrocketing power consumption of large data centers. At the same time, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy users' quality of service (QoS) requirements. This problem becomes even more challenging given increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructures. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus on developing scheduling methods for delay-sensitive cloud services on a single server with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve service providers' profits. We next study a multi-tier service scheduling problem. By carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements. By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it is part of an integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
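The abstract does not spell out the dispatching rule itself; as a rough illustration of the multi-electricity-market idea, the Python sketch below greedily sends each delay-sensitive request to the cheapest data center that can still meet its deadline. All names, parameters, and the M/M/1-style delay estimate are assumptions for illustration, not the dissertation's actual queue-based model.

```python
# Illustrative sketch only: greedy price-aware dispatching of delay-sensitive
# requests across data centers in different electricity markets. Names,
# parameters, and the feasibility rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    price_per_kwh: float      # current electricity price in its market
    service_rate: float       # requests the site can finish per second
    queued: int = 0           # requests already waiting

    def expected_delay(self) -> float:
        # M/M/1-style rough estimate of queueing + service delay
        return (self.queued + 1) / self.service_rate

def dispatch(request_deadline: float, sites: list[DataCenter]) -> DataCenter | None:
    """Send the request to the cheapest site that can still meet its deadline."""
    feasible = [s for s in sites if s.expected_delay() <= request_deadline]
    if not feasible:
        return None  # reject (or defer) rather than violate QoS
    target = min(feasible, key=lambda s: s.price_per_kwh)
    target.queued += 1
    return target

sites = [DataCenter("east", 0.11, 50.0, 30), DataCenter("west", 0.07, 40.0, 38)]
print(dispatch(request_deadline=1.0, sites=sites).name)  # -> "west"
```

The greedy rule is only the simplest possible stand-in for the profit-maximizing decisions the dissertation derives; it merely shows how price and deadline feasibility can be traded off per request.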
Abstract:
Cloud computing can be defined as a distributed computational model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is enabling the use of various providers (e.g. a multi-cloud architecture) to compose a set of services and obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediate layer that stands between consumers of cloud services and the provider, (ii) use of standardized interfaces to access the cloud, or (iii) use of models with open specifications. This work outlines an approach to evaluate these strategies. The evaluation was carried out and it was found that, despite the advances made by these strategies, none of them actually solves the cloud lock-in problem. In this sense, this work proposes the use of Semantic Web technologies to avoid cloud lock-in, where RDF models are used to specify the features of a cloud and are managed through SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem, (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms, (iii) proposes the use of RDF and SPARQL for managing cloud resources, (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal, and (v) compares three multi-cloud solutions against CQM with respect to response time and effectiveness in resolving cloud lock-in.
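Since the abstract describes managing cloud features as RDF models queried with SPARQL, a minimal Python/rdflib sketch of that idea may help. The vocabulary (the ex: namespace and property names) is invented here for illustration and is not the actual model used by the Cloud Query Manager (CQM).

```python
# Minimal sketch of the RDF + SPARQL idea: provider-neutral descriptions of
# cloud offers, queried without referring to any specific platform API.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/cloud#")  # invented vocabulary
g = Graph()

# Describe two providers' VM offers in the same neutral vocabulary.
for name, vcpus, price in [("aws_m5_large", 2, 0.096), ("gcp_e2_standard_2", 2, 0.067)]:
    vm = EX[name]
    g.add((vm, RDF.type, EX.VirtualMachineOffer))
    g.add((vm, EX.vcpus, Literal(vcpus)))
    g.add((vm, EX.pricePerHour, Literal(price)))

# A single SPARQL query selects offers regardless of which provider exposes them.
query = """
PREFIX ex: <http://example.org/cloud#>
SELECT ?vm ?price WHERE {
    ?vm a ex:VirtualMachineOffer ;
        ex:vcpus ?cpu ;
        ex:pricePerHour ?price .
    FILTER (?cpu >= 2)
} ORDER BY ?price
"""
for row in g.query(query):
    print(row.vm, row.price)
```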
Abstract:
Multi-cloud applications are composed of services offered by multiple cloud platforms, where the user/developer has full knowledge of the use of such platforms. Using multiple cloud platforms avoids the following problems: (i) vendor lock-in, i.e. the application's dependency on a particular cloud platform, which is harmful in case of degradation or failure of platform services, or of price increases for service usage; (ii) degradation or failure of the application due to fluctuations in the quality of service (QoS) provided by some cloud platform, or due to the failure of any service. In a multi-cloud scenario it is possible to replace a failed service, or one with QoS problems, with an equivalent service from another cloud platform. For an application to adopt the multi-cloud perspective, it is necessary to create mechanisms able to select which cloud services/platforms should be used in accordance with the requirements determined by the programmer/user. In this context, the major challenges in developing such applications include: (i) choosing which underlying services and cloud computing platforms should be used, based on the user requirements defined in terms of functionality and quality; (ii) the need to continually monitor dynamic information (such as response time, availability and price) related to cloud services, in addition to the wide variety of services; and (iii) the need to adapt the application if QoS violations affect user-defined requirements. This PhD thesis proposes an approach for dynamic adaptation of multi-cloud applications, to be applied when a service becomes unavailable or when the requirements set by the user/developer indicate that another available multi-cloud configuration is more suitable. The proposed strategy is composed of two phases. The first phase consists of application modeling, exploring the capacity for representing commonalities and variability proposed in the context of the Software Product Lines (SPL) paradigm. In this phase, an extended feature model is used to specify the cloud service configuration to be used by the application (commonalities) and the different possible providers for each service (variability). Furthermore, the non-functional requirements associated with cloud services are specified by properties in this model, describing dynamic information about these services. The second phase consists of an autonomic process based on the MAPE-K control loop, which is responsible for optimally selecting a multi-cloud configuration that meets the established requirements and for performing the adaptation. The proposed adaptation strategy is independent of the programming technique used to perform the adaptation; in this work it is implemented with several techniques, such as aspect-oriented programming, context-oriented programming, and component- and service-oriented programming. Based on the proposed steps, we sought to assess: (i) whether the modeling process and the specification of non-functional requirements can ensure effective monitoring of user satisfaction; (ii) whether the optimal selection process presents significant gains compared to a sequential approach; and (iii) which techniques offer the best trade-off between development/modularity effort and performance.
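As a rough illustration of the MAPE-K-based second phase, the sketch below runs a monitor-analyze-plan-execute loop that picks the cheapest multi-cloud configuration satisfying a user-defined response-time requirement. The class names, data layout, and selection rule are assumptions made for illustration, not the thesis's actual implementation.

```python
# Sketch of a MAPE-K-style adaptation loop for a multi-cloud application.
import time
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    requirements: dict            # e.g. {"max_response_ms": 200}
    configurations: list          # candidate multi-cloud configurations
    active: dict | None = None    # configuration currently in use
    metrics: dict = field(default_factory=dict)

def monitor(k: Knowledge) -> None:
    # A real system would probe each provider; here we read stored values.
    k.metrics = {c["name"]: c["measured_response_ms"] for c in k.configurations}

def analyze(k: Knowledge) -> bool:
    # Adapt if nothing is active or the active configuration violates the requirement.
    if k.active is None:
        return True
    return k.metrics[k.active["name"]] > k.requirements["max_response_ms"]

def plan(k: Knowledge) -> dict | None:
    ok = [c for c in k.configurations
          if k.metrics[c["name"]] <= k.requirements["max_response_ms"]]
    return min(ok, key=lambda c: c["price"]) if ok else None

def execute(k: Knowledge, chosen: dict) -> None:
    k.active = chosen  # service rebinding (AOP, COP, components) would happen here

k = Knowledge(
    requirements={"max_response_ms": 200},
    configurations=[
        {"name": "awsS3+azureQueue", "price": 1.2, "measured_response_ms": 150},
        {"name": "gcpStorage+awsSQS", "price": 0.9, "measured_response_ms": 250},
    ],
)
for _ in range(2):                # two iterations of the control loop
    monitor(k)
    if analyze(k):
        chosen = plan(k)
        if chosen:
            execute(k, chosen)
    time.sleep(0.1)
print(k.active["name"])
```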
Abstract:
Mobile Cloud Computing promises to overcome the physical limitations of mobile devices by executing demanding mobile applications on cloud infrastructure. In practice, implementing this paradigm is difficult; network disconnection often occurs, bandwidth may be limited, and a large power draw is required from the battery, resulting in a poor user experience. This thesis presents a mobile cloud middleware solution, Context Aware Mobile Cloud Services (CAMCS), which provides cloud-based services to mobile devices in a disconnected fashion. An integrated user experience is delivered by designing for anticipated network disconnection and low data transfer requirements. CAMCS achieves this by means of the Cloud Personal Assistant (CPA); each user of CAMCS is assigned their own CPA, which can complete user-assigned tasks, received as descriptions from the mobile device, by using existing cloud services. Service execution is personalised to the user's situation with contextual data, and task execution results are stored with the CPA until the user can connect with his/her mobile device to obtain them. Requirements for an integrated user experience are outlined, along with the design and implementation of CAMCS. The operation of CAMCS and CPAs with cloud-based services is presented, specifically in terms of service description, discovery, and task execution. The use of contextual awareness to personalise service discovery and service consumption to the user's situation is also presented. Resource management by CAMCS is also studied and compared with existing solutions. Additional application models that can be provided by CAMCS are also presented. Evaluation is performed with CAMCS deployed on the Amazon EC2 cloud. The resource usage of the CAMCS Client, running on Android-based mobile devices, is also evaluated. A user study with volunteers using CAMCS on their own mobile devices is also presented. Results show that CAMCS meets the requirements outlined for an integrated user experience.
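A minimal sketch of the disconnected interaction model described above: the mobile client submits a task description to its Cloud Personal Assistant and fetches the result later, tolerating disconnection in between. The in-memory CPA and the task format are assumptions for illustration, not the actual CAMCS interfaces.

```python
# Sketch of disconnection-tolerant task submission and later result retrieval.
import uuid

class CloudPersonalAssistant:
    """One CPA per user; holds results until the device reconnects."""
    def __init__(self) -> None:
        self._results: dict[str, str] = {}

    def submit_task(self, description: dict) -> str:
        task_id = str(uuid.uuid4())
        # A real CPA would discover and invoke a matching cloud service here,
        # personalised with the user's contextual data.
        self._results[task_id] = f"completed: {description['action']}"
        return task_id

    def fetch_result(self, task_id: str) -> str | None:
        # Returns None while the task is still running or the id is unknown.
        return self._results.get(task_id)

cpa = CloudPersonalAssistant()
tid = cpa.submit_task({"action": "book_flight", "context": {"city": "Cork"}})
# ... the device may disconnect here; the result waits with the CPA ...
print(cpa.fetch_result(tid))
```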
Abstract:
Part 18: Optimization in Collaborative Networks
Abstract:
Cloud services are exploding, and organizations are converging their data centers in order to take advantage of the predictability, continuity, and quality of service delivered by virtualization technologies. In parallel, energy-efficient and high-security networking is of increasing importance. Network operators and service and product providers require a new network solution to efficiently tackle the increasing demands of this changing network landscape. Software-defined networking has emerged as an efficient network technology capable of supporting the dynamic nature of future network functions and intelligent applications while lowering operating costs through simplified hardware, software, and management. In this article, the question of how to achieve a successful carrier-grade network with software-defined networking is raised. Specific focus is placed on the challenges of network performance, scalability, security, and interoperability, with proposals of potential solution directions.
Abstract:
Thesis (Master's)--University of Washington, 2012
Abstract:
With recent technological developments, we have been witnessing a progressive loss of control over our personal information. Whether it is the speed at which it spreads over the internet or the permanent storage of information on cloud services, the means by which our personal information escapes our control are vast. Inevitably, this situation has allowed serious violations of personal rights. The necessity of reforming the European policy for the protection of personal information is emerging, in order to adapt to the technological era we live in. Granting individuals the ability to delete their personal information, mainly the information available on the Internet, is the best solution for those whose rights have been violated. However, once supposedly deleted from a website, the information is still shown by search engines. In this context, “the right to be forgotten on the internet” is invoked. Its implementation will make it possible for any person to delete their personal information and stop it from being spread through the internet in any way, especially through search engine directories. In this way we will have more comprehensive control over our personal information in two ways: first, by allowing individuals to completely delete their information from any website and cloud service, and second, by limiting search engines' access to the information. Thus, it could be said that a new and catchier term has been found for an “old” right.
Abstract:
The aim of the study was to examine how the development of communication technology, globalisation and organisational forms are related to one another, and how they affect managerial work in a distributed organisation. The subjects of the study were ten people working in managerial positions in different expert organisations. The study was carried out using qualitative research methods. The research material was collected through semi-structured thematic interviews and open interviews. The theoretical framework of the study was built on the definition of a distributed organisation, together with leadership theories and theories of organisational communication. Based on the results of the study, distributed organisational forms are spreading to new industries, and the concepts of working time and workplace are changing. Managerial work is a social process. Interactive management that shares decision-making power is a prerequisite for an efficient and flexible organisation. Networking has become more common and is, from the organisations' point of view, a more sensible way to obtain the necessary competence than recruiting. According to the study, organisations seek to facilitate networking by enabling work that is independent of working time and place. Most of the applications and systems used in organisations operate over data networks or in cloud services. Employees are strongly encouraged to form and find centres of competence that support their own and the organisation's expertise. In addition, in line with earlier research results, the study confirms that managerial work in its current form does not function in distributed organisations freed from working-time and workplace restrictions. As a consequence, the role of subordinates also changes in distributed organisations.
Abstract:
Training is a key strategy for skills development. Companies continue to invest in training and development, but they rarely have data to evaluate the results of this investment. Most companies use the Kirkpatrick/Phillips model to evaluate corporate training. However, the literature shows that companies have difficulty using this model. The main barriers are the difficulty of isolating learning as a factor that affects results, the lack of a useful evaluation system integrated with the Learning Management System (LMS), and the lack of standardized data for comparing different learning functions. In this thesis, we propose a model (Analysis, Modelling, Monitoring and Optimization - AM2O) for managing corporate training projects, based on Business Process Management (BPM). Such a scenario assumes that corporate training activities should be treated as business processes. Our model is inspired by this method (BPM), through the definition and monitoring of performance indicators to manage training projects in organizations. It is based on the analysis and modelling of training needs to ensure alignment between training activities and the company's business objectives. It allows training projects to be monitored and the tangible and intangible benefits of training to be computed (at no additional cost). In addition, it produces a classification of training projects according to company-specific criteria. Thus, with enough data, our approach can be used to optimize the return on training through a series of simulations using machine learning algorithms: logistic regression, neural networks, and co-training. Finally, we designed an information system, the Enterprise TRaining programs Evaluation and Optimization System (ETREOSys), for managing corporate training programs and supporting decision making. ETREOSys is a web platform using cloud services and NoSQL databases. Through AM2O and ETREOSys we address the main problems related to managing and evaluating corporate training, namely the difficulty of isolating the effects of training on company results and the lack of supporting information systems.
Abstract:
Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensify. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive, but losing control of architecture, security, applications and deployment can have far-reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses, and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks, specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.
Abstract:
The use of cloud services has made forensic investigations more complicated. However, the conditions would be good if the cloud providers created services for extracting all the information; that would make the process simpler and more reliable. The information to be extracted from the cloud services is difficult to obtain in a correct manner. The examination is not performed on a write-protected copy, but in an environment that may change. It is therefore possible that changes are made while the data is being retrieved, which is not always visible. Nor is it possible to compare differences by computing hash sums of the files, as is done in forensic examinations of computers. It is therefore important to document how the information was extracted, preferably by filming the computer screen while the information is being retrieved. When the cloud services Office 365 and Google Apps are used, the information is stored in several places: both in the cloud and on the computer or computers that were used to connect to the cloud service. Web browsers store a great deal of information about what has been done. It is therefore important to be able to determine which computers have been used to connect to the cloud service, which is not possible today. If it is possible to examine the computers that were used, evidence that no longer remains in the cloud can be found. From a forensic point of view, the best solution would be for the cloud service providers to offer a service that extracts all data related to a user, including all relevant logs. The extraction would then take place in a much more secure way, since it would not be possible to alter the information while it is being retrieved.
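For readers unfamiliar with the hash-sum verification mentioned above, the technique that works in traditional computer forensics but, as the abstract notes, does not transfer to live cloud acquisition, here is a small Python sketch under assumed, placeholder file paths: with a write-protected image, the same bytes always yield the same digest, so any later alteration is detectable.

```python
# Sketch of traditional forensic integrity checking via hash sums.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

acquired = Path("evidence/disk_image.dd")   # placeholder path
recorded_digest = "..."                      # digest noted at acquisition time
if acquired.exists():
    print("intact" if sha256_of(acquired) == recorded_digest else "altered")
```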
Abstract:
Cloud computing means that computing power and IT resources, in the form of servers, storage and local networks, are made available over the internet; its use has grown rapidly in recent years. The possibilities of cloud services have led to new challenges concerning trust in cloud services, by opening up entirely new relationships between provider and customer. At present, trust in cloud services is a major problem for customers such as micro-enterprises and small and medium-sized enterprises. Our thesis aims to describe micro-enterprises' and small and medium-sized enterprises' trust and distrust in cloud services, and to identify which criteria contribute to them. The results show that it is difficult to clearly define trust and distrust among companies, since their situations and operations differ. Thanks to our literature study combined with our interviews, we have formed a good picture of which criteria contribute to trust and distrust in cloud services. Based on our work, we have compiled a conceptual model that describes trust and distrust in cloud services.
Abstract:
The past decade has seen the energy consumption in servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation intends to tackle the challenges of reducing both the energy consumption of server systems and the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30% of the total power consumption. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve and manage their own electricity costs and lower their carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system respectively. With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, and server reliability as well as energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also outline several potential areas for future research in each chapter.
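As a rough illustration of the VM/PM mapping probability matrix idea, the sketch below assigns each VM request a probability of landing on each physical machine, favouring feasible and more energy-efficient hosts, and then samples a placement. The scoring function and data layout are assumptions made for illustration, not the dissertation's actual formulation (which also weighs operation overheads and reliability).

```python
# Probabilistic, energy-aware VM-to-PM mapping sketch for a heterogeneous cluster.
import random

pms = [  # heterogeneous physical machines
    {"name": "pm1", "free_cores": 8, "watts_per_core": 20.0},
    {"name": "pm2", "free_cores": 4, "watts_per_core": 12.0},
    {"name": "pm3", "free_cores": 2, "watts_per_core": 35.0},
]

def mapping_probabilities(vm_cores: int) -> list[float]:
    """One row of the VM/PM probability matrix for a single VM request.

    Infeasible PMs get probability 0; feasible PMs get weight inversely
    proportional to their marginal power cost, then weights are normalised.
    """
    weights = []
    for pm in pms:
        if pm["free_cores"] < vm_cores:
            weights.append(0.0)
        else:
            weights.append(1.0 / (pm["watts_per_core"] * vm_cores))
    total = sum(weights)
    return [w / total for w in weights] if total > 0 else weights

def place(vm_cores: int) -> str | None:
    probs = mapping_probabilities(vm_cores)
    if sum(probs) == 0:
        return None  # no PM can host this VM
    chosen = random.choices(pms, weights=probs, k=1)[0]
    chosen["free_cores"] -= vm_cores
    return chosen["name"]

random.seed(0)
print([place(2) for _ in range(4)])  # four 2-core VM requests
```

Sampling from the probability row, rather than always taking the single best PM, spreads load across efficient hosts while still steering placements away from power-hungry or resource-exhausted machines.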