742 results for Cloud Computing Modelli di Business
Abstract:
In recent years, the exponential increase in the use of mobile devices and of services made available in the "Cloud" has changed the way systems are designed and implemented, in an attempt to meet requirements that until then were not essential. With the enormous growth of mobile devices such as smartphones and tablets, the design and implementation of distributed systems became even more important in this area, in an effort to deliver systems and applications that are more flexible, robust, scalable and, above all, interoperable. The limited processing and storage capacity of these devices made the emergence and growth of technologies that promise to solve many of the identified problems essential. The concept of Middleware aims to fill these gaps in more advanced distributed systems, providing a solution for the organisation and design of system architectures while offering extremely fast, secure and reliable communications. A Middleware-based architecture aims to equip systems with a communication channel that provides strong interoperability, scalability and security in message exchange, among other advantages. In this thesis, several types and examples of distributed systems are described and analysed, together with a detailed description of three communication protocols (XMPP, AMQP and DDS), two of which (XMPP and AMQP) are used in real projects described throughout this thesis. The main objective of this thesis is to present a study and survey of the state of the art regarding the concept of Middleware applied to large-scale distributed systems, showing that the use of a Middleware can simplify and speed up the design and development of a distributed system and will bring enormous advantages in the near future.
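As a purely illustrative companion to the abstract above, the sketch below shows how a producer might hand a message to an AMQP broker through the pika Python client; the queue name, broker address and payload are invented, not taken from the thesis.

```python
# Minimal AMQP publish sketch using the pika client (illustrative values only).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a durable queue; the broker decouples producer from consumer,
# which is the interoperability/reliability role middleware plays here.
channel.queue_declare(queue="sensor.readings", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="sensor.readings",
    body=b'{"device": "tablet-01", "status": "online"}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```

A consumer subscribed to the same queue receives the message whenever it comes online, which is how the broker provides the reliable, decoupled message exchange the abstract attributes to middleware.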
Abstract:
Cloud computing is a new paradigm of on-demand computing services that has experienced explosive growth over the past ten years. In the public deployment model, the provider of cloud services describes the service to be delivered, its price, and the penalties incurred if the specifications are violated in a document called the service level agreement (SLA). By signing this contract, the client and the provider seal the guarantee of the quality of service to be received, which requires the provider to manage its resources efficiently in order to honour its commitments. Unfortunately, violations of the SLA specifications turn out to be common, generally because of uncertainty about client behaviour: since the resources appear unlimited, a client may generate a highly variable number of requests. This behaviour can, first, have a direct impact on service availability; second, repeated violations risk eroding the provider's trust level and its reputation for honouring its commitments. To address these problems, we proposed a framework driven by a Bayesian network that, first, classifies providers in a registry according to their trust level; this registry can be managed by a third party, and a client selects a provider from it before starting to negotiate the SLA. Second, we developed a probabilistic ontology based on a multi-entity Bayesian network that can account for uncertainty and anticipate violations through inference. This ontology makes predictions to prevent violations, using historical data as its knowledge base. The results obtained show the effectiveness of the probabilistic ontology for predicting violations across the set of SLA parameters applied in a cloud environment.
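To make the prediction idea concrete, here is a deliberately simplified, frequency-based stand-in for the multi-entity Bayesian network the abstract describes; the workload levels and history records are invented for illustration.

```python
# Toy estimate of P(violation | workload) from historical SLA records.
# This is only a sketch of the inference step, not the paper's ontology.
from collections import Counter

# Hypothetical historical records: (workload_level, violation_occurred)
history = [
    ("high", True), ("high", True), ("high", False),
    ("low", False), ("low", False), ("low", True),
]

def p_violation_given(workload: str) -> float:
    counts = Counter(violated for load, violated in history if load == workload)
    total = sum(counts.values())
    return counts[True] / total if total else 0.0

# A broker could warn the client, or throttle, when predicted risk is high.
print(f"P(violation | high load) = {p_violation_given('high'):.2f}")
```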
Abstract:
Animation on Green IT. EdShare container to be used to assemble resources developed by the team for coursework 2. The metadata will be updated by the group, and the description modified when the resource set has been completed. The resource has been set up with an instructional readme file, which each group will replace with a new readme file. Each group needs to update this description. Further instructions are in readme.txt.
Abstract:
These slides give the instructions for completing the COMP1214 team project on cloud and mobile computing.
Abstract:
In the second decade of the twenty-first century we are witnessing a complete transformation in the teaching and learning of foreign languages in general, and of Spanish in particular. In this new context, the new technologies applied to education play a very important role. This universe is known as Web 2.0. The advantages and drawbacks of using these technologies in the language classroom are presented, focusing on four digital tools that offer the greatest potential for energising Spanish classes: blogs, wikis, podcasting, and Google Docs or cloud computing.
Abstract:
This Policy Contribution assesses the broad obstacles hampering ICT-led growth in Europe and identifies the main areas in which policy could unlock the greatest value. We review estimates of the value that could be generated through take-up of various technologies and carry out a broad matching with policy areas. According to the literature survey and the collected estimates, the areas in which the right policies could unlock the greatest ICT-led growth are product and labour market regulations and the European Single Market. These areas should be reformed to make European markets more flexible and competitive. This would promote wider adoption of modern data-driven organisational and management practices, thereby helping to close the productivity gap between the United States and the European Union. Gains could also be made in the areas of privacy, data security, intellectual property and liability pertaining to the digital economy, especially cloud computing, and next generation network infrastructure investment. Standardisation and spectrum allocation issues are found to be important, though to a lesser degree. Strong complementarities between the analysed technologies suggest, however, that policymakers need to deal with all of the identified obstacles in order to fully realise the potential of ICT to spur long-term growth beyond the partial gains that we report.
Abstract:
A full assessment of para-virtualization is important because, without knowledge of the various overheads, users cannot judge whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as looking at the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems. In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally we turn on monitoring and logging. The latter is important because users are interested in the Service Level Agreements (SLAs) used by cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three platforms: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading. A functional virtualization system is multi-layered and is driven by the privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each virtual machine (VM) to make the best use of the available resources. The guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application: the guest OS or application is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines; these guest operating systems are aware that they are running on a virtual machine, and they provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on the bare hardware. In para-virtualization, the device drivers in the guest operating system coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with the slow hardware interrupts that exist when full virtualization is employed.
It has been shown [0] that para-virtualization does not impose significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is associated with the need for cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
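The overhead comparison the paper describes can be illustrated with a small sketch: given wall-clock times for the same benchmark on bare metal and under virtualization, report the relative slowdown of each configuration. All timings and system labels below are made up, not the paper's results.

```python
# Relative-overhead calculation for a benchmark run on several configurations.
# overhead % = (t_virtualized - t_bare_metal) / t_bare_metal * 100
bare_metal_s = 120.0  # hypothetical benchmark run time on bare metal, seconds

runs = {
    "Xen (para-virtualized)": 126.5,
    "KVM": 131.0,
    "KVM + monitoring/logging": 138.2,
}

for system, t in runs.items():
    overhead = (t - bare_metal_s) / bare_metal_s * 100
    print(f"{system}: {overhead:+.1f}% vs bare metal")
```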
Abstract:
This paper introduces an architecture for real-time identification and modelling at a copper mine, using new technologies such as M2M and cloud computing, with a server in the cloud and an Android client inside the mine. The proposed design enables pervasive mining: a system with wider coverage, higher communication efficiency, better fault tolerance, and anytime, anywhere availability. This solution was designed for a plant inside the mine that cannot tolerate interruption, and for which in-situ, real-time identification is an essential part of the system, used to control aspects such as instability by adjusting the corresponding parameters without stopping the process.
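A minimal sketch of the client-to-cloud reporting path such an architecture implies might look as follows; the endpoint URL, JSON fields and returned setpoints are hypothetical, not taken from the paper.

```python
# A device inside the mine posts an instability-related reading to a cloud
# endpoint, which may answer with adjusted control parameters.
import requests

reading = {"plant_id": "mill-03", "vibration_hz": 42.7, "timestamp": 1700000000}
resp = requests.post("https://example-cloud/api/readings", json=reading, timeout=5)
resp.raise_for_status()

# Hypothetically, the server returns updated setpoints so the plant can be
# adjusted without stopping the process.
print(resp.json())
```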
Abstract:
With the increase in e-commerce and the digitisation of design data and information, the construction sector has become reliant upon IT infrastructure and systems. The design and production process is more complex, more interconnected, and reliant upon greater information mobility, with seamless exchange of data and information in real time. Construction small and medium-sized enterprises (CSMEs), in particular the speciality contractors, can effectively utilise cost-effective collaboration-enabling technologies, such as cloud computing, to help in the effective transfer of information and data and so improve productivity. The system dynamics (SD) approach offers a perspective and tools for a better understanding of the dynamics of complex systems. This research focuses upon system dynamics methodology as a modelling and analysis tool to understand and identify the key drivers in the absorption of cloud computing by CSMEs. The aim of this paper is to determine how the use of SD can improve the management of information flow through collaborative technologies, leading to improved productivity. The data supporting the use of system dynamics were obtained through a pilot study consisting of questionnaires and interviews with five CSMEs in the UK house-building sector.
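As a toy illustration of the system dynamics approach (not the paper's actual model), the sketch below integrates a single "adopting CSMEs" stock whose inflow depends on contact between adopters and non-adopters; every constant is assumed.

```python
# Minimal stock-and-flow model integrated with an Euler step:
# the adoption flow fills the 'adopters' stock until it saturates.
population = 5.0     # firms in the pilot study (illustrative)
adopters = 1.0       # initial stock of cloud-adopting firms
contact_rate = 0.8   # assumed per-year word-of-mouth adoption pressure
dt, years = 0.25, 5

t = 0.0
while t < years:
    # Flow: adoption driven by adopters meeting remaining non-adopters.
    adoption_flow = contact_rate * adopters * (population - adopters) / population
    adopters += adoption_flow * dt  # the stock accumulates its inflow
    t += dt
    print(f"t={t:.2f}y adopters={adopters:.2f}")
```

The value of SD here is that the feedback loop (more adopters, more contact, faster adoption, until saturation) is explicit in the flow equation, which is the kind of key-driver structure the paper sets out to identify.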
Abstract:
The use of cloud services has made forensic investigations more complicated. There are, however, good prospects if the cloud providers create services for extracting all the information; this would make the process simpler and more reliable. The information to be extracted from cloud services is hard to obtain in a correct manner. The examination is not performed on a write-protected copy, but in an environment that may change, so alterations can be made while the data is being extracted, and these are not always visible. Nor is it possible to compare the differences by taking hash sums of the files, as is done in forensic examinations of computers. It is therefore important to document how the information was extracted, preferably by filming the computer screen during the extraction. When the cloud services Office 365 and Google Apps are used, the information is stored in several places, both in the cloud and on the computer or computers used to connect to the cloud service. Web browsers store a great deal of information about what has been done. It is therefore important to be able to find out which computers have been used to connect to the cloud service, which is not possible today. If the computers that were used can be examined, evidence that no longer exists in the cloud may be found. From a forensic point of view, the best outcome would be for cloud providers to offer a service that extracts all data relating to a user, including all relevant logs. The extraction would then happen in a much more secure way, since it would not be possible to alter the information while it is being retrieved.
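The hash-sum verification that the abstract says is unavailable for cloud data can be illustrated for the conventional disk case; the evidence file name below is illustrative.

```python
# Hash a write-protected evidence image before and after analysis to prove
# nothing changed. Cloud data offers no stable image to hash this way.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream in 8 KiB chunks
            h.update(chunk)
    return h.hexdigest()

before = sha256_of("evidence.img")
# ... perform the examination on the copy ...
after = sha256_of("evidence.img")
assert before == after, "evidence image changed during examination"
```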
Abstract:
Cloud services have become an interesting phenomenon in the IT world. They have created opportunities for companies and individuals to streamline their operations for a minimal fee instead of deploying their own servers. This is made possible by offering several different services under varying distribution models. As a consequence of this phenomenon, service requests for cloud services occur continually among Swedish private companies and public authorities. Private companies have no obligation to follow laws that restrict them from using cloud services, in contrast to the crisis-preparedness authorities and their development and test operations. This thesis focuses on analysing the possibilities for Swedish crisis-preparedness authorities, and their development and test operations, to use cloud services. The thesis was carried out as a qualitative study using interviews and literature studies as data collection methods. The interviews were conducted with employees of a crisis-preparedness authority to give a picture of how these employees, holding various positions, interpret cloud services and their advantages and disadvantages. The literature study was used to reflect other nations' views on cloud services in public authorities, and to identify the Swedish laws and regulations that may prevent the use of cloud services in a crisis-preparedness authority. The result of the thesis showed that possibilities do exist for the use of cloud services in a crisis-preparedness authority; this becomes possible by analysing the information that is to be distributed to a cloud service.